Tuning characteristics of low-frequency EEG to positions and velocities in visuomotor and oculomotor tracking tasks
Reinmar J. Kobler, Andreea I. Sburlea & Gernot R. Müller-Putz
Scientific Reports volume 8, Article number: 17713 (2018)
Movement decoders exploit the tuning of neural activity to various movement parameters with the ultimate goal of controlling end-effector action. Invasive approaches, typically relying on spiking activity, have demonstrated feasibility. Results of recent functional neuroimaging studies suggest that information about movement parameters is even accessible non-invasively in the form of low-frequency brain signals. However, their spatiotemporal tuning characteristics to single movement parameters are still unclear. Here, we extend the current understanding of low-frequency electroencephalography (EEG) tuning to position and velocity signals. We recorded EEG from 15 healthy participants while they performed visuomotor and oculomotor pursuit tracking tasks. Linear decoders, fitted to EEG signals in the frequency range of the tracking movements, predicted positions and velocities with moderate correlations (0.2–0.4; above chance level) in both tasks. Predictive activity in terms of decoder patterns was significant in superior parietal and parieto-occipital areas in both tasks. By contrasting the two tracking tasks, we found that predictive activity in contralateral primary sensorimotor and premotor areas exhibited significantly larger tuning to end-effector velocity when the visuomotor tracking task was performed.
Access to neural activity through various recording modalities has allowed the study of its tuning characteristics during upper-limb movements from the microscale up to the macroscale level. At the microscale level, neural spiking activity in primary motor1 and premotor2 as well as posterior parietal3 areas is tuned to reach direction among other movement parameters4. By exploiting these tuning characteristics, non-human primates4,5 and selected humans6 with spinal cord injuries have been able to control artificial end-effectors in a 3D world. At the macroscale level, in terms of non-invasively accessible neural activity, the spatiotemporal tuning characteristics are not yet clearly understood with regard to upper-limb movements.
The results of functional Magnetic Resonance Imaging (fMRI) studies in humans have revealed a fronto-parietal reach network comprising dorsal premotor (PMd) and medial areas of the superior parietal lobule (SPL)7,8. This network is active during executed and observed reaching movements7,8 as well as during saccadic eye movements9 and exhibits directional tuning10. The fMRI findings, in conjunction with the successful decoding of positions and velocities from low-frequency electrocorticography (ECoG) signals11, suggested that information about directional movement parameters might be accessible from outside the brain. Not much later, research groups reported successful classification of reach directions12, and regression of end-effector positions and velocities13 on the basis of low-frequency magnetoencephalographic (MEG) and electroencephalographic (EEG) signals. Since then, research in this context has focused on regression of end-effector positions and velocities or classification of reach direction in center-out tasks with linear models14. In this paper, we focus on the regression approach.
A general limitation of studying reaching movements with a regression approach in center-out tasks is that the 2D or 3D position and velocity vectors of the end-effector always point in the same direction, namely toward the target stimulus. As a consequence, the position and velocity signals are strongly correlated during the reaching movement. For this reason, it is difficult to identify the covariate (target position, end-effector position or velocity) to which the recorded neural activity is preferentially tuned15. Alternatively, by studying continuous movements in a pursuit tracking task (PTT), instantaneous position and velocity can be decorrelated15. In a PTT, the goal is to track a moving target with an end-effector. This requires the brain to visually monitor the moving target stimulus in relation to the end-effector so that the end-effector movement can be updated to achieve the goal. In such a visuomotor (VM) task, the eyes naturally track the moving target stimulus16. As a consequence of this natural tracking behavior, the oculomotor network, which also spans parietal and frontal regions17, is activated at the same time as the reaching network9.
To facilitate natural behavior and isolate the neural activity related to the involvement of the upper-limb, a control condition can be introduced. In the control condition, participants would perform an oculomotor (OM) task by tracking a target stimulus only with their eyes9,18. In the other condition (VM task), upper-limb movement is additionally involved in the tracking. By contrasting these conditions, it should be possible to infer whether low-frequency EEG carries more information about end-effector positions and velocities during the performance of the VM or OM task, and to identify where the differences are expressed at the cortical level. We hypothesized that contralateral, primary motor and premotor areas carry more information about end-effector positions and velocities when the VM task is being performed, and that activity in areas related to the reaching and oculomotor networks is tuned to positions and velocities in both tasks.
Here, we present the tuning characteristics of low-frequency EEG activity to positions and velocities during continuous tracking movements. In two conditions, participants were asked to track a pseudo-randomly moving target either visually (OM task) or by additionally controlling a cursor with their right arm (VM task). We evaluated our approach offline by examining the recordings of healthy participants. Our experimental results confirmed that low-frequency EEG carries information about target and cursor positions and velocities in both conditions. More interestingly, when contrasting conditions, we found that the low-frequency EEG carried more information about the instantaneous cursor velocity during the VM task than during the OM task. The differences were mainly reflected in the premotor and contralateral primary sensorimotor areas. The temporal tuning characteristics of these differences indicated that the predictive neural activity preceded cursor velocity by 150 ms. Therefore, we could show that low-frequency EEG activity, originating in premotor and primary sensorimotor areas, can, at least offline, be used to predict the velocities of executed upper-limb movements.
To test our hypotheses, we recorded high-density EEG and electrooculography (EOG) from 15 healthy participants during a two-dimensional PTT. In every trial, the PTT was preceded by a short, visually guided, center-out reaching task. Here, we present our findings during the PTT. Figure 1 depicts the experimental setup and paradigm. The paradigm separated two conditions. In the first condition (execution, VM task), participants were asked to track a pseudo-randomly moving target with their gaze and right hand by manipulating a cursor (Fig. 1a). In the second condition (observation, OM task), participants were asked to track the moving target with their gaze, while keeping their right hand in a resting position. To obtain similar visual input and tracking dynamics in both conditions, we replayed the participant's most recent, matching, executed cursor trajectory in observation condition trials. All results presented subsequently were determined after pre-processing, correcting for EOG artifacts19 and downsampling the recorded data to 10 Hz (see Methods). Throughout the text, grand average results are presented in the form of the mean value and its standard error.
Experimental setup and paradigm. (a) Participants sat in a comfortable chair positioned 1.4 m from a computer screen. Both arms were supported at the same height. The right arm rested on a table at a comfortable position. The friction between arm and surface was reduced by a sleeve and a circular pad positioned between hand and table. Palm positions were recorded by a LeapMotion controller (LeapMotion Inc., USA) located 20 cm above the hand. Forward/backward hand movements on the table were mapped to upward/downward cursor movements on the screen. (b) Each trial started with a 3–4 s break during which the target (large ball) resided in the center. A 2 s baseline period was initiated when the target turned yellow. During this period, participants were asked to keep their hand in the resting position and, thereby, the cursor in the center of the screen. A visual cue indicated the condition, either execution (green target) or observation (blue target), followed by a center-out task in four directions. The direction was indicated by the target movement (0.5 s duration; arrows visualize movement in the individual images). After 1 s of fixation, a pursuit tracking task was performed for 16 s. A colored target stimulus (yellow, green, or blue) instructed the participants to fixate and track the target with their eyes. In the execution condition, the participants controlled the cursor, while in the observation condition, the computer replayed a previously executed cursor trajectory that matched the current target trajectory. See Supplementary Video S1 for examples.
Tracking analysis
To analyze the tracking dynamics, we computed cross-correlations between the positions and velocities of both stimuli in the execution and observation conditions. Figure 2a summarizes the grand average cross-correlations in the execution condition. The large cross-correlations (r > 0.7) observed between signals of the same movement parameter (e.g., target and cursor position) show that participants complied with the instruction to minimize the distance to the target. Cross-correlations between the position and velocity of the same stimulus were negligible (|r| < 0.02; e.g., target position and target velocity), while we observed moderate (|r| ~ 0.4; e.g., target position and cursor velocity) cross-correlations across stimuli. Our target trajectory generation procedure ensured decorrelated horizontal and vertical components. Hence, cross-correlations observed with any signal from the other component were negligible (|r| < 0.05). Figure 2b summarizes the grand average cross-correlations in the observation condition. We did not detect significant differences from the execution condition results (Fig. 2c).
Group-level stimuli cross-correlations in both conditions. (a) Cross-correlations between the two-dimensional movement parameters (target position, target velocity, cursor position, cursor velocity) in the execution condition. (b) Cross-correlations in the observation condition. (c) P-values for paired Wilcoxon signed-rank tests between conditions. P-values were adjusted55 for 28 comparisons to control the false discovery rate (FDR) at a level of 0.05.
To estimate the temporal dependencies among the four movement parameters (target position, target velocity, cursor position, cursor velocity) per component, we computed cross-correlations over lags in the interval [−0.5 s, 0.5 s] in steps of 0.1 s. Figures 3a–d depict the results for the horizontal (a,b) and vertical (c,d) components. We aligned the individual figures based on the peak cross-correlations between pairs of movement parameters. For example, the origin in Figure 3b is shifted by −0.525 s compared to that in Figure 3a because the cursor velocity was maximally correlated with the cursor position 0.525 s in the future. In the execution condition, the participants reacted with their hand movements (cursor trajectories) to the pseudo-random target trajectories. This means that the properties of the target trajectories (e.g., the cross-correlation peak between target position and velocity) also determined the properties of the cursor trajectories.
The cross-correlation peak between target and cursor position can be used to infer information about the participants' tracking behavior. We used the lag of the cross-correlation peak to estimate the latency between the target and cursor. In the execution condition, the latency reflected the duration that a participant took to adjust the cursor movement to the pseudo-random target movement. The cross-correlation between the target and cursor position peaked at a delay of 153 ± 19 ms at group level. After accounting for a 55 ± 1 ms delay, introduced by our online processing system which transformed hand movements into cursor movements, the average latency of hand movements was approximately 100 ms. This result is in accordance with the findings of behavioral studies, which report that a minimum latency of 80–100 ms is needed for a visual or proprioceptive signal to influence an ongoing movement20,21.
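For illustration, the lagged cross-correlation and the peak-based latency estimate described above can be computed with a few lines of numpy. The sketch below uses synthetic stand-in trajectories sampled at the 10 Hz analysis rate; variable names are illustrative and not taken from the study's code.

```python
import numpy as np

def lagged_xcorr(x, y, max_lag_s, fs):
    """Pearson correlation between x(t) and y(t + lag) for lags in [-max_lag_s, max_lag_s]."""
    max_lag = int(round(max_lag_s * fs))
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.empty(lags.size)
    for i, lag in enumerate(lags):
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:lag]
        r[i] = np.corrcoef(a, b)[0, 1]
    return lags / fs, r

# Synthetic example: a cursor trajectory lagging the target by 0.2 s
rng = np.random.default_rng(0)
fs = 10.0                                        # analysis sampling rate (Hz)
target_pos = np.cumsum(rng.standard_normal(1600)) / fs
cursor_pos = np.roll(target_pos, 2)              # cursor[t] = target[t - 2 samples]

lags_s, r = lagged_xcorr(target_pos, cursor_pos, max_lag_s=0.5, fs=fs)
latency_s = lags_s[np.argmax(r)]                 # positive lag: cursor lags the target
```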
Movement parameter tuning curves
We estimated tuning curves for each movement parameter with a single-sample, sliding-window, linear regression approach13,22. The regression approach is outlined in Figure 3e. At different lags, a partial least squares (PLS)23 estimator was used to decode each movement parameter from the EEG within the sliding window (one sample). This modelling approach implied that the relevant activity in the signal used for decoding (EEG) had to be in the same frequency range as the signal to be decoded (e.g., horizontal cursor velocity)24. To extract the relevant activity in the frequency range (0.3 to 0.6 Hz) of the target and cursor trajectories, we bandpass-filtered the EEG (Supplementary Fig. S3 shows power spectral densities of the bandpass-filtered EEG and the movement parameters). In a cross-validation scheme, we computed correlations between the signals to be decoded (e.g., horizontal cursor velocity) and their estimates for each lag to generate the tuning curves.
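As a rough illustration of this single-sample, lag-shifted decoding scheme, the sketch below uses scikit-learn's PLSRegression in place of the SIMPLS estimator used in the study; band-pass filtering is assumed to have been applied beforehand, and all data here are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold

def tuning_curve(eeg, y, lags, n_components=10, n_splits=5):
    """Cross-validated correlation between y and its estimate, decoded from the
    single EEG sample shifted by each lag (negative lag: EEG leads y).
    Boundary samples wrapped by np.roll are ignored for brevity."""
    curve = []
    for lag in lags:
        X = np.roll(eeg, -lag, axis=0)
        rs = []
        for tr, te in KFold(n_splits, shuffle=True, random_state=0).split(X):
            pls = PLSRegression(n_components=n_components).fit(X[tr], y[tr])
            rs.append(np.corrcoef(y[te], pls.predict(X[te]).ravel())[0, 1])
        curve.append(np.mean(rs))
    return np.array(curve)

# Synthetic example: 64-channel "EEG" weakly encoding a slow signal y
rng = np.random.default_rng(1)
y = np.sin(2 * np.pi * 0.45 * np.arange(2000) / 10.0)    # 0.45 Hz signal at 10 Hz sampling
eeg = rng.standard_normal((2000, 64)) + 0.3 * y[:, None]
lags = np.arange(-5, 6)                                   # -0.5 s to 0.5 s in 0.1 s steps
curve = tuning_curve(eeg, y, lags)
```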
Grand average movement parameter auto-/cross-correlation curves, and movement parameter tuning curves. (a–d) Grand average stimuli auto- and cross-correlations. (a) Auto- and cross-correlation curves of the horizontal components relative to the horizontal cursor position during execution (solid lines) and observation (dashed lines). Movement parameters are color-coded. Shaded areas represent the standard error of the mean. Cross-correlations were evaluated for lags ranging from −0.5 s (leading relative to cursor position) to 0.5 s (lagging) in 0.1 s steps. (b) Auto- and cross-correlation curves of the horizontal components relative to the horizontal cursor velocity. (c) Auto- and cross-correlation curves of the vertical components relative to the vertical target position. (d) As in (c), relative to the vertical target velocity. (e) Outline of the regression approach. After EEG preprocessing (including bandpass-filtering), a sliding window (one sample) was used to decode the movement parameters at different lags. (f–i) Grand average correlations between movement parameters and their estimates at different lags (tuning curves). (f) Tuning curves for the horizontal target position (blue) and velocity (orange). The mean and its standard error summarize the results for execution (solid lines), observation (dashed lines) and their paired difference (dash-dotted lines). Cross-correlation peaks between target position and velocity were used to align the time-lag axes. Lags with significant differences between conditions (paired Wilcoxon signed-rank tests, FDR adjustment for 88 comparisons, 0.05 significance level) are highlighted (*). (g) Tuning curves for the horizontal cursor position (violet) and velocity (green). (h) Tuning curves for the vertical target position (blue) and velocity (orange). (i) Tuning curves for the vertical cursor position (violet) and velocity (green).
Figures 3f–i summarize the grand average tuning curves for both conditions. Due to the independence between the horizontal and vertical components (Fig. 2), the tuning curves in Figures 3f–i are complementary. For both components, the grand average correlations ranged from 0.2 to 0.4. We used a shuffling approach to estimate chance levels for each participant. The chance levels were similar across components, conditions and lags (target position rchance = 0.13 ± 0.003, target velocity rchance = 0.12 ± 0.002, cursor position rchance = 0.12 ± 0.003, cursor velocity rchance = 0.10 ± 0.002). Compared to chance level, the observed correlations were significant for all participants, components, conditions, movement parameters and lags. As in Figures 3a–d, we aligned the tuning curves (Fig. 3f–i) according to the peak cross-correlations between pairs of movement parameters. After the alignment, we observed three effects.
As a first effect, we found that in the observation condition the tuning curves in Figures 3f–i (dashed lines) were modulated by the auto-/cross-correlation with the target position. That is, an increase in the tuning curve of a movement parameter coincided with an increase in the absolute auto-/cross-correlation between that movement parameter and the target position signal. We observed this effect for all movement parameters due to the dependencies between them. The dependencies are reflected in the auto-/cross-correlation curves (Fig. 3a–d). For example, the tuning curve of the vertical target position (Fig. 3h, blue dashed line) exhibited a similar waveform to the target position's autocorrelation curve (Fig. 3c, blue line). The tuning curve of the horizontal cursor position (Fig. 3g, violet dashed line) and its cross-correlation curve with the target position (Fig. 3a, blue line) represent another example. The size of the effect was approximately 0.1 for both components and maximal for the target position. In the execution condition (solid lines), we detected the same modulation. However, it was partially masked by the other effects.
The second effect observed concerns the vertical component (Fig. 3h,i). The paired differences between execution and observation conditions (dash-dotted lines) exhibited a positive effect on all movement parameters and lags. That is, in the execution condition, the low-frequency EEG contained significantly more information about the movement parameters of the vertical component. The effect was largest for vertical cursor position and velocity with an average difference in correlation of 0.05 (Fig. 3i, violet and green dash-dotted lines).
The third effect concerned the differences in tuning curves for both components (Fig. 3f–i, dash-dotted lines). The differences were modulated by the absolute auto-/cross-correlation between a movement parameter and cursor velocity (Fig. 3a–d, green lines). The effect was prominent for the horizontal component and largely masked by the second effect for the vertical component. For example, the difference in tuning curves for the horizontal cursor position (Fig. 3g, violet dash-dotted line) resembled the absolute value of its cross-correlation curve with the horizontal cursor velocity (Fig. 3a, green line). The size of the effect was maximal (approx. 0.07) for the horizontal cursor velocity at lag 0 (Fig. 3g, green dash-dotted line). Taken together, we inferred that the extracted EEG carried significantly more information about the instantaneous (lag 0 s) cursor velocity in the execution condition.
We were also interested in assessing which brain areas encoded more information in the execution condition than in the observation condition. To determine which brain areas contributed to the third effect, we selected the cursor velocity decoder models at a lag of 0 s as representatives and computed their associated activation patterns25. The patterns were subsequently mapped to the cortical surface by applying EEG source imaging26,27 on a template head model. In source space, we computed pairwise differences between the conditions for the Euclidean norm of each voxel (see Methods).
Figure 4 depicts the grand average difference in pattern norms for the horizontal (Fig. 4a) and vertical (Fig. 4b) cursor velocities at lag 0. We defined eight anatomical regions of interest (ROIs) to span areas related to the fronto-parietal reaching network7,9: the dorsomedial occipital cortex (DMOC), the superior parietal lobule (SPL), and the fronto-central (FC) and primary sensorimotor (SM) areas of both hemispheres (Fig. 4c). We summarized the pattern activity of each ROI as the average of its voxels. Figures 4d,e depict the distribution of both horizontal and vertical components for all participants. Regarding the horizontal component (Fig. 4d), we observed a positive effect in the FC and left SM areas. For the vertical component (Fig. 4e), we observed positive effects in the right SPL and both FC areas. Considering their positive sign, these results indicate that the activity in these areas contained more information about the instantaneous cursor velocity in the execution condition.
Grand average pattern activity difference between conditions in source space for single-lag (lag 0) cursor velocity decoder models. (a) Horizontal cursor velocity pattern. Voxel color indicates the sign of the difference in norms; positive (red) indicates larger pattern activity in the execution condition. Voxels with a difference in norms less than half of the absolute maximum are shaded in gray to emphasize the sites with the largest effects. (b) Vertical cursor velocity pattern. (c) Anatomical regions of interest (ROIs), covering the dorsomedial occipital cortex (DMOC), superior parietal lobule (SPL), fronto-central (FC) and primary sensorimotor (SM) areas of both hemispheres. (d) Density estimates of the differences in ROI activity across participants for the horizontal cursor velocity. Each point represents one participant. Density curves follow the ROI color-coding scheme. (e) As in (d) for the vertical cursor velocity.
In Figure 5 we show the difference in pattern norms for all single-lag models (Fig. 3f–i), to demonstrate how the differences in tuning curves are reflected on the cortical surface. Negative lags indicated leading brain activity (causal tuning), while positive lags indicated lagging brain activity (anti-causal tuning). The difference in activation patterns in fronto-central and contralateral sensorimotor areas was tuned to the horizontal and vertical cursor velocity in the [−0.5, 0.1] s interval and peaked around −0.1 to −0.2 s (Fig. 5b,d; bottom rows). As before, due to the temporal dependence between the position and velocity signals (Fig. 3a–d), we also observed tuning effects for the position signals. The difference in activation patterns was anti-causally tuned to cursor position for lags in the range [0, 0.5] s (Fig. 5a,c; bottom rows). Overall, the strength of the differences was more pronounced for the horizontal component. Similar to the cursor velocity pattern at lag 0 (Fig. 4b,e), we observed a positive effect in SPL only on the vertical component movement parameters (Fig. 5c,d). With respect to vertical cursor velocity (Fig. 5d; bottom row), the positive effect started in the right SPL at lag −0.3 s, peaked at lag 0 s, subsequently translated to the left SPL and faded at lag 0.3 s.
Grand average pattern activity differences between conditions for all single-lag decoder models. (a) Single-lag decoder patterns for the horizontal target (top) and cursor (bottom) positions for lags ranging from −0.5 s (brain activity leading relative to the position signals) to 0.5 s (brain activity lagging). As before, cross-correlation peaks between the target and cursor positions were used to align the time-lag axes. The red color indicates larger voxel activity in the execution condition. (b) Single-lag decoder patterns for the horizontal target (top) and cursor (bottom) velocities. (c) As in (a) for the vertical positions. (d) As in (b) for the vertical velocities.
Multiple-lag cursor velocity prediction
To exploit the tuning of neural activity over multiple lags, we extended the feature set by using multiple samples in the sliding window, linear regression approach. We evaluated sliding windows covering the samples at lags [−0.1, 0] s to [−0.5, 0] s in 0.1 s steps to predict the horizontal and vertical cursor velocities. The correlations between the recorded and decoded cursor velocities initially increased, but became saturated for windows exceeding [−0.3, 0] s (Fig. S4). For the [−0.3, 0] s window, the grand average test set correlations were rexe = 0.40 ± 0.02, robs = 0.36 ± 0.04 for the horizontal component and rexe = 0.41 ± 0.03, robs = 0.33 ± 0.03 for the vertical component.
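One possible way to build such a multiple-lag feature set is to stack time-shifted copies of the EEG, as sketched below. Edge handling and names are illustrative; in the study, samples at several lags were concatenated within epoched trials (see Methods).

```python
import numpy as np

def embed_lags(eeg, n_lags):
    """Concatenate the current and the previous (n_lags - 1) EEG samples into one
    feature vector per time point; the first rows are padded with the first sample.

    eeg : (n_samples, n_channels) -> (n_samples, n_channels * n_lags)
    """
    cols = []
    for lag in range(n_lags):
        shifted = np.roll(eeg, lag, axis=0)
        shifted[:lag] = eeg[0]               # crude edge handling for the sketch
        cols.append(shifted)
    return np.hstack(cols)

# A [-0.3, 0] s window at 10 Hz corresponds to 4 lagged copies per channel
eeg = np.random.default_rng(2).standard_normal((2000, 64))
X_multi = embed_lags(eeg, n_lags=4)          # shape (2000, 256)
```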
To visualize the decoded cursor velocities, we selected a representative trajectory and summarized the results over participants. Figures 6a–c show the recorded target, cursor and decoded cursor velocities for this particular trajectory in both conditions. The small standard error around the recorded cursor velocities (green shaded area) demonstrates that the participants tracked the target consistently. Compared to the recorded cursor velocities, the decoded cursor velocities exhibited more variance over participants (Fig. 6c). Still, the grand average decoded cursor velocities were strongly correlated with the recorded ones for both components and conditions. The grand average correlations were rexe = 0.83 ± 0.02, robs = 0.82 ± 0.02 on average for the 90 trajectories for the horizontal component, and rexe = 0.85 ± 0.02, robs = 0.80 ± 0.03 for the vertical component. This reflects a 0.40 gain in correlation at the group level compared to the results at the participant level.
Grand average cursor velocity prediction for a [−0.3, 0] s estimation window. (a–c) Illustrations of executed and decoded cursor velocities for a specific target trajectory. (a) Grand average horizontal target (orange line), cursor (green line) and decoded cursor velocity in execution (gray solid line) and observation (gray dashed line) conditions. Shaded areas summarize the standard error of the mean. (b) As in (a) for the vertical component. (c) 2D representation for single time points. Dots indicate the group-level average. Dispersion over participants is summarized by the square root of the covariance matrix. (d–g) Grand average multiple-lag decoder patterns. (d) Horizontal cursor velocity patterns in the execution (left) and observation (right) conditions. Pattern activity norms were averaged over lags. The voxel color indicates strength of activity. (e) Vertical cursor velocity patterns in the execution (left) and observation (right) conditions. (f) Difference between lag-averaged pattern norms for the horizontal component. The voxel color indicates the sign and strength of the difference in the pattern activity. (g) As in (f) for the vertical component.
As before, we computed activation patterns and projected them to the cortex. Figures 6d,e depict the grand average patterns (averaged over participants and lags), and Table 1 lists the p-values of non-parametric permutation paired t-tests for the eight ROIs. Compared to chance level, the pattern activity was significant in the DMOC areas in both conditions. SPL pattern activity was significant in the execution condition and largely also in the observation condition; the effect for the vertical cursor velocity did not reach significance in the observation condition. FC pattern activity was generally larger in the execution condition (Fig. 6f,g). The differences observed between execution and observation conditions were in line with the single-lag results (Fig. 3f–i). They were significant in right FC and left SM for the horizontal component, and in right SPL for the vertical component. The effects in left FC (and right FC for the vertical component) did not reach significance.
Table 1 Significance of ROI activation for multiple lag cursor velocity decoders (exe vs. shuffled exe, obs vs. shuffled obs, and exe vs. obs).
Visual tracking analysis
We examined the EOG signals to compare the visual tracking behavior between conditions by computing cross-correlations between the horizontal and vertical target position and the associated EOG derivative signals. The cross-correlations peaked at lag 0 for both conditions, indicating that the participants' gaze was focused on the target's instantaneous position. By comparing the correlation values at lag 0, we detected significant differences between conditions and components (significance levels were Bonferroni corrected from 0.05 to 0.01 for 5 two-sided paired Wilcoxon signed-rank tests). We found a slightly lower degree of correlation in the execution condition compared to the observation condition for the horizontal component (rexe = 0.88 ± 0.01, robs = 0.90 ± 0.02, p = 0.00537), while the degree of correlation was higher in the execution condition for the vertical component (rexe = 0.79 ± 0.02, robs = 0.70 ± 0.04, p = 0.00153). Within conditions, the degrees of correlation were higher for the horizontal component (pexe = 0.00012; pobs = 0.00006).
In the execution condition, the VM task required the processing of visual feedback about the cursor in relation to the moving target, while in the observation condition, the cursor was not task-relevant. Previous behavioral studies have reported a reduced blink rate (BR) when more visual information is being processed28,29. In our study, we detected blinks by thresholding the vertical EOG derivative19. As predicted, we found a significantly lower BR in terms of blinks per second (bps) in the execution condition compared to the observation condition (BRexe = 0.019 ± 0.006 bps, BRobs = 0.028 ± 0.006 bps, p = 0.0015).
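For illustration only, blink events in a vertical EOG channel could be counted roughly as follows; the study relied on the procedure in ref. 19, and the threshold and minimum gap used here are illustrative values, not those of the study.

```python
import numpy as np

def blink_rate(veog, fs, threshold_uv=200.0, min_gap_s=0.5):
    """Crude blink rate (blinks per second) from a bipolar vertical EOG signal.

    veog : (n_samples,) vertical EOG derivation in microvolts.
    Blinks appear as large positive deflections exceeding threshold_uv.
    """
    above = np.flatnonzero(veog > threshold_uv)
    if above.size == 0:
        return 0.0
    # merge supra-threshold samples closer than min_gap_s into a single blink event
    n_blinks = 1 + int(np.sum(np.diff(above) > min_gap_s * fs))
    return n_blinks / (len(veog) / fs)

# Synthetic example: 60 s of noise with three inserted "blinks"
fs = 200.0
veog = np.random.default_rng(3).standard_normal(int(60 * fs)) * 20.0
for onset in (1000, 4000, 9000):
    veog[onset:onset + 40] += 400.0
print(blink_rate(veog, fs))                  # ~0.05 blinks per second
```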
We have presented a novel paradigm, tailored to study the tuning characteristics of human low-frequency EEG to target and cursor (end-effector) positions and velocities in the presence of eye movements. Our paradigm allowed us to distinguish between two conditions with similar tracking dynamics, but with a different origin of cursor control. By not inhibiting eye movements during the PTT, we could study tracking movements in a natural fashion and focus on the effects related to the involvement of the upper limb. We presented evidence that this involvement indeed influences the spatiotemporal expression of information about end-effector positions and velocities in the low-frequency EEG activity.
In a PTT, participants typically manipulate an end-effector to minimize its distance to a target. Typically, task compliance results in high cross-correlations between movement parameters of the same type (e.g, position). However, the cross-correlations between positions and velocities depend on the properties of the target trajectories. We created a trade-off between task difficulty, bandwidth and steepness of the increase in correlation over lags. By using the 0.3 to 0.6 Hz band, we could study EEG in a similar frequency range as those examined in previous studies13,22, and shift the peak in cross-correlation between target velocity and position to 0.55 s. After accounting for the dependence between the movement parameters by aligning the tuning curves, we determined one effect in both conditions and two effects in the difference between conditions.
The effect observed in both conditions and components concerned the modulation of the tuning curves by the amount of cross-correlation between a movement parameter and the target position. This effect led us to infer that information about the instantaneous target position was encoded in the low-frequency EEG. In the observation condition, the effect was prominent, while it was partially masked by the other effects in the execution condition. The target position was particularly relevant during the PTT. In both conditions, the participants had to keep their gaze fixated on the target. The fact that a peak in the correlation between target position and EOG derivatives occurred at lag 0 confirmed that the participants were able to accomplish the task. This finding is in accordance with findings for human smooth pursuit behavior for a bandlimited, pseudo-randomly moving stimulus30. As a consequence, eye movement artifacts were also phase locked to the target position signal. To assess which sources contributed to the observed effect, we computed patterns for target position decoders at lag 0 and projected them to source space (Fig. S5). The grand average patterns for both conditions indicated that the contributions originated from a combination of brain activity, with the largest predictive activity in parieto-occipital areas, and residual eye artifacts.
As with the modulation of the tuning curves by target position in both conditions, we observed a modulation of the differences between conditions by cursor velocity. By mapping single-lag cursor velocity model patterns to source space and computing differences between conditions, we observed a stronger effect during the execution condition in both FC and contralateral SM ROIs (Fig. 5). The FC ROIs covered dorsal premotor (PMd) and supplementary motor areas (SMA). Their involvement in reaching is in accordance with the findings of imaging studies in humans31. The difference in activation patterns in FC and contralateral SM ROIs was sustained over multiple lags, with its peak activity leading cursor velocity by approximately 150 ms. The delay can be reduced to about 95 ms by accounting for the 55 ms delay between the hand and cursor movement, introduced by the online processing system. The remaining 95 ms could be explained by motor output delays32. Estimating the actual motor output delay is not a straightforward task, since it depends on the task demands and the type of perturbation, among other factors32. However, Paninski et al. studied the tuning of movement parameters in a comparable PTT15. They investigated M1 single unit activity in non-human primates and reported that neural activity was tuned to cursor velocity in a [−400, 400] ms lag range (peak at −100 ms; neural activity leading). Thus, a stronger degree of tuning of neural activity to cursor velocity in motor areas during the execution condition offers a plausible explanation. An alternative explanation would be anti-causal tuning to the cursor position for lags in the range [0, 500] ms (Fig. 5a,c). Tuning to the cursor position peaked at 300 to 400 ms, which would reflect feedback processing (neural activity lagging). Experimental results on decoding movement parameters from spiking activity1,4 and local field potentials33 in behaving non-human primates, together with results of studies on human MEG34 and ECoG35, show that the tuning of SM activity to movement parameters peaks around 100 ms before the movement. Taken together, the differences in the activation patterns more likely reflect information about upcoming cursor velocities. Consequently, the observed effects for the cursor positions can be explained by the cross-correlations between the movement parameters (Fig. 3a–d).
The remaining effect concerned the vertical component alone. In the execution condition, we observed generally higher tuning curve correlations (Fig. 3h,i), a 0.1 higher correlation between the vertical EOG derivative and the target position, and a decrease in blink rate. Moreover, the activation patterns were significantly stronger in the SPL (Fig. 5b,d, Table 1). Relative decreases in the blink rate have been shown to be related to the processing of more visual information28 and more demanding tasks29. This reflects the difference between the visuomotor (VM) and oculomotor (OM) tasks studied here. Taken together, the behavioral and decoding results indicate greater engagement in tracking the vertical component signals in the VM task. We offer two non-exclusive explanations for this phenomenon. First, unlike the horizontal component, the vertical component mapping was not congruent; that is, forward hand movements were mapped to upward cursor movements. Therefore, the increase in SPL activity could be explained by the integration of incongruent proprioceptive and visual information. Second, we studied two stimuli moving in two uncorrelated dimensions, which meant that the oculomotor system had to keep track of both dimensions. The findings of behavioral studies30 and our results show that smooth pursuit is more accurate for the horizontal component. Accurate control of the upper limb in the VM task could require the visual system to extract more information about the vertical component and, as a side effect, improve smooth pursuit. Since the SPL is involved in smooth pursuit control36, this could explain the increase in information about the vertical component.
By combining multiple lags to predict cursor velocities, we could raise the grand average decoder correlations by around 0.05, to 0.4 in the execution condition and to 0.35 in the observation condition. These correspond to correlations reported in previous EEG decoding studies in center-out13 and continuous movement tasks37,38. By averaging over participants, the correlations improved drastically to 0.8. The 2D plots in Figure 6c illustrate the reason for this effect. The grand average decoded cursor velocity is frequently in the same quadrant as the recorded cursor velocity; however, the variance among participants is considerable. Thus, the individual correlations are substantially lower. We inferred that the signal-to-noise ratio could be drastically improved by averaging the response over participants and, consequently, that the low-frequency EEG strongly correlates with positions and velocities at the group level in both conditions.
The grand average multiple-lag cursor velocity decoder model patterns (Fig. 6d,e) demonstrate that the contributing sources were primarily of cortical origin in both conditions. Therefore, it is unlikely that the cursor velocity decoders relied on residual eye movement artifacts. It is also unlikely that arm or neck movement artifacts contributed, considering that there was no arm movement in the observation condition, and that the differences in decoder patterns (Fig. 6f,g) were primarily located in contralateral primary sensorimotor, fronto-central and parietal areas. In both conditions and in both components, pattern activity was strongest in the parieto-occipital and parietal areas (Fig. 6d,e). The associated DMOC and SPL ROIs showed significantly stronger pattern activity compared to the patterns of shuffled data (Table 1). This is in accordance with an increase in blood oxygenation level dependent (BOLD) activity in these areas during executed and observed reaching movements39. Moreover, the strong tuning of parieto-occipital and parietal areas in both conditions, reported here, is in accordance with the modulation of BOLD activity by movement direction in an fMRI adaptation study10. Since there was no significant difference in the DMOC ROIs between conditions (Table 1; exe - obs), we inferred that the predictive activity in the parieto-occipital areas was not specific to the VM task. That is, the predictive activity in parieto-occipital areas did not require the involvement of the upper limb.
In conclusion, we demonstrated that low-frequency EEG carries information about target and cursor positions and velocities, which is primarily encoded in fronto-parietal and parieto-occipital networks. By contrasting the decoder patterns of the VM and OM tracking tasks, we found that the degree of tuning of fronto-central and contralateral primary sensorimotor areas to the instantaneous cursor velocity was significantly larger in the VM tracking task. The temporal tuning characteristics indicate that neural activity led cursor velocity by approximately 150 ms (hand velocity by 95 ms). Altogether, the presented results on the spatial and temporal tuning characteristics of the low-frequency EEG extend the findings of previous decoding studies. Moreover, we believe that it is possible to transfer our findings to individuals with tetraplegia, since the participants in this study moved their right arm only during the VM task, but the decoder correlations were clearly above chance level in both tracking tasks. Future closed-loop studies need to investigate whether the tuning characteristics of low-frequency EEG can be exploited to control an end-effector and whether the control skill can be improved.
Fifteen people, aged 23.8 ± 0.8 years, participated in this study. All received payment to compensate for their participation. Nine of the participants were female. All participants self-reported normal or corrected-to-normal vision and being right-handed. Eleven participants had previously participated at least once in an EEG experiment. All signed an informed consent form after they had been instructed about the purpose and procedure of the study. The experimental procedure conformed to the Declaration of Helsinki and was approved by the ethics committee of the Medical University of Graz (approval number 29-058 ex 16/17).
Experimental set-up
Figure 1a depicts the recording environment. Participants sat in a shielded room, positioned 1.4 m away from a computer screen. Their left arm was supported by an arm rest, while the right arm was supported by a planar surface at the same height. To reduce friction between the right arm and the surface, participants were asked to wear a sleeve and place their hand on a circular pad. A LeapMotion controller (LeapMotion Inc., USA), placed 20 cm above the hand, was used to record the right hand's palm position. After participants found a comfortable resting position, the right hand's palm position was mapped to the origin (center of the screen) in the virtual environment. In analogy to the interaction with a computer by using a computer mouse, we decided to map rightward/forward hand movements to rightward/upward cursor movements. In order to create a trade-off between movement range and movement/muscular artifacts in the EEG, we mapped a circle with a 5 cm radius around the resting position to a circle with a 16 cm radius on the screen. The limits of the circle on the screen were indicated by the bounds of a virtual grid. For example, by moving their hand 5 cm to the right, the participants could make the cursor touch the grid on the right side.
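In effect, this mapping is a fixed gain applied to the palm position relative to the resting position, plus an axis swap for the vertical screen direction. A minimal sketch, with illustrative function and variable names, is:

```python
# Gain mapping from palm position (metres, relative to the calibrated resting
# position) to cursor position on the screen. Rightward hand movement maps to
# rightward cursor movement, forward hand movement to upward cursor movement.
HAND_RADIUS_M = 0.05        # 5 cm radius around the resting position
CURSOR_RADIUS_M = 0.16      # 16 cm radius on the screen
GAIN = CURSOR_RADIUS_M / HAND_RADIUS_M   # = 3.2

def hand_to_cursor(hand_right_m, hand_forward_m):
    """Return (cursor_right_m, cursor_up_m) on the screen."""
    return GAIN * hand_right_m, GAIN * hand_forward_m

# Moving the hand 5 cm to the right puts the cursor at the right grid boundary
print(hand_to_cursor(0.05, 0.0))         # (0.16, 0.0)
```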
The experimental procedure consisted of 4 blocks, lasting 3 hours in total. In the first block, participants were asked to familiarize themselves with the paradigm (approx. 10 min). In the second and fourth block, eye artifacts (blinks and eye movements) and resting activity were recorded for 5 min. The detailed procedure is described in19. In the third block, participants performed the main experimental task according to the paradigm illustrated in Figure 1b. Each trial implemented a center-out reaching task followed by a PTT. A yellow target stimulus marked the beginning of a trial. It triggered the participants to fixate their gaze upon the target. The paradigm distinguished between two conditions. In the observation condition (blue target), participants merely tracked the moving target visually while the computer replayed a previous cursor trajectory. In the execution condition (green target), participants additionally had to minimize the distance between the target and cursor by moving their right hand and thereby the cursor. A total of 180 trials (90 per condition, pseudo randomly distributed) were recorded in 20 runs with short breaks in between. We additionally recorded 180 short trials (90 per condition, pseudo randomly distributed with the other trials within the 20 runs). A short trial ended after the center-out task. The data recorded during short trials were not used in this analysis. Supplementary Movie S1 shows the tracking behavior of representative participants in both conditions during long (center-out + PTT) and short (center-out) trials.
Target trajectories were generated offline and were identical across participants. Twelve base target trajectories were sampled from pink noise, which was band-pass filtered in the frequency range of 0.3 to 0.6 Hz according to the procedure described by Paninski et al.15. We sampled the horizontal and vertical components independently so that they were uncorrelated. The trajectory pool was extended by adding rotated (90°, 180° and 270°) and mirrored versions of the base target trajectories. This yielded a total of 96 trajectories; 90 of these were randomly distributed over the 180 trials (once per condition). This procedure ensured uncorrelated position and velocity signals at lag 0 (Fig. S1).
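A minimal sketch of this trajectory generation is given below. The exact noise generator, filter type and order, sampling rate and mirroring convention are assumptions for illustration; the study only specifies pink noise, the 0.3 to 0.6 Hz band, and the rotation/mirror extension of 12 base trajectories to 96.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def pink_noise(n, rng):
    """Approximate 1/f (pink) noise by shaping white noise in the frequency domain."""
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                                  # avoid division by zero at DC
    return np.fft.irfft(spec / np.sqrt(f), n)

def base_trajectory(duration_s=16.0, fs=60.0, band=(0.3, 0.6), seed=0):
    """One 2-D target trajectory with independently sampled, band-limited components."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    sos = butter(4, [band[0], band[1]], btype="band", fs=fs, output="sos")
    horiz = sosfiltfilt(sos, pink_noise(n, rng))
    vert = sosfiltfilt(sos, pink_noise(n, rng))
    return np.column_stack([horiz, vert])

def rotate(traj, deg):
    """Rotate a (n, 2) trajectory by deg degrees."""
    th = np.deg2rad(deg)
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return traj @ R.T

pool = [base_trajectory(seed=s) for s in range(12)]                    # 12 base trajectories
pool += [rotate(t, d) for t in pool for d in (90, 180, 270)]           # -> 48
pool += [t * np.array([-1.0, 1.0]) for t in pool]                      # mirrored -> 96
```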
The results of pilot studies revealed that the tracking dynamics varied among participants and over time. To achieve similar and participant specific tracking dynamics between the conditions, we implemented an adaptive approach. In observation condition trials, the most recent cursor trajectory of all matching versions (original, rotated and/or mirrored) of the associated base target trajectory was selected for replay. Details about the cursor trajectory replay procedure are described in the supplementary methods.
Data recording and pre-processing
All data were recorded using the lab streaming layer (LSL) protocol (https://github.com/sccn/labstreaminglayer). Sixty-four active electrodes (actiCAP, Brain Products GmbH, Germany) were placed on the scalp according to the 10–10 system. The reference and ground electrodes were positioned at the right mastoid and AFz, respectively. Six additional active electrodes were placed at the superior, inferior and outer canthi of the right and left eyes to record the EOG. Figure S2 visualizes the locations of all 70 electrodes. EEG and EOG data were recorded at 1 kHz (BrainAmp, Brain Products GmbH, Germany). The paradigm was implemented in Python 2.7 based on the simulation and neuroscience application (SNAP) platform (https://github.com/sccn/SNAP) and the 3D engine Panda3D (https://www.panda3d.org). The screen position signals of the visual stimuli (cursor, target) were recorded via LSL at 60 Hz and synchronized offline with the EEG signals by means of a photodiode, which captured an impulse on the screen at the start of each trial. All signals were then resampled to 200 Hz.
The pre-processing pipeline is depicted in Figure 7 and was implemented in Matlab (Matlab 2015b, Mathworks Inc., USA) and the open source software EEGLAB40 version 14.1.1. EEG data were high-pass filtered (0.25 Hz cut-off frequency, Butterworth filter, eighth order, zero-phase). Data cleaning was initiated by a spherical interpolation of channels with poor signal quality (visual inspection). We interpolated 2.1 channels on average (Table S1). Eye movements and blinks were attenuated by applying the artifact subspace subtraction algorithm (outlined in the eye artifact correction subsection). The EEG channels were subsequently converted to a common average reference (CAR). We then applied robust principal component analysis (Robust PCA)41 to attenuate occasional electrode pops and low-frequency drifts. The motivation behind Robust PCA is to separate a data matrix X (raw EEG) into a sum of a low-rank matrix L (EEG) and a sparse matrix S (occasional single or few-electrode outliers, e.g., pops). The optimization problem can be formulated as
$$\min_{\mathbf{L},\,\mathbf{S}} \|\mathbf{L}\|_{*} + \lambda \|\mathbf{S}\|_{1} \quad \text{s.t.} \quad \mathbf{X} = \mathbf{L} + \mathbf{S}$$
and solved iteratively41. We fixed the regularization parameter \(\lambda = 1.5/\sqrt{N}\), with N being the number of samples. All subsequent processing steps were applied to the extracted low-rank data matrix L. We epoched the data into 14 s trials, starting 1 s after tracking onset. Trials were marked for rejection if (1) the EEG signal of any channel exceeded a threshold of ±200 µV or had an abnormal probability or kurtosis (more than 6 standard deviations beyond the mean), (2) the correlation of any EOG derivative (HEOG/VEOG) with the target position (horizontal/vertical) was improbable (more than 4 standard deviations beyond the mean), or (3) a tracking error appeared (i.e., if hand tracking was lost or jerky). We applied the joint probability and kurtosis rejection criteria twice to detect gross outliers in the first iteration and subtle outliers in the second iteration. All criteria combined marked an average of 16% of the trials for rejection. Supplementary Table S1 lists detailed information for each participant. Before actually rejecting trials, a low-pass filter (0.8 Hz cut-off frequency, Butterworth filter, fourth order, zero-phase) was used to extract EEG signals in the frequency range of the target and cursor movements.
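The Robust PCA decomposition above can be solved, for example, with the inexact augmented Lagrange multiplier method described in ref. 41. The sketch below uses common default update and stopping rules, which are not necessarily the exact settings of the study; only the regularization parameter follows the value stated above.

```python
import numpy as np

def robust_pca(X, lam=None, tol=1e-7, max_iter=500):
    """Decompose X (channels x samples) into a low-rank part L and a sparse part S."""
    n_samples = X.shape[1]
    if lam is None:
        lam = 1.5 / np.sqrt(n_samples)             # regularization as stated in the text
    norm_X = np.linalg.norm(X, "fro")
    spec_norm = np.linalg.norm(X, 2)
    Y = X / max(spec_norm, np.abs(X).max() / lam)  # dual variable initialization
    mu, rho = 1.25 / spec_norm, 1.5
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(max_iter):
        # singular value thresholding gives the low-rank update
        U, sig, Vt = np.linalg.svd(X - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # element-wise soft thresholding gives the sparse update
        T = X - L + Y / mu
        S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Y = Y + mu * (X - L - S)
        mu *= rho
        if np.linalg.norm(X - L - S, "fro") / norm_X < tol:
            break
    return L, S

# Synthetic example: low-rank data with one simulated electrode pop
rng = np.random.default_rng(4)
X = rng.standard_normal((64, 5)) @ rng.standard_normal((5, 2000))
X[10, 500:520] += 50.0
L_clean, S_pops = robust_pca(X)
```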
Stimuli position signals were low-pass filtered at 5 Hz (cut-off frequency; moving average finite impulse response filter, 17 filter taps, zero-phase) before velocities were extracted by computing first-order finite differences. Thereafter, the brain and stimuli signals were merged and resampled at 10 Hz. Then, the previously marked trials were rejected. Optionally, samples at various lags were concatenated to extend the feature space before fitting a regression model.
Signal pre-processing pipeline. After synchronization, brain signals were resampled, high-pass filtered and bad channels were spherically interpolated. Then, eye artifacts were attenuated19, followed by a conversion to a common average reference. Next, Robust PCA41 was applied to attenuate single electrode outliers. A subsequent low-pass filter was applied to extract the EEG signals in the frequency range of the target and cursor movements. Stimuli position signals were low-pass filtered before computing velocities and then concatenated to the EEG. After epoching, marked trials were rejected. Samples were optionally concatenated to extend the feature space for PLS regression.
Eye artifact correction
The eye artifact correction approach is based on a block design19,42. We fitted a linear eye artifact model to the recordings of blocks 2 and 4 (eye artifacts and resting brain activity) and applied the correction to the data of block 3.
The eye artifact model assumes a linear and stationary mixing of the eye artifact sources \(\mathbf{s}^{(a)}(t)\) (\(n_{\mathrm{artifact\,sources}} \times 1\)) with brain activity, denoted as noise \(\mathbf{n}(t)\) (\(n_{\mathrm{channels}} \times 1\)). The activity at the EEG and EOG channels \(\mathbf{x}(t)\) (\(n_{\mathrm{channels}} \times 1\)) is then
$$\mathbf{x}(t) = \mathbf{A}^{(a)} \mathbf{s}^{(a)}(t) + \mathbf{n}(t)$$
with an \(n_{\mathrm{channels}} \times n_{\mathrm{artifact\,sources}}\) mixing matrix \(\mathbf{A}^{(a)}\). The brain activity \(\mathbf{x}_{c}(t)\) can be recovered by subtracting the eye artifact activity at each channel
$$\mathbf{x}_{c}(t) = \mathbf{x}(t) - \hat{\mathbf{A}}^{(a)} \hat{\mathbf{s}}^{(a)}(t) \approx \mathbf{n}(t)$$
if \(\hat{\mathbf{A}}^{(a)}\) and \(\hat{\mathbf{s}}^{(a)}(t)\) are good estimates of the unknown true mixing matrix and eye artifact sources. We applied the artifact subspace subtraction algorithm19,43 to compute the estimates. The algorithm estimates the source signals \(\hat{\mathbf{s}}^{(a)}(t)\) by linearly combining all channels
$$\hat{\mathbf{s}}^{(a)}(t) = \mathbf{V}^{(a)} \mathbf{x}(t)$$
with an \(n_{\mathrm{artifact\,sources}} \times n_{\mathrm{channels}}\) unmixing matrix \(\mathbf{V}^{(a)}\). The correction in Equation (3) then simplifies to
$$\mathbf{x}_{c}(t) = \mathbf{x}(t) - \hat{\mathbf{A}}^{(a)} \hat{\mathbf{s}}^{(a)}(t) = (\mathbf{I} - \hat{\mathbf{A}}^{(a)} \mathbf{V}^{(a)})\, \mathbf{x}(t)$$
The eye artifact model parameters (\(\hat{\mathbf{A}}^{(a)}\) and \(\mathbf{V}^{(a)}\)) were estimated in a two-step approach19. First, penalized logistic regression was used to estimate each eye artifact source signal (e.g., horizontal eye movements) and its associated mixing coefficients (columns of \(\hat{\mathbf{A}}^{(a)}\)). Second, given the mixing matrix \(\hat{\mathbf{A}}^{(a)}\) and the covariance matrix of the channels during resting brain activity \(\mathbf{R}_{n}\) (\(n_{\mathrm{channels}} \times n_{\mathrm{channels}}\)), the unmixing matrix \(\mathbf{V}^{(a)}\) can be computed via regularized weighted least squares43:
$$\mathbf{V}^{(a)} = (\hat{\mathbf{A}}^{(a)\mathrm{T}} \mathbf{R}_{n} \hat{\mathbf{A}}^{(a)} + \boldsymbol{\Lambda})^{-1} \hat{\mathbf{A}}^{(a)\mathrm{T}} \mathbf{R}_{n}$$
with \(\boldsymbol{\Lambda}\) being an \(n_{\mathrm{artifact\,sources}} \times n_{\mathrm{artifact\,sources}}\) diagonal regularization matrix. The original publication19 contains details about the model fitting procedure, the choice of regularization parameters and a comparison to state-of-the-art eye artifact correction approaches.
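Given an estimated mixing matrix and a resting-state covariance, the unmixing and subtraction steps above reduce to a few matrix operations. The following is a minimal sketch with random placeholder data; the number of artifact sources, the regularization weights and all names are illustrative.

```python
import numpy as np

def unmixing_matrix(A, Rn, reg):
    """Regularized weighted least-squares unmixing matrix V = (A' Rn A + Lambda)^-1 A' Rn.

    A   : (n_channels, n_artifact_sources) estimated mixing matrix
    Rn  : (n_channels, n_channels) covariance of resting (artifact-free) activity
    reg : (n_artifact_sources,) diagonal regularization weights (Lambda)
    """
    Lam = np.diag(reg)
    return np.linalg.solve(A.T @ Rn @ A + Lam, A.T @ Rn)

def correct_eeg(X, A, V):
    """Subtract the eye artifact subspace: x_c(t) = (I - A V) x(t).

    X : (n_channels, n_samples) EEG/EOG data to be corrected.
    """
    return (np.eye(X.shape[0]) - A @ V) @ X

# Placeholder example with 70 channels and three assumed artifact sources
rng = np.random.default_rng(5)
A = rng.standard_normal((70, 3))                     # e.g. horizontal, vertical, blink sources
Rn = np.cov(rng.standard_normal((70, 5000)))
V = unmixing_matrix(A, Rn, reg=np.full(3, 1e-2))
X_corrected = correct_eeg(rng.standard_normal((70, 1000)), A, V)
```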
Movement parameter estimation
As low-frequency EEG is strongly correlated over time and space, there is considerable multicollinearity among the extracted features. Partial least squares (PLS) regression44 is particularly suitable in this scenario. As in22, we fit one model per movement parameter, condition and participant.
Let \(\mathbf{X}\) be an \(F \times N\) matrix of \(F\) predictor variables with \(N\) samples (i.e., the EEG data), and let \(\mathbf{y}\) be a \(1 \times N\) vector representing the dependent variable (i.e., a particular movement parameter). The predictor variables are modelled as
$$\mathbf{X} = \mathbf{P}\mathbf{T} + \mathbf{E}$$
with \(\mathbf{T}\) representing a \(D \times N\) matrix of latent components and \(\mathbf{E}\) an \(F \times N\) matrix of additive independent and identically distributed (iid) noise. \(\mathbf{P}\), an \(F \times D\) matrix, projects the latent components \(\mathbf{T}\) to the observed predictors \(\mathbf{X}\). The goal of applying PLS regression is to find latent components \(\mathbf{T}\) that have maximal covariance with the dependent variable \(\mathbf{y}\), while reducing the dimension from \(F\) to \(D\). The dependent variable is then modelled as
$$\mathbf{y} = \mathbf{v}^{\mathrm{T}} \mathbf{T} + \mathbf{g}$$
with \(\mathbf{v}\) representing a \(D \times 1\) weight vector, and \(\mathbf{g}\) additive iid noise. Here, we applied the SIMPLS algorithm23 to estimate \(\mathbf{P}\) and \(\mathbf{v}\) for \(D = 10\) latent components. The estimates can be combined into an \(F \times 1\) weight vector to directly estimate the dependent variable
$$\hat{\mathbf{y}} = \hat{\mathbf{w}}^{\mathrm{T}} \mathbf{X}$$
from the predictor variables \(\mathbf{X}\).
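In Python, an analogous model can be fit with scikit-learn's PLSRegression. Note that scikit-learn implements NIPALS rather than SIMPLS, so the latent components differ slightly, although the resulting linear predictor has the same form; the data below are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
X = rng.standard_normal((2000, 64))                                   # EEG features (samples x channels)
y = X[:, :5] @ rng.standard_normal(5) + rng.standard_normal(2000)    # movement parameter

pls = PLSRegression(n_components=10, scale=False).fit(X, y)
w_hat = pls.coef_.ravel()                        # combined weight vector (one entry per feature)
y_hat = pls.predict(X).ravel()                   # estimate of the dependent variable
```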
The model was evaluated by applying a 5-fold cross-validation (CV) 10 times. That is, the data were randomly partitioned into 5 folds. The model parameters were then fit to 4 folds, and the model prediction was tested on the held-out fold by computing the Pearson correlation coefficient \(r_{y\hat{y}}\) between \(\mathbf{y}\) and \(\hat{\mathbf{y}}\). This was repeated until each fold had been tested once. Thereafter, the random partitioning was repeated another 9 times, resulting in 50 estimates of \(r_{y\hat{y}}\).
Chance level performance was estimated by applying the 5-fold CV to shuffled data. We broke the association between \(\mathbf{X}\) and \(\mathbf{y}\) while maintaining the correlation structure by randomly exchanging \(\mathbf{y}\) across trials. The shuffling and 5-fold CV procedure was repeated 100 times.
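The evaluation and the trial-shuffled chance level could be sketched as follows; trials are assumed to have equal length, and all names and the trial-block shuffling helper are illustrative rather than the study's implementation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold

def cv_correlations(X, y, n_repeats=10, n_splits=5):
    """Repeated k-fold CV correlations between y and its PLS estimate."""
    rs = []
    for rep in range(n_repeats):
        for tr, te in KFold(n_splits, shuffle=True, random_state=rep).split(X):
            pls = PLSRegression(n_components=10).fit(X[tr], y[tr])
            rs.append(np.corrcoef(y[te], pls.predict(X[te]).ravel())[0, 1])
    return np.array(rs)                              # n_repeats * n_splits estimates

def shuffle_trials(y, n_trials, rng):
    """Exchange the dependent variable across trials to break its link with X."""
    blocks = np.array_split(y, n_trials)             # trials of (nearly) equal length
    return np.concatenate([blocks[i] for i in rng.permutation(n_trials)])

# r_true   = cv_correlations(X, y)
# r_chance = np.concatenate([cv_correlations(X, shuffle_trials(y, n_trials, rng))
#                            for _ in range(100)])   # 100 shuffling repetitions
```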
To interpret the extracted models, we transformed the weight vectors to activation patterns25. We scaled the unit-less patterns45 with the standard deviation of \(\hat{\mathbf{y}}\) to express the patterns in terms of voltages. The scaled pattern associated with an estimated weight vector is then
$$\hat{\mathbf{a}} = \hat{\boldsymbol{\Sigma}}_{\mathbf{X}}\, \hat{\mathbf{w}}\, \hat{\sigma}_{\hat{\mathbf{y}}}^{-1}$$
with \(\hat{\boldsymbol{\Sigma}}_{\mathbf{X}}\) the empirical covariance matrix of the predictors and \(\hat{\sigma}_{\hat{\mathbf{y}}}\) the standard deviation of the estimated dependent variable. Analytical shrinkage regularization46 was applied to compute the estimate \(\hat{\boldsymbol{\Sigma}}_{\mathbf{X}}\). We then summarized the 50 CV models by computing the geometric median47 across their patterns. This procedure yielded a representative pattern per movement parameter, condition and participant. To summarize the patterns obtained from chance level models, we randomly picked the 50 patterns associated with 10 out of all 100 repetitions and computed their geometric median.
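The weight-to-pattern transformation above can be written in a few lines. In this sketch, scikit-learn's Ledoit-Wolf estimator is used as a stand-in for the analytical shrinkage estimator cited in the text, and the data are synthetic.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

def activation_pattern(X, w, y_hat):
    """Scaled activation pattern a = Sigma_X w / std(y_hat) for a linear decoder.

    X     : (n_samples, n_features) predictors
    w     : (n_features,) decoder weight vector
    y_hat : (n_samples,) decoder output
    """
    Sigma_X = LedoitWolf().fit(X).covariance_        # shrinkage-regularized covariance
    return Sigma_X @ w / np.std(y_hat)

rng = np.random.default_rng(7)
X = rng.standard_normal((2000, 64))
w = rng.standard_normal(64)
a = activation_pattern(X, w, X @ w)                  # pattern in units of the recorded signal
```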
Pattern source mapping
We applied EEG source imaging26,27 to map the scaled patterns from channel space (i.e., scalp level) to source space (i.e., the cortical surface). Head models were created by co-registering the ICBM152 boundary element model (BEM) template48 with recorded electrode positions (ELPOS, zebris Medical GmbH, Germany) using the open source software Brainstorm49 version 19-Jan-2018. The BEM comprised three layers (cortex, skull, scalp) with relative conductivities (1, 0.008, 1). The cortex was modelled with 5001 voxels. BEM and electrode positions were co-registered via three anatomical landmarks (nasion, left and right preauricular points). Due to deviations between participant and template anatomy, we completed the co-registration by projecting floating electrodes to the scalp layer (Fig. S2b). OpenMEEG50,51 was used to compute the forward model, that is, to describe the propagation of the electric fields from cortex to scalp. sLORETA52 was applied to compute the corresponding inverse model for unconstrained sources. For unconstrained sources, the activity at each voxel is described by three components (x, y, z coordinates). We used three minutes of resting EEG, recorded during blocks two and four, to estimate the sensor noise. The pre-processing of the resting EEG was identical to the procedure explained above. We then estimated the noise covariance matrix by applying analytical shrinkage regularization46.
Before mapping the regression model patterns to source space, we normalized them to alleviate participant-dependent scaling. Since the scaled patterns reflect potentials at the scalp, their magnitude depends on the magnitude of the recorded signals, and the global field power of the EEG can vary considerably among participants. To compensate for this effect, we normalized the patterns by the average channel power, estimated as the median of the diagonal elements of the noise covariance matrix; each participant-specific pattern was divided by this scalar. We then projected the final channel space patterns onto source space in Brainstorm, extracted the Euclidean norm of the three components (x, y, z coordinates) per voxel and optionally averaged over lags if the model comprised multiple lags.
Source space statistics
Group level analysis was performed by computing paired differences between patterns of a movement parameter in source space. Significance was assessed at eight regions of interest (ROIs), which have consistently been associated with movement processing. The ROIs are depicted in Fig. 4c and cover fronto-central, primary sensorimotor, parietal and parieto-occipital areas. Activity at each ROI was summarized by the mean of its voxels. Significant ROIs were detected by applying two-tailed non-parametric permutation paired t-tests53,54 with 1000 repetitions. To account for multiple comparisons, we controlled the false discovery rate (FDR) at a significance level of 0.05 by adjusting the p-values55.
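The statistical procedure can be sketched as follows. This is an illustrative reconstruction, not the original code: a sign-flip permutation scheme and statsmodels' 'fdr_by' adjustment are assumed as stand-ins for the permutation paired t-tests53,54 and the FDR procedure55.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

def permutation_paired_ttest(diff, n_perm=1000, alpha=0.05, seed=0):
    """Two-tailed sign-flip permutation test on paired differences.
    diff: (n_participants, n_rois) array of per-participant ROI differences."""
    rng = np.random.RandomState(seed)
    n = diff.shape[0]
    t_obs = diff.mean(0) / (diff.std(0, ddof=1) / np.sqrt(n))
    count = np.ones_like(t_obs)                       # include the observed statistic
    for _ in range(n_perm):
        d = diff * rng.choice([-1.0, 1.0], size=(n, 1))
        t_perm = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n))
        count += np.abs(t_perm) >= np.abs(t_obs)
    p = count / (n_perm + 1)
    reject, p_adj, _, _ = multipletests(p, alpha=alpha, method='fdr_by')
    return t_obs, p_adj, reject
```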
Code availability
The codes used for data collection and analysis in this study are available from the corresponding author upon request.
The data that support the findings of this study are available from the corresponding author upon request.
Georgopoulos, A. P., Kalaska, J. F., Caminiti, R. & Massey, J. T. On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. J. Neurosci. 2, 1527–1537 (1982).
Caminiti, R., Johnson, P. B., Galli, C., Ferraina, S. & Burnod, Y. Making arm movements within different parts of space: the premotor and motor cortical representation of a coordinate system for reaching to visual targets. J. Neurosci. 11, 1182–1197 (1991).
Kalaska, J. F., Caminiti, R. & Georgopoulos, A. P. Cortical mechanisms related to the direction of two-dimensional arm movements: relations in parietal area 5 and comparison with motor cortex. Exp. Brain Res. 51, 247–260 (1983).
Carmena, J. M. et al. Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biol. 1, E42 (2003).
Wessberg, J. et al. Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature 408, 361–365 (2000).
Hochberg, L. R. et al. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 442, 164–171 (2006).
Culham, J. C. & Valyear, K. F. Human parietal cortex in action. Curr. Opin. Neurobiol. 16, 205–212 (2006).
Filimon, F., Nelson, J. D., Hagler, D. J. & Sereno, M. I. Human cortical representations for reaching: mirror neurons for execution, observation, and imagery. Neuroimage 37, 1315–1328 (2007).
Filimon, F., Nelson, J. D., Huang, R.-S. & Sereno, M. I. Multiple parietal reach regions in humans: cortical representations for visual and proprioceptive feedback during on-line reaching. J. Neurosci. 29, 2961–2971 (2009).
Fabbri, S., Caramazza, A. & Lingnau, A. Tuning curves for movement direction in the human visuomotor system. J. Neurosci. 30, 13488–13498 (2010).
Schalk, G. et al. Decoding two-dimensional movement trajectories using electrocorticographic signals in humans. J. Neural Eng. 4, 264–275 (2007).
Waldert, S. et al. Hand movement direction decoded from MEG and EEG. J. Neurosci. 28, 1000–1008 (2008).
Bradberry, T. J., Gentili, R. J. & Contreras-Vidal, J. L. Reconstructing three-dimensional hand movements from noninvasive electroencephalographic signals. J. Neurosci. 30, 3432–3437 (2010).
Robinson, N. & Vinod, A. P. Noninvasive Brain-Computer Interface: Decoding Arm Movement Kinematics and Motor Control. IEEE Systems, Man, and Cybernetics Magazine 2, 4–16 (2016).
Paninski, L., Fellows, M. R., Hatsopoulos, N. G. & Donoghue, J. P. Spatiotemporal tuning of motor cortical neurons for hand position and velocity. J. Neurophysiol. 91, 515–532 (2004).
Sailer, U., Flanagan, J. R. & Johansson, R. S. Eye-hand coordination during learning of a novel visuomotor task. J. Neurosci. 25, 8833–8842 (2005).
Perry, C. J., Amarasooriya, P. & Fallah, M. An Eye in the Palm of Your Hand: Alterations in Visual Processing Near the Hand, a Mini-Review. Front. Comput. Neurosci. 10 (2016).
Pereira, M., Sobolewski, A. & del R. Millán, J. Action Monitoring Cortical Activity Coupled to Submovements. eNeuro 4, ENEURO.0241–17.2017 (2017).
Kobler, R. J., Sburlea, A. I. & Müller-Putz, G. R. A Comparison of Ocular Artifact Removal Methods for Block Design Based Electroencephalography Experiments. In Proceedings of the 7th Graz Brain-Computer Interface Conference 236–241 (2017).
Desmurget, M. & Grafton, S. Forward modeling allows feedback control for fast reaching movements. Trends Cogn. Sci. 4, 423–431 (2000).
Haith, A. M., Pakpoor, J. & Krakauer, J. W. Independence of Movement Preparation and Movement Initiation. J. Neurosci. 36, 3007–3015 (2016).
Ofner, P. & Müller-Putz, G. R. Using a noninvasive decoding method to classify rhythmic movement imaginations of the arm in two planes. IEEE Trans. Biomed. Eng. 62, 972–981 (2015).
de Jong, S. SIMPLS: An alternative approach to partial least squares regression. Chemometrics Intellig. Lab. Syst. 18, 251–263 (1993).
Antelis, J. M., Montesano, L., Ramos-Murguialday, A., Birbaumer, N. & Minguez, J. On the usage of linear regression models to reconstruct limb kinematics from low frequency EEG signals. PLoS One 8, e61976 (2013).
Haufe, S. et al. On the interpretation of weight vectors of linear models in multivariate neuroimaging. Neuroimage 87, 96–110 (2014).
Michel, C. M. et al. EEG source imaging. Clin. Neurophysiol. 115, 2195–2222 (2004).
Michel, C. M. & Murray, M. M. Towards the utilization of EEG as a brain imaging tool. Neuroimage 61, 371–385 (2012).
Veltman, J. A. & Gaillard, A. W. Physiological workload reactions to increasing levels of task difficulty. Ergonomics 41, 656–669 (1998).
Wilson, G. F. An Analysis of Mental Workload in Pilots During Flight Using Multiple Psychophysiological Measures. Int. J. Aviat. Psychol. 12, 3–18 (2002).
Collewijn, H. & Tamminga, E. P. Human smooth and saccadic eye movements during voluntary pursuit of different target motions on different backgrounds. J. Physiol. 351, 217–250 (1984).
Battaglia-Mayer, A. A Brief History of the Encoding of Hand Position by the Cerebral Cortex: Implications for Motor Control and Cognition. Cereb. Cortex. https://doi.org/10.1093/cercor/bhx354 (2018).
Miall, R. C. & Wolpert, D. M. Forward Models for Physiological Motor Control. Neural Netw. 9, 1265–1279 (1996).
Mehring, C. et al. Inference of hand movements from local field potentials in monkey motor cortex. Nat. Neurosci. 6, 1253–1254 (2003).
Jerbi, K. et al. Inferring hand movement kinematics from MEG, EEG and intracranial EEG: From brain-machine interfaces to motor rehabilitation. IRBM 32, 8–18 (2011).
Pistohl, T., Ball, T., Schulze-Bonhage, A., Aertsen, A. & Mehring, C. Prediction of arm movement trajectories from ECoG-recordings in humans. J. Neurosci. Methods 167, 105–114 (2008).
Krauzlis, R. J. Recasting the smooth pursuit eye movement system. J. Neurophysiol. 91, 591–603 (2004).
Lv, J., Li, Y. & Gu, Z. Decoding hand movement velocity from electroencephalogram signals during a drawing task. Biomed. Eng. Online 9, 64 (2010).
Kim, J.-H., Bießmann, F. & Lee, S.-W. Decoding Three-Dimensional Trajectory of Executed and Imagined Arm Movements From Electroencephalogram Signals. IEEE Trans. Neural Syst. Rehabil. Eng. 23, 867–876 (2015).
Filimon, F. Human cortical control of hand movements: parietofrontal networks for reaching, grasping, and pointing. Neuroscientist 16, 388–407 (2010).
Delorme, A. & Makeig, S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 134, 9–21 (2004).
Candès, E. J., Li, X., Ma, Y. & Wright, J. Robust principal component analysis? J. ACM 58, 1–37 (2011).
Schlögl, A. et al. A fully automated correction method of EOG artifacts in EEG recordings. Clin. Neurophysiol. 118, 98–104 (2007).
Parra, L. C., Spence, C. D., Gerson, A. D. & Sajda, P. Recipes for the linear analysis of EEG. Neuroimage 28, 326–341 (2005).
Wold, S., Sjöström, M. & Eriksson, L. PLS-regression: a basic tool of chemometrics. Chemometrics Intellig. Lab. Syst. 58, 109–130 (2001).
Ofner, P., Schwarz, A., Pereira, J. & Müller-Putz, G. R. Upper limb movements can be decoded from the time-domain of low-frequency EEG. PLoS One 12, e0182578 (2017).
Bartz, D. & Müller, K.-R. Covariance shrinkage for autocorrelated data. In Advances in neural information processing systems 1592–1600 (2014).
Weiszfeld, E. Sur le point pour lequel la somme des distances de n points donnés est minimum. Tohoku Math. J. 43, 355–386 (1937).
Fonov, V. et al. Unbiased average age-appropriate atlases for pediatric studies. Neuroimage 54, 313–327 (2011).
Tadel, F., Baillet, S., Mosher, J. C., Pantazis, D. & Leahy, R. M. Brainstorm: a user-friendly application for MEG/EEG analysis. Comput. Intell. Neurosci. 2011, 879716 (2011).
Kybic, J. et al. A common formalism for the integral formulations of the forward EEG problem. IEEE Trans. Med. Imaging 24, 12–28 (2005).
Gramfort, A., Papadopoulo, T., Olivi, E. & Clerc, M. OpenMEEG: opensource software for quasistatic bioelectromagnetics. Biomed. Eng. Online 9, 45 (2010).
Pascual-Marqui, R. D. Standardized low-resolution brain electromagnetic tomography (sLORETA): technical details. Methods Find. Exp. Clin. Pharmacol. 24(Suppl D), 5–12 (2002).
Nichols, T. E. & Holmes, A. P. Nonparametric permutation tests for functional neuroimaging: a primer with examples. Hum. Brain Mapp. 15, 1–25 (2002).
Maris, E. & Oostenveld, R. Nonparametric statistical testing of EEG- and MEG-data. J. Neurosci. Methods 164, 177–190 (2007).
Yekutieli, D. & Benjamini, Y. Resampling-based false discovery rate controlling multiple test procedures for correlated test statistics. J. Stat. Plan. Inference 82, 171–196 (1999).
The authors acknowledge Joana Pereira, Catarina Lopes Dias, Lea Hehenberger, Martin Seeber, Patrick Ofner, Andreas Schwarz and David Steyrl for their valuable comments and Maria Höller for her support during data acquisition. This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Consolidator Grant 681231 'Feel Your Reach').
Institute of Neural Engineering, Graz University of Technology, Graz, Austria
Reinmar J. Kobler, Andreea I. Sburlea & Gernot R. Müller-Putz
R.J.K., A.I.S. and G.R.M. conceived the study. R.J.K. implemented the paradigm and performed the acquisition. R.J.K. conducted the analysis. R.J.K., A.I.S. and G.R.M. interpreted the data. R.J.K. created the Figures and Tables. R.J.K. wrote the draft of the manuscript. R.J.K., A.I.S. and G.R.M. edited the manuscript.
Correspondence to Gernot R. Müller-Putz.
The authors declare no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Video
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Kobler, R.J., Sburlea, A.I. & Müller-Putz, G.R. Tuning characteristics of low-frequency EEG to positions and velocities in visuomotor and oculomotor tracking tasks. Sci Rep 8, 17713 (2018). https://doi.org/10.1038/s41598-018-36326-y
|
CommonCrawl
|
Proving that an uncountable set has an uncountable subset whose complement is uncountable.
How does one prove that an uncountable set has an uncountable subset whose complement is uncountable? I know it needs the axiom of choice but I've never worked with it, so I can't figure out how to use it. Here is my attempt (which seems wrong from the start):
Let $X$ be an uncountable set, write $X$ as a disjoint uncountable union of the sets $\{x_{i_1},x_{i_2}\}$, i.e. $X=\bigcup_{i\in I}\{x_{i_1},x_{i_2}\}$ where $I$ is an uncountable index set (I'm pretty sure writing $X$ like this can't always be done); using the axiom of choice on the collection $\{x_{i_1},x_{i_2}\}$ we get an uncountable set, which say is all the ${x_{i_1}}$, and then the remaining ${x_{i_2}}$ are uncountable.
Anyway how is it done, properly?
I know the question has been asked in some form here but the answers are beyond my knowledge.
elementary-set-theory
Your idea is generally correct.
Using the axiom of choice, $|X|=|X|+|X|$, so there is a bijection between $X$ and $X\times\{0,1\}$. Clearly the latter can be partitioned into two uncountable sets, $X\times\{0\}$ and $X\times\{1\}$.
Therefore $X$ can be partitioned into two uncountable disjoint sets.
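Spelled out (notation added here, not part of the original answer): if $f\colon X\to X\times\{0,1\}$ is such a bijection, put $$X_1=f^{-1}\big(X\times\{0\}\big),\qquad X_2=f^{-1}\big(X\times\{1\}\big).$$ Then $X=X_1\sqcup X_2$ with $|X_1|=|X_2|=|X|$, so both parts are uncountable and each is the complement of the other in $X$.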
Indeed you need the axiom of choice to even have that every infinite set can be written as a disjoint union of two infinite sets, let alone uncountable ones.
Asaf Karagila♦Asaf Karagila
$\begingroup$ Thanks Assaf, does the proof $|X|=|X|+|X|$ require some advanced knowledge(If so could you direct me to a proof as Google and Jech's book gave me no results) or should I be able to do it without much knowledge? $\endgroup$ – user10444 Feb 21 '14 at 22:34
$\begingroup$ It's quite easy if you know a bit about ordinals. It's pretty much the same proof that $\aleph_0+\aleph_0=\aleph_0$ (define a notion of parity on ordinals, and just repeat the same trick as before); if not you can still do it with Zorn's lemma by considering the partial order whose elements are $(S,f)$ where $S\subseteq X$ and $f$ is a bijection between $S$ and $S\times\{0,1\}$, ordered by pointwise inclusion. It's non-empty because $X$ is infinite, so it has countable subsets; and it's easy to verify Zorn's lemma. The maximal elements must be co-finite to $X$ and removing a finite set is fine $\endgroup$ – Asaf Karagila♦ Feb 21 '14 at 22:38
$\begingroup$ Also, it appears in Jech Set Theory (3rd Millennium edition, 2006) on page 31; and also Asaf, with just one s. :-) $\endgroup$ – Asaf Karagila♦ Feb 21 '14 at 22:40
$\begingroup$ Then I shall get started with studying ordinals.Oh and sorry about that, I even wrote complement incorrectly(I'm sleep deprived). Thanks for the help again! $\endgroup$ – user10444 Feb 21 '14 at 22:48
$\begingroup$ Studying ordinals is probably a good idea in general. If you're unfamiliar with well-orders, then perhaps working through a Zorn's lemma based argument (as I suggested above) is a good idea for moving forward with this topic. $\endgroup$ – Asaf Karagila♦ Feb 21 '14 at 22:52
Sorry for the necropost; I came across this and wanted to share a proof using only Zorn's lemma (i.e. an "elementary" proof).
Edit: "elementary" might not be the best word to use here. Perhaps "easy" is better. See comments.
Let $ P=\left\{ A \subset X\times X\colon\phi(A)\right\} $ where $\phi$ is the proposition given by: $$ \phi(A)=\forall(x,y)\in X\times X\colon(x,y)\in A\implies\psi(A,x,y) $$ and $$ \psi(A,x,y)=\forall(w,z)\in A\colon\left(x=w\iff y=z\right)\wedge x\neq z\wedge y\neq w. $$
Example: If $X=\mathbb{N}$, the set $A=\{(1,2),(3,4)\}$ would be in $P$, but the set $A^{\prime}=\{(1,2),(3,1)\}$ would not (the number $1$ appears twice).
Define a partial order on $P$ by inclusion $\subset$. Trivially, every chain in $P$ has an upper bound given by the union of the elements in that chain. By Zorn's lemma, $P$ has a maximal element, say $$ A^{\star}=\{(x_{\alpha},y_{\alpha})\}. $$ Let $X_{1}=\{x_{\alpha}\}$ and $X_{2}=\{y_{\alpha}\}$. Then $X_{1}$ and $X_{2}$ are disjoint by construction and necessarily uncountable (otherwise $X$ is not uncountable). Let $ Z= X_{1}\sqcup X_{2} $ and note that $X\setminus Z$ has at most one element, for otherwise we contradict the maximality of $A^{\star}$. Lastly, let $$ X_{1}^{\prime}= X_{1}\sqcup(X\setminus Z), $$ so that $X= X_{1}^{\prime}\sqcup X_{2}$, as desired.
parsiadparsiad
$\begingroup$ How is this "elementary"? Zorn's lemma is equivalent to the axiom of choice. $\endgroup$ – Noah Schweber Jan 22 '16 at 19:26
$\begingroup$ Of course. I meant in the sense that a reader with knowledge of Zorn's lemma and no previous knowledge of ordinals can digest the above (I have edited the post to reflect your point; thanks!) $\endgroup$ – parsiad Jan 22 '16 at 20:24
$\begingroup$ Ah, I see. +1. ${}$ $\endgroup$ – Noah Schweber Jan 22 '16 at 20:24
|
CommonCrawl
|
Brain Informatics
Identification and classification of brain tumor MRI images with feature extraction using DWT and probabilistic neural network
N. Varuna Shree
T. N. R. Kumar
The identification, segmentation and detection of the infected area in brain tumor MRI images are tedious and time-consuming tasks. The different anatomical structures of the human body can be visualized using image processing concepts, but it is very difficult to visualize abnormal structures of the human brain with simple imaging techniques. Magnetic resonance imaging distinguishes and clarifies the neural architecture of the human brain and comprises many imaging modalities that scan and capture its internal structure. In this study, we concentrated on noise removal, extraction of gray-level co-occurrence matrix (GLCM) features and DWT-based brain tumor region-growing segmentation to reduce complexity and improve performance. This was followed by morphological filtering, which removes the noise that can form after segmentation. A probabilistic neural network classifier was used to train and test the performance in detecting the tumor location in brain MRI images. The experimental results achieved nearly 100% accuracy in identifying normal and abnormal tissues from brain MR images, demonstrating the effectiveness of the proposed technique.
Keywords: Image segmentation, MRI, DWT, Morphology, GLCM, PNN
In image processing, an input image is processed to obtain an output that is also an image; nowadays, the images used are in digital format. In recent times, the introduction of information technology and e-healthcare systems in the medical field has helped clinical experts provide better health care to patients. This study addresses the segmentation of abnormal and normal tissues from MRI images using gray-level co-occurrence matrix (GLCM) feature extraction and a probabilistic neural network (PNN) classifier. A brain tumor is an abnormal growth of uncontrolled cancerous tissue in the brain and can be benign or malignant. A benign tumor has a uniform structure and contains non-active cancer cells, whereas a malignant tumor has a non-uniform structure and contains active cancer cells that spread to other parts.
According to the World Health Organization, tumors are graded on a scale from grade I to grade IV; these grades distinguish benign and malignant tumor types. Grades I and II are low-grade tumors, while grades III and IV are high-grade tumors. A brain tumor can affect individuals at any age, and its impact may differ from one individual to another. Owing to the complex structure of the human brain, diagnosing the tumor area in the brain is a challenging task.
Malignant tumors of grades III and IV are fast growing; they affect healthy brain cells, may spread to other parts of the brain or the spinal cord, and are more harmful if they remain untreated. Locating, identifying and classifying such brain tumors at an early stage is therefore a serious issue in medical science. Enhanced imaging techniques help doctors observe and track the occurrence and growth of tumor-affected regions at different stages, so that a suitable diagnosis can be made from these scans.
The key issue is the detection of a brain tumor at a very early stage so that proper treatment can be adopted. Based on this information, the most suitable therapy (radiation, surgery or chemotherapy) can be chosen. It is thus evident that the chances of survival of a tumor-affected patient can be increased significantly if the tumor is detected accurately in its early stage.
Segmentation was employed to determine the tumor-affected part using imaging modalities. Segmentation is the process of dividing an image into constituent parts that share identical properties such as color, texture, contrast and boundaries.
The paper is organized as follows: Sect. 2 presents the literature survey of related work, Sect. 3 presents the materials and methods with the steps of the proposed technique, Sect. 4 presents the results and discussion, Sect. 5 presents the performance analysis, and Sect. 6 contains the conclusion and future scope.
2 Literature survey
Analyzing and processing MRI brain tumor images is a challenging and rapidly developing field. Magnetic resonance imaging (MRI) is an advanced medical imaging technique used to produce high-quality images of the parts of the human body, and it is a very important process for deciding the correct therapy at the right stage for a tumor-affected individual.
Many techniques have been proposed for the classification of brain tumors in MR images, such as fuzzy c-means (FCM), support vector machines (SVM), artificial neural networks (ANN), knowledge-based techniques and the expectation-maximization (EM) algorithm; these are popular techniques for region-based segmentation and for extracting important information from medical imaging modalities.
Bahadure et al. [1] proposed BWT- and SVM-based image analysis for MRI brain tumor detection and classification; an accuracy of 95% was achieved using skull stripping, which eliminates all non-brain tissues before detection. Joseph et al. [2] proposed segmentation of MRI brain images using the K-means clustering algorithm along with morphological filtering for the detection of tumor images. Automated brain tumor classification of MRI images using a support vector machine was proposed by Alfonse and Salem [3]; the classifier accuracy was improved by using the fast Fourier transform for feature extraction and the minimal redundancy maximal relevance technique for feature reduction, and an accuracy of 98% was obtained.
A brain MRI image contains two regions that have to be separated for the extraction of brain tumor regions: one region contains the abnormal tumor cells, whereas the other contains the normal brain cells [4]. For brain tumor segmentation, Zanaty [5] proposed a hybrid approach combining seed growing, FCM and the Jaccard similarity coefficient to measure segmented gray and white tissue matter from tumor images; an average similarity score S of 90% was achieved at noise levels of 3–9%.
To address different imaging protocols and the nonlinearity of real data in the classification of contrast-enhanced MRI images, Yao et al. [6] proposed a methodology that combined the extraction of texture features with the wavelet transform and an SVM, reaching an accuracy of 83%. For brain tumor classification and segmentation, Kumar and Vijayakumar [7] proposed a methodology using principal component analysis (PCA) and an SVM with a radial basis function kernel, and obtained an accuracy of 94%. Sharma et al. [8] used an artificial neural network as both classifier and segmentation tool for the classification of brain tumors from MRI images, utilizing textural primitive features, and achieved an accuracy of 100%.
For medical image segmentation, localized fuzzy clustering with the extraction of spatial information was proposed by Cui et al. [9]. The authors used the Jaccard similarity index as a segmentation measure, reporting an accuracy of 83–95% while differentiating white matter, gray matter and cerebrospinal fluid.
For brain tumor image segmentation, Wang et al. [10] applied an active contour method to address intensity inhomogeneities in MRI images. For automatic feature extraction and tumor detection, Chaddad [11] proposed an enhanced-feature approach using a Gaussian mixture model applied to MRI images together with wavelet features and principal component analysis, with an accuracy of 95% for T1-weighted and 92% for T2-weighted FLAIR MRI images.
Sachdeva et al. [11] used an artificial neural network and PCA–ANN for multiclass brain tumor MRI classification and segmentation on a dataset of 428 MRI images, achieving an accuracy of 75–90%.
The literature survey above shows that most techniques address only one stage: some obtain the segmentation of the region of interest, some extract features, and some only train and test classifiers. Effective segmentation combined with feature extraction has rarely been conducted, and only a few features were extracted, which resulted in low accuracy in tumor identification and detection. The classifiers used to train on these features were also not very effective.
In this paper, we combine the discrete wavelet transform (DWT) with the extraction of textural GLCM features, followed by morphological operations, and use a probabilistic neural network as the classifier. The study deals with the extraction of features from the segmented region to detect and classify normal and abnormal tissues in medical brain MRI images over a large database. Our results lead to the conclusion that the proposed method makes it easier for clinical experts to take decisions regarding diagnosis and scanning.
3 Proposed methodology
This section describes the materials, the source from which the brain image data were collected, and the algorithms for brain MRI segmentation and feature extraction. The proposed methodology was applied to brain MRI images of 256 × 256 and 512 × 512 pixels from the dataset, which were converted to gray scale for further enhancement. The following discussion deals with the implementation of the algorithm.
3.1 Preprocessing
The preprocessing step improves the quality of the brain tumor MR images and makes them suitable for further processing by clinical experts or imaging modalities. It also improves parameters of the MR images, including the signal-to-noise ratio, the visual appearance, the removal of irrelevant noise and undesired background, the smoothing of inner regions and the preservation of relevant edges [12].
3.1.1 Segmentation
Segmentation is the process of partitioning an image into different regions. Let the entire image region be represented by S. The segmentation process can be viewed as a partition of S into p subregions S1, S2, S3, …, Sp. Certain conditions have to be satisfied: the segmentation must be complete, that is, every pixel should belong to a region; the points within a region should be connected in some sense; and the regions should be disjoint.
3.1.2 Region growing
Region growing is the grouping of pixels or subregions into larger regions based on certain criteria. The main aim is to select 'seed' points and to attach to each seed those neighboring pixels with identical properties so as to grow the region. A set of seeds marking the objects to be segmented is taken as input. The region grows iteratively by evaluating all unallocated neighboring pixels of the region; the similarity measure is the difference δ between a pixel's intensity value and the region's mean, and the pixel with the smallest difference is allocated to the respective region. This is continued until all pixels are allocated to a region. Seeded region growing requires seeds as additional input, and the results depend on the selection of seeds [13]. The measurement is based on the mean pixel intensity. The segmented image was then used to identify the desired tumor region; a simplified sketch of the procedure is given below.
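As an illustration, a minimal seeded region-growing routine is sketched below. It is a simplification of the procedure described above: it accepts any 4-connected neighbor whose difference to the running region mean is below the threshold δ rather than always allocating the single pixel with the smallest difference, and the seed point and threshold are assumed inputs.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, delta=10.0):
    """Grow a region from `seed` (row, col): add 4-connected neighbours whose
    intensity differs from the current region mean by less than `delta`."""
    h, w = img.shape
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    region_sum, region_n = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx]:
                if abs(float(img[ny, nx]) - region_sum / region_n) < delta:
                    grown[ny, nx] = True
                    region_sum += float(img[ny, nx])
                    region_n += 1
                    queue.append((ny, nx))
    return grown
```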
3.2 Morphological operations
Morphology deals with the study of shapes and the extraction of boundary areas from brain tumor images. A morphological operation rearranges the order of pixel values and operates on a structuring element and an input image; structuring elements are attributes that probe the features of interest. The basic operations used here are dilation and erosion. Dilation adds pixels to the boundary region of objects, while erosion removes pixels from it. These operations are carried out based on the structuring element: dilation chooses the highest value by comparing all pixel values in the neighborhood of the input image described by the structuring element, whereas erosion chooses the lowest value [14].
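A short sketch of such a post-segmentation clean-up with dilation/erosion-based operators is given below (illustrative only; the 3 × 3 structuring element is an assumption):

```python
import numpy as np
from scipy import ndimage

structure = np.ones((3, 3), dtype=bool)   # 3x3 square structuring element

def clean_mask(mask):
    """Remove small speckles and fill small gaps in a binary tumour mask."""
    opened = ndimage.binary_opening(mask, structure=structure)    # erosion then dilation
    closed = ndimage.binary_closing(opened, structure=structure)  # dilation then erosion
    return closed
```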
3.3 Feature extraction
Feature extraction is the process of extracting quantitative information from an image, such as color, texture, shape and contrast. Here, we used the discrete wavelet transform (DWT) to extract wavelet coefficients and the gray-level co-occurrence matrix (GLCM) for statistical feature extraction.
3.3.1 Feature extraction using DWT
Wavelets were used to analyze the different frequencies of an image at different scales. Here, we use the discrete wavelet transform (DWT), a powerful tool for feature extraction, to extract wavelet coefficients from brain MR images. The wavelet localizes the frequency information of the signal, which is important for classification.
A 2D discrete wavelet transform was applied, resulting in four subbands LL (low–low), HL (high–low), LH (low–high) and HH (high–high) per level of the two-level wavelet decomposition of the region of interest (ROI). Each 2D decomposition level yields an approximation image and three detail images, representing the low- and high-frequency contents of the image, respectively [15]. The wavelet approximations at the first and second level are represented by LL1 and LL2, respectively, and contain the low-frequency part of the image. The high-frequency part is represented by LH1, HL1, HH1, LH2, HL2 and HH2, which give the details in the horizontal, vertical and diagonal directions at the first and second level, respectively. LL1 represents the approximation of the original image and is further decomposed into the second-level approximation and detail images; the process is repeated until the desired level of resolution is obtained.
Using the 2D discrete wavelet transform, the images were decomposed into spatial frequency components. Features were extracted from the LL subbands, and since the HL subbands give higher performance compared to LL, we used both LL and HL for a better analysis of the image texture features [16]. Each frequency component was studied with a resolution matched to its scale and expressed as:
$$ \mathrm{DWT}\,p(s)=\begin{cases} d_{i,j}=\sum p(s)\,h_{i}^{*}(s-2^{i}j)\\[2pt] b_{i,j}=\sum p(s)\,g_{i}^{*}(s-2^{i}j) \end{cases} $$
The coefficients di,j refer to the detail components of the signal p(s) corresponding to the wavelet function, whereas bi,j refer to the approximation components of the signal. The functions h(s) and g(s) in the equation represent the high-pass and low-pass filter coefficients, respectively, while the parameters i and j refer to the wavelet scale and translation factors.
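The repeated decomposition and subband selection can be sketched with PyWavelets as follows (illustrative; the 'db1' wavelet is an assumption, and the mapping of PyWavelets' horizontal/vertical detail outputs onto the LH/HL naming used above depends on convention):

```python
import pywt

def wavelet_subbands(image, levels=2, wavelet='db1'):
    """2D DWT: keep the approximation (LL) and one detail subband per level."""
    subbands = {}
    current = image
    for lvl in range(1, levels + 1):
        LL, (detail_h, detail_v, detail_d) = pywt.dwt2(current, wavelet)
        subbands[f'LL{lvl}'] = LL
        subbands[f'HL{lvl}'] = detail_h   # detail-naming convention assumed
        current = LL                      # decompose the approximation further
    return subbands
```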
3.3.2 Feature extraction using GLCM
Texture analysis makes it easier to differentiate normal and abnormal tissues for human visual perception and machine learning. It also captures variation between malignant and normal tissues that may not be visible to the human eye, and it improves accuracy by providing effective quantitative features for early diagnosis. In the first step, first-order statistical texture features were extracted from the histogram of image intensities by measuring the frequencies of gray levels at random image positions; this step does not consider correlations or co-occurrences between pixels. In the second step, second-order texture features were extracted based on the probability of gray levels occurring at given distances and over all image orientations.
The statistical features were extracted using the gray-level co-occurrence matrix (GLCM), also known as the gray-level spatial dependence matrix (GLSDM), which was introduced by Haralick [17]. It is an approach that describes the spatial relation between pixels of various gray-level values [15]. The GLCM is a 2D histogram whose (p,q)th element is the frequency with which gray level p co-occurs with gray level q. It is a function of the distance S = 1, the angle (0° horizontal, 45° positive diagonal, 90° vertical and 135° negative diagonal) and the gray levels p and q, and it counts how often a pixel with intensity p occurs in relation to another pixel with intensity q at a certain distance S and orientation. In this method, the GLCM was computed and textural features such as contrast, correlation, energy, homogeneity, entropy and variance were obtained from the LL and HL subbands of the first four levels of the wavelet decomposition [18]. The textural features extracted are listed below; a code sketch for computing them follows the list:
Contrast (CONT) Measures the intensity variation between a pixel and its neighbors over the whole image, given by the equation:
$$ {\text{CONT}} = \sum\limits_{x = 0}^{m - 1} {\sum\limits_{y = 0}^{n - 1} {(x - y)^{2} f(x,y)} } $$
Energy (ENG) Quantifies the amount of repetitive pixel pairs; it is a measure of uniformity in an image, given by the equation:
$$ {\text{ENG}} = \sqrt {\sum\limits_{p = 0}^{i - 1} {\sum\limits_{q = 0}^{j - 1} {f^{2} (p,q)} } } $$
Correlation (COR) Measures the spatial dependency of gray levels between pixels.
$$ {\text{COR}} = \frac{\sum\nolimits_{p = 0}^{i - 1}\sum\nolimits_{q = 0}^{j - 1}(p\,q)\,f(p,q) - M_{p}M_{q}}{\sigma_{p}\,\sigma_{q}} $$
Homogeneity (HOM) Measures the local uniformity of an image. It is also known as the inverse difference moment and takes a range of values that distinguish textured from non-textured regions.
$$ {\text{HOM}} = \sum\limits_{p = 0}^{i - 1} {\sum\limits_{q = 0}^{j - 1} {\frac{1}{{1 + (p - q)^{2} }}f(p,q)} } $$
Entropy (ENT) Quantifies the randomness of the textural image. It is given as:
$$ {\text{ENT}} = - \sum\limits_{p = 0}^{i - 1} {\sum\limits_{q = 0}^{j - 1} {f(p,q)\log_{2} f(p,q)} } $$
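A sketch of computing these features from a quantized subband with scikit-image (graycomatrix/graycoprops; older releases spell these greycomatrix/greycoprops) is given below. The 8-bit quantization of the wavelet subband, the averaging over the four orientations and the manual entropy computation are simplifying assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(subband, levels=256):
    """GLCM at distance 1 and angles 0, 45, 90, 135 degrees; returns contrast,
    correlation, energy, homogeneity and entropy averaged over the angles."""
    img = np.uint8(np.clip(subband, 0, levels - 1))
    glcm = graycomatrix(img, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    feats = {prop: float(graycoprops(glcm, prop).mean())
             for prop in ('contrast', 'correlation', 'energy', 'homogeneity')}
    p = glcm / glcm.sum(axis=(0, 1), keepdims=True)       # per-angle probabilities
    feats['entropy'] = float(np.mean(-np.sum(p * np.log2(p + 1e-12), axis=(0, 1))))
    return feats
```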
After the textural feature extraction, the following assessment parameters were also computed for a better analysis of the brain MRI images.
Peak signal-to-noise ratio (PSNR) A measure used to evaluate the quality of the reconstructed image with respect to the processed image. It is given as:
$$ {\text{PSNR}} = 20\log_{10} \frac{{2^{m} - 1}}{\text{MSE}} $$
A lower value of the mean square error and a higher value of the peak signal-to-noise ratio indicate better image quality.
Mean square error (MSE) A measure of the fidelity of a signal or image, used to compare two images by providing a quantitative similarity score.
$$ {\text{MSE}} = \frac{1}{P \times Q}\sum {\sum {\left( {f(i,j) - f^{R} (i,j)} \right)^{2} } } $$
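A minimal computation of these two measures is sketched below; it uses the conventional PSNR definition, $10\log_{10}\!\big((2^m-1)^2/\mathrm{MSE}\big)$, and the bit depth is an assumed parameter.

```python
import numpy as np

def mse_psnr(original, processed, bits=8):
    """Mean squared error and peak signal-to-noise ratio between two images."""
    a = original.astype(np.float64)
    b = processed.astype(np.float64)
    mse = np.mean((a - b) ** 2)
    peak = 2 ** bits - 1
    psnr = 10 * np.log10(peak ** 2 / mse) if mse > 0 else np.inf
    return mse, psnr
```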
These extracted statistical features were fed into the probabilistic neural network (PNN) classifier as input for training and testing its performance in classifying brain tumor images as normal or abnormal.
3.4 Probabilistic neural network (PNN)
In the early 1990s, D. F. Specht introduced a feed-forward neural network called the probabilistic neural network (PNN). It is derived from Bayesian networks and from a statistical algorithm called kernel Fisher discriminant analysis. It is composed of four layers: an input layer, a hidden layer, a pattern layer and an output layer. The PNN formulates weighted neighbors in the form of a neural network [19].
The input layer consists of P neurons, where P depends on the number of features extracted using the gray-level co-occurrence matrix (GLCM). The input node weights were kept at 1, and these values were fed into the hidden layer. In the pattern layer, radial basis functions were calculated and fed into the summation layer, which adds the weighted activation values of each class present in the hidden layer. The values of the summation layer were fed to the output layer, which chooses the highest of the probabilities; 1 indicates a positive decision for the target class and 0 a negative decision for the non-target class [20].
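As an illustration of this architecture, a minimal Parzen-window style implementation is sketched below. It is not the classifier used in the study: the Gaussian spread sigma, the mean pattern-layer activation as class score and all names are assumptions.

```python
import numpy as np

class SimplePNN:
    """Minimal probabilistic neural network: one Gaussian kernel per training
    pattern, one summation unit per class, decision by the largest class score."""

    def __init__(self, sigma=1.0):
        self.sigma = sigma

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y)
        self.classes_ = np.unique(self.y)
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        # squared Euclidean distances between test and training patterns
        d2 = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        k = np.exp(-d2 / (2.0 * self.sigma ** 2))             # pattern-layer activations
        scores = np.stack([k[:, self.y == c].mean(axis=1)
                           for c in self.classes_], axis=1)   # summation layer
        return self.classes_[np.argmax(scores, axis=1)]       # output layer
```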
4 Result and discussion
In this research, we used two datasets: a training dataset collected from the Web site www.diacom.com and a test dataset. These datasets were built by experienced radiologists and include sample images of five patients with all modalities. The data were collected from a digital imaging and communications in medicine (DICOM) dataset. For the analysis, we considered 650 samples from the 25 images of the DICOM dataset, of which 18 show infected tumor brain tissue and the others normal tissue.
From the survey, the directional features extracted from the LL and HL subbands of the wavelet transform give detailed information about different directions and allow a more systematic characterization of changes in biological tissues.
The MRI images were decomposed into five levels, from which the detail coefficients of the LL and HL subbands were selected. From these wavelet-decomposed subbands, the statistical textural features such as energy, correlation, entropy and homogeneity were extracted using the gray-level co-occurrence matrix (GLCM).
The textural features obtained from the different levels of the wavelet decomposition were used as input for training and testing the performance of the PNN classifier.
Images 1 to 10 show the subbands up to the 5th level of the wavelet decomposition. The extracted features were used as input vectors for training and testing the performance of the PNN classifier. Tables 1 and 2 show the statistical textural features (correlation, contrast, energy, homogeneity and entropy) obtained from the gray-level co-occurrence matrices formed from the LL and HL subbands of all five decomposition levels of the trained and tested images (Figs. 1 and 2).
Table 1. The statistical features obtained from the gray-level co-occurrence matrix (GLCM) of LL and HL subbands of the trained images.
Table 2. The statistical features obtained from the gray-level co-occurrence matrix (GLCM) of LL and HL subbands of the tested images.
Fig. 1. Diagram of the probabilistic neural network.
Fig. 2. Brain tumor image dataset.
The performance analysis of segmented images with the calculation of area is tabulated in Table 3. A lower value of MSE and a higher value of PSNR indicate better signal-to-noise ratio in the extracted image.
Table 3. The performance evaluation and area calculation of the tumor-extracted region of the trained images (columns include PSNR, area of image in pixels and area of tumor region).
From the observations, the contrast of the trained MR images was found to be higher than that of the tested MR images, whereas their homogeneity was lower. Similarly, the entropy and energy were higher in the trained MR images than in the tested ones. With the proposed methodology, the statistical textural features (contrast, correlation, energy, homogeneity and entropy) obtained from the LL and HL subbands classified the brain tumor images into normal and abnormal. The differences in the statistical textural feature values of the trained and tested images were found to be very useful for the performance of the PNN classifier in training and testing.
The observation results are shown in Fig. 3, representing column-wise (a) the original images, (b) the preprocessed images obtained by noise filtering, (c) the region-based segmentation images, (d) the tumor-affected regions extracted from the segmented images and (e) the area of the tumor-affected region.
Observational results of an image a original images, b preprocessed images, c region segmentation tumor image, d extracted tumor images, e area of extracted tumor region
5 Performance analysis
For the trained dataset, the extracted features were used to train the probabilistic neural network (PNN) classifier for the classification purpose, whereas for the test dataset the PNN classifier was not trained and only the statistical and textural features were extracted. The accuracy for trained and tested images was compared based on the classification into normal and abnormal tumor tissues. Figure 4 shows the accuracy of the classification of normal and abnormal tumor tissues.
Comparison of trained and tested dataset classification using probabilistic neural networks
Accuracy, or the correct classification rate, is the ratio of correct classifications to the total number of classification tests [19]. This brain tumor classification was performed on various normal and abnormal MR images, and the accuracy of the PNN classifier was computed using the equation given below:
$$ {\text{Accuracy}}\left( \% \right) = \frac{{{\text{Correct}}\;{\text{cases}}}}{{{\text{Total}}\;{\text{number}}}} \times \, 100 $$
6 Conclusion and future scope
In this research, we used brain MR images segmented into normal brain tissue (unaffected) and abnormal tumor tissue (infected). Preprocessing was used to remove noise and smooth the images, which also improves the signal-to-noise ratio. Next, the discrete wavelet transform was used to decompose the images, textural features were extracted from the gray-level co-occurrence matrix (GLCM), and morphological operations were applied. A probabilistic neural network (PNN) classifier was used for the classification of tumors from the brain MRI images.
From the observed results, it is clear that the detection of brain tumors is fast and accurate compared with manual detection carried out by clinical experts. The evaluated performance factors also show that the method gives better outcomes, as reflected in improved PSNR and MSE values.
The proposed methodology results in the accurate and speedy detection of brain tumors along with the identification of the precise tumor location.
In the identification and classification of normal and abnormal tissues from brain MR images, an accuracy of nearly 100% was achieved for the trained dataset, because the statistical textural features were extracted from the LL and HL subbands of the wavelet decomposition, and an accuracy of 95% was achieved for the tested dataset. With the above results, we conclude that the proposed method clearly distinguishes normal from abnormal tissue, which helps clinical experts take clear diagnostic decisions.
In future work, different classifiers can be used to increase the accuracy, combining more efficient segmentation and feature extraction techniques with real clinical cases and a large dataset covering different scenarios.
Bahadure NB, Ray AK, Thethi HP (2017) Image analysis for MRI based brain tumor detection and feature extraction using biologically inspired BWT and SVM. Int J Biomed Imaging 2017, Article ID 9749108, 12 pages
Joseph RP, Singh CS, Manikandan M (2014) Brain tumor MRI image segmentation and detection in image processing. Int J Res Eng Technol 3, eISSN: 2319-1163, pISSN: 2321-7308
Alfonse M, Salem M (2016) An automatic classification of brain tumors through MRI using support vector machine. Egypt Comput Sci J 40:11–21
Coatrieux G, Huang H, Shu H, Luo L, Roux C (2013) A watermarking-based medical image integrity control system and an image moment signature for tampering characterization. IEEE J Biomed Health Inform 17(6):1057–1067
Zanaty EA (2012) Determination of gray matter (GM) and white matter (WM) volume in brain magnetic resonance images (MRI). Int J Comput Appl 45:16–22
Yao J, Chen J, Chow C (2009) Breast tumor analysis in dynamic contrast enhanced MRI using texture features and wavelet transform. IEEE J Sel Top Signal Process 3(1):94–100
Kumar P, Vijayakumar B (2015) Brain tumor MR image segmentation and classification using by PCA and RBF kernel based support vector machine. Middle East J Sci Res 23(9):2106–2116
Sharma N, Ray A, Sharma S, Shukla K, Pradhan S, Aggarwal L (2008) Segmentation and classification of medical images using texture-primitive features: application of BAM-type artificial neural network. J Med Phys 33(3):119–126
Cui W, Wang Y, Fan Y, Feng Y, Lei T (2013) Localized FCM clustering with spatial information for medical image segmentation and bias field estimation. Int J Biomed Imaging 2013, Article ID 930301, 8 pages
Chaddad A (2015) Automated feature extraction in brain tumor by magnetic resonance imaging using Gaussian mixture models. Int J Biomed Imaging 2015, Article ID 868031, 11 pages
Sachdeva J, Kumar V, Gupta I, Khandelwal N, Ahuja CK (2013) Segmentation, feature extraction, and multi class brain tumor classification. J Digit Imaging 26(6):1141–1150
Demirhan A, Toru M, Guler I (2015) Segmentation of tumor and edema along with healthy tissues of brain using wavelets and neural networks. IEEE J Biomed Health Inform 19(4):1451–1458
Dubey RB, Hanmandlu M, Gupta K (2009) Region growing for MRI brain tumor volume analysis. Indian J Sci Technol 2(9), ISSN: 0974-6846
Sawakare S, Chaudhari D (2014) Classification of brain tumor using discrete wavelet transform, principal component analysis and probabilistic neural network. Int J Res Emerg Sci Technol 1(6), E-ISSN: 2349-7610
Haralick RM, Shanmugam K, Dinstein I (1973) Textural features for image classification. IEEE Trans Syst Man Cybern 3(6):610–621
Shinde MV et al (2014) Brain tumor identification using MRI images. Int J Recent Innov Trends Comput Commun 2(10), ISSN: 2321-8169
Kharat KD, Kulkarni PP et al (2012) Brain tumor classification using neural network based methods. Int J Comput Sci Inform 1(4), ISSN (PRINT): 2231-5292
Jadhav C et al (2014) Study of different brain tumor MRI image segmentation techniques. Int J Comput Sci Eng Technol (IJCSET) 4(4):133–136
Madhikar GV, Lokhande SS (2014) Detection and classification of brain tumour using modified region growing and neural network in MRI images. Int J Sci Res (IJSR) 3(12):5
Vaishali et al (2015) Wavelet based feature extraction for brain tumor diagnosis—a survey. Int J Res Appl Sci Eng Technol (IJRASET) 3(V), ISSN: 2321-9653
1. Department of CS&E, MSRIT, Bangalore, India
Varuna Shree, N. & Kumar, T.N.R. Brain Inf. (2018) 5: 23. https://doi.org/10.1007/s40708-017-0075-5
Accepted 22 December 2017
|
CommonCrawl
|
November 2017, 37(11): 5603-5629. doi: 10.3934/dcds.2017243
The index bundle and multiparameter bifurcation for discrete dynamical systems
Robert Skiba 1 and Nils Waterstraat 2
Faculty of Mathematics and Computer Science, Nicolaus Copernicus University, Chopina 12/18, 87-100 Torun, Poland
School of Mathematics, Statistics & Actuarial Science, University of Kent, Canterbury, Kent CT2 7NF, United Kingdom
Received March 2017 Revised May 2017 Published July 2017
We develop a K-theoretic approach to multiparameter bifurcation theory of homoclinic solutions of discrete non-autonomous dynamical systems from a branch of stationary solutions. As a byproduct we obtain a family index theorem for asymptotically hyperbolic linear dynamical systems which is of independent interest. In the special case of a single parameter, our bifurcation theorem weakens the assumptions in previous work by Pejsachowicz and the first author.
Keywords: Homoclinic solutions, index bundle, bifurcation points, Stiefel-Whitney classes, Fredholm maps.
Mathematics Subject Classification: Primary: 58E07; Secondary: 37C29, 19L20, 47A53.
Citation: Robert Skiba, Nils Waterstraat. The index bundle and multiparameter bifurcation for discrete dynamical systems. Discrete & Continuous Dynamical Systems - A, 2017, 37 (11) : 5603-5629. doi: 10.3934/dcds.2017243
Semi-continuous pilot-scale microbial oil production with Metschnikowia pulcherrima on starch hydrolysate
Felix Abeln1,2,
Robert H. Hicks3,
Hadiza Auta2,
Mauro Moreno-Beltrán3,
Luca Longanesi2,
Daniel A. Henk3 &
Christopher J. Chuck ORCID: orcid.org/0000-0003-0804-67512
Heterotrophic microbial oils are potentially a more sustainable alternative to vegetable or fossil oils for food and fuel applications. However, as almost all work in the area is conducted on the laboratory scale, such studies carry limited industrial relevance and do not give a clear indication of what is required to produce an actual industrial process. Metschnikowia pulcherrima is a non-pathogenic industrially promising oleaginous yeast which exhibits numerous advantages for cost-effective lipid production, including a wide substrate uptake, antimicrobial activity and fermentation inhibitor tolerance. In this study, M. pulcherrima was fermented in stirred tank reactors of up to 350 L with 250-L working volume in both batch and semi-continuous operation to highlight the potential industrial relevance. Due to being food-grade, suitable for handling at scale and to demonstrate the oligosaccharide uptake capacity of M. pulcherrima, enzyme-hydrolysed starch in the form of glucose syrup was selected as fermentation feedstock.
In batch fermentations on the 2-L scale, a lipid concentration of 14.6 g L−1 and productivity of 0.11 g L−1 h−1 were achieved, which was confirmed at 50 L (15.8 g L−1; 0.10 g L−1 h−1). The maximum lipid production rate was 0.33 g L−1 h−1 (daily average), but the substrate uptake rate decreased with oligosaccharide chain length. To produce 1 kg of dry yeast biomass containing up to 43% (w/w) lipids, 5.2 kg of the glucose syrup was required, with a lipid yield of up to 0.21 g g−1 consumed saccharides. In semi-continuous operation, for the first time, an oleaginous yeast was cultured for over 2 months with a relatively stable lipid production rate (around 0.08 g L−1 h−1) and fatty acid profile (degree of fatty acid saturation around 27.6% w/w), and without contamination. On the 250-L scale, comparable results were observed, culminating in the generation of nearly 10 kg lipids with a lipid productivity of 0.10 g L−1 h−1.
The results establish the importance of M. pulcherrima for industrial biotechnology and its suitability to commercially produce a food-grade oil. Further improvements in the productivity are required to make M. pulcherrima lipid production an industrial reality, particularly when longer-chain saccharides are involved.
The benefits of microbial oils for supplying the food and fuel oil markets, such as their potential sustainability, have been highlighted many times [1,2,3]. Despite enormous potential, to date only a few oleaginous yeasts have been cultured at the pilot scale and above [4,5,6,7,8,9,10,11,12], largely because no suitable host has been found that can produce bulk oils close to market price or with a composition qualifying as high-value niche oils, and because suitable equipment has been lacking. Consequently, complex techno-economic studies are often based on results from laboratory-scale fermentations [13, 14], limiting the significance of those findings. Moreover, the majority of those pilot-scale experiments are run in batch or fed-batch operation, additionally limiting data for other promising operation modes such as continuous or semi-continuous operation [3, 15]. En route to commercialisation, it is key to show that a suitable oleaginous yeast is scalable to underline its industrial attractiveness [10].
It has been shown that Metschnikowia pulcherrima is a very promising oleaginous yeast for commercial lipid production, growing on a variety of low-cost substrates in non-sterile environments [8, 16]. The growth on certain oligosaccharides has been established [16, 17], making complex hydrolysates from waste streams, lignocellulosic biomass or starch attractive substrates. As being food-grade, hydrolysed starch in the form of glucose syrup (GS) is potentially a suitable feedstock for the production of a food-grade microbial oil, though the economic feasibility would depend heavily on the characteristics and price of the produced oil and by-products [13, 14]. A few other oleaginous yeasts including Rhodotorula toruloides [18, 19] have been cultured on starch hydrolysates, often derived from the cassava plant [18,19,20], but also on untreated starch [21] or starch wastewater [5]. Despite starch hydrolysates qualifying as a food-grade material, the aforementioned studies target the production of biodiesel. Considering M. pulcherrima's antagonistic traits facilitating sterility [22, 23] and capability of producing an oil similar in composition to prominent vegetable oils including palm oil [3, 16], the development of a food-grade oil can be envisioned in addition to biodiesel.
Culturing oleaginous yeasts in flow operation can benefit the process performance such as through increasing the productivity or supplying a consistent product stream [3, 15]. With M. pulcherrima, a nearly twofold increase of lipid production rates has been achieved in semi-continuous and continuous operation in combination with cell densities above 100 g L−1 [3]. However, changes in the cell morphology have been observed after around 10 days of cultivation [3, 24], which presumably must be overcome to provide a consistent product stream. To highlight the industrial relevance of the yeast, three aspects were addressed in this study:
Assess the growth of M. pulcherrima on maltooligosaccharides.
Achieve a consistent product stream in a semi-continuous culture (over 60 days).
Demonstrate the scalability of M. pulcherrima (up to 250 L).
Maltooligosaccharide uptake by M. pulcherrima
To assess the growth of M. pulcherrima on the maltooligosaccharides composing GS, batch fermentations were conducted on the 2- and 50-L scale, providing an insight into the scalability of M. pulcherrima. As nitric acid has routinely been used in previous stirred tank reactor fermentations with M. pulcherrima [3, 16, 25], but phosphoric acid is safer to handle at scale, the impact of both pH control agents on the fermentation parameters was additionally investigated.
In batch fermentations on the 2-L scale, M. pulcherrima assimilated glucose (DP 1), maltose (DP 2) and maltotriose (DP 3), the latter two being broken down simultaneously after glucose depletion suggesting that a similar metabolic pathway is involved (Fig. 1). The amylolytic activity of M. pulcherrima isolates has been demonstrated previously [26]. The corresponding enzyme is unlikely an α-amylase, but instead the breakdown is potentially facilitated intracellularly by an α-glucosidase (maltase) such as with Saccharomyces cerevisiae [26, 27]. On the 24% (w/v) GS supplied, the yeast grew to a dry cell weight (DCW) of 37.8 g L−1 and produced lipids up to a concentration of 14.6 g L−1 and yield of 0.18 g g−1 consumed saccharides. The uptake of the produced glycerol (≤ 1.0 g L−1) was favoured over maltose assimilation, and no additional glycerol was detected upon metabolisation of DP 2 and DP 3 (Fig. 1a). Arabitol, another polyol produced by M. pulcherrima [3, 25], was produced in lower quantities (≤ 0.2 g L−1) and present throughout the fermentation.
Batch cultivation of Metschnikowia pulcherrima on glucose syrup on the 2-L scale. a Profiles of dry cell weight, glycerol and saccharide concentrations, b saccharide uptake rates, and c lipid production rate and lipid content in stirred tank reactor fermentations of an evolved M. pulcherrima strain on glucose syrup and yeast extract (duplicate, mean ± standard error). Dashed lines and empty symbols: nitric acid pH regulation; solid lines and filled symbols: phosphoric acid pH regulation. For clarity, error bars are omitted where rates are displayed. DP degree of polymerisation
Saccharide chain length had a distinct effect on the fermentation kinetics, with maximum saccharide uptake rates decreasing from 2.0 (DP 1) to 0.8 (DP 2) and 0.5 g L−1 h−1 (DP 3), consequently decreasing biomass production rates over time (Fig. 1). With amylolytic enzymes typically exhibiting increased kinetics with increasing maltooligosaccharide chain length [28], reaction rates are presumably limited by the transport efficiencies [27]. Consequently, the yeast achieved maximum biomass and lipid production rates of 0.98 and 0.31 g L−1 h−1 (average over 6.8 h), respectively, during the growth on glucose. These reaction rates are higher than those reported during batch growth on glucose in nitrogen-limited broth (0.72 and 0.25 g L−1 h−1, respectively [3]), but this was impacted by the higher frequency of sampling and process parameters influencing fermentation kinetics such as the higher amount of yeast extract supplied initially [25] and the use of phosphoric acid for pH control (Fig. 1).
Indeed, compared to using phosphoric acid, nitric acid pH control led to a higher lipid content (Fig. 1c). Whilst similar lipid concentrations were obtained with both acids, the use of phosphoric acid led to advanced fermentation kinetics (Fig. 1a) and was therefore used in subsequent fermentations in this study.
On the 50-L scale, an 18% (w/w) higher DCW was achieved compared to the 2-L scale, but the lower lipid content (34.3% w/w) meant only a slightly higher lipid concentration of 15.8 g L−1 was obtained (Fig. 2, Table 1). Such behaviour has also been observed in the scale-up with Rhodotorula diobovata from 7 to 150 L [6] and is potentially due to the differences in reactor design. Remarkably, the highest yet reported lipid yield of M. pulcherrima was achieved, amounting to 0.21 g g−1 consumed saccharides. This is considerably higher compared to growth on synthetic media including glucose (0.15 g g−1 [3]) or glycerol (0.10 g g−1 [8]) as carbon sources. Maximum lipid production (0.33 g L−1 h−1) and DP 1 to 3 saccharide uptake (1.8, 0.7 and 0.4 g L−1 h−1) rates were very similar to those at the 2-L scale. M. pulcherrima may also be able to break down the long-chain oligosaccharides (Fig. 2b). However, due to inefficient uptake of long-chain saccharides, the substrate utilisation was limited to 0.19 g g−1 (Table 1).
Batch cultivation of Metschnikowia pulcherrima on glucose syrup on the 50-L scale. a Profiles of dry cell weight, lipid and saccharide concentrations, and b signals obtained through high-performance anion-exchange chromatography of fermentation samples, in stirred tank reactor fermentation of an evolved M. pulcherrima strain on glucose syrup and yeast extract (singlicate). After 5 days, a second inoculum was added to the reactor to promote oligosaccharide uptake. DP degree of polymerisation
Table 1 Results from batch and semi-continuous cultivations of M. pulcherrima on glucose syrup
Overall the excellent capability of M. pulcherrima to break down maltooligosaccharides, particularly up to DP 3, was demonstrated, achieving a remarkably high yield on the consumed saccharides. However, reaction rates decreased with chain length, and this imposes a trade-off between productivity and substrate utilisation. This is particularly important for a semi-continuous process, in which the dilution rate may be adjusted accordingly.
Steady-state semi-continuous cultivation on the 2-L scale
To illustrate the productivity/yield trade-off, three semi-continuous fermentations were set up with a dilution rate (D) of 0.21 d−1 and one fermentation with D = 0.14 d−1. In two of the former (duplicate experiments) and the latter, additional preculture was added to the vessel together with the feed. Through this, it was attempted to mitigate the negative influence of the small cell formation typically observed in M. pulcherrima cultures after around 10 days [3, 24]. The overall goal was to achieve a steady output of lipids with a consistent composition.
When fermenting at D = 0.21 d−1, the maximum biomass and lipid production rates were 0.81 and 0.25 g L−1 h−1 (daily average), respectively, an increase of up to 43% (w/w) from the initial batch (Fig. 3). The maximum biomass and lipid concentrations were 43.4 and 14.3 g L−1, respectively (Table 1). The small standard error between the duplicates demonstrates excellent repeatability for experiments with this yeast in stirred tank reactors, even in semi-continuous processing for over 3 weeks cultivation. Due to the high frequency of the feed, the yeast largely grew on glucose, with oligosaccharides accumulating in the broth (Additional file 1: Fig. S1). Therefore, compared to applying a lower dilution rate (D = 0.14 d−1), the biomass and lipid productivity were higher, but the substrate utilisation lower (Table 1).
Semi-continuous cultivation of Metschnikowia pulcherrima on glucose syrup at a high dilution rate on the 2-L scale. a Profiles of dry cell weight and lipid concentration of an evolved M. pulcherrima strain cultured semi-continuously in stirred tank reactors at a dilution rate D = 0.21 d−1 on glucose syrup and yeast extract (singlicate), and b when additionally preculture was added with every feed (duplicate, mean ± standard error). After 22 days the dilution rate was switched to D = 0.14 d−1 (singlicate)
The formation of small cells, evident in the abruptly dropping average cell size, occurred around Day 10 (Additional file 1: Figs. S2, S3). This was also observed in high-density flow cultures of M. pulcherrima on synthetic glucose medium [3]. Interestingly, the addition of fresh cells with every feed did not make a notable difference to this phenomenon. Possible reasons are the accumulation of certain nutrients or metabolites affecting the cell morphology of M. pulcherrima [3, 24]. This abrupt change in population dynamics did not majorly affect the DCW; rather, a general DCW decrease was observed (Fig. 3). Without the addition of preculture, this decrease was steadier, whereas with cell addition it was only notable towards the end of a 5-day consecutive broth exchange. Consequently, through the addition of preculture, the biomass productivity until Day 22 could be increased by 4.7 ± 1.2% (w/w). A similar decrease in biomass production and lipid content has been observed when culturing Rhodotorula glutinis semi-continuously on palm oil mill effluent [30]. With a broth exchange every 2 days, the lipid content in that study dropped distinctly at a similar time as observed with M. pulcherrima (Day 9). Whilst the similarity between the behaviours is evident, morphological changes were not reported in that study.
When fermenting at D = 0.14 d−1, the DCW, lipid content and fatty acid profiles remained considerably stable for 62 days (Fig. 4). However, the lipid content slightly dropped from approximately 33 to 26% (w/w) and the degree of fatty acid saturation from 30.0 to 25.1% (w/w). Through the lower feeding rate, a higher substrate uptake and hence, substrate utilisation was achieved compared to fermenting at D = 0.21 d−1 (Table 1). Indeed, the yeast grew on maltose and maltotriose (Fig. 4), but to achieve full conversion of these compounds an even lower dilution rate, for example, D = 0.07 d−1 (broth exchange every 7 days), would be required. Biomass and lipid productivities were held at around 0.26 and 0.08 g L−1 h−1, respectively, with the final lipid yield being 0.14 g g−1 consumed saccharides. Over time, the produced oil contained increasing C18:1 and C18:2 fatty acids at the expense of C16:0 (Fig. 4), which has been observed with M. pulcherrima previously [3], and is similar to other oleaginous yeasts [6]. Overall, through fermenting at a lower dilution rate the formation of small cells could not be avoided (Additional file 1: Fig. S4), but a drop in biomass productivity did not occur for over 2 months cultivation (Fig. 4). Remarkably, the lipid-rich cells grew to a diameter of up to 17.0 μm and the culture was not contaminated.
Semi-continuous cultivation of Metschnikowia pulcherrima on glucose syrup at a lower dilution rate on the 2-L scale. a Profiles of dry cell weight, lipid and saccharide concentrations, and b fatty acid profile and degree of fatty acid saturation in semi-continuous cultivations of an evolved M. pulcherrima strain in stirred tank reactors at a dilution rate D = 0.14 d−1 on glucose syrup and yeast extract (singlicate). Additional preculture was added with every feed
When reducing the dilution rate from D = 0.21 to 0.14 d−1 (on Day 22), the previously dropping biomass production recovered (Fig. 3). This indicates that the higher dilution rate leads to the wash-out of this relatively slow-growing yeast. However, the instant recovery also means that it is possible to increase the productivity for short periods of time by switching to a more frequent broth exchange, for instance to react to market fluctuations or outages of other equipment (Table 1). A drop in lipid content with increasing dilution rate has been observed with other oleaginous yeasts [30, 31].
Glycerol and arabitol have been proposed to serve as osmolytes with M. pulcherrima [3] and as such could have been expected in semi-continuous fermentations where saccharide concentrations > 300 g L−1 occurred (please see "Organism and media" and "Fermentation" sections). However, under the experimental conditions herein, these compounds were only detected extracellularly when the yeast grew on glucose (Fig. 4). Secreted glycerol was re-assimilated before maltose assimilation and thereafter only detected in small quantities (≤ 1.7 g L−1). It may be that the glycerol metabolism is carbon-source specific, but more likely any leached glycerol is re-assimilated by the yeast before it can be detected in the broth. Moreover, because the saccharides were split across more than six compounds and the oligosaccharides have higher molecular weights, the water activity was always ≥ 0.98.
Lipids from semi-continuous cultivation at the 250-L scale
On the 250-L scale, the excellent scalability of M. pulcherrima as determined on the 50-L scale (Table 1), was to be further demonstrated in semi-continuous culture. The goal was to achieve high productivities during a relatively short production time (16 days), wherefore a dilution rate of 0.21 d−1 was chosen. Consequently, the yeast mainly grew on glucose, with larger saccharides accumulating in the broth (Fig. 5). Maximum DCW, lipid content and concentration were 40.0 g L−1, 32.6% (w/w) and 11.6 g L−1, respectively. The lipid content remained stable throughout the cultivation with an average of 29.0% (w/w) from Day 3. On Day 10, at a lipid content of 28.6% (w/w), the dried yeast also contained 9.8% (w/w) crude protein, of which 81.2% were amino acids (Additional file 1: Table S1). Of these, 7.3% were lysine and 1.6% methionine, amino acids typically required in increased quantities in animal feed [32]. Whilst specifically the high lysine content is promising for use in animal feed, for instance as a soy protein substitute [33], the pepsin digestibility was low (50.6%).
Semi-continuous cultivation of Metschnikowia pulcherrima on glucose syrup on the 250-L scale. a Profiles of dry cell weight, lipid concentration and lipid productivity (up to the corresponding time), and b profiles of glucose and maltose concentration, and the daily biomass production rate in semi-continuous stirred tank reactor fermentation at a dilution rate D = 0.21 d−1 of an evolved M. pulcherrima strain on glucose syrup and yeast extract (singlicate). Additional preculture was added with every feed
Scale comparison and productivity
In this study, M. pulcherrima was cultured in batch and semi-continuous operation on the 2-, 50- and 250-L scale (Table 1). Pilot-scale fermentations are often conducted to gather information on the scalability of an organism, such as in this study, and to determine scale-up criteria for further scale-up [10]. Exemplary scale-up criteria are constant power input, oxygen transfer coefficient or geometric similarity [29], though often the vessels are not designed to meet several criteria [34]. In this study, a constant gas volumetric flow rate was used as single scale-up criterion and linear scale-up parameters kept constant as base for scale comparison [29].
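To make the single scale-up criterion concrete, the following minimal sketch (Python; not part of the original study) converts the constant gas volumetric flow rate of 0.5 vvm used throughout (see the "Fermentation" section) into absolute aeration rates at the three nominal working volumes.

# Illustrative sketch only: constant vvm (volume of gas per volume of broth per minute)
# as the single scale-up criterion; 0.5 vvm and the working volumes are taken from the Methods.
def aeration_rate(vvm, working_volume_l):
    """Absolute air flow in L min^-1 for a given vvm and working volume in L."""
    return vvm * working_volume_l

for v_w in (2, 50, 250):
    print(f"{v_w:4d} L working volume: {aeration_rate(0.5, v_w):6.1f} L min^-1 air")
# Expected output: 1.0, 25.0 and 125.0 L min^-1, respectively.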
Remarkably, throughout the different scales reported, M. pulcherrima performed similarly well with respect to kinetic and yield parameters (Table 1). Although the lipid content was slightly compromised on the pilot scale, higher maximum lipid production rates were achieved. This remarkable scalability becomes even more apparent in the similar lipid concentration profiles (Fig. 6). Potentially, the small differences in the results could be further reduced by increasing the geometric similarity between the reactor systems [10, 29].
The scalability of Metschnikowia pulcherrima. Comparison of the lipid concentration in M. pulcherrima cultures when grown batch-wise or semi-continuously on glucose syrup on the 2-, 50- or 250-L scale in stirred tank reactors. A constant gas volumetric flow rate was used as scale-up criterion and linear scale-up parameters kept constant for each batch and semi-continuous operation as base for scale comparison [29]
Despite the promising results at the 250-L scale, only a total of 9.8 kg oil was produced in semi-continuous cultivation, equivalent to a lipid productivity of 0.10 g L−1 h−1 (Fig. 5). It is generally recognised that for commercial lipid production productivities above 1 g L−1 h−1 are likely required [13, 14]. Therefore, whilst it was demonstrated that M. pulcherrima is a robust, and importantly, scalable organism, its key issue remains the low lipid productivity, particularly when longer chain saccharides are involved. A combination of strategies is therefore required to achieve lipid productivities suitable for commercial production [13]. This includes fermentation at high cell densities, through which M. pulcherrima has already been shown to achieve lipid productivities nearly double of those herein (0.18 g L−1 h−1) [3]. Through increased micronutrient supplementation, the lipid productivity in batch fermentation could be further increased to 0.29 g L−1 h−1 [25]. And finally, genetic modification is required for this very promising oleaginous yeast to attract further industrial relevance. An example is set with Yarrowia lipolytica, with which a lipid productivity of 0.92 g L−1 h−1, an approx. 11.5-fold increase from the wild type has been achieved in fed-batch operation after substantial genetic engineering [35].
Hydrolysed starch was shown to be a suitable feedstock for M. pulcherrima, with the lipid yields with respect to the consumed saccharides considerably exceeding those using any other feedstock to date, including glucose and glycerol. To decrease process cost, the suitability of starchy wastes such as cassava pulp or starch wastewater could be investigated, though they might be unsuitable for producing a food-grade oil. Additionally, the oligosaccharide breakdown capacity and rates require improvement. The results demonstrate that M. pulcherrima is an oleaginous yeast suitable for continued operation, with a reasonably steady lipid production rate, fatty acid composition of the produced oil and no contamination in fermentation over 2 months. Excellent repeatability in lipid production parameters has been demonstrated in semi-continuous operation over 3 weeks. Finally, M. pulcherrima has proven itself as scalable oleaginous yeast, with superior biomass and lipid concentrations as well as lipid production rates achieved at the 50- and 250-L scale compared to the 2-L scale. The kinetics of this promising yeast require improvement, but through a combination of high-density cultivation, media supplementation and potentially genetic engineering and/or directed evolution, it is envisaged that lipid productivities required for commercial production can be achieved. These exciting results are valuable for techno-economic analysis of microbial lipid production at this scale and provide further credibility to the emergence of M. pulcherrima as industrially relevant yeast.
Organism and media
Chemicals were purchased from Sigma-Aldrich unless noted otherwise. All fermentation equipment was sterilised and media autoclaved at 121 °C for 20 min prior to use. For the fermentation experiments, M. pulcherrima strain NCYC 4331 (National Collection of Yeast Cultures, Norfolk, UK), which has been evolved towards increased fermentation inhibitor tolerance [3], was used. The strain was kept as 20% (v/v) glycerol stock at −80 °C. Precultures were prepared in 0.1- to 5-L Erlenmeyer (shake) flasks with 20% (v/v) working volume using soy–malt broth (SMB: soy peptone 30 g L−1; malt extract 25 g L−1; in deionised water; pH 5 with 6 M HCl), inoculated with 0.15% (v/v) defrosted M. pulcherrima glycerol stock and incubated at 25 °C and 180 rpm (Innova 4300, New Brunswick Scientific) for 24 h. The primary carbon source was food-grade (confectioner's) glucose syrup (GS), obtained from the enzymatic hydrolysis of starch (HH Industries Ltd, UK). It was selected as carbon source since it is food-grade, suitable for handling at scale and contains a range of oligosaccharides, mimicking other complex feedstocks such as lignocellulosic hydrolysates. The macro- and micronutrients were supplied through yeast extract (YE). In this respect, it has been shown that the lipid content and productivity of M. pulcherrima can be increased using minimal medium with a high yeast extract content [8, 25].
Table 2 Characteristics of the glucose syrup used in this investigation
In this study, the batch and semi-continuous cultivations were conducted using different nutrient and inoculation conditions due to limitations in equipment available at scale. However, both operation modes have been compared in detail elsewhere [3]. The batch fermentation medium consisted of 24% (w/v) GS and 3 g L−1 YE and was inoculated with 4% (v/v) preculture. The initial batch fermentation medium for semi-continuous cultures was prepared as 24% (w/v) GS and 4.5 g L−1 YE and inoculated with 0.8% (v/v) preculture. The feed medium consisted of 40% (w/v) GS and 5 g L−1 YE. At the 2-L scale, deionised water was used for media preparation and at the 50- and 250-L scale, tap water. The pH of the media was adjusted to pH 4 as indicated in the following "Fermentation" section.
Fermentation
Fermentation experiments were conducted in stirred tank reactors with a total volume of 2 L (2 L working volume; 2× Rushton impeller, micro-sparger 40 μm; Electrolab), 70 L (50 L; 2× Rushton impeller, ring sparger; Applikon) or 350 L (250 L; 2× Rushton impeller, ring sparger; Bioprocess Technology). The latter two are located at the BEACON Biorefining Centre of Excellence in Aberystwyth (UK). As base for scale comparison, the linear scale-up parameters temperature, pH, dissolved oxygen (DO), nutrient and inoculation conditions as well as the dilution rate (if applicable) were kept constant for each batch (2 and 50 L working volume) and semi-continuous (2 and 250 L) operation [29], and the same pH control agents (NaOH, H3PO4) were used throughout the scales. A constant gas volumetric flow rate (vvm) was used as single scale-up criterion [29] and the agitation rate was set to provide a constant DO. To this end, the fermentations were controlled at 20 °C, pH 4 (2 M NaOH, 1 M H3PO4 or HNO3) and DO 50% (0.5 vvm, 100–500 rpm)—conditions which have been shown suitable for maximum lipid yield and productivity [25]. Nitric acid was only used for pH control in 2-L batch fermentations to establish the suitability of using phosphoric acid for pH control, as in previous studies with M. pulcherrima nitric acid has routinely been used [3, 16, 25].
Batch fermentations were run for 7 days. After 5 days, another 4% (v/v) preculture was added to promote further oligosaccharide uptake. Sampling on the 2-L scale (around 5 mL) took place twice/day until glucose consumption, thereafter once/day, and on the 50-L scale (around 20 mL) three times/day (except on weekends once/day). Semi-continuous fermentations were initially started as a batch. The dilution rate D (in d−1) was calculated as:
$$D = \bar{\dot{V}} /V_{\text{w}} .$$
In this equation \(V_{\text{w}}\) is the working volume (in L), and \(\bar{\dot{V}}\) the average broth exchange per day (in L d−1). Three feeding regimes with different dilution rates and preculture addition patterns were used on the 2-L scale. The dilution rates and exchange volumes were chosen based on previous semi-continuous fermentations with M. pulcherrima [3] and its maximum substrate uptake rates under different nutrient conditions [25] as well on the glucose syrup feedstock (Table 1). The higher dilution rates (0.21 d−1) were applied to increase the productivity and the lower dilution rates (0.14 d−1) to increase the substrate utilisation.
1. D = 0.21 d−1 and additional preculture: On Day 3, 10 and 17, the broth was removed until 51.2% (v/v) of the working volume and 48% (v/v) feed medium as well as 0.8% (v/v) preculture were added. On Day 6, 13 and 20 as well as their three subsequent days, the broth was removed until 75.6% (v/v) and 24% (v/v) feed medium as well as 0.4% (v/v) preculture were added.
2. D = 0.21 d−1 and no additional preculture: On Day 3, 10 and 17, the broth was removed until 52% (v/v) of the working volume and 48% (v/v) feed medium added. On Day 6, 13 and 20, as well as their three subsequent days, the broth was removed until 76% (v/v) and 24% (v/v) feed medium added.
3. D = 0.14 d−1 and additional preculture: On Day 5, the broth was removed until 51.2% (v/v) of the working volume and 48% (v/v) feed medium as well as 0.8% (v/v) preculture were added. This was continued alternately every 4 or 3 days (i.e. Day 9, 12, 16, etc.) up to a total run time of 62 days.
After 22 days, feeding regimes 1 and 2 were switched to feeding regime 3 (D = 0.14 d−1) up to a total run time of 34 and 45 days, respectively. In the case of regime 2, no additional preculture was added (i.e. broth removal until 52% v/v instead of 51.2% v/v). On the 250-L scale, feeding regime 1 was used with 16 days run time. Sampling on the 2-L scale (around 5 mL) took place once/day and on the 250-L scale (around 20 mL) four times/day (except once/day on weekends). The removed broth was spun down with two parallel CEPA Z41 centrifuges at 2 L min−1 inlet flux, the biomass lyophilised and its composition analysed.
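As a cross-check of the quoted dilution rates, the following minimal sketch (Python; a simplified reconstruction, not code from the original study) averages the broth fraction exchanged per day over one repeating cycle of the feeding regimes described above.

# Illustrative reconstruction of the feeding regimes above; fractions refer to the working volume.
def dilution_rate(exchanged_fractions, cycle_days):
    """Average exchanged broth fraction per day, i.e. D in d^-1."""
    return sum(exchanged_fractions) / cycle_days

# Regime 1: one 48.8% (v/v) exchange plus four 24.4% (v/v) exchanges per 7-day cycle
print(round(dilution_rate([0.488] + 4 * [0.244], 7), 2))  # ~0.21 d^-1
# Regime 3: one 48.8% (v/v) exchange alternately every 4 or 3 days, i.e. two per 7 days
print(round(dilution_rate(2 * [0.488], 7), 2))            # ~0.14 d^-1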
Metschnikowia pulcherrima is antagonistic to bacterial and fungal growth [22, 23], but possible bacterial contamination was visually assessed on micrographs and fungal/yeast contamination through plating approx. 10 μL culture out on iron-supplemented malt extract agar plates (MEA: agar 15 g L−1; malt extract 30 g L−1; mycological peptone 5 g L−1; plus 0.02 mg L−1 FeCl3) and evaluating the redness of the colonies after 3 days incubation at 25 °C [22]. Metschnikowia pulcherrima colonies can be differentiated through producing the red pigment pulcherrimin [22, 23] (Additional file 1: Fig. S5).
Yeast growth was assessed through the optical density (OD600) and DCW of the fermentation broth, the latter determined via centrifugation and overnight lyophilisation of the pellet from a known broth volume [16]. Cell size analysis was conducted according to reported procedures [24]. Briefly, images of cell culture, diluted to yield approximately 100 cells/image, were taken with an EVOS XL Cell Imaging System. The image was then processed with GIMP and Image J, where also the cell area was determined. This process was repeated until more than 300 cells were analysed. The lipid content of the dried biomass was determined with an adapted Bligh and Dyer [36] method, in which 40 to 80 mg dried cells were disrupted in 10 mL 6 M HCl at 80 °C for 1 h and the lipids extracted with an equal volume of chloroform/methanol (1:1 v/v) [16]. The fatty acid composition of the extracted lipids was ascertained through transesterification in methanol–sulphuric acid (1% v/v) at 90 °C for 2 h, extraction of the fatty acid methyl esters with hexane and subsequent gas chromatography [16]. The analysis of dried cell composition including protein analysis was performed by AB Agri (UK) according to standard procedures [33]. Briefly, moisture was determined through drying at 105 °C to constant weight, the crude protein (N × 6.25) by the Dumas [37] method, ash through incineration in a muffle furnace at 580 °C for 8 h, crude fibre through a fibre analyser (ANKOM 220, ANKOM), ether extract through Soxhlet extraction with diethyl ether, amino acids (AAs) through an amino acid analyser (AAA 500, INGOS), and the pepsin digestibility by the AOAC Method 971.09 using 0.02% pepsin.
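To illustrate how the gravimetric and gas chromatography data translate into the quantities reported in the Results, the following minimal sketch (Python) assumes that the lipid content is the extracted lipid mass per dry cell mass and that the degree of fatty acid saturation is the summed mass fraction of saturated fatty acids; the numbers and the fatty acid profile are invented for illustration, not measured values.

# Illustrative sketch only; the input values below are invented, not measured data.
def lipid_content(lipid_mass_mg, dry_cell_mass_mg):
    """Lipid content in % (w/w) of the dry biomass."""
    return 100.0 * lipid_mass_mg / dry_cell_mass_mg

def saturation_degree(fatty_acid_profile):
    """Summed share (% w/w) of saturated fatty acids such as C16:0 and C18:0."""
    return sum(share for name, share in fatty_acid_profile.items() if name.endswith(":0"))

profile = {"C16:0": 24.0, "C16:1": 5.0, "C18:0": 4.0, "C18:1": 45.0, "C18:2": 22.0}
print(lipid_content(27.4, 80.0))      # 34.25 % (w/w)
print(saturation_degree(profile))     # 28.0 % (w/w)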
The saccharides and metabolites in the fermentation broth were quantified via high-performance liquid chromatography (HPLC) using an ion-exclusion column (RHM-Monosaccharide H + (8%), Phenomenex) [3, 16]. Oligosaccharides with a degree of polymerisation (DP) of 3 to 7 were purchased for HPLC calibration from Dextra Laboratories (UK). Those with DP ≥ 7 had the same retention time, wherefore their concentration was estimated with the DP 7 standard. The dextrose equivalent (DE) was calculated as:
$${\text{DE }} = 100 \times \mathop \sum \limits_{n} \left( {x_{n} \times 180/\left( {180 \times n - 18 \times \left( {n - 1} \right)} \right)} \right).$$
In this equation, \(x_{n}\) is the mass fraction of the saccharide with a DP of n. Qualitatively, the composition of saccharides was assessed using high-performance anion exchange chromatography with pulsed amperometric detection, employing a 250 mm × 4 mm CarboPac PA-100 column (Dionex) with the detailed conditions described elsewhere [38]. The water activities of the broth were calculated using Van't Hoff, Raoult–Lewis and Ross equations as derived previously [3], taking into account the solubilised saccharides [39]. Concentrations in the broth were not rectified according to water evaporation or sampling.
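The dextrose equivalent formula above maps directly onto a measured saccharide distribution; the following minimal sketch (Python) applies it to an invented mass-fraction profile purely to show the arithmetic and does not use the actual Table 2 composition.

# Illustrative sketch only: the mass fractions x_n are invented, not the Table 2 values.
def dextrose_equivalent(mass_fractions):
    """mass_fractions maps the degree of polymerisation n to the mass fraction x_n (summing to 1)."""
    return 100.0 * sum(
        x_n * 180.0 / (180.0 * n - 18.0 * (n - 1)) for n, x_n in mass_fractions.items()
    )

example = {1: 0.15, 2: 0.20, 3: 0.15, 4: 0.10, 5: 0.10, 6: 0.05, 7: 0.25}
print(round(dextrose_equivalent(example), 1))  # ~39.6 for this illustrative profile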
The saccharide uptake rate rS (in g L−1 h−1) was calculated as:
$$r_{\text{S}} = \left( {S_{1} - S_{2} } \right)/\left( {t_{2} - t_{1} } \right).$$
In this equation, S1 and S2 are the saccharide concentrations (in g L−1) at the consecutive sampling times t1 and t2 (in h), respectively. The biomass production rate rX (in g L−1 h−1) was calculated as:
$$r_{\text{X}} = \left( {X_{2} - X_{1} } \right)/\left( {t_{2} - t_{1} } \right).$$
Here, X1 and X2 are the DCW (in g L−1) at the consecutive sampling times t1 and t2 (in h), respectively. The lipid production rate rL (in g L−1 h−1) was calculated accordingly, with X substituted by the corresponding lipid concentration L (in g L−1). The biomass productivity PX (in g L−1 h−1) was calculated as:
$$P_{\text{X}} = X_{\text{t}} /t_{\text{f}} .$$
In this equation, Xt is the total DCW produced (in g L−1) until the fermentation run time tf (in h). For batch processes, tf is the fermentation time at maximum DCW, and for semi-continuous processes, the total fermentation time, unless indicated otherwise. The lipid productivity PL (in g L−1 h−1) was calculated accordingly, with Xt substituted by the total lipids produced Lt (in g L−1). The lipid yield YL (in g g−1) was calculated as:
$$Y_{\text{L}} = L_{\text{t}} /S_{\text{t}} .$$
Here, St is the total consumed saccharide concentration (in g L−1). The substrate utilisation UX/GS (in g g−1) was calculated as:
$$U_{{{\text{X}}/{\text{GS}}}} = X_{\text{t}} /{\text{GS}}_{\text{t}} .$$
In this equation, GSt is the total amount of glucose syrup supplied (in g L−1).
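Taken together, the kinetic and yield definitions above can be evaluated from the sampling time series in a few lines; the following minimal sketch (Python) uses invented sample values of the same order as the 2-L batch results solely to demonstrate the calculations.

# Illustrative sketch only: all sample values below are invented.
def rate(c1, c2, t1, t2):
    """Point rate between consecutive samples, e.g. r_X = (X2 - X1)/(t2 - t1) in g L^-1 h^-1."""
    return (c2 - c1) / (t2 - t1)

t1, t2 = 24.0, 48.0            # sampling times (h)
X1, X2 = 10.0, 22.0            # dry cell weight (g L^-1)
L1, L2 = 2.0, 4.5              # lipid concentration (g L^-1)
S1, S2 = 180.0, 140.0          # saccharide concentration (g L^-1)

r_X = rate(X1, X2, t1, t2)     # biomass production rate: 0.50 g L^-1 h^-1
r_L = rate(L1, L2, t1, t2)     # lipid production rate: ~0.10 g L^-1 h^-1
r_S = (S1 - S2) / (t2 - t1)    # saccharide uptake rate: ~1.67 g L^-1 h^-1

X_t, L_t, S_t, GS_t, t_f = 38.0, 14.6, 82.0, 240.0, 144.0
P_X = X_t / t_f                # biomass productivity: ~0.26 g L^-1 h^-1
P_L = L_t / t_f                # lipid productivity: ~0.10 g L^-1 h^-1
Y_L = L_t / S_t                # lipid yield: ~0.18 g g^-1 consumed saccharides
U_X_GS = X_t / GS_t            # substrate utilisation: ~0.16 g g^-1 supplied glucose syrup

print(r_X, r_L, r_S, P_X, P_L, Y_L, U_X_GS)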
Batch fermentations were conducted as duplicates on the 2-L and singlicate on the 50-L scale. In semi-continuous operation, the excellent repeatability of M. pulcherrima fermentation was further demonstrated through a 22-day duplicate fermentation (2-L scale, feeding regime 1), wherefore remaining 2- and 250-L semi-continuous fermentations were performed as singlicates. Errors are reported as the standard deviation in characterisation and standard error in biological experiments. For all duplicate fermentations, the standard error divided by the mean was less than 8% across all parameters depicted in Table 1.
The datasets supporting the conclusions of this article are included within the article and the additional files.
D: Dilution rate (d−1)
DCW: Dry cell weight (g L−1)
DE: Dextrose equivalent (-)
DO: Dissolved oxygen (%)
DP: Degree of polymerisation
GS: Glucose syrup
GSt: Total amount of glucose syrup supplied (g L−1)
L: Lipid concentration (g L−1)
LC: Lipid content (% w/w)
PL: Lipid productivity (g L−1 h−1)
PX: Biomass productivity (g L−1 h−1)
rL: Lipid production rate (g L−1 h−1)
rS: Saccharide uptake rate (g L−1 h−1)
rX: Biomass production rate (g L−1 h−1)
S: Saccharide concentration (g L−1)
St: Total consumed saccharide concentration (g L−1)
tf: Fermentation run time (h)
UX/GS: Substrate utilisation (g g−1)
\(\bar{\dot{V}}\): Average broth exchange per day (L d−1)
Vw: Working volume (L)
X: Dried biomass concentration (g L−1)
\(x_{n}\): Mass fraction of the saccharide with a DP of n (g g−1)
YE: Yeast extract
YL: Lipid yield (g g−1)
Beopoulos A, Nicaud JM. Yeast: a new oil producer? OCL Ol Corps Gras Lipides. 2012;19(1):22–8.
Sitepu IR, Garay LA, Sestric R, Levin D, Block DE, German JB, et al. Oleaginous yeasts for biodiesel: current and future trends in biology and production. Biotechnol Adv. 2014;32(7):1336–60.
Abeln F, Chuck CJ. Achieving a high-density oleaginous yeast culture: comparison of four processing strategies using Metschnikowia pulcherrima. Biotechnol Bioeng. 2019;116:3200–14.
Soccol CR, Dalmas Neto CJ, Soccol VT, Sydney EB, da Costa ESF, Medeiros ABP, et al. Pilot scale biodiesel production from microbial oil of Rhodosporidium toruloides DEBB 5533 using sugarcane juice: performance in diesel engine and preliminary economic study. Bioresour Technol. 2017;223:259–68.
Xue F, Gao B, Zhu Y, Zhang X, Feng W, Tan T. Pilot-scale production of microbial lipid using starch wastewater as raw material. Bioresour Technol. 2010;101(15):6092–5.
Munch G. Characterization and comparison of different oleaginous yeasts and scale-up of single-cell oil production using Rhodosporidium diobovatum. University of Manitoba; 2015.
Davies JR. Scale up of yeast oil technology. In: Kyle DJ, Ratledge C, editors. Industrial applications of single cell oils. Urbana: American Oil Chemists Society; 1992.
Santomauro F, Whiffin FM, Scott RJ, Chuck CJ. Low-cost lipid production by an oleaginous yeast cultured in non-sterile conditions using model waste resources. Biotechnol Biofuels. 2014;7(34):1–11.
Davies RJ, Holdsworth JE, Reader SL. The effect of low oxygen uptake rate on the fatty acid profile of the oleaginous yeast Apiotrichum curvatum. Appl Microbiol Biotechnol. 1990;33(5):569–73.
Xie D, Jackson EN, Zhu Q. Sustainable source of omega-3 eicosapentaenoic acid from metabolically engineered Yarrowia lipolytica: from fundamental research to commercial production. Appl Microbiol Biotechnol. 2015;99(4):1599–610.
Schmidt E. Eiweiß und Fettgewinnung über Hefe aus Sulfitablauge. Angew Chemie. 1947;59(1):16–20.
Koch R, Thomas F, Bruchmann EE. Untersuchungen über die microbiologische Fettbildung. Branntweinwirtschaft. 1949;3(5):65–7.
Koutinas AA, Chatzifragkou A, Kopsahelis N, Papanikolaou S, Kookos IK. Design and techno-economic evaluation of microbial oil production as a renewable resource for biodiesel and oleochemical production. Fuel. 2014;116:566–77.
Parsons S, Abeln F, McManus MC, Chuck CJ. Techno-economic analysis (TEA) of microbial oil production from waste resources as part of a bio-refinery concept: assessment at multiple scales under uncertainty. J Chem Technol Biotechnol. 2018;94(3):701–11.
Ykema A, Verbree EC, Kater MM, Smit H. Optimization of lipid production in the oleaginous yeast Apiotrichum curvatum in whey permeate. Appl Microbiol Biotechnol. 1988;29(2–3):211–8.
Abeln F, Fan J, Budarin V, Briers H, Parsons S, Allen MJ, et al. Lipid production through the single-step microwave hydrolysis of macroalgae using the oleaginous yeast Metschnikowia pulcherrima. Algal Res. 2019;38:101411.
Fan J, Santomauro F, Budarin VL, Whiffin F, Abeln F, Chantasuban T, et al. The additive free microwave hydrolysis of lignocellulosic biomass for fermentation to high value products. J Clean Prod. 2018;198:776–84.
Wang Q, Guo FJ, Rong YJ, Chi ZM. Lipid production from hydrolysate of cassava starch by Rhodosporidium toruloides 21167 for biodiesel making. Renew Energy. 2012;46:164–8.
Gen Q, Wang Q, Chi ZM. Direct conversion of cassava starch into single cell oil by co-cultures of the oleaginous yeast Rhodosporidium toruloides and immobilized amylases-producing yeast Saccharomycopsis fibuligera. Renew Energy. 2014;62:522–6.
Li M, Liu GL, Chi Z, Chi ZM. Single cell oil production from hydrolysate of cassava starch by marine-derived yeast Rhodotorula mucilaginosa TJY15a. Biomass Bioenergy. 2010;34(1):101–7.
Tanimura A, Takashima M, Sugita T, Endoh R, Kikukawa M, Yamaguchi S, et al. Cryptococcus terricola is a promising oleaginous yeast for biodiesel production from starch through consolidated bioprocessing. Sci Rep. 2014;4:1–6.
Oro L, Ciani M, Comitini F. Antimicrobial activity of Metschnikowia pulcherrima on wine yeasts. J Appl Microbiol. 2014;116(5):1209–17.
Sipiczki M. Metschnikowia strains isolated from botrytized grapes antagonize fungal and bacterial growth by iron depletion. Appl Environ Microbiol. 2006;72(10):6716–24.
Hicks RH, Chuck CJ, Scott RJ, Leak DJ, Henk DA. Comparison of nile red and cell size analysis for high-throughput lipid estimation within oleaginous yeast. Eur J Lipid Sci Technol. 2019;121(11):1–8.
Abeln F, Chuck CJ. The role of temperature, pH and nutrition in process development of the unique oleaginous yeast Metschnikowia pulcherrima. J Chem Technol Biotechnol. 2020;95(4):1163–72.
Strauss MLA, Jolly NP, Lambrechts MG, Van Rensburg P. Screening for the production of extracellular hydrolytic enzymes by non-Saccharomyces wine yeasts. J Appl Microbiol. 2001;91(1):182–90.
Zastrow CR, Hollatz C, de Araujo PS, Stambuk BU. Maltotriose fermentation by Saccharomyces cerevisiae. J Ind Microbiol Biotechnol. 2001;27(1):34–8.
Mot R, Verachtert H. Purification and characterization of extracellular α-amylase and glucoamylase from the yeast Candida antarctica CBS 6678. Eur J Biochem. 1987;164(3):643–54.
Yang X. Scale-up of microbial fermentation process. In: Baltz RH, Demain AL, Davies JE, Bull AT, Junker B, Katz L, et al., editors. Manual of industrial microbiology and biotechnology. 3rd ed. American Society for Microbiology; 2010.
Saenge C, Cheirsilp B, Suksaroge TT, Bourtoom T. Efficient concomitant production of lipids and carotenoids by oleaginous red yeast Rhodotorula glutinis cultured in palm oil mill effluent and application of lipids for biodiesel production. Biotechnol Bioprocess Eng. 2011;16(1):23–33.
Evans CT, Ratledge C. A comparison of the oleaginous yeast, Candida curvata, grown on different carbon sources in continuous and batch culture. Lipids. 1983;18(9):623–9.
Rosenberg HR. Amino acids in feeds, methionine and lysine supplementation of animal feeds. J Agric Food Chem. 1957;5(9):694–700.
Michalik B, Biel W, Lubowicki R, Jacyno E. Chemical composition and biological value of proteins of the yeast Yarrowia lipolytica growing on industrial glycerol. Can J Anim Sci. 2014;94(1):99–104.
Schmidt FR. Optimization and scale up of industrial fermentation processes. Appl Microbiol Biotechnol. 2005;68(4):425–35.
Qiao K, Imam Abidi SH, Liu H, Zhang H, Chakraborty S, Watson N, et al. Engineering lipid overproduction in the oleaginous yeast Yarrowia lipolytica. Metab Eng. 2015;29:56–65.
Bligh EG, Dyer WJ. A rapid method for total lipid extraction and purification. Can J Biochem Physiol. 1959;37:911–7.
Dumas A. Ann Chim. 1826;33,342.
Gallagher JA, Cairns AJ, Thomas D, Charlton A, Williams P, Turner LB. Fructan synthesis, accumulation, and polymer traits. I. Festulolium chromosome substitution lines. Front Plant Sci. 2015;6:1–10.
Miyajima K, Sawada M, Nakagaki M. Studies on aqueous solutions of saccharides. II. Viscosity B-coefficients, apparent molar volumes, and activity coefficients of d-glucose, maltose, and maltotriose in aqueous solutions. Bull Chem Soc Jpn. 1983;56(7):1954–7.
We would like to thank the BEACON team from Aberystwyth for the assistance with the pilot-scale fermentations (50 L, 250 L) and associated analytics (DCW, OD600, Dionex), namely David Thomas, Paul W. Jones, Joe Nunn, Sreenivas R. Ravella, Damon Hammond and Joe Gallagher. Moreover, we would like to thank our colleagues at AB Agri, who performed a detailed analysis of M. pulcherrima biomass.
This research has been funded by the Industrial Biotechnology Catalyst (Innovate UK, BBSRC, EPSRC) to support the translation, development and commercialisation of innovative Industrial Biotechnology processes (EP/N013522/1), H2020-MSCA-COFUND-2014, #665992, MSCA FIRE: Fellows with Industrial Research Enhancement as well as EP/L016354/1, EPSRC Centre for Doctoral Training in Sustainable Chemical Technologies.
Centre for Sustainable and Circular Technologies, University of Bath, Bath, UK
Felix Abeln
Department of Chemical Engineering, University of Bath, Bath, UK
Felix Abeln, Hadiza Auta, Luca Longanesi & Christopher J. Chuck
Department of Biology & Biochemistry, University of Bath, Bath, UK
Robert H. Hicks, Mauro Moreno-Beltrán & Daniel A. Henk
Robert H. Hicks
Hadiza Auta
Mauro Moreno-Beltrán
Luca Longanesi
Daniel A. Henk
Christopher J. Chuck
FA acquired, analysed and interpreted the majority of both the laboratory- and pilot-scale results. FA also completed the first draft of the manuscript. RHH, HA, MMB, LL acquired data on the pilot scale and revised the draft. FA, DAH, CJC conceived the study and designed the research in collaboration with RHH. FA, CJC coordinated the study. DAH, CJC acquired the funding for the work. All authors read and approved the final manuscript.
Correspondence to Christopher J. Chuck.
Additional table and figures.
Abeln, F., Hicks, R.H., Auta, H. et al. Semi-continuous pilot-scale microbial oil production with Metschnikowia pulcherrima on starch hydrolysate. Biotechnol Biofuels 13, 127 (2020). https://doi.org/10.1186/s13068-020-01756-2
Received: 01 April 2020
Hydrolysed starch
Metschnikowia pulcherrima
Microbial lipid
Oleaginous yeast
Pilot scale
Some general rules for assigning oxidation numbers: the sum of the oxidation numbers of all atoms equals the charge on the molecule or ion; the oxidation number of an ion consisting of a single atom equals its actual charge; a Group 1 element in a compound has oxidation number +1; hydrogen in a compound is usually +1, but -1 in binary metal hydrides; oxygen is usually -2, as in Na2O, MgO and H2O; and every element in its uncombined, elemental form has oxidation state 0. In hydrocyanic acid (HCN), for example, H = +1, C = +2 and N = -3.

For a main-group element, the highest oxidation number generally cannot exceed its (old-style) group number, and the minimum oxidation number is the group number minus eight. Sulphur, in group VI, therefore exhibits +VI in H2SO4 and a minimum of -II. Likewise, if an element shows oxidation states of -2, -1 and +1, its maximum oxidation state is +1. Note that this counting only works with the older main-group/transition-metal group numbering; with the modern 1-18 numbering, 10 must be subtracted from the group number of a p-block element to obtain its maximum oxidation state.

The rule does not carry over cleanly to the transition metals. Iron, for instance, has observed oxidation numbers of -4, -2, -1, 0, +1, +2, +3, +4, +5 and +6 [1], so its range is not simply "0 to +3". Roughly speaking, chemically accessible oxidation states stay within the bounds of a period: removing further electrons would mean breaking into the core shells, which is why main-group elements typically span a range of about eight oxidation states. The highest oxidation state established so far for any element is iridium(+IX), and platinum(+X) has been predicted. Finally, valence and oxidation number should not be conflated: valence refers to the absolute number of electrons gained or lost and carries no sign, which is why the signed oxidation number is now preferred.

[1] Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.), Oxford: Butterworth-Heinemann, ISBN 0080379419, p. 28.
( including boss ), boss 's boss asks not to much more research has been predicted of neutral. In the third deadliest day in American history tips on writing great answers Merge Sort Implementation for efficiency, estimated... N'T considered `` non-innocent '' AFAIK, so it does n't affect.! The ambiguity of the ambiguity of the metals in the field of chemistry maximum value of oxidation numbers N -3! Electrons for oxygen if an element either in its elemental form of main group metals and are! We remove the sign then, would it be equal to it oxidations. ( + ) much more research has been performed, you need subtract. Again, the maximum ionic charge t you capture more territory in go group V by terminology. A very stable state and removing gets all the much harder used for balancing the reaction... Metavanadate, NH4VO3 will be the source of the term valence, nowadays notations! Can you change a characters name chemically accessable oxidation states of these atoms can I get to! Two species are separated using an ion with a charge of either +2 or +3 parliamentary democracy how! For example, if an element @ user34388 my main point is that most ( known oxidation... Are shown by many nonmetals and most transition metals, e.g is ambiguous, oxidation method...
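As a quick illustration of the group-number heuristic discussed above, the following sketch computes rule-of-thumb maximum and minimum oxidation states for a few main-group nonmetals; the element table and function name are illustrative assumptions, and transition metals are deliberately excluded because the rule does not hold for them.

```python
# Rule-of-thumb estimate (not a general law): for a main-group element with
# old-style group number g, the maximum oxidation state is about +g and, for
# nonmetals, the minimum is about g - 8.  Illustrative subset only.

OLD_MAIN_GROUP = {"N": 5, "P": 5, "O": 6, "S": 6, "Cl": 7, "Br": 7, "I": 7}

def oxidation_state_range(symbol):
    group = OLD_MAIN_GROUP[symbol]
    return group - 8, group  # (minimum, maximum)

for element in ("S", "Cl", "P"):
    lo, hi = oxidation_state_range(element)
    print(f"{element}: min {lo:+d}, max {hi:+d}")   # e.g. S: min -2, max +6
```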
|
CommonCrawl
|
Asian-Australasian Journal of Animal Sciences (아세아태평양축산학회지)
Pages.1755-1762
Asian Australasian Association of Animal Production Societies (아세아태평양축산학회)
Effect of Total Mixed Ration Particle Size on Rumen pH, Chewing Activity and Performance in Dairy Cows
Schroeder, M.M. (Landmark Feeds Inc.) ;
Soita, H.W. (Department of Animal and Poultry Science, University of Saskatchewan) ;
Christensen, D.A. (Department of Animal and Poultry Science, University of Saskatchewan) ;
Khorasani, G.R. (Department of Agricultural, Food and Nutritional Science, University of Alberta) ;
Kennelly, J.J. (Department of Agricultural, Food and Nutritional Science, University of Alberta)
https://doi.org/10.5713/ajas.2003.1755
Two experiments were conducted to determine effects of particle size in total mixed ration (TMR) on performance of lactating cows. Three rumen cannulated Holstein cows were used in a 3×3 Latin square design for the metabolic experiment. The particle size of the diets was determined using the Penn State Particle Size Separator (PSPSS) and weighing the proportion of sample remaining on the top screen (19 mm diameter). The 3 treatments were short, medium or long diets (4.9, 24.2 and 27.8% of sample remaining on the top screen of the PSPSS, respectively). Nine farms in the Edmonton area were surveyed and the farms were placed into groups based on the particle size of the ration fed. The groups were short (≤6%), medium (7–12%) and long (≥13%) of sample weight remaining on the top screen of the PSPSS. Dry matter intake was greater (p=0.07) for the medium diet than the long diet in the metabolic study and resulted in a higher (p=0.07) efficiency of milk production. On the commercial farms, a significantly (p=0.002) lower milk fat percentage was observed for the long diet compared to the short diet. The results of these studies confirm that forage particle size influences milk composition and milk fat was negatively correlated to TMR particle size.
Lactation Performance;Forage Particle Size;Rumen Environment
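For illustration only, the short sketch below assigns a TMR particle-size group from the percentage of sample weight remaining on the top (19 mm) screen of the PSPSS, using the survey thresholds quoted in the abstract; the function name, the example inputs and the handling of values falling between the quoted ranges (e.g. 6–7%) are assumptions.

```python
# Minimal sketch: classify a ration by the percentage of sample weight retained
# on the top screen of the Penn State Particle Size Separator.
# Thresholds follow the farm-survey grouping (short <=6%, medium 7-12%, long >=13%);
# the treatment of boundary values between the quoted ranges is an assumption.

def particle_size_group(top_screen_pct: float) -> str:
    if top_screen_pct >= 13.0:
        return "long"
    if top_screen_pct <= 6.0:
        return "short"
    return "medium"

for pct in (5.0, 9.5, 15.0):   # illustrative survey values, not measured data
    print(pct, particle_size_group(pct))
```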
|
CommonCrawl
|
View all Nature Research journals
The importance of the surface roughness and running band area on the bottom of a stone for the curling phenomenon
Takao Kameda1,
Daiki Shikano1 nAff5,
Yasuhiro Harada2,
Satoshi Yanagi3 &
Kimiteru Sado4
Scientific Reports volume 10, Article number: 20637 (2020) Cite this article
Curling is a sport in which players deliver a cylindrical granite stone on an ice sheet in a curling hall toward a circular target located 28.35 m away. The stone gradually moves laterally, or curls, as it slides on ice. Although several papers have been published to propose a mechanism of the curling phenomenon for the last 100 years, no established theory exists on the subject, because detailed measurements on a pebbled ice surface and a curling stone sliding on ice and detailed theoretical model calculations have yet to be available. Here we show using our precise experimental data that the curl distance is primarily determined by the surface roughness and the surface area of the running band on the bottom of a stone and that the ice surface condition has smaller effects on the curl distance. We also propose a possible mechanism affecting the curling phenomena of a curling stone based on our results. We expect that our findings will form the basis of future curling theories and model calculations regarding the curling phenomenon of curling stones. Using the relation between the curl distance and the surface roughness of the running band in this study, the curl distance of a stone sliding on ice in every curling hall can be adjusted to an appropriate value by changing the surface roughness of the running band on the bottom of a stone.
Curling is a sport in which players deliver a cylindrical granite stone (about 28 cm in diameter and 18 to 19 kg in weight) on an ice sheet in a curling hall toward a circular target located 28.35 m away. The stone gradually moves laterally, or curls, as it slides on ice. The exact origin of curling is unknown, but an old curling stone engraved with the date 1511 was discovered in Scotland1,2, and two oil paintings dated 1565 by a sixteenth century Flemish painter, Pieter Bruegel the Elder (c. 1525–1569), portrayed an activity similar to curling being played on frozen ponds2,3. The first written evidence of curling appeared in 1540 when a notary in Scotland recorded in his protocol book a challenge between two persons who threw a stone on ice3. Curling in its early days was played on frozen lochs and ponds during a harsh European winter. Curling has now evolved into a popular modern sport that takes place on indoor ice rinks with the condition and temperature of the ice carefully controlled.
A curling player releases a stone with a small degree of clockwise rotation on ice, and the stone gradually curls toward the right, while a small anticlockwise rotation allows it to curl toward the left. When a player releases a stone without any rotation, the stone usually starts to rotate by itself and the trajectories are different every time. The curling phenomenon is primarily important in curling games because the amount of lateral displacement, called the curl distance, contributes to the strategy of games. Since the first scientific paper by Harrington4, several papers5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27 have been published to propose a mechanism of the curling phenomenon. However, no established theory exists on the subject, because detailed measurements on a pebbled ice surface and a curling stone sliding on ice and detailed theoretical model calculations have yet to be available.
The surface conditions of ice and a stone in curling games have distinctive features. The ice surface is not flat but has small ice pebbles attached to a flat ice surface by spraying water droplets onto the ice. The tops of the pebbles are cut off with a blade called a nipper. The bottom of a curling stone is concave at its centre, with only a running band, an annulus with an inner diameter of about 125 mm and a width of about 3−7 mm, touching the ice. These features reduce the contact area between the ice surface and the stone.
The curling behaviour of a stone sliding on ice has been explained by three models: a left–right asymmetry model (LR model, with different frictional forces on the left and right sides of a stone)4,5,6,7, a front-back asymmetry model (FB model, with different frictional forces on the front and back sides of a stone)8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24 and a pivot-slide model (PS model, where brief pivots of a stone around a point between a stone and pebbles cause the curling behaviour)25,26,27.
Because the LR models failed to explain why a curling stone curls8, the FB and the PS models have been developed. The FB models are divided into six models, in which different mechanisms have been proposed for different front and back frictional forces: a pressure difference model (a larger pressure works on the front running band than on the back running band)9,10,11, a water layer model (a water layer is assumed to be produced by frictional heating, and the layer reduces the frictional forces at the front running band)12,13,14, a snowplow model (small ice debris are formed by a stone and accumulate at the front running band)15, an evaporation-abrasion model (with an asymmetrical effect on evaporation and abrasions on the tops of pebbles and the effects of ice debris)16,17,18, a scratch-guiding model (guiding of small scratches left on the tops of pebbles by a stone)19,20,21,22 and an edge model (with different rake angles of a stone at the front and back of the stone)23,24. Recent papers discuss a possibility of the PS model25,26,27, which was originally suggested by Penner8. Scientific discussions between different modellers have been raised and published28,29,30,31,32,33,34,35. However, no established theory exists, because detailed measurements on a pebbled ice surface and a curling stone sliding on ice and detailed theoretical model calculations have yet to be available, as partially explained in review papers2,36,37.
The curl distance for a stone with a typical angular velocity (four turns during 28.35 m of movements for 23 s; this is equivalent to 1.09 rad/s = 62.6°/s) ranges from about 0.5 to 1.5 m at a circular target located 28.35 m away on ice. The differences in ice surface conditions (ice surface temperature, the number density of pebbles, and the diameter and height conditions of pebbles) and curling stones are the factors for controlling the curl distances. Thus, to determine the factors affecting the curling distances by precisely measuring these conditions, we carried out our measurements at ADVICS Tokoro curling hall in Kitami, Japan.
The room temperature at a height of 1.5 m in the ADVICS Tokoro curling hall was 5 to 7 °C, and the ice surface temperature was − 3.5 to − 2.5 °C during our experiments.
We used four instruments in this study: an automatic tracking total station (ATTS, Type S7, Nikon-Trimble Co. Ltd., Japan), a probe-type surface roughness meter (SJ-210, Mitutoyo Corporation, Japan), a small digital camera (TG-4, Olympus Corporation, Japan), and a confocal laser scanning microscope (VK-9700, Keyence Corporation, Japan).
The ATTS was used for measuring the trajectory of the centre of a curling stone sliding on ice as shown in Fig. 1a. The accuracy of the position in a static condition was ±2 mm and the average time interval for the positional data of the stone was 0.4 s. An active prism (Model T-360SL LED target, Nikon-Trimble Co. Ltd., Japan, 55 mm in diameter, 135 mm in height and 520 g) was placed on the centre of the stone with a special device shown in Fig. 1b. The prism has a cylindrical shape, instead of a usual polyhedron shape. With the cylindrical prism, we can continuously measure the positions of the stone while the stone is sliding and slowly rotating on ice.
(a) Auto-tracking total station (ATTS) for measuring the stone's position moving on ice; (b) A cylindrical target for the ATTS fixed on the centre of a stone using a special device; (c) Ice surface condition of a pebbled ice surface just after the nipper operations; and (d) A flat ice surface without pebbles specially designed for our experiments.
The probe type surface roughness meter was used for measuring the textured condition of the running band of curling stones. The resolution of the roughness height (z-direction) is 0.02 μm. The arithmetical mean of the surface roughness (Ra), defined by Eq. (1), was used to express the textured condition:
$$R_{\mathrm{a}} = \frac{1}{n}\sum_{i=1}^{n} \left| \Delta z_{i} \right|$$
where n is the total number of data points in the measuring direction with a resolution of 1.5 μm, and Δzi is a vertical height variation from the average.
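For concreteness, a minimal sketch of Eq. (1) is given below; it computes Ra from a vector of profile heights, with the example profile values invented purely for illustration (the roughness meter used in this study samples points every 1.5 μm).

```python
# Minimal sketch of Eq. (1): arithmetical mean surface roughness Ra computed
# from a measured height profile.  The profile values are placeholders.

import numpy as np

def arithmetical_mean_roughness(z: np.ndarray) -> float:
    """Ra = (1/n) * sum(|z_i - mean(z)|), heights in micrometres."""
    dz = z - z.mean()               # vertical deviation from the average height
    return float(np.mean(np.abs(dz)))

profile = np.array([0.4, -1.2, 2.1, -0.6, 1.5, -2.2, 0.3, -0.3])  # um, fake data
print(f"Ra = {arithmetical_mean_roughness(profile):.3f} um")
```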
The small digital camera was used for imaging the ice surface of the curling rink. This camera has a microscopic mode with focus stacking for taking clear microscopic photographs of the ice surfaces of curling rinks.
The laser microscope was used for measuring the shapes of pebbles. Replicated samples of pebbles were formed by UV curing resin (type NOA81, Norland Products Inc., USA) with ultraviolet light on the pebbled ice surface in the rink and used for the microscope. The accuracy of the height measurements was ±0.8 μm. Because the replica samples shrink at 1.1% in the parallel direction of the specimens and 2.7% in the vertical direction of the specimens during solidification38, we corrected them for the measurements of the replicated samples. The lower linear shrinkage ratio in the parallel direction was caused by friction between the resin and the specimens, which occurs only in the parallel direction, not in the vertical direction38. This method was originally reported in snow crystal studies38,39 and applied to this study.
We used four stones (A to D) with different surface roughness conditions. The surface roughness of the running band of a stone was adjusted using a sheet of sandpaper as follows. A sheet of sandpaper (23 by 28 cm) was secured on a flat desk with a double sided tape, and each stone was moved on the paper forth and back in a straight line by a distance of about 10 cm. This procedure was carried out four times each with a stone rotation angle of 45°. When the stone was rotated by 45°, the stone was lifted. A total of four forth-and-back movements were carried out with the same rotation angle on the same sandpaper.
We used a sheet of sandpaper of P80 grit number as a reference. We selected rougher sandpaper (P40) for A2 to increase the surface roughness of A2. We selected smoother sandpaper (P120) for C1 to slightly adjust the surface roughness of C1. For B2 and D1, we kept the original surface roughness of the running band and did not apply the above sandpaper operations. Thus, sandpapers with the following grit numbers were used for the respective running bands: P80 for A1, B1, C2, and D2; P40 for A2; and P120 for C1.
Because each stone has two running bands on its top and bottom, we measured the eight running bands of the four curling stones. We used two types of ice surface in our experiments in ADVICS Tokoro curling hall: a typical ice surface with pebbles (Fig. 1c) and a flattened ice surface (Fig. 1d). The typical ice surface was prepared by spraying water droplets onto the ice. The flat ice surface was prepared by cutting pebbles with a special cutting machine Ice King (https://iceking.ca/web/en/). In this paper, we used stone trajectory data measured on August 17 and 18 in 2019, and a total of 201 trajectory data sets were recorded on the days. All stones were released by skilled members of the curling club at Kitami Institute of Technology.
For the positional data of a curling stone measured by the ATTS, we defined that y-axis is parallel to the longitudinal direction of the ice rink, and x-axis is vertical to the y-axis. A curling stone was normally delivered along the y-axis in our experiments; however, it was difficult to perfectly deliver the stone along the y-axis. Thus, we corrected the stone's trajectory data as follows, outlined in Fig. 2.
A least-squares linear fit was applied to the first 3 to 8 positional data sets of the stone after its delivery to obtain candidate lines, because a stone moves almost linearly for a few seconds after its delivery.
The maximum distances for each data set were calculated from the lines.
The most appropriate linear line was selected with the following two criteria: the maximum distance between the lines and the positional data is less than 2 mm (measurement accuracy of the ATTS), and the linear line uses the maximum number of positional data points.
The broken line in Fig. 2 is the linear line selected with the above two criteria and expressed in Eq. (2). The open circles in Fig. 2 indicate the original positional data (xi′, yi′) of a stone, and y′ is the original direction of a stone. A tilt angle (θ) between y′ and y axes expressed in Eq. (3) is obtained as follows:
$$y^{\prime} = a x^{\prime} + b$$
$$\theta = \tan^{-1} a$$
where x′ denotes the axis normal to y′.
The x and y axes in Fig. 2 were calculated by the tilt angle θ using Eqs. (4) and (5), which are the rotation matrix equations of the coordinates. The corrected positional data of the stone (xi, yi) shown by solid circles in Fig. 2 were obtained as follows:
$$x = x^{\prime}\cos\theta + y^{\prime}\sin\theta$$
$$y = -x^{\prime}\sin\theta + y^{\prime}\cos\theta$$
where x and y are the lateral and longitudinal displacements, respectively.
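The coordinate correction above can be sketched as follows, assuming NumPy. To keep the fit numerically convenient when the delivery is almost parallel to the y-axis, this sketch regresses x′ on y′ and rotates by the negative of the resulting tilt angle, which achieves the same alignment as Eqs. (2)–(5); the fixed number of fitted points is a simplification of the adaptive 2 mm residual criterion described above.

```python
# Minimal sketch of the trajectory correction illustrated in Fig. 2: fit a line
# to the first few positions after delivery, estimate the small tilt angle of
# the delivery direction, and rotate all positions so that the delivery
# direction coincides with the rink's longitudinal (y) axis.

import numpy as np

def correct_trajectory(x_raw, y_raw, n_fit=5):
    x_raw = np.asarray(x_raw, dtype=float)
    y_raw = np.asarray(y_raw, dtype=float)
    # lateral drift per unit longitudinal travel over the early, straight part
    a, _ = np.polyfit(y_raw[:n_fit], x_raw[:n_fit], 1)
    theta = np.arctan(a)                       # tilt of the delivery direction
    # rotate so the delivery direction becomes the y-axis (cf. Eqs. (4)-(5))
    x = x_raw * np.cos(theta) - y_raw * np.sin(theta)
    y = x_raw * np.sin(theta) + y_raw * np.cos(theta)
    return x, y

# toy check: a perfectly straight delivery tilted by 2 degrees has ~zero lateral
# displacement after correction
phi = np.deg2rad(2.0)
y_toy = np.linspace(0.0, 28.35, 20)
x_toy = np.tan(phi) * y_toy
x_c, _ = correct_trajectory(x_toy, y_toy)
print(np.allclose(x_c, 0.0, atol=1e-9))  # True
```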
A method for correcting a stone's trajectories in the delivery direction of yʹ-axis to those in the direction of y-axis. The direction of y-axis is parallel to the longitudinal direction of the ice rink. Open circles are the original positions (xiʹ, yiʹ) of the centre of a stone before correction, and solid circles are the positions (xi, yi) of the centre of the stone after correction as described in the text.
We defined two types of curl distance for our experimental data. If a stone stopped within 1 m from the centre of the house, 28.35 m away, the curl distance calculated in Eq. (4) was used. If a stone stopped at more than 1 to 2 m from the centre of the house in the y-direction, the trajectories of the stone were pulled back by the distances along the y-direction, and the curl distances at 28.35 m were corrected by the x position at y = 0.
All experiments were carried out in accordance with relevant guidelines, regulations and ethics approval by ADVICS Tokoro curing hall in Kitami, Japan. Permission was individually obtained from two persons who appear in Fig. 1a.
Figure 1c shows a typical pebble on the ice in ADVICS Tokoro curling hall. The pebble is in the shape of a spherical segment, which is a cutting sphere with a pair of parallel planes. The upper flat surface was initially formed by a nipper, which is a specially designed device to make uniform heights of pebbles, and the nipper is usually used before curling games.
The diameter of the upper surface ranged from 0.36 to 2.18 mm, and the average diameter was 0.97 ± 0.63 mm for seven samples (the value following the ± sign is a standard deviation). The diameter of the lower base of pebbles ranged from 1.57 to 6.84 mm, and the average diameter was 3.58 ± 1.62 mm. The height of pebbles ranged from 0.11 to 0.16 mm, and the average height was 0.13 ± 0.02 mm. We found that the upper surface was slightly enlarged and the height was slightly shortened after stones passed on the pebbles. The average number density of pebbles ranged from 2 to 5 cm−2. Figure 1d shows the flat ice surface in ADVICS Tokoro curling hall. This ice surface was formed by an Ice king, which is usually used for removing the pebbles and levelling the ice surface on curling rinks. The flat ice surface is not used for ordinary curling games and is specially designed for our experiments.
We measured the surface roughness profiles of each running band at four places at 90° intervals of the four stones using the probe type surface roughness meter. Thus, the total of 32 surface roughness profiles of the running bands (4 stones × 2 surfaces × 4 places) were measured. Figure 3 shows examples of surface roughness profiles for the eight running bands of the four stones. The average value Ra ranges from 0.389 ± 0.099 μm to 3.106 ± 0.258 μm as shown in Fig. 3. Table 1 summarizes the surface roughness data of four stones.
Surface roughness profiles of eight running bands for four stones. Average surface roughness Ra (average value ± standard deviation) is expressed at the top right corner in each panel.
Table 1 Average surface roughness (Ra) of four stones used in this study.
Figure 4 shows typical examples of the trajectories of stone A with the running band A1 (Ra: 2.772 ± 0.195 μm) and stone B with the running band B2 (Ra: 0.526 ± 0.070 μm), which were obtained after the coordinate rotations described in Eqs. (4) and (5). The two stones were similarly rotated clockwise, by about four turns during the 28.35 m movements, which is equivalent to about 1.07 rad/s = 61.3°/s. Stone A1 is a normal curling stone used in ordinary curling games, and stone B2 is a special stone having a running band with smoother surface roughness.
Typical trajectories of stones moving on ice. Surface roughness of stones A1 and B2 are 2.772 ± 0.195 μm and 0.526 ± 0.070 μm, respectively. These trajectories were measured at ADVICS Tokoro curling hall on August 17, 2019.
We found that stone A1 moves along an almost straight trajectory within 2 m after delivery and gradually curls to the right, with a curl distance of 1.1 m at 28.35 m away. On the other hand, the curl distance of stone B2 is 0.2 m, roughly 1/5 of that of stone A1, and stone B2 moves with a complex trajectory as shown in Fig. 4b. We measured the trajectories of stone B2 several times and found them to be different every time. It seems that the trajectories of stone B2 were affected by accidental collisions with some pebbles on the ice.
Figure 5a shows the relationship between the average surface roughness Ra for different curling stones and the curl distance for an ordinary pebbled ice surface shown by solid circles and a solid line, and for a flat ice surface shown by open circles and a broken line at a typical angular velocity (about three to five turns from its delivery to stop; the initial angular velocity of the stone ranges from 50 to 80°/s). A total of 43 trajectory data sets were used in Fig. 5a,b. We found that the curl distance increases with increasing surface roughness, and the regression lines are given by
$$x_{\mathrm{p}} = 0.411\, R_{\mathrm{a}} + 0.046$$
$$x_{\mathrm{f}} = 0.453\, R_{\mathrm{a}} + 0.193$$
where xp and xf denote the curl distances for the normal pebbled ice surface and for the flat ice surface, respectively. The correlation coefficient (r) and the level of significance (p) are 0.88 (p < 0.01) for the normal pebbled ice surface, and 0.97 (p < 0.01) for the flat ice surface.
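A regression of this kind can be reproduced with a few lines of code; the sketch below fits a line and a correlation coefficient to (Ra, curl distance) pairs, where the numerical values are placeholders rather than the measured data behind Eqs. (6) and (7).

```python
# Illustrative sketch: fitting the linear relation between average surface
# roughness Ra and curl distance.  The (Ra, curl) pairs are invented.

import numpy as np

ra = np.array([0.4, 0.6, 1.5, 2.8, 3.1])           # um (placeholder values)
curl = np.array([0.20, 0.28, 0.65, 1.18, 1.33])    # m  (placeholder values)

slope, intercept = np.polyfit(ra, curl, 1)
r = np.corrcoef(ra, curl)[0, 1]
print(f"curl ~ {slope:.3f} * Ra + {intercept:.3f},  r = {r:.2f}")
```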
(a) Relationship between average surface roughness Ra of a stone and the curl distances for ordinary pebbled ice surface shown by solid circles and a solid line, and flat ice surface without any pebbles shown by open circles and a dashed line. (b) Relationship between surface roughness area (SRA) of a stone and the curl distances for ordinary pebbled ice surface shown by solid circles and a solid line, and flat ice surface without any pebbles shown by open circles and a dashed line. Error bars show the ranges of standard deviation.
The maximum difference between the curl distances xp and xf was 0.273 m for Ra = 3 μm using Eqs. (6) and (7). Because the contact area between the running band and the ice is larger on the flat ice surface, the frictional force between the stone and the ice surface increases, and the curling force on the stone appears to be larger.
Because the width of the eight running bands (d) is slightly different for each stone, we define the surface roughness area (SRA) for each running band using Eq. (8):
$$SRA = R_{\mathrm{a}}\, A = \pi R_{\mathrm{a}} \left(\overline{r}_{2}^{\,2} - \overline{r}_{1}^{\,2}\right) = \pi R_{\mathrm{a}}\, \overline{d}\,(2\overline{r}_{1} + \overline{d})$$
where A is the surface area of the running band, \(\overline{r}_{1}\) and \(\overline{r}_{2}\) are the average inner and outer radii of the running band, and \(\overline{d}\) is the average width of the running band. We measured the inner radii r1 and the widths d of a running band of each stone with a calliper at four places at 90° intervals. The value r1 ranged from 122 (A1, A2) to 126 mm (D1, D2), and the value d ranged from 3.05 ± 0.02 (D1) to 6.20 ± 0.00 mm (A2).
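A minimal sketch of Eq. (8) follows; units are chosen so that Ra in micrometres and the band dimensions in centimetres give SRA in μm cm², and the example inputs are illustrative rather than the measured stone geometry.

```python
# Minimal sketch of Eq. (8): SRA = pi * Ra * d * (2*r1 + d),
# with Ra in micrometres and r1, d in centimetres, so SRA is in um*cm^2.

import math

def surface_roughness_area(ra_um: float, r1_cm: float, d_cm: float) -> float:
    """Surface roughness area of a running band with inner radius r1 and width d."""
    return math.pi * ra_um * d_cm * (2.0 * r1_cm + d_cm)

print(f"SRA = {surface_roughness_area(2.772, 6.2, 0.62):.1f} um cm^2")  # illustrative inputs
```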
Figure 5b shows the relationship between the SRA defined by Eq. (8) and the curl distance for an ordinary pebbled ice surface shown by solid circles and a solid line, and for a flat ice surface shown by open circles and a broken line. The regression lines are given by
$$x_{\mathrm{p}} = 0.0092\, SRA + 0.116$$
$$x_{\mathrm{f}} = 0.0093\, SRA + 0.248.$$
We find that the relationship is better than the relationship in Fig. 5a; the correlation coefficients (r) and the level of significance (p) are 0.97 (p < 0.01) and 0.98 (p < 0.01). Because the SRA is corrected for the different width of the running band for each stone, the relationship between the curl distance and the surface roughness condition of running bands is better as shown in Fig. 5b.
We also find that the different ice surface condition contributes to the curl distances of 0.13 to 0.15 m for different values of SRA from 10 to 150 μm cm2. Thus, the ice surface condition has much smaller effects on the curl distance than the surface roughness condition of curling stones, because the flat ice surface condition in Fig. 1d is an unusual surface condition, and similar pebbled ice surfaces as shown in Fig. 1c are usually used in ordinary curling games.
As already mentioned, the curling distances of curling stones are different on every curling rink. The reason has been unknown to curling players and to scientists studying the curling phenomena of a stone sliding on ice. Most curling players may consider that the difference is primarily caused by ice surface conditions, including ice surface temperature, the number density of pebbles, and the diameter and height conditions of pebbles. However, our study clearly demonstrates that the curl distance is primarily determined by the surface roughness and the surface area of the running band on the bottom of a stone, and the ice surface condition has smaller effects on the curl distance. We expect that our findings will form the basis of future curling theories and model calculations regarding the curling phenomenon of curling stones and will clarify requirements of the running band on the bottom of a stone to achieve an appropriate curl distance in curling games.
Using the results of this study, ice technicians in curling halls can adjust the curl distance to 1.0 m or 1.5 m at an ordinary angular velocity (4 turns from delivery to stop) by setting the average surface roughness (Ra) of a running band to 2.32 μm for a curl distance of 1.0 m and to 3.54 μm for a curl distance of 1.5 m, according to Eq. (6). Because the SRA is 41.4 μm cm2 or 42.5 μm cm2 for these curl distances, the width of the running band should be 5.2 mm or 5.3 mm if the inner diameter of the running band is 124 mm, according to Eq. (9). Thus, the curl distance of a stone sliding on ice in every curling hall can be adjusted to an appropriate value by changing the surface roughness of the running band on the bottom of a stone. The surface ice temperature and pebbled ice surface conditions in curling rinks should be the same as described in this paper.
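As a check of the first adjustment step, the sketch below simply inverts Eq. (6) to obtain the Ra required for a target curl distance on a normal pebbled ice surface; it applies only under the ice conditions described in this paper, and the function name is an assumption.

```python
# Minimal sketch: invert Eq. (6), x_p = 0.411*Ra + 0.046, to get the average
# surface roughness Ra needed for a target curl distance on pebbled ice.

def required_ra(target_curl_m: float) -> float:
    return (target_curl_m - 0.046) / 0.411

for target in (1.0, 1.5):
    print(f"curl {target:.1f} m  ->  Ra = {required_ra(target):.2f} um")
# prints roughly 2.32 um and 3.54 um, matching the values quoted above
```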
Finally, we propose a possible frictional mechanism affecting the curling phenomena of a curling stone based on our results. Our results reveal that the amount of curl, or specifically the lateral displacements of a stone, increase with the increase of the surface roughness and the area of the running band. We consider that the increase of the surface roughness and the area causes the increase of erosion depths of pebbles, which leads to larger ploughing forces acting at the contact points between the running band and the surface of ice pebbles. Thus, we consider that inhomogeneous distribution of the ploughing forces around the running band or continuous small pivoting around the contact points possibly causes the curling phenomena of a curling stone moving on ice. These are the reasons for the FB model and the PS model, respectively.
In this paper, we clarified the following relating to the curl distance of a stone moving on ice.
The curl distance of a stone is primarily determined by the surface roughness of the running band of the stone. Using the product of the surface roughness and the area of the running band (the SRA in this paper), the relationship was improved. This shows the importance of the surface roughness and the area of the running band for the curling phenomenon of a stone moving on ice.
With the increase of the surface roughness of the running band, erosion depths at the surface of pebbles will be deeper, and larger ploughing forces are generated at the contact points between the running band and the surface of pebbles. Inhomogeneous distribution of the ploughing forces around the running band or continuous small pivoting around the contact points will possibly cause the curling phenomena of a curling stone moving on ice. The former is the reason for the FB model, and the latter is the reason for the PS model.
The curl distance of a stone sliding on ice in every curling hall can be adjusted to an appropriate value by changing the surface roughness of the running band on the bottom of a stone, under the same conditions as described in this paper for the surface ice temperature and the pebbled ice surface conditions in curling rinks.
The ice surface conditions contribute to the curling distance of a stone. However, the effect was smaller than that of the surface roughness and the SRA of a stone.
The datasets generated from this study are available from the corresponding author on reasonable request.
Ivanov, A. P. & Shuvalov, N. D. Friction in curling game. Preprints MATHMOD 2012, Vienna.
Maeno, N. Curling. In The Engineering Approach to Winter Sports (Eds. Braghin, F. et al.) 327–347 (Springer-Verlag, 2016).
World Curling Federation History of curling. https://worldcurling.org/about/history/
Harrington, L. E. An experimental study of the motion of curling stones. Proc. Trans. R. Soc. Canada 18(3), 247–259 (1924).
Denny, M. Curling rock dynamics. Can. J. Phys. 76, 295–304 (1998).
Marmo, B. A. & Blackford, J. R. Friction in the sport of curling. In 5th International Sports Engineering Conference. Vol. 1 379–385 (2004).
Tusima, K. Explanation of the curling motion of curling stone. J. Jpn. Soc. Snow Ice 73(3), 165–171 (2010) (in Japanese with English abstract).
Penner, A. P. The physics of sliding cylinders and curling rocks. Am. J. Phys. 69(3), 332–339 (2001).
Maculay, W. H. & Smith, G. E. Curling. Nature 125(3150), 408–409 (1930).
Walker, G. Mechanics of sport. Nature 140(3544), 567–568 (1937).
Johnston, J. W. The dynamics of a curling stone. Can. Aeronaut. Sp. J. 27(2), 144–160 (1981).
Shegelski, M. R. A., Niebergall, R. & Walton, M. A. The motion of a curling rock. Can. J. Phys. 74, 663–670 (1996).
Shegelski, M. R. A. The motion of a curling rock: analytical approach. Can. J. Phys. 78, 857–864 (2000).
Jensen, E. T. & Shegelski, M. R. A. The motion of curling rocks: experimental investigation and semi-phenomenological description. Can. J. Phys. 82, 791–809 (2004).
Denny, M. Curling rock dynamics: towards a realistic model. Can. J. Phys. 80(9), 791–809 (2002).
Maeno, N. Curl mechanism of a curling stone on ice pebbles. Bull. Glaciol. Res. 29, 1–6. https://doi.org/10.5331/bgr.28.1 (2010).
Maeno, N. Dynamics and curl ratio of a curling stone. Sports Eng. 17, 33–41. https://doi.org/10.1007/s12283-013-0129-8 (2014).
Maeno, N. Erratum to: Dynamics and curl ratio of a curling stone. Sports Eng. 17, 43–44. https://doi.org/10.1007/s12283-013-0131-1 (2014).
Nyberg, H., Alfredson, S., Hogmark, S. & Jacobson, S. The asymmetrical friction mechanism that puts the curl in the curling stone. Wear 301(1–2), 583–589. https://doi.org/10.1016/j.wear.2013.01.051 (2013).
Nyberg, H., Hogmark, S. & Jacobson, S. Calculated trajectories of curling stones sliding under asymmetrical friction: validation of published models. Tribol. Lett. 50(3), 379–385 (2013).
Honkanen, V., Ovaska, M., Alava, M. J., Lasse Laurson, L. & Tuononen, A. J. A surface topography analysis of the curling stone curl mechanism. Sci. Rep. https://doi.org/10.1038/s41598-018-26595-y (2018).
Penner, A. R. A scratch-guide model for the motion of a curling rock. Tribol. Lett. 67, 1–35 (2019).
Maeno, N. Why does a curling stone curl? A new theory: edge model. Annual Report on Snow and Ice Studies in Hokkaido 37, 9–22 (2018) (in Japanese).
Maeno, N. Mechanism of ice cutting and the edge model of the motion of a curling stone (2). Summaries of JSSI&JSSE joint conference on snow and ice research – 2019 in Yamagata, 1 (2019) (in Japanese).
Shegelski, M. R. A. & Lozowski, E. Pivot–slide model of the motion of a curling rock. Can. J. Phys. 94(12), 1305–1309. https://doi.org/10.1139/cjp-2016-0466 (2016).
Shegelski, M. R. A. & Lozowski, E. First principles pivot-slide model of the motion of a curling rock: qualitative and quantitative predictions. Cold Reg. Sci. Technol. 146, 182–186. https://doi.org/10.1016/j.coldregions.2017.10.021 (2018).
Mancini, G. & de Schoulepnikoffb, L. Improved pivot–slide model of the motion of a curling rock. Can. J. Phys. 97(12), 1301–1308. https://doi.org/10.1139/cjp-2018-0356 (2019).
Shegelski, M. R. A. & Reid, M. Comment on: Curling rock dynamics—the motion of a curling rock: inertial vs. noninertial reference frames. Can. J. Phys. 77, 903–922 (1999).
Denny, M. Reply to comment on: Curling rock dynamics—the motion of a curling rock: inertial vs. noninertial reference frames. Can. J. Phys. 77, 923–926 (1999).
Denny, M. Comment on "The motion of a curling rock". Can. J. Phys. 81, 877–881 (2003).
Shegelski, M. R. A. & Niebergall, R. Reply to the comment by M. Denny on "The motion of a curling rock". Can. J. Phys. 81, 883–888 (2003).
Shegelski, M. R. A., Jensen, E. T. & Reid, M. Comment on the asymmetrical friction mechanism that puts the curl in the curling stone. Wear 336–337, 69–71. https://doi.org/10.1016/j.wear.2015.04.015 (2015).
Shegelski, M. R. A. & Lozowski, E. Null effect of scratches made by curling rocks. Proc. Inst. Mech. Eng. P J. Sport Eng. Technol. 1−5 (2019). https://doi.org/10.1177/1754337118821575
Lozowski, E. et al. Comment on "A scratch-guide model for the motion of a curling rock". Tribol. Lett. https://doi.org/10.1007/s11249-019-1242-z (2020).
Penner, A. R. Reply to the comment on "A scratch‑guide model for the motion of a Curling rock". Tribol. Lett., 68(1) (2020). https://doi.org/10.1007/s11249-019-1243-y
Lozowski, E.P. et al. Towards a first principles model of curling ice friction and curling stone dynamics. Proceedings of the Twenty-fifth International Ocean and Polar Engineering Conference, 21–26 June, Kona, Hawaii, USA, 1730–1738 (2015).
Maeno, N. Assignments and progress of curling stone dynamics. Proc. Inst. Mech. Eng. P J. Sport Eng. Technol. (2016). https://doi.org/10.1177/1754337116647241
Tamaki, J. et al. 3D reproduction of a snow crystal by stereolithography. J. Adv. Mech. Des. Syst. Manuf. 6(6), 923–935. https://doi.org/10.1299/jamdsm.6.923 (2012).
Yanagi, S., Kubo, A., Kameda, T., Tamaki, J. & Ullah, A. M. M. S. Replication technique of snow crystal using light-curing resin and its copying accuracy. J. Jpn. Soc. Snow Ice (Seppyo) 77(1), 75–89 (2015) (in Japanese with English abstract).
We thank Dr. Tetsuya Ohashi and Profs. Fumito Masui and Hitoshi Yanagi for fruitful discussions, and Kitami Institute of Technology for financial support. We also thank Mr. S. Suzuki and his assistant at Tokoro Curling Club for their support of our experiments held in ADVICS Tokoro curling hall. Comments from anonymous reviewers were helpful in revising the manuscript.
Daiki Shikano
Present address: Utsunomiya Management Office, East Nippon Expressway Co., Ltd, 24-2 Moro, Kanuma, Tochigi, 322-0026, Japan
Snow and Ice Research Laboratory, Kitami Institute of Technology, 165 Koencho, Kitami, Hokkaido, 090-8507, Japan
Takao Kameda & Daiki Shikano
Optical Engineering Laboratory, Kitami Institute of Technology, 165 Koencho, Kitami, Hokkaido, 090-8507, Japan
Yasuhiro Harada
Hokkaido Kushiro Meiki Senior High School, 1-38-7 Aikoku-nishi, Kushiro, Hokkaido, 085-0057, Japan
Satoshi Yanagi
Professor Emeritus, Kitami Institute of Technology, 165 Koencho, Kitami, Hokkaido, 090-8507, Japan
Kimiteru Sado
Takao Kameda
T.K. performed all analyses and drafted the manuscript. D.S. took the photographs in Fig. 1 and measured the original data in Figs. 2, 3, 4 and 5. S.Y. measured the shapes of pebbles using replica samples, S.Y. and K.S. individually measured the number density of pebbles. Y.H. advised T.K. and D. S. about a correction method for stones' trajectories described in Fig. 2. D.S., Y.H., S.Y. and K.S. collaborated with T.K. on the manuscript and contributed to the revision of the manuscript.
Correspondence to Takao Kameda.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Kameda, T., Shikano, D., Harada, Y. et al. The importance of the surface roughness and running band area on the bottom of a stone for the curling phenomenon. Sci Rep 10, 20637 (2020). https://doi.org/10.1038/s41598-020-76660-8
|
CommonCrawl
|
Shi Dong*, Xingang Zhang*** and Ya Li*
Microblog Sentiment Analysis Method Based on Spectral Clustering
Abstract: This study evaluates user viewpoints on incidents of public interest using microblog sentiment analysis, a topic that has been actively researched in academia. Most existing works have adopted traditional supervised machine learning methods to analyze emotions in microblogs; however, these approaches may not be suitable for Chinese due to linguistic differences. This paper proposes a new microblog sentiment analysis method that mines the emotions associated with a popular microblog through user-based corpus building combined with spectral clustering to analyze microblog content. Experimental results on a public microblog benchmark corpus show that the proposed method can improve identification accuracy and reduce manual labeling time compared to existing methods.
Keywords: Machine Learning , RDM , Sentiment Analysis , Spectral Cluster
With the development of online social networks, people have begun to express their feelings, emotions, and attitudes online. Microblogs, such as Twitter and Sina Weibo, constitute a popular type of social networking platform. Thus, when a notable event occurs, copious amounts of news and public opinion rapidly accumulate as user data. Determining how to mine useful information from these data can provide insights for government officials and companies. Many methods have been proposed to mine big data from social media (e.g., microblogs), of which sentiment analysis (SA) is a popular approach.
SA is also referred to as view mining, a means of mining people's feelings and attitudes from texts. SA is divided into three levels: document-level [1], sentence-level [2], and aspect-level [3]. Medhat et al. [4] noted that document-level SA aims to classify an opinion document as expressing either a positive or negative opinion or sentiment. The level considers the whole document a basic information unit (i.e., addressing one topic). Sentence-level SA aims to classify the sentiment expressed in each sentence. The first step is to identify whether the sentence is subjective or objective; if it is subjective, then sentencelevel SA will determine whether the sentence expresses a positive or negative opinion. Aspect-level SA seeks to classify sentiment with respect to the specific aspects of entities. The first step is to identify the entities and their aspects. Opinion holders can offer different opinions regarding various aspects of the same entity. Because social media texts possess common characteristics, such as limited length and informal expression, SA can be challenging. Existing SA methods tend to use supervised machine learning (ML) to train the data, after which an identification model is constructed to identify sentiment data; however, these often rely on manual labels marked in advance. To improve the performance of microblog sentiment identification tasks, the present authors adopt a SA method based on semisupervised spectral clustering to analyze and identify microblog emotion text. The proposed method only requires a few artificial labels, thus reducing the workload and improving the efficiency of SA.
The rest of this paper is organized as follows. Section 2 discusses related work on SA. Section 3 proposes spectral clustering SA models. Section 4 introduces the evaluation metric. Experimental results are provided in Section 5. Finally, conclusions and future work are presented in Section 6.
2. Related Work
Because ML methods are unlikely to be affected by dictionary size and updates, an increasing number of researchers focusing on SA have turned to such methods to classify sentiment in texts. Pang et al. [5] introduced ML methods in sentiment classification and adopted a naive Bayes classification, maximum entropy classification, and support vector machines (SVMs) to classify a movie-review corpus. Paltoglou and Thelwall [6] studied document representations with SA using term weighting functions adopted from information retrieval and adapted to classification. The proposed weighting schemes were tested with several publicly available datasets, many of which repeatedly demonstrated significant increases in accuracy using these schemes compared to other state-of-the-art approaches. Jin et al. [7] proposed a novel and robust ML system for opinion mining and extraction. Xu et al. [8] used a naive Bayes and maximum entropy model to mine Chinese web news and complete automatic classification of emotion-related content, combining a special microblog dictionary with a traditional emotional dictionary. However, ML applications are limited given the need for a correctly labeled corpus as a basis for training and learning. When the difference between the object sample and the training sample is large, results are inadequate. Yin, Pei, et al. [9] mainly focused on improving sentiment classification in Chinese online reviews by analyzing and improving each step in supervised ML. The experimental results indicated that part of speech, number of features, evaluation domain, feature extraction algorithm, and SVM kernel function exerted great influences on sentiment classification, whereas the number of training corpora had little impact. Da Silva et al. [10] proposed an integrated classifier to analyze Twitter emotion by integrated naive Bayes, SVM, random forest, and logistic regression approaches; experimental results showed that the ensemble classifier could improve emotional classification accuracy. Zhang et al. [11] suggested a new SA method that combined text and image information using the similarity neighbor classification model to classify emotions and effectively improve classification accuracy. Jiang et al. [12] put forth an improved SA model constructed to manage more unlabeled data, establish emotional space, and effectively grab emotional keywords in SA to improve efficiency. Barborsa and Feng [13] presented an effective and robust sentiment detection approach for Twitter messages, using biased and noisy labels as input to build the models. Pang et al. [14] used emotional words and emoticons to filter non-marked microblogging corpus, constructed the corpus, and then trained the resultant automatic annotation corpus as a training set to build a classifier for microblog emotion-related text and classify emotional polarity in microblogging text. Liu et al. [15] constructed dictionaries of sentiment words, internet slang, and emoticons, respectively, and then implemented SA algorithms based on phrase paths and multiple characteristics of emotional tendency in microblog topics. Using microblog forwarding, comments, sharing, and similar behaviors, this algorithm could be optimized in the future based on multiple characteristics. Go et al. 
Go et al. [16] used Twitter to collect training data and performed a sentiment search to construct corpora, using emoticons to obtain positive and negative samples followed by the application of various classifiers; however, this method demonstrated poor performance across three classes (i.e., negative, positive, and neutral). Liu et al. [17] presented a novel model called the emoticon-smoothed language model to address this issue; the basic idea is to train a language model based on manually labeled data and then use noisy emoticon data for smoothing. Che et al. [18] applied a discriminative conditional random field model with special features to compress sentiment sentences automatically; experimental results highlighted the effectiveness of the feature sets used for sentiment sentence compression (Sent_Comp) and the effectiveness of the Sent_Comp model applied to aspect-based sentiment analysis. Jiang et al. [2] incorporated target-dependent features and took related tweets into consideration. Dong et al. [19] proposed a set-similarity join-based semi-supervised approach, which joins nodes in unconnected sub-graphs by cutting the flow graph with the Ford-Fulkerson algorithm into positive and negative sets to correct incorrect polarities predicted by min-cut-based semi-supervised methods. Although text-based SA has achieved notable success to this point, SA in Chinese texts still suffers from unresolved problems.
3. Spectral Clustering SA Model
Traditional research on SA generally focuses on supervised ML methods. This section introduces a new semi-supervised SA method based on spectral clustering to construct a clustering model. Spectral clustering is a clustering method based on graph theory, which makes use of the spectrum (i.e., eigenvalues) of the similarity matrix of the data to perform dimensionality reduction before clustering in fewer dimensions. The similarity matrix is provided as an input and consists of a quantitative assessment of the relative similarity of each pair of points in the dataset.
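The paper does not state how the pairwise similarities between sentences are computed; one common choice is a Gaussian (RBF) kernel over sentence feature vectors. The following minimal sketch builds such a similarity matrix (the function name and the sigma parameter are illustrative assumptions, not from the original):

```python
import numpy as np

def similarity_matrix(X, sigma=1.0):
    """Gaussian (RBF) similarity matrix from row vectors in X.

    X: (n_samples, n_features) array of sentence feature vectors.
    sigma: kernel width controlling how fast similarity decays with distance.
    """
    # Pairwise squared Euclidean distances.
    sq_norms = np.sum(X ** 2, axis=1)
    d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)   # w_ii = 0 by convention
    return W
```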
Spectral clustering sentiment analysis model.
Fig. 1 illustrates the model: a semi-supervised ML approach that differs from other semi-supervised methods by adopting spectral clustering to improve the clustering results. The classifier is a maximum entropy based sentiment sentence classifier [12]. The analysis procedure is as follows. First, training data are used as classifier input to train the maximum entropy classifier. Next, spectral clustering with k-means is employed to improve the classifier and incorporate test data into the classifier. Finally, the output is obtained as labeled sentences. This section mainly focuses on the spectral clustering method and related information.
3.1 Graph Partition
A graph partition is defined over data represented in the form of a graph G=(V,E), with V vertices and E edges, such that it is possible to partition G into smaller components with specific properties. For instance, a k-way partition divides the vertex set into k smaller components. A good partition is defined as one in which the number of edges running between separate components is small. Accordingly, the sizes of the sub-graphs should remain roughly balanced, and the total weight of the cut edges should reach a minimum. A graph partition can be considered a constrained optimization problem involving how to assign each point to a sub-graph. Unfortunately, for a variety of objective functions, the optimization problem is NP-hard. A relaxation method can help solve this problem by transforming the combinatorial optimization problem into a numerical optimization problem that can be solved in polynomial time; the continuous solution is finally thresholded to recover the discrete partition. A method similar to k-means may also be applied. The related definitions are as follows:
Given an undirected weighted graph G(V,E), let G be a graph with V vertices and E edges; if an edge belongs to a sub-graph, then both vertices of the edge are included in that sub-graph. Let an edge in E connect two distinct endpoints i and j, with its weight denoted as wi,j. For an undirected graph, wi,j = wj,i and wi,i = 0. The quality of a graph partition is measured by the cut, defined as the sum of the weights of all edges whose endpoints do not lie in the same sub-graph (i.e., the loss function of the partition plan, which is intended to be as small as possible). This paper takes an undirected graph as an example; assume the original undirected graph G is divided into G1 and G2, the cut is denoted as
[TeX:] $$\operatorname { cut } \left( G _ { 1 } , G _ { 2 } \right) = \sum _ { i \in G _ { 1 } , j \in G _ { 2 } } w _ { i , j }$$
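As an illustration of Eq. (1), the cut value can be computed directly from the weight matrix; the sketch below uses a small hypothetical graph and assumes W is a symmetric NumPy array:

```python
import numpy as np

def cut_value(W, g1, g2):
    """Sum of edge weights crossing between vertex sets g1 and g2 (Eq. (1))."""
    return W[np.ix_(np.asarray(g1), np.asarray(g2))].sum()

# Usage: 4 vertices split into {0, 1} and {2, 3}.
W = np.array([[0, 2, 1, 0],
              [2, 0, 0, 3],
              [1, 0, 0, 4],
              [0, 3, 4, 0]], dtype=float)
print(cut_value(W, [0, 1], [2, 3]))  # 1 + 0 + 0 + 3 = 4
```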
Laplacian matrix
Assume undirected graph G is divided into two sub-graphs, G1 and G2. The number of vertices of G is n = |V|, and q is an n-dimensional indicator vector with qi = c1 if vertex i belongs to G1 and qi = c2 if vertex i belongs to G2. The cut can then be written as
[TeX:] $$\operatorname { cut } \left( G _ { 1 } , G _ { 2 } \right) = \sum _ { i \in G _ { 1 } , j \in G _ { 2 } } w _ { i , j } = \frac { \sum _ { i = 1 } ^ { n } \sum _ { j = 1 } ^ { n } w _ { i , j } \left( q _ { i } - q _ { j } \right) ^ { 2 } } { 2 \left( c _ { 1 } - c _ { 2 } \right) ^ { 2 } }, \\ \sum _ { i = 1 } ^ { n } \sum _ { j = 1 } ^ { n } w _ { i , j } \left( q _ { i } - q _ { j } \right) ^ { 2 } = \sum _ { i = 1 } ^ { n } \sum _ { j = 1 } ^ { n } w _ { i , j } \left( q _ { i } ^ { 2 } + q _ { j } ^ { 2 } \right) - \sum _ { i = 1 } ^ { n } \sum _ { j = 1 } ^ { n } 2 w _ { i , j } q _ { i } q _ { j } \\ = \sum _ { i = 1 } ^ { n } 2 q _ { i } ^ { 2 } \left( \sum _ { j = 1 } ^ { n } w _ { i , j } \right) - \sum _ { i = 1 } ^ { n } \sum _ { j = 1 } ^ { n } 2 w _ { i , j } q _ { i } q _ { j } = 2 q ^ { T } ( D - W ) q$$
where D is the diagonal matrix. Diagonal elements are defined as follows:
[TeX:] $$D _ { i , i } = \sum _ { j = 1 } ^ { n } w _ { i , j }$$
where W is the weight matrix, with wi,j = wj,i and wi,i = 0. L is the Laplacian matrix, defined as L=D-W. From there, we obtain the following:
[TeX:] $$q ^ { T } L q = \frac { 1 } { 2 } \sum _ { i = 1 } ^ { n } \sum _ { j = 1 } ^ { n } w _ { i , j } \left( q _ { i } - q _ { j } \right) ^ { 2 }$$
If all edge weights are non-negative, then qTLq ≥ 0, indicating the Laplacian matrix is a positive semi-definite matrix. The smallest eigenvalue of L is 0, and the corresponding eigenvector is [1, 1, ..., 1]T. Hence, if an undirected graph G is partitioned into two sub-graphs such that one is G itself and the other is empty, the cut is 0. Thus, we obtain the following Eq. (5):
[TeX:] $$\operatorname { cut } ( G 1 , G 2 ) = \frac { q ^ { T } L q } { \left( c _ { 1 } - c _ { 2 } \right) ^ { 2 } }$$
Eq. (5) indicates that the minimum cut partition problem is converted into the minimization of the quadratic function qTLq; that is, the problem of seeking discrete values is relaxed into one over continuous real values. The Laplacian matrix can well represent a graph: any such matrix corresponds to an undirected graph with non-negative weights, and the Laplacian matrix should meet the following conditions:
1. L is a symmetric, positive semi-definite matrix, which ensures that all eigenvalues are greater than or equal to 0;
2. For a connected graph, the matrix L has a unique eigenvalue of 0 with corresponding eigenvector [1,1,...,1]T, reflecting the trivial graph partition in which one sub-graph contains all vertices of the graph and the other sub-graph is empty.
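The identity in Eq. (4) and the resulting positive semi-definiteness can be checked numerically; a minimal sketch with a random symmetric weight matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric non-negative weight matrix with zero diagonal.
A = rng.random((5, 5))
W = np.triu(A, 1) + np.triu(A, 1).T
D = np.diag(W.sum(axis=1))           # degree matrix, Eq. (3)
L = D - W                            # Laplacian matrix

q = rng.random(5)
lhs = q @ L @ q
rhs = 0.5 * sum(W[i, j] * (q[i] - q[j]) ** 2
                for i in range(5) for j in range(5))
print(np.isclose(lhs, rhs))          # True: Eq. (4) holds, so q^T L q >= 0
```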
3.2 Partition Method
Several methods are available for graph partitioning, including minimum cut, ratio cut, and normalized cut; this paper uses the normalized cut method, which weights each sub-graph by the sum of the degrees of its vertices. Let d1 be the degree sum of G1, defined as
[TeX:] $$d _ { 1 } = \sum _ { i \in G _ { 1 } } d _ { i }$$
Let d2 be the degree sum of G2, defined as
[TeX:] $$d _ { 2 } = \sum _ { i \in G _ { 2 } } d _ { i }$$
Then, the objective function is
[TeX:] $$o b j = \operatorname { cut } \left( G _ { 1 } , G _ { 2 } \right) * \left( \frac { 1 } { d _ { 1 } } + \frac { 1 } { d _ { 2 } } \right) = \sum _ { i \in G _ { 1 } , j \in G _ { 2 } } w _ { i , j } * \left( q _ { i } - q _ { j } \right) ^ { 2 }$$
However, relaxation of the original problem is based on the following:
[TeX:] $$\begin{array} { l l } { \min } & { q ^ { T } L q } \\ { \text { subject to } } & { q ^ { T } D e = 0 } \\ { } & { q ^ { T } D q = 1 } \end{array}$$
The generalized Rayleigh quotient is expressed as
[TeX:] $$R ( L , q ) = \frac { q ^ { T } L q } { q ^ { T } D q }$$
The problem can be converted to obtain the eigenvalues and eigenvectors in the features system:
[TeX:] $$L q = \lambda D q \\ \Leftrightarrow L q = \lambda D ^ { \frac { 1 } { 2 } } D ^ { \frac { 1 } { 2 } } q \\ \Leftrightarrow D ^ { - \frac { 1 } { 2 } } L D ^ { - \frac { 1 } { 2 } } D ^ { \frac { 1 } { 2 } } q = \lambda D ^ { \frac { 1 } { 2 } } q \\ \Leftrightarrow L ^ { \prime } q ^ { \prime } = \lambda q ^ { \prime }$$
[TeX:] $$L ^ { \prime } = D ^ { - \frac { 1 } { 2 } } L D ^ { - \frac { 1 } { 2 } } , \quad q ^ { \prime } = D ^ { \frac { 1 } { 2 } } q$$
In Eq. (12), Lq = λDq has the same eigenvalues as L'q' = λq', and the eigenvectors corresponding to these eigenvalues are related by q' = D1/2q. Therefore, after the eigenvalues and eigenvectors are obtained from Eq. (12), each eigenvector can be multiplied by D-(1/2) to recover the eigenvectors of Lq = λDq. The matrix L' = D-(1/2)LD-(1/2) constitutes the normalized Laplacian matrix.
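A small sketch of this step: the generalized eigenproblem Lq = λDq is solved through the normalized Laplacian L', and the eigenvectors are mapped back with D^(-1/2). This assumes every vertex has a positive degree; the function name is illustrative:

```python
import numpy as np

def normalized_spectral_embedding(W, k):
    """Return the k smallest eigenvalues/eigenvectors of L q = lambda D q.

    Solved via the normalized Laplacian L' = D^{-1/2} L D^{-1/2} (Eq. (12)),
    then mapped back with q = D^{-1/2} q'.
    """
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.diag(d) - W
    L_norm = D_inv_sqrt @ L @ D_inv_sqrt          # normalized Laplacian L'
    eigvals, eigvecs = np.linalg.eigh(L_norm)     # ascending eigenvalues
    q = D_inv_sqrt @ eigvecs[:, :k]               # eigenvectors of L q = lambda D q
    return eigvals[:k], q
```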
3.3 Spectral Cluster Method (SASC) Algorithm
This paper proposes an SASC algorithm. To mine sentiment sentences, graph nodes represent sentences, and sentiment patterns serve as sentiment sentence features. A maximum entropy [12]-based sentiment sentence classifier is used to predict primary polarities. Then, a flow graph of sentences can be constructed using these candidate sentences. The proposed method (a normalized spectral clustering algorithm) is described as follows:
Algorithm Input: a sample matrix S and the number of clusters k.
Algorithm Output: O.
Step 1: Establish the weight matrix W based on the sample matrix S, and compute the diagonal degree matrix D;
Step 2: Establish Laplacian matrix L;
Step 3: Use maximum entropy to predict the primary polarities, ME(L);
Step 4: Compute the k smallest eigenvalues and the corresponding eigenvectors of matrix L; the minimum eigenvalue is 0, and its corresponding eigenvector is [1, 1, ..., 1]T;
Step 5: Stack the k eigenvectors as the columns of a new matrix, whose number of rows is the number of samples and number of columns is k; dimensionality is thereby reduced from N to k;
Step 6: Use the k-means clustering algorithm to obtain k clusters;
Step 7: Compare each unlabeled sentence with the labeled sentences using the maximum entropy prediction; if the labels agree, add the unlabeled sentence to the labeled set with the same label as ME(L);
Step 8: Output the labeled sentences.
Algorithm 1:
SASC algorithm
As shown in Algorithm 1, the SASC algorithm uses maximum entropy to construct the classifier and label the sample data. Nigam et al. [20] pointed out that the maximum entropy method performs better than naive Bayes. In addition, the SASC algorithm introduces spectral clustering to cluster microblog sentiment data. Thus, only a small amount of labeled data together with a large amount of unlabeled data is required. When k-means is used to compare the clusters with the labeled data, the cluster data are labeled if the matching requirements are met. The SASC algorithm can therefore reduce the sample data needed to train the classifier and improve identification efficiency.
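A compressed sketch of Algorithm 1 is given below. It substitutes scikit-learn's LogisticRegression for the maximum entropy classifier, uses SpectralClustering with k-means label assignment for Steps 1-6, and simplifies the label-matching rule of Step 7 to a majority vote per cluster; all function and variable names are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression   # stands in for maximum entropy
from sklearn.cluster import SpectralClustering

def sasc(X_labeled, y_labeled, X_unlabeled, k=2):
    """Label unlabeled sentences when the cluster's majority label agrees
    with the classifier prediction (simplified Steps 3-7 of Algorithm 1)."""
    y_labeled = np.asarray(y_labeled)
    me = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
    X_all = np.vstack([X_labeled, X_unlabeled])
    clusters = SpectralClustering(n_clusters=k, affinity="rbf",
                                  assign_labels="kmeans").fit_predict(X_all)
    n_lab = len(y_labeled)
    new_X, new_y = [], []
    for c in range(k):
        members = np.where(clusters == c)[0]
        lab_members = members[members < n_lab]
        if len(lab_members) == 0:
            continue
        vals, counts = np.unique(y_labeled[lab_members], return_counts=True)
        cluster_label = vals[np.argmax(counts)]       # majority label of the cluster
        for idx in members[members >= n_lab]:
            x = X_all[idx]
            if me.predict(x.reshape(1, -1))[0] == cluster_label:
                new_X.append(x)                       # Step 7: labels agree
                new_y.append(cluster_label)
    return np.array(new_X), np.array(new_y)
```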
4. Performance Analysis
4.1 Evaluation for Sentiment Sentence Extraction
This paper employs the routine evaluation standard to verify the effectiveness of the proposed algorithm. The following three evaluation criteria apply:
TP (true positive): The sentiment sentence S is correctly classified as S, which is a correct classification result;
FP (false positive): The sentiment sentence outside S is misclassified as S; FPs will produce false warnings for the classification system;
FN (false negative): The sentences in S are misclassified as belonging to some other category; FNs will result in a loss of identification accuracy.
Calculation methods are as follows:
Precision: The percentage of samples classified as S that are truly in class S:
[TeX:] $$P r e c i s i o n = \frac { T P } { T P + F P }$$
Recall: The percentage of samples in class S that are correctly classified as S:
[TeX:] $$R e c a l l = \frac { T P } { T P + F N }$$
Overall accuracy: The percentage of correctly classified samples:
[TeX:] $$O v e r a l l = \frac { \sum _ { i = 1 } ^ { n } T P _ { i } } { \sum _ { i = 1 } ^ { n } \left( T P _ { i } + F P _ { i } \right) }$$
4.2 Evaluation for Correction Decision of Sentiment Sentence
Accuracy, recall, and precision values for both positive and negative tendencies serve as evaluation criteria to judge the effect of sentiment key sentence judgment. Table 1 shows the label marks of the dataset, where A and C respectively refer to the sets of sentences labeled positive and negative by the system, and B and D respectively refer to the sets of truly positive and negative sentences. Several metrics are calculated as follows:
Po-Precision: Precision of positive sentences as expressed by
[TeX:] $$P o - \text { Precision } = \frac { A \bigcap B } { A }$$
Po-Recall: Recall of positive sentences as expressed by
[TeX:] $$P o - \text {Recall} = \frac { A \bigcap B } { B }$$
Ne-Precision: Precision of negative sentences as expressed by
[TeX:] $$N e - \text {Precision} = \frac { C \bigcap D } { C }$$
Ne-Recall: Recall of negative sentences as expressed by
[TeX:] $$N e - \operatorname { Recall } = \frac { C \bigcap D } { D }$$
Overall accuracy: The percentage of correctly judged sentences as expressed by
[TeX:] $$Overall\ accuracy = \frac { ( A \bigcap B ) + ( C \bigcap D ) } { A + C }$$
Table 1. Label mark of dataset

                      Label mark    Real result
Positive sentence     A             B
Negative sentence     C             D
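Treating A-D as sets of sentence identifiers, Eqs. (17)-(21) can be computed as below; the overall-accuracy denominator is interpreted here as the total number of sentences, which is an assumption:

```python
def sentence_metrics(A, B, C, D):
    """A/C: sets labeled positive/negative by the system;
    B/D: sets of truly positive/negative sentences (Table 1)."""
    po_precision = len(A & B) / len(A)
    po_recall    = len(A & B) / len(B)
    ne_precision = len(C & D) / len(C)
    ne_recall    = len(C & D) / len(D)
    overall      = (len(A & B) + len(C & D)) / len(A | C)
    return po_precision, po_recall, ne_precision, ne_recall, overall

# Usage with sentence ids.
A, C = {1, 2, 3, 4}, {5, 6, 7}          # system labels
B, D = {1, 2, 3, 5}, {4, 6, 7}          # ground truth
print(sentence_metrics(A, B, C, D))
```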
5. Experiment Results and Analysis
5.1 Chinese Opinion Analysis Evaluation (COAE) Dataset
This study used standard data from the COAE2013 dataset and labeled data from the COAE2014 dataset. From COAE2014, 6,000 annotated sentences were randomly selected as training data; the remaining 5,000 served as test data. In the training corpus, 1,230 positive, 1,350 negative, and 3,420 neutral sentiment sentences from microblogs were marked. In addition, 1,500 sentiment samples were selected from the standard COAE2013 data by annotation processing, for a total microblog training corpus of 7,500 sentences. The dataset was then divided into 10 parts.
5.2 Sentence Extraction Results
Precision (average precision), recall (average recall), and overall accuracy (average total accuracy) were used to evaluate the performance on the 10-part dataset. The proposed SASC spectral clustering algorithm is compared with the p1 method suggested by Jiang et al. [12] and the p2 approach put forth by Dong et al. [19]. Fig. 2 indicates that the average accuracy rate, average precision, and average recall were significantly higher for the proposed method than for the other two methods; the spectral clustering model also greatly reduced the time complexity, reaching a practically applicable level. When the unlabeled dataset was larger, the other two methods did not consider optimization of the semi-supervised procedure, whereas the proposed method included an optimization mechanism in the semi-supervised procedure.
5.3 Sentence Assessment Results
Figs. 3 and 4 demonstrate that the SASC approach was the best method. A positive sentence set and a negative sentence set were chosen to experimentally evaluate the performance of the algorithm. The COAE dataset was used as input data, with a comparison to the aforementioned p1 and p2 methods. The dataset was split into 10 parts, denoted as Experiments 1, ..., 10; the experiment number en thus varies from 1 to 10. The proposed method obtained 68% precision and 52% recall for positive sentences in Experiment 8 and 69% precision and 51% recall for negative sentences in Experiment 3. The SASC algorithm thus achieved higher precision and recall than p1 and p2 across different experimental data. Fig. 5 shows the overall accuracy of the SASC method to be higher than that of the other methods; in fact, the proposed approach outperformed all baseline methods because spectral clustering optimizes the k value selection of the k-means method to obtain more accurate cluster results. The overall accuracy of the SASC method was also more even, suggesting that the proposed method is more stable than the others.
Performance evaluation of sentence extraction.
Performance evaluation in positive sentence assessment. (a) Po-precision in positive sentence and (b) Po-recall in positive sentence.
Performance evaluation in negative sentence assessment. (a) Ne-precision in negative sentence and (b) Ne-recall in negative sentence.
Overall accuracy in sentence assessment.
6. Conclusion
To improve the accuracy of sentiment classification and solve the problem of SA on the Chinese website Weibo, this paper presents an SA model based on spectral clustering in semi-supervised ML. An optimal solution can be found through the iteration process. Experimental results show that the proposed algorithm can improve identification accuracy in Chinese microblogs without increasing the network complexity of SA. The performance of the proposed algorithm approximated that of the current traditional manual annotation approach. Even so, research on ML methods in SA warrants further exploration to address unresolved issues. For example, due to excessive parameters, neural network models are prone to over-fitting during training. Future work will focus on reducing the complexity of structural models to identify a more suitable ML algorithm for SA.
The authors would like to thank the anonymous reviewers for their insightful comments that helped to improve the technical quality of this paper. This work was supported by the National Natural Science Foundation of China (Grant No. U1504602), China Postdoctoral Science Foundation (Grant No. 2015M572141), Science and Technology Plan Projects of Henan Province (Grant No. 162102310147), Henan Science and Technology Department of Basic and Advanced Technology Research Projects (No. 132300410276, 142300410339), and Education Department of Henan Province Science and Technology Key Project Funding (Grant No.14A520065).
Shi Dong
He received an M.E. degree in computer application technology from the University of Electronic Science and Technology of China in 2009 and a Ph.D. in computer application technology from Southeast University. He is an associate professor at the School of Computer Science and Technology at Zhoukou Normal University and works as a post-doctoral researcher at Huazhong University of Science and Technology. He is a member of the China Computer Federation and a visiting scholar at Washington University in St. Louis. His research interests include distributed computing, network management, and evolutionary algorithms.
Xingang Zhang
He received a Master's degree in computer application technology from Huazhong University of Science and Technology in 2010. Currently, he is an associate professor at the School of Computer and Information Technology at Nanyang Normal University. He is a senior member of the China Computer Federation. His research interests include distributed computing and computer networks.
Ya Li
He received a B.Sc. degree in computer science and technology from Northeast Normal University and an M.Sc. degree in computer application technology from Beijing Jiaotong University, China in 1995 and 2005, respectively. He is currently conducting research on computer application technologies and mobile internet technologies.
1 A. Balahur, R. Steinberger, M. Kabadjov, V. Zavarella, E. Van Der Goot, M. Halkia, B. Pouliquen, J. Belyaeva, "Sentiment analysis in the news," in Proceedings of the 7th International Conference on Language Resources and Evaluation, Valletta, Malta, 2010;pp. 2216-2220. custom:[[[https://arxiv.org/abs/1309.6202]]]
2 L. Jiang, M. Yu, M. Zhou, X. Liu, T. Zhao, "Target-dependent twitter sentiment classification," in Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, OR, 2011;pp. 151-160. custom:[[[https://dl.acm.org/citation.cfm?id=2002492]]]
3 M. Taboada, J. Brooke, M. Tofiloski, K. Voll, M. Stede, "Lexicon-based methods for sentiment analysis," Computational linguistics, 2011, vol. 37, no. 2, pp. 267-307. doi:[[[10.1162/COLI_a_00049]]]
4 W. Medhat, A. Hassan, H. Korashy, "Sentiment analysis algorithms and applications: a survey," Ain Shams Engineering Journal, 2014, vol. 5, no. 4, pp. 1093-1113. doi:[[[10.1016/j.asej.2014.04.011]]]
5 B. Pang, L. Lee, S. Vaithyanathan, "Thumbs up?: sentiment classification using machine learning techniques," in Proceedings of the ACL-02 Conference on Empirical Methods Natural Language Processing, Philadelphia, PA, 2002;pp. 79-86. doi:[[[10.3115/1118693.1118704]]]
6 G. Paltoglou, M. Thelwall, "A study of information retrieval weighting schemes for sentiment analysis," in Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, Uppsala, Sweden, 2010;pp. 1386-1395. custom:[[[https://dl.acm.org/citation.cfm?id=1858822]]]
7 W. Jin, H. H. Ho, R. K. Srihari, "OpinionMiner: a novel machine learning system for web opinion mining and extraction," in Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Paris, France, 2009;pp. 1195-1204. doi:[[[10.1145/1557019.1557148]]]
8 J. Xu, Y. X. Ding, X. L. Wang, "Sentiment classification for Chinese news using machine learning methods," Journal of Chinese Information Processing, 2007, vol. 21, no. 6, pp. 95-100. custom:[[[http://en.cnki.com.cn/Article_en/CJFDTOTAL-MESS200706016.htm]]]
9 P. Yin, H. Wang, L. Zheng, "Sentiment classification of Chinese online reviews: analysing and improving supervised machine learning," International Journal of Web Engineering and Technology, 2012, vol. 7, no. 4, pp. 381-398. doi:[[[10.1504/IJWET.2012.050968]]]
10 N. F. Da Silva, E. R. Hruschka, E. R. Hruschka, "Tweet sentiment analysis with classifier ensembles," Decision Support Systems, 2014, vol. 66, pp. 170-179. doi:[[[10.1016/j.dss.2014.07.003]]]
11 Y. Zhang, L. Shang, X. Jia, "Sentiment analysis on microblogging by integrating text and image features," in Pacific-Asia Conference on Knowledge Discovery and Data Mining. Cham: Springer, 2015;pp. 52-63. doi:[[[10.1007/978-3-319-18032-8_5]]]
12 F. Jiang, Y. Liu, H. Luan, M. Zhang, S. Ma, "Microblog sentiment analysis with emoticon space model," in Chinese National Conference on Social Media Processing. Heidelberg: Springer, 2014;pp. 76-87. custom:[[[https://link.springer.com/chapter/10.1007/978-3-662-45558-6_7]]]
13 L. Barbosa, J. Feng, "Robust sentiment detection on twitter from biased and noisy data," in Proceedings of the 23rd International Conference on Computational Linguistics: Posters, Beijing, China, 2010;pp. 36-44. custom:[[[https://dl.acm.org/citation.cfm?id=1944571]]]
14 L. Pang, S. S. Li, G. D. Zhou, "Sentiment classification method of Chinese micro-blog based on emotional knowledge," Jisuanji Gongcheng/Computer Engineering, vol. 38, no. 13, 2012.custom:[[[-]]]
15 Q. Liu, C. Feng, H. Huang, "Emotional tendency identification for micro-blog topics based on multiple characteristics," in Proceedings of the 26th Pacific Asia Conference on Language, Information, and Computation, Bali, Indonesia, 2012;pp. 280-288. custom:[[[https://www.semanticscholar.org/paper/Emotional-Tendency-Identification-for-Micro-blog-on-Liu-Feng/64ce4e850a5f1f6261d840c03d040aefb39085e2]]]
16 A. Go, L. Huang, R. Bhayani, "Twitter sentiment analysis," Stanford UniversityReport No. CS224N, 2009.. doi:[[[10.23956/ijarcsse.v7i12.493]]]
17 K. L. Liu, W. J. Li, M. Guo, "Emoticon smoothed language models for twitter sentiment analysis," in Proceedings of the 26th AAAI Conference on Artificial Intelligence, Toronto, Canada, 2012;custom:[[[https://dl.acm.org/citation.cfm?id=2900966]]]
18 W. Che, Y. Zhao, H. Guo, Z. Su, T. Liu, "Sentence compression for aspect-based sentiment analysis," IEEE/ACM Transactions on AudioSpeech, and Language Processing, , 2015, vol. 23, no. 12, pp. 2111-2124. doi:[[[10.1109/TASLP.2015.2443982]]]
19 X. Dong, Q. Zou, Y. Guan, "Set-Similarity joins based semi-supervised sentiment analysis," in Neural Information Processing. Heidelberg: Springer2012,, pp. 176-183. doi:[[[10.1007/978-3-642-34475-6_22]]]
20 K. Nigam, J. Lafferty, A. McCallum, "Using maximum entropy for text classification," in Proceedings of IJCAI-99 Workshop on Machine Learning for Information Filtering, Stockholm, Sweden, 1999;pp. 61-67. custom:[[[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.1073]]]
Received: April 26 2016
Accepted: February 1 2017
Published (Print): June 30 2018
Published (Electronic): June 30 2018
Corresponding Author: Shi Dong* ([email protected])
Shi Dong*, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China, [email protected]
Xingang Zhang***, School of Computer and Information Technology, Nanyang Normal University, Nanyang, China, [email protected]
Ya Li*, School of Computer Science and Technology, Zhoukou Normal University, Zhoukou, China, [email protected]
EURASIP Journal on Image and Video Processing
The optimally designed autoencoder network for compressed sensing
Zufan Zhang1,
Yunfeng Wu1,
Chenquan Gan ORCID: orcid.org/0000-0002-0453-56301 &
Qingyi Zhu2
EURASIP Journal on Image and Video Processing volume 2019, Article number: 56 (2019) Cite this article
Compressed sensing (CS) is a signal processing framework, which reconstructs a signal from a small set of random measurements obtained by measurement matrices. Due to the strong randomness of measurement matrices, the reconstruction performance is unstable. Additionally, current reconstruction algorithms are relatively independent of the compressed sampling process and have high time complexity. To this end, a deep learning based stacked sparse denoising autoencoder compressed sensing (SSDAE_CS) model, which mainly consists of an encoder sub-network and a decoder sub-network, is proposed and analyzed in this paper. Instead of traditional linear measurements, a multiple nonlinear measurements encoder sub-network is trained to obtain measurements. Meanwhile, a trained decoder sub-network solves the CS recovery problem by learning the structure features within the training data. Specifically, the two sub-networks are integrated into SSDAE_CS model through end-to-end training for strengthening the connection between the two processes, and their parameters are jointly trained to improve the overall performance of CS. Finally, experimental results demonstrate that the proposed method significantly outperforms state-of-the-art methods in terms of reconstruction performance, time cost, and denoising ability. Most importantly, the proposed model shows excellent reconstruction performance in the case of a few measurements.
With the increasing demand for information processing, the sampling rates and device processing speeds required by signal processing frameworks keep increasing. To reduce the cost of storage, processing, and transmission of massive information, Donoho [1] first proposed a compressed sensing (CS) method, which merges the sampling and compression steps. The traditional method samples the data uniformly and then compresses them, whereas the CS method only needs to store and transmit a few non-zero coefficients, which further reduces the time of data acquisition and the complexity of the sampling process.
The CS method has been applied successfully in many fields, such as biomedicine [2, 3], image processing [4, 5], communication [6, 7], and sensor networks [8, 9], but there are still two significant challenges in CS that need to be resolved. On the one hand, due to the strong randomness of the measurement matrix, it is difficult to realize the measurement matrix in hardware and the reconstruction performance is unstable. On the other hand, current high-performance reconstruction algorithms only take into account the recovery process, while the connection with the compressed sampling process is neglected.
For the former challenge, it is critical to design a suitable measurement matrix in the process of compressed sampling. Generally, measurement matrices are divided into random matrices and deterministic matrices. On the one hand, Gaussian [10] and Bernoulli [11] random matrices are used as the sampling matrices in most previous works because they satisfy the restricted isometry property [12] with a large probability. However, they often suffer from problems such as high computation cost, vast storage, and uncertain reconstruction quality. On the other hand, deterministic matrices have been proposed as an alternative solution to reduce the high cost of random matrices, such as polynomial-based deterministic constructions [13] and Toeplitz and circulant matrices [14]. However, the reconstruction quality of a deterministic matrix is worse than that of a random matrix. A gradient descent method [15] is designed to minimize the mutual coherence of the measurement matrix, which is described by the absolute off-diagonal elements of the corresponding Gram matrix. Gao et al. [16] designed a local structural sampling matrix for block-based CS coding of natural images by utilizing the local smoothness property of images. As previously mentioned, these measurement matrices still have some disadvantages because they are not optimally designed for the signals and neglect the structure of the signals.
For the latter challenge, the most crucial work in CS is to construct a stable reconstruction algorithm with low computational complexity and less restriction on the number of measurements to accurately recover signals from measurements. According to the volume of data, the CS reconstruction algorithms are divided into two categories: hand-designed recovery methods and data-driven recovery methods. Most of the existing algorithms can be considered "hand-designed" in the sense that they use some sort of expert knowledge, i.e., prior, about the structure of x. The hand-designed methods have three directions: convex optimization, greedy iterative, and Bayesian. Convex optimization algorithms get the approximate solution by translating the non-convex problem into a convex one, e.g., basis pursuit denoising (BPDN) algorithm [17], the minimization of total variation (TV) [18]. Greedy iterative algorithms approach gradually the original signal by selecting a local optimal solution in iteration, e.g., orthogonal matching pursuit (OMP) [19], compressive sampling matching pursuit (CoSaMP) [20], and iterative hard thresholding (IHT) [21]. Bayesian algorithms solve the sparse recovery problem by taking into account a prior knowledge of the sparse signal distribution, e.g., Bayesian via Laplace Prior (BCS-LP) [22] and Bayesian framework via Belief Propagation (BCS-BP) [23]. Unfortunately, these algorithms are too slow for many real-time applications and the potential information in training data is typically underutilized [24]. The second category is data-driven method that builds deep learning framework to solve the CS recovery problem via learning the structure within the training data. For instance, the SDA model [25] was proposed to recover structured signal by capturing statistical dependencies between the different elements of certain signals. Another reference [26] used RBM-OMP-like and DBN-OMP-like CS model, which are based on restricted Boltzmann machines and deep belief network, respectively, to model the prior distribution of the sparsity pattern of signals. Other work in this area used either DeepInverse [27] based on convolutional neural network or ReconNet [28] based on combination of convolutional and fully connected layers to solve the CS recovery problem. The data-driven methods can compete with state-of-the-art methods in terms of performance while running hundreds of times faster compared to the hand-designed methods. However, they need a lot of time and data to train model. The main reason is that these previous methods only consider the recovery process, ignoring the connection with compressed sampling process.
Noting the above discussions and previous work, this paper proposes an SSDAE_CS model based on the sparse autoencoder (SAE) [29, 30] and denoising autoencoder (DAE) [31, 32] to solve the two important issues in CS. This model mainly consists of an encoder sub-network and a decoder sub-network. Given enough training data, a neural network acts as a universal function approximator able to represent arbitrary functions. Thus, the two sub-networks are used to learn the mapping functions of the compressed sampling and the recovery process, respectively. A trained encoder sub-network, which uses multiple nonlinear measurements and is specially designed for the type of signals, is used to obtain measurements during the compressed sampling process (addressing problem one). Then, the traditional signal reconstruction algorithms are replaced with a trained decoder sub-network to recover original signals from measurements. It needs only a few matrix-vector multiplications and nonlinear mappings; hence, the proposed approach reduces the time cost of the reconstruction process. In previous CS research, the compression process was relatively independent of the recovery process. For this reason, the SSDAE_CS method integrates the compressed sampling and recovery processes into one deep learning network to strengthen their connection. Through end-to-end training, the two sub-networks can be jointly optimized to improve the overall performance of CS, but they can also be extended to different scenarios as two independent networks (addressing problem two). Finally, experimental results demonstrate that the proposed model significantly outperforms state-of-the-art methods in terms of reconstruction performance, time cost, and denoising ability. In particular, the SSDAE_CS model shows excellent signal reconstruction performance in the case of a few measurements.
The rest of this paper is organized as follows: a deep learning model for compressed sensing and the training method of model is presented in Section 2. Experiment results for the proposed method and comparisons with other CS reconstruction algorithms are performed in Section 3. Finally, Section 4 includes the conclusion of this work.
In this section, a deep learning CS model, which integrates the advantages of denoising and sparse autoencoders into CS theory, will be introduced in detail. The following notations are used throughout this paper: boldfaced capital letters such as W for matrices, small letters such as x are reserved for vectors and scalars, and italic small letters such as xi denotes the ith element of the vector. W(l) and b(l) denote the weight matrix and the bias vector associated between layer l and layer l+1, respectively. a(l) denotes the feature vector of the l hidden layer. f(·) represents the activation function, and the sigmoid function \(f(x)=\frac {1}{1+{{e}^{-x}}}\) is used as the activation function.
Overall framework of SSDAE_CS model
This paper proposes a deep learning model named stacked sparse denoising autoencoder compressed sensing, which integrates the advantages of denoising and sparse autoencoders into CS theory. Through the DAE, the model is trained to reconstruct the clean version of a corrupted input. In the SAE, sparse regularization inhibits the activity of neurons to improve the overall performance of the model. This is similar to the human brain, in which a small number of neurons are activated while most neurons are inhibited.
As discussed above, traditional CS methods consist of two steps: linear measurement sampling and a non-linear reconstruction algorithm. As shown in Fig. 1, our proposed model contains two corresponding modules: an encoder sub-network and a decoder sub-network. Instead of traditional linear measurements, a multiple nonlinear measurement encoder sub-network is trained to obtain measurements during the compressed sampling process. Then, the traditional signal reconstruction algorithms are replaced with a trained decoder sub-network to recover original signals from measurements. The framework of the proposed SSDAE_CS model, comprising the training stage and the testing stage, is illustrated in the upper part of Fig. 1. The training stage is employed to learn prior parameters for the encoder and decoder sub-networks. When trained on a set of representative signals, the network learns both a feature representation to obtain measurements and an inverse mapping to recover signals. The goal of the training stage is to learn the optimal encoder and the signal recovery decoder simultaneously. At the test stage, the test set is fed into the trained model to test the performance of the model in all aspects.
Comparison between traditional CS model and the proposed model
Encoder and decoder sub-networks
The architecture of the SSDAE_CS model, comprising five layers, is illustrated in Fig. 2. The SSDAE_CS model is a deep neural network consisting of multiple layers of basic SAE and DAE, in which the outputs of each layer are wired to the inputs of the successive layer. It is remarkable that the proposed model is robust to the input because it can reconstruct the original signals from a corrupted input. The proposed model extracts robust features through a sparsity penalty term, which penalizes and inhibits large changes in the hidden layer. In the corruption stage, the original signals are corrupted by additive white Gaussian noise \(\tilde {\mathbf {x}}=\mathbf {x}+\lambda n\), where n denotes additive Gaussian sampling noise of zero mean and unit variance, and λ denotes the degree of corruption of the signals. In the encoder sub-network, the signal can be compressed to M measurements by utilizing the multiple nonlinear measurement method. The decoder sub-network reconstructs the original signals from measurements by minimizing the reconstruction error between input and output. Finally, the two sub-networks are integrated into the SSDAE_CS model by jointly optimizing their parameters to improve the overall performance of CS.
Architecture of the SSDAE_CS Layer_5 when M=64
The encoder sub-network can be represented as a deterministic mapping Te(∙), which transforms an input \(\mathbf {x}\in {{\mathbb {R}}^{{{d}_{x}}}}\) into hidden representation space \(\mathbf {y}\in {{\mathbb {R}}^{{{d}_{y}}}}\). In the compression process of traditional CS, linear measurement y=Φx is used, but linear measurements are not optimal. In the SSDAE_CS model, multiple nonlinear measurements are applied to obtain measurements in CS, as shown in the encoding part of Fig. 2. It is found from [25] that nonlinear measurements can preserve more effective information compared to traditional linear measurements. The encoder consists of three layers, (a) an input layer with N nodes, (b) the first hidden layer with K nodes, and (c) the second hidden layer with M nodes, where N>K>M. The first hidden feature vector is the value of the first hidden layer, which receives the signals as its input in Eq. (1). The final measurement vector y is the value of the second hidden layer, which receives the first hidden feature vector as its input in Eq. (2).
$$ {{\mathbf{a}}^{(1)}}=f\left({{\mathbf{z}}^{(1)}}\right)=f\left({{\mathbf{W}}^{(1)}}\tilde{\mathbf{x}}+{{\mathbf{b}}^{(1)}}\right). $$
$$ \begin{aligned} \mathbf{y}=f\left({{\mathbf{z}}^{(2)}}\right)&=f\left({{\mathbf{W}}^{(2)}}{{\mathbf{a}}^{(1)}}+{{\mathbf{b}}^{(2)}}\right)\\ &= f\left({{\mathbf{W}}^{(2)}}f\left({{\mathbf{W}}^{(1)}}\tilde{\mathbf{x}}+{{\mathbf{b}}^{(1)}}\right)+{{\mathbf{b}}^{(2)}}\right). \end{aligned} $$
In the SSDAE_CS model, measurements are obtained by two matrix multiplications and nonlinear transformations, so this method is called multiple nonlinear measurement method. Measurement vector y can also be written as:
$$ \mathbf{y}={{T}_{e}}(\tilde{\mathbf{x}},{{\mathbf{\Omega}}_{e}}), $$
where Ωe={W(1),W(2);b(1),b(2)} denotes the set of encoded parameters and Te(∙) denotes the encoding nonlinear mapping function.
The decoder sub-network is used to map the measurement vector y back to the input space \(\mathbf {x}\in {{\mathbb {R}}^{{{d}_{x}}}}\) by capturing the feature representation in the signal reconstruction process. Among the traditional signal recovery algorithms, each iteration of the greedy or iterative algorithms includes multiple matrix-vector multiplications, which incurs a high computational cost. In this paper, a nonlinear mapping is learned from the measurements y to the original signal x by training; it needs just two matrix-vector multiplications and two nonlinear mappings. The decoder, whose nodes are symmetric with the encoder, consists of three layers: an input layer with M nodes, a first hidden layer with K nodes, and a second hidden layer with N nodes. The decode functions Eqs. (4) and (5) are used to recover the reconstructed signals \(\hat {\mathbf {x}}\) from the measurement vector y.
$$ {{\mathbf{a}}^{(3)}}=f\left({{\mathbf{z}}^{(3)}}\right)=f\left({{\mathbf{W}}^{(3)}}\mathbf{y}+{{\mathbf{b}}^{(3)}}\right). $$
$$ \begin{aligned} \hat{\mathbf{x}}=f\left({{\mathbf{z}}^{(4)}}\right)& =f\left({{\mathbf{W}}^{(4)}}{{\mathbf{a}}^{(3)}}+{{\mathbf{b}}^{(4)}}\right)\\ & = f\left({{\mathbf{W}}^{(4)}}{f\left({{\mathbf{W}}^{(3)}}\mathbf{y}+{{\mathbf{b}}^{(3)}}\right)}+{{\mathbf{b}}^{(4)}}\right). \end{aligned} $$
The reconstruction signals \(\hat {\mathbf {x}}\) can also be represented as:
$$ \hat{\mathbf{x}}={{T}_{d}}(\mathbf{y},{{\Omega }_{d}}), $$
where Ωd={W(3),W(4);b(3),b(4)} denotes the set of decoded parameters and Td(∙) denotes the decoding nonlinear mapping function.
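To make the data flow concrete, the following NumPy sketch implements the encoder of Eqs. (1)-(3) and the decoder of Eqs. (4)-(6) for the five-layer configuration with N=784, K=512, M=64; the random initialization and variable names are illustrative placeholders, not the trained parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(x_tilde, W1, b1, W2, b2):
    """Multiple nonlinear measurement, Eqs. (1)-(3): y = T_e(x_tilde)."""
    a1 = sigmoid(W1 @ x_tilde + b1)
    return sigmoid(W2 @ a1 + b2)

def decode(y, W3, b3, W4, b4):
    """Signal recovery, Eqs. (4)-(6): x_hat = T_d(y)."""
    a3 = sigmoid(W3 @ y + b3)
    return sigmoid(W4 @ a3 + b4)

# Shapes for the five-layer Layer_5 network with N=784, K=512, M=64.
N, K, M = 784, 512, 64
rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.01, (K, N)), np.zeros(K)
W2, b2 = rng.normal(0, 0.01, (M, K)), np.zeros(M)
W3, b3 = rng.normal(0, 0.01, (K, M)), np.zeros(K)
W4, b4 = rng.normal(0, 0.01, (N, K)), np.zeros(N)

x = rng.random(N)
x_tilde = x + 0.1 * rng.standard_normal(N)   # corrupted input, lambda = 0.1
x_hat = decode(encode(x_tilde, W1, b1, W2, b2), W3, b3, W4, b4)
```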
Offline training algorithm
Given enough training data, the neural networks can learn to represent arbitrary functions as universal function approximators. The major objective of this training phase is to extract the structural features of signals and learn the nonlinear mapping function of signal reconstruction. Specifically, the encoder and decoder sub-networks are integrated into SSDAE_CS model through end-to-end training for strengthening the connection between the two processes. The parameters will be updated constantly to achieve the optimal training model by reducing the loss function.
The SSDAE_CS model is a typical unsupervised learning model, in which the training set Dtrain has N signals whose label is the same as the sample, i.e., Dtrain={(x1,x1),(x2,x2),⋯,(xn,xn)}. A trained nonlinear mapping Te(∙) acts as the measurement matrix Φ to obtain the measurements y from the original signals x, and a trained inverse nonlinear mapping Td(∙) acts as the reconstruction algorithm to recover the reconstruction signals \(\hat {\mathbf {x}}\) from y in the proposed model. To ensure the reconstruction signal \(\hat {\mathbf {x}}\) close to the original signals x, the squared error is set as the error function for all data, as shown in Eq. (7):
$$ \begin{aligned} {{J}_{SDAE}}(\mathbf{W},\mathbf{b}) & =\frac{1}{N}\sum\limits_{i=1}^{N}{\left(\frac{1}{2}||{{{\hat{x}}}_{i}}-{{x}_{i}}|{{|}^{2}}\right)}\\\\ & +\frac{1}{2}\alpha \sum{||\mathbf{W}|{{|}^{2}}+}\beta \sum\limits_{j=1}{KL(\rho ||{{{\hat{\rho}}}_{j}}}). \end{aligned} $$
To prevent model overfitting, the second term limits the weight parameters W with L2 norm as a weight decay term that helps to penalize large weight. α denotes the strength of the penalty term. The third term is a sparse penalty term. \({{\hat {\rho }}_{j}}\) represents the average activation value of the j-th neuron in each batch of training set, β controls the strength of the sparsity penalty term, and ρ denotes the expected activation.
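A direct NumPy transcription of the loss in Eq. (7) is sketched below; the hyperparameter values are placeholders (only ρ=0.005 is reported later in the paper):

```python
import numpy as np

def kl_divergence(rho, rho_hat):
    """KL(rho || rho_hat_j) for the sparsity penalty, elementwise over hidden units."""
    return (rho * np.log(rho / rho_hat)
            + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

def ssdae_loss(x, x_hat, weights, hidden_acts, alpha=1e-4, beta=3.0, rho=0.005):
    """Eq. (7): reconstruction error + L2 weight decay + KL sparsity penalty.

    x, x_hat: (batch, N) original and reconstructed signals.
    weights: list of weight matrices W^(l).
    hidden_acts: (batch, n_hidden) activations of the sparsity-constrained layer.
    """
    recon = 0.5 * np.mean(np.sum((x_hat - x) ** 2, axis=1))
    decay = 0.5 * alpha * sum(np.sum(W ** 2) for W in weights)
    rho_hat = np.mean(hidden_acts, axis=0)          # average activation per unit
    sparsity = beta * np.sum(kl_divergence(rho, rho_hat))
    return recon + decay + sparsity
```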
The train goal minimizes JSDAE(W,b) to update the SSDAE_CS's weights W and biases b; the detailed training process is shown in Algorithm 1. Firstly, parameters Ωe and Ωd are randomly initialized to serve the purpose of symmetry breaking. And then the measurement vector y and the reconstruction signals \(\hat {\mathbf {x}}\) are obtained through the encoder and decoder sub-networks, respectively. Next, the loss function JSDAE(W,b) is computed by Eq. (7) and batch gradient descent algorithm is performed to compute the gradients and update Ωe and Ωd. Each iteration of the gradient descent method updates the parameters W and b by Eqs. (8) and (9), respectively.
$$ \mathbf{W}_{ij}^{(l)}:=\mathbf{W}_{ij}^{(l)}-\alpha \frac{\partial }{\partial \mathbf{W}_{ij}^{(l)}}{{J}_{SDAE}}(\mathbf{W},\mathbf{b}), $$
$$ \mathbf{b}_{i}^{(l)}:=\mathbf{b}_{i}^{(l)}-\alpha \frac{\partial }{\partial \mathbf{b}_{i}^{(l)}}{{J}_{SDAE}}(\mathbf{W},\mathbf{b}), $$
where α is the learning rate. Computing the partial derivatives is the key thing in this process. The partial derivative is given by the back-propagation algorithm:
$$ \begin{aligned} & \frac{\partial }{\partial \mathbf{W}_{ij}^{(l)}}{{J}_{SDAE}}(\mathbf{W},\mathbf{b}) \,=\,\frac{1}{n}\sum\limits_{k=1}^{n}{\frac{\partial {{J}_{SDAE}}(\mathbf{W},\mathbf{b};{{x}_{k}},{{y}_{k}})}{\partial \mathbf{W}_{ij}^{(l)}}} \text{ \,=\, }\frac{1}{n}\sum\limits_{k=1}^{n}\\&{\mathbf{a}_{j}^{(l)}\left[ \delta_{i}^{(l+1)}+\beta \left(-\frac{\rho }{{{{\hat{\rho }}}_{i}}}+\frac{1-\rho }{1-{{{\hat{\rho }}}_{i}}}\right){f}'\left(\mathbf{z}_{i}^{(l+1)}\right) \right]}+\alpha \mathbf{W}_{ij}^{(l)}, \end{aligned} $$
$$ \begin{aligned} & \frac{\partial }{\partial b_{i}^{(l)}}{{J}_{SDAE}}(\mathbf{W},\mathbf{b}) =\frac{1}{n}\sum\limits_{k=1}^{n}{\frac{\partial {{J}_{SDAE}}(\mathbf{W},\mathbf{b};{{x}_{k}},{{y}_{k}})}{\partial \mathbf{b}_{i}^{(l)}}} =\frac{1}{n}\\&\sum\limits_{k=1}^{n}{\left[ \delta_{i}^{(l+1)}+\beta \left(-\frac{\rho }{{{{\hat{\rho }}}_{i}}} +\frac{1-\rho }{1-{{{\hat{\rho }}}_{i}}}\right){f}'\left(\mathbf{z}_{i}^{(l+1)}\right) \right]}, \end{aligned} $$
where \(\delta _{^{i}}^{(l)} \,=\, \left (\sum \limits _{j=1}{\mathbf {W}_{ji}^{(l)}\delta _{j}^{(l+1)}}+\beta \left (-\frac {\rho }{{{{\hat {\rho }}}_{i}}}+\frac {1-\rho }{1-{{{\hat {\rho }}}_{i}}}\right) \right){f}'\left (\mathbf {z}_{i}^{(l+1)}\right)\) denotes error term of node i in layer l, and n denotes the number of training samples.
In this section, a series of experiments are made to evaluate the performance of the SSDAE_CS model. In the first part, performance indicator is introduced and the MNIST dataset is used for training and testing. Then, the detailed experimental results are given in the final part.
Dataset and performance indicators
The MNIST dataset, which contains 70,000 grayscale images of handwritten digits of size N=28×28, is employed for the experiments. The dataset is divided into 55,000 samples for training, 5,000 samples for validation, and 10,000 samples for testing. K denotes the number of non-zero entries in a grayscale image. It can be seen from Table 1 that the K of most grayscale images is concentrated in the range of 100–200. On average, the number of non-zero entries is about 19% of the total pixels in a grayscale image. The handwritten digit images are thus already sparse in the spatial domain; therefore, the sparse representation of CS is not necessary, i.e., Ψ=I, where I is the identity matrix.
Table 1 The distribution of non-zero entries in the MNIST dataset
Peak signal-to-noise ratio (PSNR), which is based on the error between the corresponding pixel points, is often used as a performance indicator to evaluate signal reconstruction quality in the field of image compression. The PSNR is defined as:
$$ \text{PSNR(dB)}{=}10\text{lo}{{\text{g}}_{10}}\frac{\text{peakval}^{2}}{\text{MSE}}, $$
where peakval is either specified by the user or taken from the range of the image data type (e.g., for uint8 images it is 255). Mean square error (MSE) is defined as \(\text {MSE}=\frac {1}{N}\sum \limits _{i=1}^{N}{({{{\hat {x}}}_{i}}-{{x}_{i}}}{{)}^{2}}\).
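For reference, the PSNR of Eq. (14) can be computed as follows (a minimal sketch assuming 8-bit images, so peakval=255):

```python
import numpy as np

def psnr(x, x_hat, peakval=255.0):
    """Peak signal-to-noise ratio in dB, Eq. (14); peakval=255 for uint8 images."""
    mse = np.mean((np.asarray(x_hat, float) - np.asarray(x, float)) ** 2)
    return 10.0 * np.log10(peakval ** 2 / mse)
```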
As mentioned earlier, one of the main goals is to recover signals from undersampled measurements. To verify the impact of the parameters, a series of experiments is examined to find the best parameter settings. The motivation of these experiments is to prevent parameter estimation errors from propagating into the reconstruction model during training. The number of model layers is varied from L=3 to L=9 to verify its effect on the experimental results. The number of neuron nodes per layer is set as follows: the number of nodes in the input layer and output layer is 784 (28×28) and the number of measurements ranges from 8 to 512, so the neuron nodes are set to 784, 8–512, 784 in the three-layer network; 784, 512, 8–512, 512, 784 in the five-layer network; 784, 512, 256, 8–512, 256, 512, 784 in the seven-layer network; and 784, 512, 512, 256, 8–512, 256, 512, 512, 784 in the nine-layer network.
Figure 3 reveals that the reconstruction performance is not effectively improved when the number of model layers L>5. This is because the loss function converges to a local minimum value rather than a global optimal value as the number of hidden layers increases. Therefore, a SSDAE_CS with five layers is selected as optimal testing model. The mean PSNR of SSDAE_CS is higher than that of basic autoencoder (BAE) without sparsity penalty term, as shown in Fig. 3. And it proves that sparse penalty term improves the model performance by inhibiting the activity of neurons.
The experimental results of compared BAE with SSDAE_CS
To find the optimal sparse factor, different values ρ are tested on SSDAE_CS model with different layers. Figure 4 shows the variation tendency of mean PSNR with sparse factor of hidden units. Experiments show that the model achieves optimal performance when sparse factor ρ=0.005.
The variation tendency of mean PSNR with different sparse factor
Additionally, a series of comparative experiments have been done to evaluate the performance of the proposed algorithm in the reconstruction quality and time complexity. Four comparative algorithms are selected from the two categories of reconstruction algorithms: BPDN [17] and OMP [19] based on hand-designed recovery method; DBN-OMP-like and RBM-OMP-like [26] based on data-driven recovery method.
Figure 5 illustrates the variation tendency of the mean PSNR of the proposed method and the others; it can be clearly seen that the reconstruction performance of the proposed model is significantly better than that of the other algorithms. First, the proposed model not only requires fewer measurements than conventional OMP and BPDN to achieve stable recovery but also attains higher mean PSNR values over the entire range of measurements. The main reason is that the SSDAE_CS model can obtain an optimal function approximator (decoder) by capturing the structural features of training signals during the training process. However, the reconstructed object in OMP and BPDN is a single signal, not a batch of signals; once a signal loses too much information during the compression process, OMP and BPDN cannot reconstruct it, or the quality of reconstruction is poor. Second, Fig. 5 displays that the reconstruction performance of DBN-OMP-like and RBM-OMP-like, which are based on the data-driven recovery method, is significantly better than that of OMP and BPDN. However, the SSDAE_CS model attains higher PSNR values in the range of measurements M<350 and requires fewer measurements to achieve stable recovery than DBN-OMP-like and RBM-OMP-like. The reconstruction performance of SSDAE_CS is slightly lower than that of DBN-OMP-like and RBM-OMP-like when M>350. The reason for this result is that DBN-OMP-like and RBM-OMP-like use the traditional linear measurement matrix method in the compressed sampling process and largely sever the intrinsic relationship between compressed sampling and signal reconstruction. The SSDAE_CS model, by contrast, adopts multiple nonlinear measurement methods to preserve more effective information in the compressed sampling process. Through end-to-end training, the compressed sampling and signal reconstruction processes are integrated into the proposed model to improve the overall performance of CS.
Evaluation of the reconstruction performance of the MNIST dataset in different reconstruction algorithms
Table 2 compares the running time of the decoder net of SSDAE_CS with other CS recovery algorithms. The reconstruction time of SSDAE_CS is an order of magnitude faster than OMP and BPDN. The main reason is that SSDAE_CS reconstructs a batch of signals at once, whereas the signals are reconstructed one by one in OMP and BPDN. Moreover, the decoder sub-network of SSDAE_CS just needs two matrix-vector multiplications and two nonlinear mappings, while other CS recovery algorithms need hundreds of iterations, each of which includes multiple matrix-vector multiplications. More precisely, the five-layer SSDAE_CS model requires from 812,824 to 1,329,424 parameters. Although the SSDAE_CS model takes a lot of time to train its parameters, it is still very attractive when dealing with large numbers of signals. The SSDAE_CS model spends less time than the RBM-OMP-like and DBN-OMP-like models in the reconstruction process. The reason for this result is that the RBM-OMP-like model requires 4,924,304 parameters while the DBN-OMP-like model requires only 1,847,104 parameters. RBM-OMP-like, DBN-OMP-like, and the SSDAE_CS model all use neural networks to learn how to best use the structure within the data, so they remain in the same order of magnitude.
Table 2 Average reconstruction time of MNIST testing set for different M and reconstruction algorithms
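The quoted parameter counts follow directly from the five-layer 784–512–M–512–784 architecture; a quick check (the helper name is illustrative):

```python
def ssdae_param_count(N=784, K=512, M=8):
    """Weights and biases of the five-layer SSDAE_CS model (encoder + decoder)."""
    weights = N * K + K * M + M * K + K * N
    biases = K + M + K + N
    return weights + biases

print(ssdae_param_count(M=8))     # 812,824  (minimum quoted in the text)
print(ssdae_param_count(M=512))   # 1,329,424 (maximum quoted in the text)
```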
Finally, some experiments have also been conducted to prove that the proposed model is stable and has a strong denoising ability. In Fig. 6, a visual evaluation of a reconstructed test image using the proposed CS model is presented. In these experiments, white Gaussian noise is added to the training and testing sets, and the SSDAE_CS models with different layers are retrained for experimental comparison. Figure 7 shows the tendency of the mean PSNR with the number of measurements in the SSDAE_CS model. It can be seen from Fig. 7 that the mean PSNR of the noisy SSDAE_CS model is 3–5 dB lower than that of the model without noise. To verify the effect of different coefficient noises on the signal reconstruction process of the SSDAE_CS model, comparative experiments are performed when the number of measurements is M=64, and the results are shown in Fig. 8.
Visual evaluation of the SSDAE_CS Layer_5 when λ=0.1
The variation tendency of mean PSNR in testing set when λ=0.1
The mean PSNR for different coefficient in the testing dataset when M=64
In this paper, to solve the two most important issues in CS, an SSDAE_CS model has been developed, which contains two sub-networks: an encoder sub-network and a decoder sub-network, used for compressed sampling and signal recovery, respectively. The two sub-networks are integrated into the SSDAE_CS model by jointly training their parameters to improve the overall performance of CS, but they can also be applied to different scenarios as two independent components. Simulations show that the proposed model requires fewer measurements to achieve successful reconstruction than other CS reconstruction algorithms and has good denoising performance; especially in the case of a few measurements, the performance of the proposed model is better than that of other methods. In terms of the run time of signal reconstruction, the SSDAE_CS model is also faster than other signal recovery algorithms. Considering reconstruction performance, time cost, and denoising ability, the proposed model is strongly attractive for the recovery of a large number of signals.
The above paragraph summarizes the advantages of our work, but there are still shortcomings, mainly that accurately reconstructing signals from a few measurements requires a lot of time and data for training. In further work, transfer learning, which is a convenient alternative for leveraging existing models and updating them on smaller computational platforms and target data sets [33], could be taken into account to address this issue. Additionally, there still exist compressed sensing problems for large natural images; it is worthwhile to develop convolutional methods [34] for sensing such images so as to reduce the memory footprint of the measurement matrix. Last but not least, residual learning [35] could also be introduced to further increase the depth of the network.
BAE:
Basic autoencoder
BCS-BP:
Bayesian framework via belief propagation
BCS-LP:
Bayesian via laplace prior
BPDN:
Basis pursuit denoising
CoSaMP:
Compressive sampling matching pursuit
DAE:
Denoising autoencoder
IHT:
Iterative hard thresholding
MSE:
Mean squared error
OMP:
Orthogonal matching pursuit
PSNR:
Peak signal-to-noise ratio
SAE:
Sparse autoencoder
SSDAE_CS:
Stacked sparse denoising autoencoder compressed sensing
TV:
Minimization of total variation
D. L. Donoho, Compressed sensing. IEEE Trans. Inf. Theory. 52(4), 1289–1306 (2006).
L. Weizman, Y. C Eldar, D. B Bashat, Compressed sensing for longitudinal MRI: an adaptive-weighted approach. Med. Phys.42(9), 5195–5208 (2015).
Z. Zhang, C. Wang, C. Gan, et al., Automatic modulation classification using convolutional neural network with features fusion of SPWVD and BJD. IEEE Trans. Sign. Inf. Process. Netw. https://doi.org/10.1109/TSIPN.2019.2900201.
C. Yan, H. Xie, J. Chen, et al., A fast Uyghur text detector for complex background images. IEEE Trans. Multimed.20(12), 3389–3398 (2018).
C. Yan, L. Li, C. Zhang, et al., Cross-modality bridging and knowledge transferring for image understanding. IEEE Trans. Multimed. https://doi.org/10.1109/TMM.2019.2903448.
E. Sejdic, I. Orovic, S. Stankovic, Compressive sensing meets time-frequency: an overview of recent advances in time-frequency processing of sparse signals. Digit. Signal Proc.77:, 22–35 (2018).
Z. Zhang, L. Wang, Y. Zou, et al., The optimally designed dynamic memory networks for targeted sentiment classification. Neurocomputing. 309:, 36–45 (2018).
J. Liu, K. Huang, G. Zhang, An efficient distributed compressed sensing algorithm for decentralized sensor network. Sensors. 17(4), 907 (2017).
Z. Zhang, Y. Zou, C. Gan, Textual sentiment analysis via three different attention convolutional neural networks and cross-modality consistent regression. Neurocomputing. 275:, 1407–1415 (2018).
P. Wojtaszczyk, Stability and instance optimality for Gaussian measurements in compressed sensing. Found. Comput. Math.10(1), 1–13 (2010).
W. Lu, W. Li, K. Kpalma, et al., Compressed sensing performance of random Bernoulli matrices with high compression ratio. IEEE Signal Process. Lett.22(8), 1074–1078 (2015).
S. Foucart, Sparse recovery algorithms: sufficient conditions in terms of restricted isometry constants. Springer Proc. Math.13:, 65–77 (2012).
R. A Devore, Deterministic constructions of compressed sensing matrices. J. Complex.23(4-6), 918–925 (2007).
W. Yin, Practical compressive sensing with Toeplitz and circulant matrices. Vis. Commun. Image Proc., 77440K (2010). https://doi.org/10.1117/12.863527.
V. Abolghasemi, S. Ferdowsi, B. Makkiabadi, et al., in IEEE 18th European Signal Processing Conference. On optimization of the measurement matrix for compressive sensing (IEEE, Aalborg, 2010), pp. 427–431.
X. Gao, J. Zhang, W. Che, et al., in Data Compression Conference. Block-based compressive sensing coding of natural images by local structural measurement matrix (IEEE, Snowbird, 2015), pp. 133–142.
W. Lu, N. Vaswani, Regularized modified BPDN for noisy sparse reconstruction with partial erroneous support and signal value knowledge. IEEE Trans. Signal Process. 60(1), 182–196 (2010).
C. Li, W. Yin, H. Jiang, et al., An efficient augmented Lagrangian method with applications to total variation minimization. Comput. Optim. Appl.56(3), 507–530 (2013).
J. A Tropp, A. C Gilbert, Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory. 53(12), 4655–4666 (2007).
D. Needell, J. A. Tropp, CoSaMP: iterative signal recovery from incomplete and inaccurate samples. Commun. ACM. 12:, 93–100 (2010).
T. Blumensath, M. E Davies, Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal.27(3), 265–274 (2009).
S. Ji, Y. Xue, L. Carin, Bayesian compressive sensing. IEEE Trans. Signal Process. 56(6), 2346–2356 (2008).
D. Baron, S. Sarvotham, R. G Baraniuk, Bayesian compressive sensing via belief propagation. IEEE Trans Signal Process. 58(1), 269–280 (2009).
C. A Metzler, A. Maleki, R. G Baraniuk, From denoising to compressed sensing. IEEE Trans. Inf. Theory. 62(9), 5117–5144 (2014).
A. Mousavi, A. B. Patel, R. G. Baraniuk, in IEEE Allerton Conference on Communication, Control, and Computing. A deep learning approach to structured signal recovery (IEEE, Monticello, 2016), pp. 1336–1343.
L. Polania, K. Barner, Exploiting restricted Boltzmann machines and deep belief networks in compressed sensing. IEEE Trans. Signal Process. 65(17), 4538–4550 (2017).
A. Mousavi, R. G. Baraniuk, in IEEE International Conference on Acoustics, Speech and Signal Processing. Learning to invert: signal recovery via deep convolutional networks (IEEE, New Orleans, 2017), pp. 2272–2276.
K. Kulkarni, S. Lohit, P. Turaga, R. Kerviche, et al., in IEEE Conference on Computer Vision and Pattern Recognition. ReconNet: non-iterative reconstruction of images from compressively sensed measurements (IEEE, Las Vegas, 2016), pp. 449–458.
P. Vincent, A connection between score matching and denoising autoencoders. Neural Comput.23(7), 1661–1674 (2011).
P. Xiong, H. Wang, M. Liu, et al., A stacked contractive denoising auto-encoder for ECG signal denoising. Physiol. Meas.37(12), 2214–2230 (2016).
A. Lemme, R. F Reinhart, J. J Steil, Online learning and generalization of parts-based image representations by non-negative sparse autoencoders. Neural Netw.33(9), 194–203 (2012).
J. Xu, L. Xiang, Q. Liu, et al., Stacked sparse autoencoder (SSAE) for nuclei detection on breast cancer histopathology images. IEEE Trans. Med. Imaging. 35(1), 119–130 (2016).
D. Xu, D. Lan, H. Liu, et al., Compressive sensing of stepped-frequency radar based on transfer learning. IEEE Trans. Signal Process. 63(12), 3076–3087 (2015).
J. Du, X. Xie, C. Wang, et al., Fully convolutional measurement network for compressive sensing image reconstruction. Neurocomputing. 328:, 105–112 (2019).
K. He, X. Zhang, S. Ren, et al., in 2016 IEEE Conference on Computer Vision and Pattern Recognition. Deep residual learning for image recognition (IEEE, Las Vegas, 2016), pp. 770–778.
The authors want to acknowledge the help of all the people who influenced the paper. Specifically, they want to acknowledge the anonymous reviewers for their reasonable comments.
This work is supported by Natural Science Foundation of China (Grant Nos. 61702066 and 11747125), Chongqing Research Program of Basic Research and Frontier Technology (Grant Nos. cstc2017jcyjAX0256 and cstc2018jcyjAX0154), and Research Innovation Program for Postgraduate of Chongqing (Grant Nos. CYS17217 and CYS18238).
The datasets used during the current study are available from the corresponding author on reasonable request.
School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
Zufan Zhang
, Yunfeng Wu
& Chenquan Gan
School of cyber security and information law, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
Qingyi Zhu
QZ analyzed the connection between compressed sensing and deep learning. ZZ realized the deduction and design of SSDAE_CS model. YW completed the simulation of the experiments and the analysis of the results, as well as drafting the manuscript. CG checked the manuscript and offered critical suggestions to design the algorithm. All authors read and approved the final manuscript.
Correspondence to Chenquan Gan.
Zhang, Z., Wu, Y., Gan, C. et al. The optimally designed autoencoder network for compressed sensing. J Image Video Proc. 2019, 56 (2019) doi:10.1186/s13640-019-0460-5
Stacked sparse denoising autoencoder
Multiple nonlinear measurement
|
CommonCrawl
|
Why do volume and surface area of unit ball in $\mathbb{R}^d$ behave the way they do for $d \uparrow$?
Is there any demonstrative / intuitive explanation for the behavior of the surface area and the volume of the unit ball as the dimension increases?
I sort of get that it tends to zero, because all the coordinates become smaller and smaller (although I'm not quite satisfied with this "explanation"). But why the maximum? And why is the maximum not at the same dimension for the two quantities?
There is probably some buzzword I should google, but I can't figure it out.
edit: I just saw the related question Volumes of n-balls: what is so special about n=5? which is still "unanswered" and does not cover the topic of surface area.
geometry spheres
rehctawrats
$\begingroup$ This seems related to Why does the volume of the unit sphere go to zero? $\endgroup$ – robjohn♦ Jan 12 '18 at 16:36
$\begingroup$ Thanks EVERYONE for answering. I feel satisfied now; however, all of the answers and also math.stackexchange.com/q/67054 contribute to that feeling of understanding. $\endgroup$ – rehctawrats Jan 17 '18 at 21:28
First, why there is a maximum: I think the easiest way to get a feeling for it is to look at the recursion relationship between $V_n$ and $V_{n-2}$. $$V_n(R)=\frac{2\pi R^2}{n}V_{n-2}(R)$$ Ignoring $R$, since it's $1$ in your problem, you see that the volume increases or decreases by a factor of $2\pi/n$. You can see that if $n\ge7$ that factor is less than $1$, so the volume decreases. You therefore get $V_1<V_3<V_5$ and then $V_5>V_7>V_9>\dots$, and similarly for even dimensions. You would need to go into the details of the gamma function to show the relationship between odd and even dimensions; it's just a little more calculation.
The second question is why they have maxima at different dimensions. If you remember, the volume of the $n$-dimensional sphere becomes the surface area of the $(n+1)$-dimensional sphere. If you look at this link, you get $$A_{n+1}(R)=2\pi RV_n(R)$$ Therefore, if you have a maximum at $5$ in volume, you will get a maximum at $6$ in surface area.
Andrei
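The two maxima are easy to confirm numerically. The following sketch (not part of the original answer) uses the closed form $V_n=\pi^{n/2}/\Gamma(n/2+1)$ for the unit ball together with the relation $A_{n+1}=2\pi V_n$ quoted above, so the index of $A$ refers to the dimension of the surface itself:

import math

def V(n):
    """Volume of the unit n-ball."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def A(n):
    """n-dimensional surface area of the unit n-sphere, A_n = 2*pi*V_{n-1}."""
    return 2 * math.pi * V(n - 1)

print(max(range(1, 21), key=V))  # 5  -> the volume is largest in dimension 5
print(max(range(1, 21), key=A))  # 6  -> the surface area is largest in dimension 6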
$\begingroup$ How is the gamma function involved in the relationship between odd and even dimensions? (just the idea to the answer of this question) $\endgroup$ – rehctawrats Jan 12 '18 at 14:21
$\begingroup$ Note that in the wikipedia article that I've linked to in my answer, they give the formula $$V_n=\frac{\pi^{n/2}R^n}{\Gamma(n/2+1)}$$ In the end, for integers $\Gamma(n+1)=n!$ and for half integers $$\Gamma(n+1/2)=\sqrt\pi\frac{(2n)!}{n!4^n}.$$ $\endgroup$ – Andrei Jan 12 '18 at 15:12
All the "strangeness" in the exposed results is just due to the fact that the volume of a $n-$Ball of radius $R$ should "normally" be compared to the volume of a $n-$Cube of side $2R$.
Then $$ \upsilon _{\,n} = {{V_{\,n} (R)} \over {\left( {2R} \right)^{\,n} }} = {{\pi ^{\,n/2} } \over {2^{\,n} \Gamma \left( {1 + n/2} \right)}}\, = {{\Gamma \left( {1/2} \right)^{\,n} } \over {2^{\,n} \Gamma \left( {1 + n/2} \right)}} $$
and this is a monotonically decreasing function for $n \in \mathbb N$,
which feels "natural", because with increasing $n$ the volume of the $n$-ball is less than that of the cylinder based on the $(n-1)$-ball, which in turn is always less than that of the unit cube.
Reversing the view, the ratio you have considered is that of the ball vs. the cube with one vertex at the origin:
in $3$D this corresponds to the unit cube covering one octant of the ball.
So you are multiplying the ratio $\upsilon(n)$ above (decreasing) by the number of octants, which is $2^n$ (increasing).
The maximum is just the result of the combination of the two opposite rates.
G Cab
What is "volume" in $n$ dimensions? It is a certain quantity obtained by calculating a certain measure defined on $\mathbb{R}^{n}$. We know exactly which measure it is - it is the product measure which gives the box $\prod_{i=1}^{n}[a_i,b_i]$ the measure $\prod_{i=1}^{n}(b_i-a_i)$. The fact that we use this measure and not any other measure (there are lots of possibilities) as the "natural" measure on $\mathbb{R}^n$ is probably because
A. It is very easy to calculate this measure of boxes.
B. Many important and interesting subsets of $\mathbb{R}^n$ can be approximated very well by boxes.
We call this measure of sets the volume of sets, although a more proper name should be the content of sets, because volume, really, is a three-dimensional notion. Once we agree on this measure, we find that when we come to measure the volume of the unit-ball, i.e., the set $\{x\in\mathbb{R}^n:\sum_ix_i^2\leq 1\}$ it turns out that the largest copy of the unit-box (i.e. the set $\{x\in\mathbb{R}^n:\max_i|x_i|\leq 1\}$) that fits into the unit-ball has to be dilated by a factor of $n^{1/2}$ (i.e., the unit cube has to be multiplied by $n^{-1/2}$ in order to fit into the unit ball). Now this does not prove anything yet, but it clearly gives some recognition of the fact that the unit-ball in $\mathbb{R}^n$ does not occupy much volume, and that it is much "harder" to be inside a $100$-dimensional ball than inside a $3$-dimensional ball because there are $97$ more inequalities that have to be satisfied. Of course, all this is just hand-waving heuristics, but when you perform the calculation you find a number that goes to zero approximately like $n^{-1/2}$, validating some of the feeling of small volume we got by comparing to the cube - the basic building block of the underlying measure.
As for the question why is there a maximum - there is really a trivial answer: the volume starts somewhere, and then it decreases to zero, so it must have a maximum.
Regarding the third question - why does the surface area attain a maximum in a different dimension than the volume, I don't see any "naturally" good answer. Just as well, I could ask "why should it?"
uniquesolution
$\begingroup$ " the volume starts somewhere, and then it decreases to zero, so it must have a maximum" --- Nice observation! $\endgroup$ – Dave L. Renfro Jan 11 '18 at 16:12
$\begingroup$ Thanks for pointing out the factor $n^{-1/2}$. But the number of inequalities does not give any insight, since the same holds for the unit cube, which has constant volume 1. 2${}^{\text{nd}}$ question: a partly perfect answer, but it doesn't explain the location of the maxima. 3${}^{\text{rd}}$ question: "that's just the way it is" is not helpful. $\endgroup$ – rehctawrats Jan 12 '18 at 14:32
Letting $V_n(r)$ be the volume of the sphere of radius $r$ in $\mathbb{R}^n$, then we get the recursion $$ V_n(r)=\int_{-r}^rV_{n-1}\!\left(\sqrt{r^2-t^2}\right)\mathrm{d}t\tag1 $$ Homogeneity tells us that $$ V_n(r)=\Omega_nr^n\tag2 $$ So let us try to show $(2)$ inductively using $(1)$. Suppose that $(2)$ is true for $n-1$, then $$ \begin{align} V_n(r) &=\int_{-r}^rV_{n-1}\!\left(\sqrt{r^2-t^2}\right)\mathrm{d}t\\ &=\int_{-r}^r\Omega_{n-1}\left(r^2-t^2\right)^{\frac{n-1}2}\mathrm{d}t\\ &=r^n\int_{-1}^1\Omega_{n-1}\left(1-t^2\right)^{\frac{n-1}2}\mathrm{d}t\\[9pt] &=\Omega_nr^n\tag3 \end{align} $$ where $$ \begin{align} \Omega_n &=\Omega_{n-1}\int_{-1}^1\left(1-t^2\right)^{\frac{n-1}2}\mathrm{d}t\\ &=\Omega_{n-1}\int_0^1(1-t)^{\frac{n-1}2}t^{-\frac12}\mathrm{d}t\\ &=\Omega_{n-1}\frac{\Gamma\!\left(\frac{n+1}2\right)\Gamma\!\left(\frac12\right)}{\Gamma\!\left(\frac{n+2}2\right)}\tag4\\ \end{align} $$ Since $V_1(r)=2r$, this completes the induction.
$\Gamma\!\left(\frac12\right)=\sqrt\pi$ and because Gamma is log-convex we have $$ \Gamma\!\left(\frac{n+1}2\right)\le\Gamma\left(\frac{n}2\right)^{\frac12}\Gamma\left(\frac{n+2}2\right)^{\frac12}\tag5 $$ and $$ \Gamma\!\left(\frac{n+2}2\right)\le\Gamma\left(\frac{n+1}2\right)^{\frac12}\Gamma\left(\frac{n+3}2\right)^{\frac12}\tag6 $$ Inequalities $(5)$ and $(6)$, along with $\Gamma(n+1)=n\Gamma(n)$, imply that $$ \sqrt{\frac2{n+1}}\le\frac{\Gamma\!\left(\frac{n+1}2\right)}{\Gamma\!\left(\frac{n+2}2\right)}\le\sqrt{\frac2n}\tag7 $$ Thus, $(4)$ and $(7)$ show that $$ \sqrt{\frac{2\pi}{n+1}}\le\frac{\Omega_n}{\Omega_{n-1}}\le\sqrt{\frac{2\pi}n}\tag8 $$ From $(8)$, we know that if $n\ge6$, then $\Omega_{n+1}\lt\Omega_n$ and if $n\le5$, $\Omega_{n-1}\lt\Omega_n$. However, this doesn't tell us which is greater, $\Omega_5$ or $\Omega_6$. To determine this, we can apply $(4)$ twice to get $$ \Omega_n=\frac{2\pi}n\Omega_{n-2}\tag9 $$ We know that $\Omega_1=2$ and $\Omega_2=\pi$. Equation $(9)$ yields that $$ \begin{array}{c|c|c} n&\Omega_n&\approx\Omega_n\\\hline 1&2&2.00000000000000\\ 2&\pi&3.14159265358979\\ 3&\frac{4\pi}3&4.18879020478639\\ 4&\frac{\pi^2}2&4.93480220054468\\ 5&\frac{8\pi^2}{15}&5.26378901391432\\ 6&\frac{\pi^3}6&5.16771278004997 \end{array}\tag{10} $$ Thus, $\Omega_5$ is the greatest.
Since $$ \begin{align} S_{n-1}(r) &=\frac{\mathrm{d}}{\mathrm{d}r}V_n(r)\\ &=n\Omega_nr^{n-1}\\[5pt] &=2\pi\Omega_{n-2}r^{n-1}\\[6pt] &=\omega_{n-1}r^{n-1} \end{align} $$ we see that $\omega_6$ is greatest, but that is the (six-dimensional) surface area of the unit sphere in $\mathbb{R}^7$. So, we need to be clear about which dimension we are talking about. The dimension of the surface, or the dimension of the embedding space. In the chart in the question, it seems that the dimension is the dimension of the surface, not the dimension of the embedding space.
robjohn♦
$\begingroup$ Thanks for your effort. I was looking for demonstrative / intuitive explanations. $\endgroup$ – rehctawrats Jan 17 '18 at 21:21
$\begingroup$ There's also some good further analysis of non-integer values of n at projecteuclid.org/download/pdfview_1/euclid.aaa/1313170929 "The Hyperspherical Functions of a Derivative" [Nenad Cakić, Duško Letić, Branko Davidović], (2010), going into greater detail of the maxima at DIM 5.2569... and DIM 7.2569... $\endgroup$ – Charles Rockafellor Jan 27 '19 at 1:38
|
CommonCrawl
|
Prevalence of Autism Spectrum Disorder in a large Italian catchment area: a school-based population study within the ASDEU project
A. Narzisi, M. Posada, F. Barbieri, N. Chericoni, D. Ciuffolini, M. Pinzino, R. Romano, M.L. Scattoni, R. Tancredi, S. Calderoni, F. Muratori
Journal: Epidemiology and Psychiatric Sciences / Volume 29 / 2020
Published online by Cambridge University Press: 06 September 2018, e5
Print publication: 2020
This study aims to estimate Autism Spectrum Disorders (ASD) prevalence in school-aged children in the province of Pisa (Italy) using the strategy of the ASD in the European Union (ASDEU) project.
A multistage approach was used to identify cases in a community sample (N = 10 138) of 7–9-year-old children attending elementary schools in Pisa – Italy. First, the number of children with a disability certificate was collected from the Local Health Authority and an ASD diagnosis was verified by the ASDEU team. Second, a Teacher Nomination form (TN) to identify children at risk for ASD was filled in by teachers who joined the study and the Social Communication Questionnaire (SCQ) was filled in by the parents of children identified as positive by the TN; a comprehensive assessment, which included the Autism Diagnostic Observation Schedule-Second Edition, was performed for children with positive TN and SCQ⩾9.
A total of 81 children who had a disability certificate also had ASD (prevalence: 0.79%, i.e. 1/126). Specifically, 66 children (57 males and nine females; 62% with intellectual disability –ID-) were certified with ASD, whereas another 15 (11 males and four females; 80% with ID) were recognised as having ASD among those certified with another neurodevelopmental disorder. Considering the population of 4417 (children belonging to schools which agreed to participate in the TN/SCQ procedure) and using only the number of children certified with ASD, the prevalence (38 in 4417) was 0.86%, i.e. one in 116. As far as this population is concerned, the prevalence rises to 1% if we consider the eight new cases (six males and two females; no subject had ID) identified among children with no pre-existing diagnoses and to 1.15%, i.e., one in 87, if probabilistic estimation is used.
This is the first population-based ASD prevalence study conducted in Italy so far and its results indicate a prevalence of ASD in children aged 7–9 years of about one in 87. This finding may help regional, national and international health planners to improve ASD policies for ASD children and their families in the public healthcare system.
Histopathological Effects on Gills of Nile Tilapia (Oreochromis niloticus, Linnaeus, 1758) Exposed to Pb and Carbon Nanotubes
Edison Barbieri, Janaína Campos-Garcia, Diego S. T. Martinez, José Roberto M. C. da Silva, Oswaldo Luiz Alves, Karina F. O. Rezende
Journal: Microscopy and Microanalysis / Volume 22 / Issue 6 / December 2016
Published online by Cambridge University Press: 21 December 2016, pp. 1162-1169
The effect of heavy metal in fish has been the focus of extensive research for many years. However, the combined effect of heavy metals and nanomaterials is still a new subject that needs to be studied. The aim of this study was to examine histopathologic alterations in the gills of Nile tilapia (Oreochromis niloticus) to determine possible effects of lead (Pb), carbon nanotubes, and Pb+carbon nanotubes on their histological integrity, and if this biological system can be used as a tool for evaluating water quality in monitoring programs. For this, tilapia were exposed to Pb, carbon nanotubes and Pb+carbon nanotubes for 4 days. The main alterations observed were epithelial structure, hyperplasia and displacement of epithelial cells, and alterations of the structure and occurrence of aneurysms in the secondary lamella. The most severe alterations were related to the Pb+carbon nanotubes. We conclude that the oxidized multi-walled carbon nanotubes enhanced the acute lead toxicity in Nile tilapias. This work draws attention to the implications of carbon nanomaterials released in the aquatic environment and their interaction with classical pollutants.
Initial outcomes of a harmonized approach to collect welfare data in sport and leisure horses
E. Dalla Costa, F. Dai, D. Lebelt, P. Scholz, S. Barbieri, E. Canali, M. Minero
Journal: animal / Volume 11 / Issue 2 / February 2017
A truthful snapshot of horse welfare conditions is a prerequisite for predicting the impact of any actions intended to improve the quality of life of horses. This can be achieved when welfare information, gathered by different assessors in diverse geographical areas, is valid, comparable and collected in a harmonized way. This paper aims to present the first outcomes of the Animal Welfare Indicators (AWIN) approach: the results of on-farm assessment and a reliable and harmonized data collection system. A total of 355 sport and leisure horses, stabled in 40 facilities in Italy and in Germany, were evaluated by three trained assessors using the AWIN welfare assessment protocol for horses. The AWINHorse app was used to collect, store and send data to a common server. Identified welfare issues were obesity, unsatisfactory box dimensions, long periods of confinement and lack of social interaction. The digitalized data collection was feasible in an on-farm environment, and our results suggest that this approach could prove useful in identifying the most relevant welfare issues of horses in Europe or worldwide.
In vitro isolation from Amblyomma ovale (Acari: Ixodidae) and ecological aspects of the Atlantic rainforest Rickettsia, the causative agent of a novel spotted fever rickettsiosis in Brazil
M. P. J. SZABÓ, F. A. NIERI-BASTOS, M. G. SPOLIDORIO, T. F. MARTINS, A. M. BARBIERI, M. B. LABRUNA
Journal: Parasitology / Volume 140 / Issue 6 / May 2013
Published online by Cambridge University Press: 30 January 2013, pp. 719-728
Recently, a novel human rickettsiosis, namely Atlantic rainforest spotted fever, was described in Brazil. We herein report results of a survey conducted around the index case in an Atlantic rainforest reserve in Peruibe municipality, southeastern Brazil. A Rickettsia parkeri-like agent (Rickettsia sp. Atlantic rainforest genotype) and Rickettsia bellii were isolated from adult Amblyomma ovale ticks collected from dogs. Molecular evidence of infection with strain Atlantic rainforest was obtained for 30 (12·9%) of 232 A. ovale adult ticks collected from dogs. As many as 88·6% of the 35 examined dogs had anti-Rickettsia antibodies, with endpoint titres at their highest to R. parkeri. High correlation among antibody titres in dogs, A. ovale infestations, and access to rainforest was observed. Amblyomma ovale subadults were found predominantly on a rodent species (Euryoryzomys russatus). From 17 E. russatus tested, 6 (35·3%) displayed anti-Rickettsia antibodies, with endpoint titres highest to R. parkeri. It is concluded that the Atlantic rainforest genotype circulates in this Atlantic rainforest area at relatively high levels. Dogs get infected when bitten by A. ovale ticks in the forest, and carry infected ticks to households. The role of E. russatus as an amplifier host of Rickettsia to A. ovale ticks deserves investigation.
ASTEP South: a first photometric analysis
N. Crouzet, T. Guillot, D. Mékarnia, J. Szulágyi, L. Abe, A. Agabi, Y. Fanteï-Caujolle, I. Gonçalves, M. Barbieri, F.-X. Schmider, J.-P. Rivet, E. Bondoux, Z. Challita, C. Pouzenc, F. Fressin, F. Valbousquet, A. Blazit, S. Bonhomme, J.-B. Daban, C. Gouvret, D. Bayliss, G. Zhou, the ASTEP team
Journal: Proceedings of the International Astronomical Union / Volume 8 / Issue S288 / August 2012
The ASTEP project aims at detecting and characterizing transiting planets from Dome C, Antarctica, and qualifying this site for photometry in the visible. The first phase of the project, ASTEP South, is a fixed 10 cm diameter instrument pointing continuously towards the celestial South Pole. Observations were made almost continuously during 4 winters, from 2008 to 2011. The point-to-point RMS of 1-day photometric lightcurves can be explained by a combination of expected statistical noises, dominated by the photon noise up to magnitude 14. This RMS is large, from 2.5 mmag at R = 8 to 6% at R = 14, because of the small size of ASTEP South and the short exposure time (30 s). Statistical noises should be considerably reduced using the large amount of collected data. A 9.9-day period eclipsing binary is detected, with a magnitude R = 9.85. The 2-season lightcurve folded in phase and binned into 1,000 points has a RMS of 1.09 mmag, for an expected photon noise of 0.29 mmag. The use of the 4 seasons of data with a better detrending algorithm should yield a sub-millimagnitude precision for this folded lightcurve. Radial velocity follow-up observations reveal a F-M binary system. The detection of this 9.9-day period system with a small instrument such as ASTEP South and the precision of the folded lightcurve show the quality of Dome C for continuous photometric observations, and its potential for the detection of planets with orbital periods longer than those usually detected from the ground.
Time domain astronomy from Dome C: results from ASTEP
J.-P. Rivet, L. Abe, K. Agabi, M. Barbieri, N. Crouzet, I. Goncalves, T. Guillot, D. Mekarnia, J. Szulagyi, J.-B. Daban, C. Gouvret, Y. Fantei-Caujolle, F.-X. Schmider, T. Furth, A. Erikson, H. Rauer, F. Fressin, A. Alapini, F. Pont, S. Aigrain
ASTEP (Antarctic Search for Transiting Exo Planets) is a research program funded mainly by French ANR grants and by the French Polar Institute (IPEV), dedicated to the photometric study of exoplanetary transits from Antarctica.
The preliminary "pathfinder" instrument ASTEP–South is described in another communication (Crouzet et al., these proceedings), and we focus in this presentation on the main instrument of the ASTEP program: "ASTEP–400", a 40 cm robotized and thermally-controlled photometric telescope operated from the French-Italian Concordia station (Dome C, Antarctica).
ASTEP–400 has been installed at Concordia during the 2009-2010 summer campaign. Since, the telescope has been operated in nominal conditions during 2010 and 2011 winters, and the 2012 winterover is presently in progress. Data from the first two winter campaigns are available and processed. We give a description of the ASTEP–400 telescope from the mechanical, optical and thermal point of view. Control and software issues are also addressed. We end with a discussion of some astronomical results obtained with ASTEP–400.
Exposure to maternal smoking during fetal life affects food preferences in adulthood independent of the effects of intrauterine growth restriction
C. Ayres, P. P. Silveira, M. A. Barbieri, A. K. Portella, H. Bettiol, M. Agranonik, A. A. Silva, M. Z. Goldani
Journal: Journal of Developmental Origins of Health and Disease / Volume 2 / Issue 3 / June 2011
Print publication: June 2011
Experimental animal studies have shown that nicotine exposure during gestation alters the expression of fetal hypothalamic neuropeptides involved in the control of appetite. We aimed to determine whether exposure to maternal smoking during gestation in humans is associated with an altered feeding behavior of the adult offspring. A longitudinal prospective cohort study was conducted including all births from Ribeirão Preto (São Paulo, Brazil) between 1978 and 1979. At 24 years of age, a representative random sample was re-evaluated and divided into groups exposed (n = 424) or not (n = 1586) to maternal smoking during gestation. Feeding behavior was analyzed using a food frequency questionnaire. Covariance analysis was used for continuous data and the χ2 test for categorical data. Results were adjusted for birth weight ratio, body mass index, gender, physical activity and smoking, as well as maternal and subjects' schooling. Individuals exposed to maternal smoking during gestation consumed relatively more carbohydrates than proteins (as per the carbohydrate-to-protein ratio) compared with non-exposed individuals. There were no differences in the consumption of the macronutrients themselves. We propose that this adverse fetal life event programs the individual's physiology and metabolism persistently, leading to an altered feeding behavior that could contribute to the development of chronic diseases in the long term.
Risk factors for sedentary behavior in young adults: similarities in the inequalities
F. S. Fernandes, A. K. Portella, M. A. Barbieri, H. Bettiol, A. A. M. Silva, M. Agranonik, P. P. Silveira, M. Z. Goldani
Journal: Journal of Developmental Origins of Health and Disease / Volume 1 / Issue 4 / August 2010
Physical activity is a known protective factor, with benefits for both metabolic and psychological aspects of health. Our objective was to verify early and late determinants of physical activity in young adults. A total of 2063 individuals from a birth cohort in Ribeirão Preto, Brazil, were studied at the age of 23–25 years. Poisson regression was performed using three models: (1) early model considering birth weight, gestational age, maternal income, schooling and smoking; (2) late model considering individual's gender, schooling, smoking and body mass index; and (3) combined (early + late) model. Physical activity was evaluated using the International Physical Activity Questionnaire, stratifying the individuals into active or sedentary. The general rate of sedentary behavior in the sample was 49.6%. In the early model, low birth weight (relative risk (RR) = 1.186, confidence interval (95%CI) 1.005–1.399) was a risk factor for sedentary activity. Female gender (RR = 1.379, 95%CI = 1.259–1.511) and poor schooling (RR = 1.126, 95%CI = 1.007–1.259) were associated with sedentary behavior in the late model. In the combined model, only female gender and participant's schooling remained significant. An interaction between birth weight and individual's schooling was found, in which sedentary behavior was more prevalent in individuals born with low birth weight only if they had higher educational levels. Variables of early development and social insertion in later life interact to determine an individual's disposition to practice physical activities. This study may support the theoretical model 'Similarities in the inequalities', in which opposed perinatal backgrounds have the same impact over a health outcome in adulthood when facing unequal social achievement during the life-course.
The SARG Planet Search
K. Goździewski, A. Niedzielski, J. Schneider, S. Desidera, R. Gratton, A. Martinez Fiorenzano, M. Endl, R. Claudi, R. Cosentino, S. Scuderi, M. Bonavita, M. Barbieri, G. Bonanno, M. Cecconi, S. Lucatello, F. Marzari
Journal: European Astronomical Society Publications Series / Volume 42 / 2010
Published online by Cambridge University Press: 19 April 2010, pp. 117-124
We present the radial velocity planet search in moderately wide binaries with similar components (twins) ongoing at Telescopio Nazionale Galileo (TNG) using the Galileo High Resolution Spectrograph (Spettrografo Alta Risoluzione Galileo, SARG). We discuss the sample selection, the observing and analysis procedures, the main results of the radial velocity monitoring and the implications in terms of planet frequency in binary systems. We also briefly discuss the second major science goal of the SARG survey, the search for abundance anomalies caused by the ingestion of planetary material by the central star. Finally, we present some preliminary conclusions regarding the frequency of planets in binary systems.
Io, the closest Galileo's Medicean Moon: Changes in its Sodium Cloud Caused by Jupiter Eclipse
Cesare Grava, Nicholas M. Schneider, Cesare Barbieri
Journal: Proceedings of the International Astronomical Union / Volume 6 / Issue S269 / January 2010
Published online by Cambridge University Press: 03 November 2010, pp. 224-228
Print publication: January 2010
We report results of a study of true temporal variations in Io's sodium cloud before and after eclipse by Jupiter. The eclipse geometry is important because there is a hypothesis that the atmosphere partially condenses when the satellite enters Jupiter's shadow, preventing sodium from being released to the cloud in the hours immediately after the reappearance. The challenge lies in disentangling true variations in sodium content from the changing strength of resonant scattering due to Io's changing Doppler shift in the solar sodium absorption line. We undertook some observing runs at Telescopio Nazionale Galileo (TNG) at La Palma, Canary Islands, with the high resolution spectrograph SARG in order to observe Io entering into Jupiter's shadow and coming out from it. The particular configuration chosen for the observations allowed us to observe Io far enough from Jupiter and to disentangle line-of-sight effects by looking perpendicularly at the sodium cloud. We will present results which took advantage of a very careful reduction strategy. We remove the dependence on the γ-factor, which is the fraction of solar light available for resonant scattering, in order to remove the dependence on the radial velocity of Io with respect to the Sun.
This work has been supported by NSF's Planetary Astronomy Program, INAF/TNG and the Department of Astronomy and Cisas of University of Padova, through a contract by the Italian Space Agency ASI.
1 - Mechanisms and Demographics in Trauma
By Pedro Barbieri, Department of Anesthesia, Hospital Britanico de Buenos Aires, University of El Salvador School of Medicine, Buenos Aires, Argentina, Daniel H. Gomez, Department of Anesthesia, Hospital Universitario Austral, Pilar, Buenos Aires, Argentina, Peter F. Mahoney, Military Critical Care, Royal Centre for Defence Medicine, Birmingham, United Kingdom, Pablo Pratesi, Department of Emergency Medicine, Austral University Hospital, Pilar, Buenos Aires, Argentina, Christopher M. Grande, Department of Anesthesiology, University of Pittsburgh School of Medicine, Pennsylvania, and International TraumaCare (ITACCS), Baltimore, Maryland
Edited by Charles E. Smith, Case Western Reserve University, Ohio
Book: Trauma Anesthesia
Print publication: 23 June 2008, pp 1-8
The aim of this chapter is to put trauma in context as a major health issue and give practitioners an understanding of the underlying causes and mechanisms.
Injury is the leading cause of death in people aged between 1 and 44 years in the United States and a leading cause of death worldwide [1]. It can be defined as a "physical harm or damage to the structure or function of the body, caused by an acute exchange of energy (mechanical, chemical, thermal, radioactive, or biological) that exceeds the body's tolerance" [2, 3].
In 2002, 33 million patients were processed by emergency departments in the United States, and 161,269 died by traumatic injury [4]. Trauma is the leading cause of years of potential life lost for people younger than 75 years and this implies a huge expense to the health care system and massive amounts of resources used for care and rehabilitation [5].
Demographics is the statistical study of human populations, especially with reference to size and density, distribution, and vital statistics. Data on the demographics of trauma in the United States have been obtained from a number of sources listed in the references to this chapter.
In a recent report from the Federal Bureau of Investigation's (FBI) Uniform Crime Reporting Program, the FBI estimated that more than 1.4 million drivers were arrested for driving under the influence of alcohol or narcotics, and an estimated 254,000 persons were injured in crashes where police reported that alcohol was present – an average of one person injured approximately every two minutes.
HD 17156 : a progress report
Mauro Barbieri, Roi Alonso, M. Cecconi, R. U. Claudi, S. Desidera, M. Endl, A. F. Martinez Fiorenzano, R. Gratton
Journal: Proceedings of the International Astronomical Union / Volume 4 / Issue S253 / May 2008
We present an analysis of the HD 17156 planetary system based on a photometric transit dataset and radial velocities obtained on 3 December 2007. We also present limits on the presence of close stellar companions based on high resolution images.
QuantEYE, the quantum optics instrument for OWL
C. Barbieri, V. Da Deppo, M. D'Onofrio, D. Dravins, S. Fornasier, R.A.E. Fosbury, G. Naletto, R. Nilsson, T. Occhipinti, F. Tamburini, H. Uthas, L. Zampieri
Journal: Proceedings of the International Astronomical Union / Volume 1 / Issue S232 / November 2005
A brief description of the QuantEYE instrument proposed as a focal plane instrument for OWL is given. This instrument is dedicated to the very high speed observation of many active phenomena with a photon counting capability of up to 1GHz. The system samples the beam in 10$\times$10 subpupils, each focused on a fast photon counting detector.
Pilot study to investigate the toxicity of Aloe vera gel in the management of radiation induced skin reactions for post-operative primary breast cancer
Dana J. Dudek, Jennifer Thompson, Martina M. Meegan, Tara R. Haycocks, Clarice Barbieri, Lee A. Manchul
Journal: Journal of Radiotherapy in Practice / Volume 1 / Issue 4 / December 2000
Published online by Cambridge University Press: 21 August 2006, pp. 197-204
The purpose of this Phase 2 Breast Skin Care Pilot Study was to compare the acute skin reaction in patients undergoing radiation therapy for early breast cancer who use Aloe vera gel on the irradiated skin, with the acute skin reaction in patients from our earlier study who followed a normal skin care routine. Two secondary objectives were to assess patient compliance with the use of Aloe vera gel and the ease of using two skin toxicity scoring tools.
A total of 109 patients undergoing radiotherapy following surgery for breast cancer between October 1997 and February 1998 consented to participate in this study. Each patient applied the Aloe vera gel three times daily to the irradiated area during radiation treatment. Skin reactions were assessed objectively on a weekly basis during radiation using the Radiation Therapy Oncology Group (RTOG) and the Acute Skin Reaction Index (ASRI) skin scoring tools and subjectively by patients. All patients were followed for up to 3 weeks following treatment.
The use of Aloe vera gel did not increase the acute skin reactions due to irradiation when compared with the two arms of the previous Phase 1 Breast Skin Care Study. This pilot study demonstrated that patients could safely use Aloe vera gel while undergoing radiation therapy treatments. All patients complied uniformly with the instructions of using the gel during the study. The ASRI skin scoring tool was easier to use and more sensitive in displaying differences in skin reactions in comparison to the RTOG scale.
An enriched mantle source for Italy's melilitite-carbonatite association as inferred by its Nd-Sr isotope signature
F. Castorina, F. Stoppa, A. Cundari, M. Barbieri
Journal: Mineralogical Magazine / Volume 64 / Issue 4 / August 2000
New Sr-Nd isotope data were obtained from Late Pleistocene carbonatite-kamafugite associations from the Umbria-Latium Ultra-Alkaline District of Italy (ULUD) with the aim of constraining their origin and possible mantle source(s). This is relevant to the origin and evolution of ultrapotassic (K/Na ≫2) and associated rocks generally, notably the occurrences from Ugandan kamafugites,Western Australian lamproites and South African orangeites. The selected ULUD samples yielded 87Sr/86Sr and 143Nd/144Nd ranging from 0.7100 to 0.7112 and from 0.5119 to 0.5121 respectively, similar to cratonic potassic volcanic rocks with higher Rb/Sr and lower Sm/Nd ratios than Bulk Earth. Silicate and carbonate fractions separated from melilitite are in isotopic equilibrium, supporting the view that they are cogenetic. The ULUD carbonatites yielded the highest radiogenic Sr so far reported for carbonatites. In contrast, sedimentary limestones from ULUD basement formations are lower in radiogenic Sr, i.e. 87Sr/86Sr = 0.70745–0.70735. The variation trend of ULUD isotopic compositions is similar to that reported for Ugandan kamafugites and Western Australian lamproites and overlaps the values for South African orangeites in the εSr-εNd diagram. A poor correlation between Sr/Nd and 87Sr/86Sr ratios in ULUD rocks is inconsistent with a mantle source generated by subduction-driven processes, while the negligible Sr and LREE in sedimentary limestones from the ULUD region fail to account for a hypothetical limestone assimilation process. The Nd model ages of 1.5–1.9 Ga have been inferred for a possible metasomatic event, allowing further radiogenic evolution of the source, a process which may have occurred in isolation until eruption time. While the origin of this component remains speculative, the Sr-Nd isotope trend is consistent with a simple mixing process involving an OIB-type mantle and a component with low εNd and high εSr.
Sic Thin Film Characterization and Stress Measurements for High Temperature Sensors Applications
C. Gourbeyre, P. Aboughe-nze, C. Malhaire, M. Le Berre, Y. Monteil, D. Barbieri
Journal: MRS Online Proceedings Library Archive / Volume 546 / 1998
Published online by Cambridge University Press: 10 February 2011, 91
In this work related to high temperature pressure sensors, a study was undertaken on the characterisation of β-SiC thin films through X-Ray Diffraction, AFM and stress measurements. 3C-SiC films were grown on (100) Si substrate in a vertical reactor by atmospheric-pressure chemical vapour deposition (APCVD). Silane and propane were used as precursor gas and hydrogen as carrier gas. Prior to the growth, Si surfaces were annealed at 1000°C for 5 minutes and carbonized with C3H8 at 1150°C during 10 minutes. At 1350°C, SiH4, C3H8 and H2 were introduced for the SiC deposition. The atomic ratio of Si/C in the gas phase was 0.3 and the growth rate was 3 μm/h. Only the process time was varying to obtain different SiC layer thicknesses varying from 3 to 9 μm as measured by FT-IR spectrometry. Stress measurements of thin β-SiC layers deposited on thick Si substrates were performed at room temperature using the bending plate method and the Stoney's equation. The stress for a 3 μm thick SiC layer is evaluated to be in the range of 400 MPa in tension. When increasing the SiC film thickness, this stress decreases due to the relaxation of the structure with bending. SiC/Si membranes were obtained by KOH etching and studied as a function of pressure in the [0-50 mbar] range. A residual stress in the membrane was deduced from the load-deflection measurement technique reaching 105 MPa in tension.
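For readers unfamiliar with the bending plate method mentioned above, Stoney's equation relates the average film stress to the substrate curvature induced by the film; a standard form (notation chosen here for illustration, not reproduced from the paper) is $$\sigma_f=\frac{E_s\,t_s^{2}}{6\,(1-\nu_s)\,t_f}\left(\frac{1}{R}-\frac{1}{R_0}\right),$$ where $E_s$ and $\nu_s$ are the Young's modulus and Poisson ratio of the substrate, $t_s$ and $t_f$ the substrate and film thicknesses, and $R_0$, $R$ the radii of curvature of the plate before and after film deposition.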
Plagiogranites and gabbroic rocks from the Mingora ophiolitic mélange, Swat Valley, NW Frontier Province, Pakistan
M. Barbieri, A. Caggianelli, M. R. Di Florio, S. Lorenzoni
Journal: Mineralogical Magazine / Volume 58 / Issue 393 / December 1994
Major, trace element composition and Sr isotopic data were collected for gabbroic rocks, plagiogranites and albitites in the ophiolite assemblage from Swat Valley (NW Frontier Province, Pakistan). Petrographic study revealed that these rocks were subjected to important structural and mineralogical modifications due to greenschist-epidote-amphibolite facies sub-sea-floor metamorphism and to brecciation. On the other hand, the examination of whole rock chemical composition and of chemical trends showed that these rocks were affected by some chemical modifications, concerning especially Na2O, K2O and Rb. The very low contents of HFS (high field strength) and RE elements found in gabbroic rocks and plagiogranites were considered to be a primary magmatic feature pointing in part to their cumulitic nature and in part to an origin from a refractory parental magma. The Sr isotopic data indicate that gabbroic rocks and plagiogranites were subjected to exchange with sea water. The particular chemical features shared by gabbroic rocks and plagiogranites suggested that fractional crystallization was a possible evolution process. In contrast, albitites are characterized by anomalously high contents in HFSE and LREE and by values of the 87Sr/86Sr ratio very close to sea water. These features suggest a more complex origin with respect to gabbroic rocks and plagiogranites.
Evidence that Charcot-Marie-Tooth disease with tremor coincides with the Roussy-Levy syndrome
F. Barbieri, A. Filla, M. Ragno, C. Crisci, L. Santoro, M. Corona, G. Campanella
Journal: Canadian Journal of Neurological Sciences / Volume 11 / Issue S4 / November 1984
We report data on 3 members of a family affected by a dominantly inherited disorder closely resembling Roussy-Levy syndrome (RLS). Electrophysiological findings showed a marked decrease of motor and sensory conduction velocities and EMG signs of mild neurogenic damage. Light and electron microscopy of sural nerve biopsy showed a hypertrophic neuropathy with diffuse onion-bulb formations and marked decrease of large size fibers. Teased fiber preparations evidenced reduced internodal lengths and segmental demyelination. Other data from the literature on RLS are reviewed and discussed. The hypothesis that RLS is not a disease entity but a hypertrophic-type of Charcot-Marie-Tooth disease with essential tremor (HMSN type 1) is strongly supported.
|
CommonCrawl
|
Biological methane production under putative Enceladus-like conditions
Ruth-Sophie Taubner1,2,
Patricia Pappenreiter3,
Jennifer Zwicker4,
Daniel Smrzka4,
Christian Pruckner1,
Philipp Kolar1,
Sébastien Bernacchi5,
Arne H. Seifert5,
Alexander Krajete5,
Wolfgang Bach6,
Jörn Peckmann ORCID: orcid.org/0000-0002-8572-00604,7,
Christian Paulik ORCID: orcid.org/0000-0002-1177-15273,
Maria G. Firneis2,
Christa Schleper1 &
Simon K.-M. R. Rittmann ORCID: orcid.org/0000-0002-9746-32841
Nature Communications volume 9, Article number: 748 (2018) Cite this article
Archaeal physiology
Rings and moons
The detection of silica-rich dust particles, as an indication for ongoing hydrothermal activity, and the presence of water and organic molecules in the plume of Enceladus, have made Saturn's icy moon a hot spot in the search for potential extraterrestrial life. Methanogenic archaea are among the organisms that could potentially thrive under the predicted conditions on Enceladus, considering that both molecular hydrogen (H2) and methane (CH4) have been detected in the plume. Here we show that a methanogenic archaeon, Methanothermococcus okinawensis, can produce CH4 under physicochemical conditions extrapolated for Enceladus. Up to 72% carbon dioxide to CH4 conversion is reached at 50 bar in the presence of potential inhibitors. Furthermore, kinetic and thermodynamic computations of low-temperature serpentinization indicate that there may be sufficient H2 gas production to serve as a substrate for CH4 production on Enceladus. We conclude that some of the CH4 detected in the plume of Enceladus might, in principle, be produced by methanogens.
Saturn's icy moon Enceladus emits jets of mainly water (H2O) from its south-polar region1. Besides H2O, the ion and neutral mass spectrometer (INMS) onboard NASA's Cassini probe detected methane (CH4), carbon dioxide (CO2), ammonia (NH3), molecular nitrogen (N2), and molecular hydrogen (H2) in the plume2. In addition, carbon monoxide (CO) and ethene (C2H4) were found among other substances with moderate ambiguity3,4,5,6 (Table 1). At 1608.3 ± 4.5 kg m−3 (ref. 1), Enceladus possesses a relatively high bulk density for an icy moon, which leads to the assumption that a substantial part of its core consists of chondritic rocks7. At the boundary between the liquid water layer and the rocky core, geochemical interactions are assumed to occur at low to moderate temperatures (<100 °C)2,7,8. The most prominent potential source of H2 in Enceladus' interior may be oxidation of native and ferrous iron in the course of serpentinization of olivine in the chondritic core. Olivine hydrolysis at low temperatures is a key process for sustaining chemolithoautotrophic life on Earth9 and if H2 is produced in significant amounts on Enceladus, then it could also serve as a substrate for biological CH4 production. Considering that 139 ± 28 × 10⁹ to 160 ± 43 × 10⁹ kg carbon year−1 of the CH4 found in the atmosphere of Earth is emitted from natural sources10, including biological methanogenesis, the question was raised whether CH4 detected in the plume of Enceladus could in principle also originate from biological activity11.
Table 1 Compilation of Cassini's INMS data on Enceladus' plume composition over the last decade
To date, methanogenic archaea are the only known microorganisms that are capable of performing biological CH4 production in the absence of oxygen12,13. On Earth, methanogens are found in a wide range of pH (4.5–10.2), temperatures (<0–122 °C), and pressures (0.005–759 bar)13 that overlap with conditions predicted in Enceladus' subsurface ocean, i.e., temperatures between 0 and above 90 °C (ref. 8), pressures of 40–100 bar (ref. 8), a pH of 8.5–10.5 (ref. 8) or 10.8–13.5 (ref. 14), and a salinity in the range of our oceans. While autotrophic, hydrogenotrophic methanogens might metabolise some of the compounds found in Enceladus' plume, other compounds which were detected in the plume with different levels of ambiguity, such as formaldehyde (CH2O), methanol (CH3OH), NH3, CO, and C2H4, are known to inhibit growth of methanogens on Earth at certain concentrations15,16,17.
Here we show that methanogens can produce CH4 under Enceladus-like conditions, and that the estimated H2 production rates on this icy moon can potentially be high enough to support autotrophic, hydrogenotrophic methanogenic life.
Effect of gaseous inhibitors on methanogens
To investigate growth of methanogens under Enceladus-like conditions, three thermophilic and methanogenic strains, Methanothermococcus okinawensis (65 °C)18, Methanothermobacter marburgensis (65 °C)19, and Methanococcus villosus (80 °C)20, all able to fix carbon and gain energy through the reduction of CO2 with H2 to form CH4, were investigated regarding growth and biological CH4 production under different headspace gas compositions (Table 2) on H2/CO2, H2/CO, H2, Mix 1 (H2, CO2, CO, CH4, and N2) and Mix 2 (H2, CO2, CO, CH4, N2, and C2H4). These methanogens were prioritised due to their ability to grow (1) in a temperature range characteristic for the vicinity of hydrothermal vents21, (2) in a chemically defined medium22, and (3) at low partial pressures of H2 (ref. 23). Also, in the case of M. okinawensis, the location of isolation was taken into consideration, since the organism was isolated from a deep-sea hydrothermal vent field at Iheya Ridge in the Okinawa Trough, Japan, at a depth of 972 m below sea level18, suggesting a tolerance toward high pressure.
Table 2 Composition of the different test gases for the low-pressure experiments
While M. okinawensis, M. marburgensis, and M. villosus all showed growth on H2/CO2 to similar optical densities, no growth of M. marburgensis could be observed when C2H4 (Mix 2) was supplied in the headspace (Fig. 1). Growth of both M. villosus and M. okinawensis was observed even when CO and C2H4 were both present in the headspace gas. However, while M. villosus showed prolonged lag phases and irregular growth under certain conditions, M. okinawensis grew stably and reproducibly on the different gas mixtures without extended lag phases (Fig. 1). As expected, the final optical densities did not reach those of the experiments with H2/CO2, likely because in Mix 1 and Mix 2 lower absolute amounts of convertible gaseous substrate (H2/CO2) were available compared to the growth under pure H2/CO2. Consequently, growth kinetics showed a different, gas-limited linear inclination in the closed batch setup when using Mix 1 and Mix 2 (refs. 22,24). Due to its reproducible growth, M. okinawensis was chosen for more extensive studies on biological CH4 production under putative Enceladus-like conditions.
Influence of the different headspace gas compositions on growth of M. marburgensis, M. villosus, and M. okinawensis. The error bars show standard deviations calculated from triplicates. OD curves of a, d, g, j, m M. marburgensis, b, e, h, k, n M. villosus and c, f, i, l, o M. okinawensis for a–c H2/CO2, d–f H2/CO, g–i H2, j–l Mix 1, and m–o Mix 2. Growth of M. marburgensis was inhibited by the presence of C2H4 (see Table 2 for detailed gas composition). Only M. marburgensis seemed to be able to use sodium hydrogen carbonate (supplied in the medium) as C-source in case of a lack of CO2 (H2 or H2/CO as sole gas in the headspace). Both M. villosus and M. okinawensis showed growth when Mix 1 and Mix 2 were applied to the serum bottle headspace; however, M. villosus exhibited extended lag phases. The dips in graphs b, c were caused by substrate limitation due to depletion of serum bottle headspace of H2/CO2 at high optical cell densities.
M. okinawensis tolerates Enceladus-like conditions at 2 bar
Growth and turnover rates (calculated via the decrease in headspace pressure) of M. okinawensis cultures were determined in the presence of selected putative liquid inhibitors detected in Enceladus' plume (NH3, given as NH4Cl, CH2O, and CH3OH). While growth of M. okinawensis could still be observed at the highest concentration of NH4Cl added to the medium (16.25 g L−1 or 0.30 mol L−1), the organism grew only in the presence of up to 0.28 mL L−1 (0.01 mol L−1) CH2O. This is less than, but importantly still in the same order of magnitude of, the observed maximum value of 0.343 mL L−1 CH2O detected in the plume25. Growth and CH4 production of M. okinawensis in closed batch cultivation was shown at CH3OH and NH4Cl concentrations exceeding those reported for Enceladus' plume4,5,25,26.
To explore how the presence of these inhibitors might influence growth and turnover rates of M. okinawensis, we have applied these compounds at various concentrations in a multivariate design space setting (Design of Experiment (DoE)). At different concentrations of CH2O, CH3OH, and NH4Cl, M. okinawensis cultures showed growth (Fig. 2) and turnover rates from 0.015 ± 0.012 to 0.084 ± 0.018 h−1 (Supplementary Fig. 1; experiments L and K in Fig. 2). CH3OH amendments at concentrations between 9.09 and 210.91 µL L−1 (0.22–5.21 mmol L−1) did not reduce or improve growth of M. okinawensis (Fig. 2 and Supplementary Tables 1 and 2). Compared to the highest applied CH2O concentration, the turnover rate of M. okinawensis was ~5.6-fold higher at the lowest tested concentration. The results of this experiment indicated that M. okinawensis possessed a physiological tolerance towards a broad multivariate concentration range of CH2O, CH3OH, and NH4Cl and was able to perform the autocatalytic conversion of H2/CO2 to CH4 while gaining energy for growth.
Schematic of the experimental setting and DoE raw data growth curves showing OD measurements. The DoE is based on a central composite design (figure in the upper left corner). NH4Cl, CH2O, and CH3OH were used as factors during the experiment and systematically varied in a multivariate design space (see Supplementary Table 1 for the concrete values). Each factor setting was examined in triplicate. The centre point (O) was examined in quintuplicate. The colours of the dots and the letters of the figure in the upper left corner correspond to the growth curves. The line labelled ZC represents the optical density of a corresponding zero control experiment, which was done with the same medium as the experiments labelled with O (central point), but without inoculum. The different colours represent different performances. For better readability, error bars were excluded from this diagram; standard deviations ranged between 0.0009 and 0.1544. According to statistical selection criteria, three experiments (one experiment F and two experiments O) were excluded from ANOVA analysis (Supplementary Table 2)
We used the mean liquid inhibitor concentrations for CH2O determined in the DoE experiment (DoE centre points) and Enceladus-like concentrations for CH3OH and NH4Cl (Supplementary Table 3) to test growth and turnover rates of M. okinawensis, using different gases in the headspace (H2/CO2, Mix 1, and Mix 2 (Fig. 3)). Under all tested headspace gas compositions, M. okinawensis showed gas-limited growth (max. OD values of 0.67 ± 0.02, 0.17 ± 0.03, and 0.13 ± 0.03 after ~237 h for H2/CO2, Mix 1 and Mix 2, respectively). The calculated turnover rates correlated with the different convertible amounts of H2/CO2 in Mix 1 and Mix 2. Hence, M. okinawensis was able to grow and to convert H2/CO2 to CH4 when CH2O, CH3OH, NH4Cl, CO and C2H4 were present in the growth medium at the concentrations calculated from Cassini's INMS data (assuming 1 bar, compare Tables 1 and 2). The mixing ratios of these putative inhibitors were based on INMS data4,5,25,26 but higher than those calculated by using the most recent Cassini data2,6 (Table 1). This demonstrates that growth and biological CH4 production of M. okinawensis is possible even at higher inhibitor concentrations.
Growth and turnover rate of M. okinawensis under Enceladus-like conditions at 2 bar. a, c, e Growth curves (OD578 nm) and b, d, f turnover rates (h−1) as a measure of CH4 production of M. okinawensis on a, b H2/CO2 (4:1), c, d Mix 1 and e, f Mix 2. For detailed composition of gases and media see Table 2 and Supplementary Table 3. I and II (light and dark colours, respectively) denote two independent experiments (each performed in triplicates, error bars = standard deviation). Enceladus-like concentrations were used for NH4Cl and CH3OH and mean liquid inhibitor concentrations determined in the DoE were used for CH2O. The dip in a was caused by substrate limitation due to depletion of H2/CO2 in the serum bottle headspace at high optical cell densities
M. okinawensis tolerates Enceladus-like conditions up to 50 bar
Because methanogens on Enceladus would possibly need to grow at hydrostatic pressures of up to 80 bar8 and beyond, the effect of high pressure on the conversion of headspace gas by M. okinawensis was examined in a pressure-resistant closed batch bioreactor. Headspace H2/CO2 conversion and CH4 production were examined at 10, 20, 50, and 90 bar, either using H2/CO2 in a 4:1 ratio or applying H2/CO2/N2 in a 4:1:5 ratio. A gas conversion of >88% was shown for each of the experiments (Supplementary Fig. 2) except for the 90 bar experiment using H2/CO2/N2, where the headspace gas conversion was 66.4%. However, no headspace gas conversion and CH4 production could be detected when cultivating M. okinawensis at 90 bar using H2/CO2 only (data not shown).
Final experiments were designed to investigate headspace H2/CO2 conversion and CH4 production of M. okinawensis according to INMS data (Table 3) and under conditions of high pressure (10.7 ± 0.1, 25.0 ± 0.7, and 50.4 ± 1.7 bar). Turnover rate, methane evolution rate (MER, calculated via pressure drop) and biological CH4 production (calculated via gas chromatography measurements) for these experiments are shown in Fig. 4. When simultaneously applying putative gaseous (Table 4) and liquid inhibitors (Supplementary Table 3) under high-pressure conditions, we reproducibly demonstrated that M. okinawensis was able to perform H2/CO2 conversion and CH4 production under Enceladus-like conditions.
Table 3 Concentrations of gaseous species in growth medium
CH4 production, MER, and turnover rate of M. okinawensis under Enceladus-like conditions at high pressure. Biological CH4 production determined by gas chromatography (blue; Vol.-% h−1), turnover rates (green; h−1), and MER·10 (red; mmol L−1 h−1) measured from headspace gas conversion using M. okinawensis (experiment 1 in light colours, experiment 2 in dark colours) under putative Enceladus-like conditions in a 2.0 L bioreactor (for detailed medium composition see Supplementary Table 3 and for detailed gas composition see Table 4; n = 2). The positive control experiment also contained the liquid inhibitors but only H2/CO2 (4:1) in the headspace. For high-pressure experiments without any inhibitors see Supplementary Fig. 2
Table 4 Gas composition of experiments performed in the 2.0 L bioreactor
Methanogenic life could be fuelled by H2 from serpentinization
In light of these experimental findings and the presence of H2 in Enceladus' plume2, the question arose if serpentinization reactions can support a rate of H2 production that is high enough to sustain autotrophic, hydrogenotrophic methanogenic life. To address this question, we used the PHREEQC27 code to model serpentinization-based H2 production rates under Enceladus-like conditions (Table 5) with the assumption that the rate-limiting step of the serpentinization reaction is the dissolution of olivine. H2 production rates are poorly constrained, as they strongly depend on assumed grain size and temperature. These rates correspond to the low end of the range of H2 production rates, which were based on a thermal cooling and cracking model28. Of the many reactions involved in serpentinization of peridotite, dissolution of the Fe(II)-bearing primary phases is a critical one29, and the only one for which kinetic data are available. In the model, CO2 reduction to CH4 is predicted to take place once enough H2 in the system was produced to generate thermodynamic drive for the reaction. While abiotic CH4 production is kinetically more sluggish than olivine dissolution30, biological CH4 production is fast and may be controlled by the rate at which H2 is supplied. The abiotic CH4 production rates listed in Table 5 are hence also modelled such that olivine dissolution is the rate-limiting step. The results of these thermodynamic and kinetic computations show that H2 and CH4 production is predicted for a range of rock compositions (Table 5) and temperature conditions (Supplementary Table 4). The model system essentially represents a closed system with high-rock porosity, such as proposed for Enceladus2. The computational results predict how much H2 and CH4 should form within the intergranular space inside Enceladus' silicate core with water-to-rock-ratios between 0.09 and 0.12 (Table 5). The serpentinization reactions are predicted to produce solutions with circumneutral to high pH between 7.3 and 11.3, as well as amounts of H2 that greatly exceed the amount of dissolved inorganic carbon (DIC) trapped in the pore space. As the computations indicate that there is ample thermodynamic drive for reducing DIC to CH4, these results corroborate the idea that serpentinization reactions on Enceladus might fuel autotrophic, hydrogenotrophic methanogenic life. However, we would like to point out that if methanogenic life were indeed active on Enceladus, biological CH4 production would always compete with abiotic CH4 generation processes resulting in a mixed CH4 production.
Table 5 H2 and CH4 production rates from serpentinization calculated for 50 °C and 50 bar
In this study, we show that the methanogenic strain M. okinawensis is able to propagate and/or to produce CH4 under putative Enceladus-like conditions. M. okinawensis was cultivated under high-pressure (up to 50 bar) conditions in defined growth medium and gas phase, including several potential inhibitors that were detected in Enceladus' plume2,4,6. The only difference between the growth conditions of M. okinawensis and the putative Enceladus-like conditions was the lower pH value applied during the high-pressure experiments. Due to the supply of CO2 at high pressure in the experiments, the pH decreased to ~5, while pH values between 7.3 and 13.5 were estimated for Enceladus' subsurface ocean (this study and refs. 8,14).
Another point of debate might be the cultivation temperatures used for the thermophilic and hyperthermophilic methanogens in this study. The mean temperature in the subsurface ocean of Enceladus might be just above 0 °C except for the areas where hydrothermal activity is assumed to occur. In these hydrothermal settings, temperatures higher than 90 °C are supposedly possible8; such settings are therefore the most likely sites for higher biological activity on Enceladus. Although methanogens are found over a wide temperature range on Earth, including temperatures around 0 °C31, growth of these organisms at low temperatures is observed to be slow13.
We estimated H2 production rates between 4.03 and 50.7 nmol g−1 L−1 d−1 in the course of serpentinization on Enceladus (Table 5). These estimates are rather conservative, as they are based on the assumption of small specific mineral surface areas. In a recent study, the rate of serpentinization has been estimated from a physical model that predicts how fast cracking fronts propagate down into Enceladus' core28. Combining this physically controlled advancement of serpentinization (8 × 1011 g y−1)28 with our estimates for kinetically limited rates of H2 production leads to overall rates of 3–40 × 104 mol H2 y−1 for Enceladus. Although still high enough to support biological methanogenesis, these rates are orders of magnitude lower than the previously suggested 10 × 108 mol H2 y−1 (ref. 28), assuming that the speed of cracking front propagation controls the rate of H2 production. We hence suggest that reaction kinetics may play an important role in determining the overall H2 production rate on Enceladus. Our computed steady-state H2 production rates are lower than the 1–5 × 109 mol H2 y−1 estimated from Cassini data2. This apparent discrepancy in flux rates can be reconciled if the Enceladus plume was a transient (i.e., non-steady state) phenomenon. The predicted H2/CH4 ratios of 2.5 (Fo90:En:Diop = 8:1:1) to 4 (Fo90) for the magnesian compositions of Enceladus' core (Table 5) are consistent with the relative proportions of the two gases in the plume (0.4–1.4% H2, 0.1–0.3% CH4)2.
Based on our estimated H2 production rate, we can calculate how much of the available DIC on Enceladus could be fixed into biomass through autotrophic, hydrogenotrophic methanogenesis. If we assume a typical elemental composition of methanogen biomass32, 7.13 g carbon could be fixed per g hydrogen. Under optimal growth conditions, ~3%22,23 of the available carbon can be assimilated into biomass, and assuming that methanogens possess a molecular weight of ~30.97 g C-mol−1 (ref. 22) and that the total amount of H2 produced would be available for the carbon and energy metabolism of autotrophic, hydrogenotrophic methanogens, a biomass production rate between 20 and 257 C-nmol g−1 L−1 y−1 could be achieved. In another approach, we can use the actually predicted CH4 production rates of 1.32–2.05 nmol g−1 L−1 d−1 (Table 5) and a Gibbs energy dissipation approach in which we assume that 10% of the energy of CH4 production is fuelling biosynthesis28. This yields similar biomass production rates, i.e., 28 and 56 C-nmol g−1 L−1 y−1.
Based on our findings, it might be interesting to search for methanogenic biosignatures on icy moons in future space missions. Methanogens produce distinct and lasting biosignatures, in particular lipid biomarkers like ether lipids and isoprenoid hydrocarbons. Other potential biomarkers for methanogens are high-nickel (Ni) concentrations (and its stable isotopes33), as Ni is, e.g., part of methyl-coenzyme M reductase, the key enzyme of biological methanogenesis23. However, both lipid biomarkers and Ni-based biosignatures are likely only to be identifiable at the site of biological methanogenesis, and the effect of dilution with increasing distance away from the methanogen habitat is likely to prevent their use as a general marker for biological methanogenesis in Enceladus' plume or in a subsurface ocean. If, however, bubble scrubbing (a process by which organic compounds and cells adhere to bubble surfaces and are carried away as bubbles rise) occurred, as was suggested for Enceladus34, the amount of bioorganic molecules and cells would be much higher and future lander missions could easily collect physical evidence for the presence of autotrophic, hydrogenotrophic methanogenic life on Enceladus.
Additionally, one could consider using stable isotopes of CH4 and CO2 and ratios of low-molecular-weight hydrocarbons to evaluate the possibility of biological methanogenesis on Enceladus11. But given the uncertainties in the geological and hydrogeological boundary conditions that influence the targeted isotope and molecular patterns in Enceladus' plume, such an approach is not trivial. In contrast to biological and thermogenic CH4 production, the latter resulting from the decomposition of organic matter, abiogenic CH4 is believed to be produced by metal-catalysed Fischer–Tropsch or Sabatier type reactions under hydrothermal conditions and particularly in the course of serpentinization of ultramafic rocks35. Although biologically produced CH4 is usually characterised by its strong 13C depletion, growth of methanogens at high-hydrostatic pressures and high temperatures, which is typical of deep-sea hydrothermal systems, may significantly reduce kinetic isotope fractionation and result in relatively high δ13C values of CH4, hampering discrimination from non-microbial CH436. Given such uncertainties, multiply substituted, so-called 'clumped' isotopologues of CH4 emerge as a new proxy to constrain its mode of formation and to recognise formation environments like serpentinization sites37.
Another approach to identify the origin of CH4 could be CH4/(ethane + propane) ratios, as low ratios are typical of settings dominated by thermogenic CH438. However, this ratio may fall short of unequivocally discriminating abiogenic from biologically produced CH4. For instance, the ratio of CH4 concentration to the sum of C2+ hydrocarbon concentrations (C1/C2+) of 950 ± 76 in the serpentinite-hosted Lost City Hydrothermal Field was found to be most similar to ratios obtained in experiments with Fischer–Tropsch type reactions (<100 to >3000). Thermogenic reactions produce C1/C2+ ratios less than ~100, whereas biological methanogenesis results in ratios of 2000–1300039. More than 30 years of research on CH4 production have revealed that its biological, thermogenic or abiogenic origin on Earth is often difficult to trace40. However, the experimental and modelling results presented in this study together with the estimates of the physicochemical conditions on Enceladus from earlier contributions make it worthwhile to increase efforts in the search for signatures for autotrophic, hydrogenotrophic methanogenic life on Enceladus and beyond.
Estimations of Enceladus' interior structure
Due to its rather small radius, the uncompressed density of the satellite is almost equal to its bulk density, which makes a simplified model of Enceladus' interior reasonable. Enceladus was divided into a rocky core (core density of 2300–2550 kg m−3), a liquid water layer (density of 960–1080 kg m−3), and an icy shell (ice density of 850–960 kg m−3) and hydrostatic equilibrium was assumed. Calculations of the hydrostatic pressure based on Enceladus' mass of 1.0794 × 1020 kg41 and its mean radius of 252.1 km41, assuming a core radius of 190–200 km, a subsurface ocean depth of 60–10 km and a corresponding ice shell thickness of 2.1–52.1 km, result in a pressure of ~44.3–25.2 bar or 80.1–56.2 bar (depending on the method, Supplementary Methods) at the water-core boundary. For the high-pressure experiments in this study, including all inhibitors, three pressure values were chosen that lie in the range given by Hsu et al. (10–80 bar)8 and are related to our calculations, i.e., 10, 25, and 50 bar.
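As a rough cross-check of the order of magnitude only, the sketch below computes the hydrostatic pressure at the water-core boundary for a simple two-layer model with constant surface gravity derived from the quoted mass and radius; the layer densities are mid-range choices from the values above, and the authors' actual layered methods (Supplementary Methods) differ, so the output is purely indicative.

```python
# Rough, illustrative estimate of the hydrostatic pressure at the water-core
# boundary, assuming constant (surface) gravity and mid-range layer densities.
# This is NOT the authors' layered calculation; it only shows that the result
# falls in the tens-of-bar range discussed above.

G = 6.674e-11     # gravitational constant [m^3 kg^-1 s^-2]
M = 1.0794e20     # Enceladus mass [kg]
R = 252.1e3       # mean radius [m]
g_surface = G * M / R**2   # ~0.11 m s^-2

def pressure_at_core(ice_thickness_m, ocean_depth_m,
                     rho_ice=900.0, rho_water=1020.0, g=g_surface):
    """Hydrostatic pressure [bar] beneath an ice shell and a liquid ocean."""
    p_pa = g * (rho_ice * ice_thickness_m + rho_water * ocean_depth_m)
    return p_pa / 1e5   # Pa -> bar

print(pressure_at_core(2.1e3, 60e3))    # thin ice shell, deep ocean
print(pressure_at_core(52.1e3, 10e3))   # thick ice shell, shallow ocean
```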
Low-pressure experiments
Growth and tolerance towards putative inhibitors of the three methanogenic strains Methanothermococcus okinawensis DSM 14208, Methanothermobacter marburgensis DSM 2133, and Methanococcus villosus DSM 22612 were elucidated (Fig. 1). All strains were obtained from the Deutsche Stammsammlung von Mikroorganismen und Zellkulturen GmbH, Braunschweig, Germany. Growth was resolved by optical density (OD) measurements (λ = 578 nm). H2/CO2–CH4 gas conversion [%], turnover rate [h−1] (see Equation (2) below), and MER were calculated from the decrease of the bottle headspace pressure and/or from measuring CH4 production in a closed batch setup22,24. The headspace pressures were measured using a digital manometer (LEO1-Ei, −1…3 bar rel., Keller, Jestetten, Germany) with filters (sterile syringe filters, w/0.2 µm cellulose, 514-0061, VWR International, Vienna, Austria), and cannulas (Gr 14, 0.60 × 30 mm, 23 G × 1 1/4", RX129.1, Braun, Maria Enzersdorf, Austria). The detailed setting can be seen in Fig. 3(a) in Taubner and Rittmann22. All pressure values presented in this study are indicated as relative pressure in bar.
For the experiments at 2 bar regarding CO and C2H4 tolerance (see Figs. 1 and 3), the strains were incubated in the dark either in a water bath (M. marburgensis and M. okinawensis, 65 ± 1 °C) or in an air bath (M. villosus, 80 ± 1 °C). The methanogens were cultivated in 50 mL of their respective chemically defined growth medium. Compositions of the different growth media of the experiments shown in Figs. 1 and 3 can be found in Supplementary Tables 3 and 5–10. The final preparation of the medium in the anaerobic culture flasks was performed in an anaerobic chamber (Coy Laboratory Products, Grass Lake, USA). Experiments were performed over a time of 210–270 h. After each incubation period, serum bottle headspace pressure measurement (in order to be unbiased, flasks were previously cooled down to room temperature), OD-sampling, and gassing with designated gas or test gas were performed. OD measurement was performed at 578 nm in a spectrophotometer (DU800, Beckman Coulter, USA). A zero control was incubated together with each individual experiment and the OD of this control was subtracted from the measured OD of the inoculated flasks each time.
For hydrogenotrophic methanogens, which utilise H2 as electron donor for the reduction of CO2 to produce CH4 and H2O as their metabolic products, the following stoichiometric reaction equation was used12,23,24:
$$4{\mathrm{H}}_{2({\mathrm{g}})} + {\mathrm{CO}}_{2({\mathrm{g}})} \to {\mathrm{CH}}_{4({\mathrm{g}})} + 2{\mathrm{H}}_2{\mathrm{O}}_{({\mathrm{aq}})}\quad \Delta {G}^0 = - 135\,{\mathrm{kJ}}\,{\mathrm{mol}}^{ - 1}.$$
The turnover rate [h−1] reflects the catalytic efficiency per unit of time and thus indirectly quantifies CH4 productivity. Assuming the above-mentioned CO2 methanation stoichiometry and neglecting biomass formation, the turnover rate provides an equivalent, indirect quantification of CH4 production. It is defined as
$${\mathrm{turnover}}\,{\mathrm{rate}}\,[\mathrm{h}^{ - 1}] = \frac{{\Delta p}}{{\Delta p_{\mathrm{max}} \cdot \Delta t}},$$
where Δp [bar] is the difference in pressure before and after incubation, Δpmax [bar] is the maximal theoretical difference that would be feasible due to stoichiometric reasons22, and Δt [h] is the time period of incubation.
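A minimal implementation of this definition, with input values chosen here purely for illustration:

```python
def turnover_rate(delta_p, delta_p_max, delta_t):
    """Turnover rate [h^-1] as defined in the equation above.

    delta_p     -- pressure difference before/after incubation [bar]
    delta_p_max -- maximal stoichiometrically feasible pressure drop [bar]
    delta_t     -- incubation time [h]
    """
    return delta_p / (delta_p_max * delta_t)

# Hypothetical example: a 0.8 bar drop out of a stoichiometric maximum of
# 1.6 bar over a 10 h incubation gives a turnover rate of 0.05 h^-1.
print(turnover_rate(0.8, 1.6, 10.0))
```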
For the initial pressure experiments at 2 bar, the three methanogenic strains were tested under five different gas phase compositions (Table 2). Significant differences in OD and turnover rate were observed between these experiments (Fig. 1). When Mix 1 and Mix 2 were applied, only a maximum of 22.66 ± 0.23 Vol.-% H2 (average, Table 2) could be converted to CH4 and biomass.
To evaluate the influences of the potential inhibitors NH3, CH2O, and CH3OH on the growth of M. okinawensis, several preliminary experiments were performed. For easier handling, NH3 was substituted by NH4Cl. Based on INMS data (Table 1) the amount of NH4Cl was calculated according to Henry's law. For that, Henry's law constant was calculated to be 0.1084 mol m−3 Pa−1 at 64 °C. This results in 11.6 g L−1 (0.22 mol L−1) NH4Cl being required to obtain ~1% NH3 in the gaseous phase at equilibrium for the experiments under closed batch conditions. The influence of NH4Cl between 0.25 and 16.25 g L−1 (4.67 and 303.79 mmol L−1), CH2O between 0 and 111 µL L−1 (0–4.03 mmol L−1), and CH3OH between 0 and 200 µL L−1 (0–4.94 mmol L−1) was tested individually. CH2O (37 Vol.-%) and CH3OH (98 Vol.-%) were used as stock solutions.
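The quoted NH4Cl amount can be reproduced with a short back-of-the-envelope calculation; the sketch below assumes that "~1% NH3" refers to a 2 bar headspace and that all dissolved nitrogen is treated as NH3 for Henry's law. These assumptions are ours, but they recover the stated 0.22 mol L−1 (11.6 g L−1) closely.

```python
# Back-of-the-envelope check of the NH4Cl amount quoted above.
kH_NH3 = 0.1084        # mol m^-3 Pa^-1 at 64 °C (value from the text)
p_NH3 = 0.01 * 2.0e5   # assumed: 1% of a 2 bar headspace, in Pa
M_NH4CL = 53.49        # g mol^-1

c_mol_per_L = kH_NH3 * p_NH3 / 1000.0   # mol m^-3 -> mol L^-1
print(c_mol_per_L)                      # ~0.217 mol L^-1
print(c_mol_per_L * M_NH4CL)            # ~11.6 g L^-1 NH4Cl
```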
To find an appropriate ratio for the final experiments, an experiment in a DoE setting was established. A central composite design with the parameters shown in Supplementary Table 1 and Fig. 2 was chosen. The design space is spherical with a normalised radius equal to one. Experiments A–N were done in triplicate; experiment O was performed in quintuplicate. The results of these experiments in terms of OD can be seen in Fig. 2 and in terms of turnover rate in Supplementary Fig. 1. Each incubation time period was 10.0 ± 0.5 h. The ANOVA analysis of this study can be found in Supplementary Table 2.
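For illustration, a spherical three-factor central composite design of this kind (8 factorial, 6 axial, and 1 centre setting, i.e., experiments A–N plus O) can be generated as follows. The concrete coded values and the letter-to-point assignment used by the authors are given in Supplementary Table 1, so the mapping below is only schematic.

```python
# Sketch of a 3-factor central composite design scaled so that all non-centre
# points lie on a sphere of normalised radius 1. The letter labels are
# assigned arbitrarily here; they do not reproduce the authors' coding.
from itertools import product

k = 3                  # factors: NH4Cl, CH2O, CH3OH (coded units)
r = 3 ** -0.5          # factorial coordinate so that the point radius is 1

factorial = list(product((-r, +r), repeat=k))                 # 8 points
axial = []
for i in range(k):                                            # 6 points
    for s in (-1.0, +1.0):
        pt = [0.0] * k
        pt[i] = s
        axial.append(tuple(pt))
centre = [(0.0, 0.0, 0.0)]                                    # centre point O

design = factorial + axial + centre                           # 15 settings
for label, point in zip("ABCDEFGHIJKLMNO", design):
    print(label, tuple(round(x, 3) for x in point))
```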
The setting for the experiments under Enceladus-like conditions at 2 bar pressure included the medium described in Supplementary Table 3 and Mix 2 (Table 2) as gaseous phase. As can be seen in Fig. 3, there was a lag phase of two days, after which continuous but slow growth was observed.
To calculate the molar concentration of H2 and CO2 in the medium (Table 3), Henry's law was used:
$$M = k_{\mathrm H} \cdot p_X,$$
where \(p_X\) is the partial pressure of the respective gas and \(k_{\mathrm{H}}\) is Henry's constant as a function of temperature:
$$k_{\rm H}{\mathrm{ = }}k_{\rm H}^ \ominus \cdot {\rm e}^{\left( {\frac{{{\mathrm{ - }}\Delta _{{\mathrm{soln}}}{\mathrm{H}}}}{R} \cdot \left( {\frac{1}{T} - \frac{1}{{T^ \ominus }}} \right)} \right)}{\mathrm{,}}$$
where \({\mathrm{\Delta }}_{{\mathrm{soln}}}{\mathrm{H}}\) is the enthalpy change of the dissolution reaction. For \(k_{\mathrm H}^ \ominus\) the values 7.9 × 10−4 mol L−1 bar−1 and 3.4 × 10−2 mol L−1 bar−1 and for \(\frac{{ - {\mathrm{\Delta }}_{{\mathrm{soln}}}{\mathrm{H}}}}{R}\) the values 500 K and 2400 K for H2 and CO2, respectively, were used. This results in a Henry constant at 65 °C of 6.481 × 10−4 mol L−1 bar−1 and 1.329 × 10−2 mol L−1 bar−1 for H2 and CO2, respectively.
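The stated constants at 65 °C can be reproduced directly from this temperature correction; the small deviation for CO2 (~1.31 × 10−2 versus the quoted 1.329 × 10−2 mol L−1 bar−1) presumably reflects rounding, and the partial pressures in the example are illustrative only.

```python
import math

def henry_constant(kH_ref, minus_dHsoln_over_R, T, T_ref=298.15):
    """van 't Hoff-type temperature correction of Henry's law constant,
    following the equation above.

    kH_ref               -- Henry constant at T_ref [mol L^-1 bar^-1]
    minus_dHsoln_over_R  -- -Delta_soln(H)/R [K]
    T                    -- temperature [K]
    """
    return kH_ref * math.exp(minus_dHsoln_over_R * (1.0 / T - 1.0 / T_ref))

T = 273.15 + 65.0
kH_H2 = henry_constant(7.9e-4, 500.0, T)    # ~6.48e-4 mol L^-1 bar^-1
kH_CO2 = henry_constant(3.4e-2, 2400.0, T)  # ~1.31e-2 mol L^-1 bar^-1
print(kH_H2, kH_CO2)

# Dissolved concentration M = kH * pX, e.g. for illustrative partial
# pressures of 1.6 bar H2 and 0.4 bar CO2 (a 4:1 mixture at 2 bar).
print(kH_H2 * 1.6, kH_CO2 * 0.4)
```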
Another potential liquid inhibitor detected in Enceladus' plume was hydrogen cyanide (HCN)3,4. However, calculations on HCN stability under the assumed conditions on Enceladus show that HCN would hydrolyse into formic acid and ammonia42. Further investigations on the stability of HCN at different pH values and temperatures yielded similar results43,44. It was therefore assumed that HCN might originate either from a very young pool, a recent aqueous melt, or from the icy matrix on Enceladus4,42. Due to this reasoning and the low probability of HCN presence in the subsurface ocean of Enceladus, HCN was neglected in all growth media used to perform the experiments.
High-pressure experiments
Initial high-pressure experiments with M. okinawensis were performed at its optimal growth temperature of 65 ± 1 °C using a chemically defined medium (250 mL, see Supplementary Table 10 for exact composition) and a fixed stirrer speed of 100 r.p.m. in a 0.7 L stirred stainless steel Büchi reactor. Before each of the experiments, the reactor was filled with medium and the entire setting was autoclaved under CO2 atmosphere to assure sterile conditions. Thereafter, the inoculum (1 Vol.-%), the NaHCO3, l-cysteine, Na2S·9 H2O (0.5 M) and trace element solution were transferred via a previously autoclaved transfer vessel into the reactor. Then the reactor was set under pressure with the selected gas mixture (~5 bar added in discrete steps every 10 min). The initial high-pressure experiments were performed using both an H2/CO2 (4:1) gas phase and an H2/CO2/N2 (4:1:5) mixture. The reactor was equipped with an online pressure (ASIC Performer pressure sensor 0–400 bar, Parker Hannifin Corporation, USA) and temperature probe (thermoelement PT100, −75 °C to 350 °C, TC Mess- und Regeltechnik GmbH, Mönchengladbach, Germany). The conversion was always above 88% except for the 90 bar experiment, in which N2 fixation into biomass could also be assumed. Interestingly, the time until the start of conversion decreased in the H2/CO2 experiments with increasing headspace pressure in the initial setup. This could indicate a barophilic nature of this organism, but could also be due to the experimental closed batch setup.
Increasing pCO2 and associated pH change was determined using a pH probe (see Supplementary Fig. 3). This analysis showed that even a rather small pCO2 (2 bar) already decreases the pH from nearly neutral to >5 due to the medium composition, which is due to application of a medium with low-buffering capacity, also possibly occurring in Enceladus' subsurface ocean. However, it remains an open question if the medium on Enceladus is buffered. This would lead to higher possible pCO245 without having a drastic influence on the reported pH. Furthermore, we calculated if NaHCO3 could be used as source of dissolved inorganic carbon and what would be the effect on the pH of the medium. Postberg et al. suggested a concentration of 0.02–0.1 mol kg−1 NaCO3 and 0.05–0.2 mol kg−1 NaCl in the medium to reach a pH level between 8.5 and 946. Calculations on the concentrations of dissolved CO2 in the high-pressure experiments were performed by using the mole fraction of dissolved CO2 in H2O depending on pCO2. The mole fractions for pCO2 of 0.7, 1.2, and 3.1 bar (as used in the experiments) at 65 °C were generated by extrapolation of given values in the region of pCO2 = 0–1 bar. H2/CO2 conversion and CH4 production could still be measured at 10 bar, 20 bar and 50 (i.e., pCO2 = 10) bar with H2/CO2 (4:1) gas phase, but no decrease in pressure was observed at 90 bar with H2/CO2 (4:1) gas phase (i.e., pCO2 = 18 bar) after >110 h (data not shown). It is assumed that no growth occurs under these conditions due to the high pCO2 (18 bar) and the associated decrease to a pH of <3, which is beyond the reported pH tolerance of M. okinawensis18.
The high-pressure experiments under Enceladus-like conditions were carried out in the presence of both gaseous and liquid inhibitors applying the optimal growth temperature of 65 ± 1 °C at individual pressures of 10, 25, and 50 bar in a stirred 2.0 L Büchi reactor at 250 r.p.m. The gas ratios of the final gas mixtures are reported in Table 4. The final liquid medium was the same as the one used in the final 2 bar experiments (incl. the liquid inhibitors, Supplementary Table 3). Due to the results from the pH experiments (Supplementary Fig. 3), the low pCO2 (~3 bar) was chosen to avoid a pH shift to more acidic values, which is not representative of Enceladus-like conditions. For final high-pressure experiments, the medium volume was set to 1.1 L and M. okinawensis H2/CO2-grown precultures of OD = 34.4 and OD = 34.5 were used as inocula (11 mL each). Preparation of the high OD M. okinawensis suspension was performed by collecting 1 L of serum bottle grown fresh culture (OD ~0.7), centrifuging the cells at 5346×g anaerobically for 20 min (Heraeus Multifuge 4KR Centrifuge, Thermo Fisher Scientific, Osterode, Germany), and re-suspending the cells in 20 mL of freshly reduced appropriate growth medium. The gases were added into the bioreactor headspace in the following order: CO, N2, CO2, H2, and C2H4 (5 bar every 10 min). During all high-pressure experiments, OD measurements were not conducted because, upon reactor depressurisation, cell envelopes of M. okinawensis were found to be disrupted. To determine the amount of produced CH4 in the high-pressure experiments, gas samples were taken after reducing the pressure in the reactor down to 1.36 ± 0.25 bar. The samples were stored in 120 mL serum bottles and sealed with black septa (3.0 mm, Butyl/PTFE, La Pha Pack, Langerwehe, Germany). The volumetric concentration of CH4 was determined using a gas chromatograph (7890 A GC System, Agilent Technologies, Santa Clara, USA) equipped with a TCD detector and a 19808 ShinCarbon ST Micropacked Column (Restek GmbH, Bad Homburg, Germany)22.
To determine the CH4 production [Vol.-% h−1] shown in Fig. 4, the value of CH4 Vol.-% was divided by the duration of biological CH4 production in h. To exclude a potential lag phase, the starting point of biological CH4 production was set to the point in time when the decrease in pressure exceeded 5% of the initial pressure for the 10 bar experiments or 1% for the other experiments.
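A minimal sketch of this start-point criterion and rate calculation, using a made-up pressure trace rather than the actual reactor data:

```python
def production_start(times_h, pressures_bar, threshold=0.05):
    """First time point at which the pressure drop exceeds `threshold`
    (as a fraction of the initial pressure); None if never reached."""
    p0 = pressures_bar[0]
    for t, p in zip(times_h, pressures_bar):
        if (p0 - p) > threshold * p0:
            return t
    return None

def ch4_production_rate(ch4_vol_percent, t_end_h, t_start_h):
    """CH4 production [Vol.-% h^-1] over the biologically active period."""
    return ch4_vol_percent / (t_end_h - t_start_h)

# Hypothetical 10 bar pressure trace (the 5% threshold applies in that case).
times = [0, 5, 10, 15, 20, 40, 60, 80]                  # h
pressures = [10.0, 9.9, 9.8, 9.4, 8.8, 7.0, 5.5, 4.2]   # bar
t0 = production_start(times, pressures, threshold=0.05)
print(t0)                                   # 15 h in this toy trace
print(ch4_production_rate(30.0, 80, t0))    # assumed 30 Vol.-% CH4 at the end
```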
Serpentinization simulations
The PHREEQC27 code was used to simulate serpentinization reactions from 25 to 100 °C and from 25 to 50 bar in order to assess H2 production on Enceladus. The Amm.dat and llnl.dat databases were used for all simulations, which account for temperature- and pressure-dependent equilibrium constants for dissolved species and solid phases up to 100 °C and 1000 bar. Solution composition was taken from the chemical composition of the erupting plume of Enceladus4, as the true chemistry of its subsurface sea is unknown. Dissolved concentrations of Ca2+, Fe2+, Mg2+, and SiO2 were assumed to be seawater-like and values from McCollom and Bach47 were used. At the very low water-to-rock ratios of our model, the compositions of the interacting fluids will be entirely rock buffered, so that the model results are insensitive to the choice of the starting fluid composition. The DIC concentration was set to 0.04 mol L−1, taken from Glein et al.14, who estimated a possible range of 0.005–1.2 molal DIC in Enceladus' subsurface ocean. The solid phase assemblage was composed of varying amounts of olivine, enstatite and diopside, as well as varying olivine compositions. Most planetary bodies exhibit olivine solid solutions (Mg, Fe)2SiO4 that are dominated by forsterite (Mg2SiO4). This has been shown for micrometeorites found on Earth, lunar meteorites, comets, and asteroids48,49,50,51. Stony iron meteorites, such as pallasites, contain forsteritic olivine with up to 20% Fe2+ content52. Olivines in chondrites show more varied compositions, ranging between 7 and 70% ferrous iron53,54 up to almost pure fayalite (Fe2SiO4)55. A realistic assumption is that olivines on Enceladus have a more forsteritic composition that resembles those of stony iron and lunar meteorites.
A composition of Fo90 was adopted for olivine. Calculations were limited to Fo90, fayalite, enstatite, and diopside, as experimental data on their dissolution kinetics at high pH and low temperature are available. Kinetic rate laws were applied for forsteritic olivine, fayalite, enstatite, and diopside from Wogelius and Walther56,57, Daval et al.58, Oelkers and Schott59, and Knauss et al.60, respectively. Dissolution rate laws for Fo90 and fayalite at 100 °C were extrapolated from rate data in Wogelius and Walther57 using the Arrhenius equation. Enstatite and diopside rate laws at 100 °C were power-law fitted from experimental data provided by Oelkers and Schott59 and Knauss et al.60, respectively. All rate laws are valid over a pH range from 2 to 12 at all temperatures. Kinetic rate laws were multiplied by the total surface area of each mineral present in solution in order to calculate moles of minerals dissolved per time. Surface areas of 590 cm2 g−1 for Fo90 and fayalite61, 800 cm2 g−1 for enstatite59, and 550 cm2 g−1 for diopside60 were used. These specific surface areas have been suggested to be typical for fine-grained terrestrial rocks. We adopted these numbers in our computations, as we have no constraints on what specific surface areas in Enceladus may be. If the core of Enceladus were similar to carbonaceous chondrite, then the average mineral grain size would be smaller and hence the specific surface areas greater than we assumed62. We chose to use fairly small specific surface areas to provide conservative estimates for H2 production rates. Model 1 uses Fo90, enstatite and diopside in a ratio of 8:1:1; model 2 uses a pure Fo90 composition. The effect of ferrous iron content in olivine on H2 production rates was tested in computations where Fo50 (Fo:Fa 1:1, model 3) and Fo20 (Fo:Fa 2:8, model 4) were dissolved as the sole mineral. Fo50 and Fo20 were dissolved according to dissolution rates of Wogelius and Walther (their equation (6))57, and for temperatures beyond 25 °C dissolution rates for fayalite were extrapolated to 50 and 100 °C after Daval et al.58. Model 1 contained 40 mol of Fo90 and 5 mol of enstatite and diopside. For models 2–4, 55 mol of Fo90, Fo50, and Fo20 were used. Applying these amounts yields water-to-rock ratios between 0.09 and 0.12 (Table 5).
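The conversion from a surface-area-normalised rate law to bulk moles dissolved per unit time is a simple multiplication; the sketch below uses the specific surface area and mineral amount quoted above but a placeholder rate value, since the published pH- and temperature-dependent rate laws are not reproduced here.

```python
# Illustrative conversion of a surface-area-normalised dissolution rate into
# moles of mineral dissolved per unit time, as done in the models described
# above. The rate constant is a placeholder, NOT one of the published laws.

MOLAR_MASS_FO90 = 147.0   # g mol^-1, approximate for (Mg0.9,Fe0.1)2SiO4

def dissolution_flux(rate_mol_cm2_s, specific_area_cm2_g, mineral_mass_g):
    """Moles of mineral dissolved per second for the whole mineral charge."""
    return rate_mol_cm2_s * specific_area_cm2_g * mineral_mass_g

mass_fo90_g = 40 * MOLAR_MASS_FO90       # 40 mol Fo90, as in model 1
rate = 1e-16                             # mol cm^-2 s^-1 (placeholder value)
flux = dissolution_flux(rate, 590.0, mass_fo90_g)   # 590 cm^2 g^-1 from text
print(flux, flux * 86400)                # per second and per day
```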
The most likely environmental conditions present within Enceladus are temperatures between 25 °C and more than 90 °C at 25–80 bar8, and temperatures of 50 °C and pressures of 50 bar were chosen for the four different models. In a separate set of computations, temperatures were altered to 25 and 100 °C, and pressures were set at 25 and 100 bar. These results are shown in Supplementary Table 4. As pressure has a negligible effect on H2 production, only the variations with temperature are shown.
The data sets analysed during the current study are available in this article and its Supplementary Information file, or from the corresponding author on request.
Porco, C. C. et al. Cassini observes the active south pole of Enceladus. Science 311, 1393–1401 (2006).
Waite, J. H. et al. Cassini finds molecular hydrogen in the Enceladus plume: evidence for hydrothermal processes. Science 356, 155–159 (2017).
Waite, J. H. et al. Cassini ion and neutral mass spectrometer: Enceladus plume composition and structure. Science 311, 1419–1422 (2006).
Waite, J. H. et al. Liquid water on Enceladus from observations of ammonia and 40Ar in the plume. Nature 460, 487–490 (2009).
Bouquet, A., Mousis, O., Waite, J. H. & Picaud, S. Possible evidence for a methane source in Enceladus' ocean. Geophys. Res. Lett. 42, 1334–1339 (2015).
Magee, B. A. & Waite Jr, J. H. Neutral gas composition of Enceladus' plume – model parameter insights from Cassini-INMS. In 48th Lunar and Planetary Science Conference, abstr. 2974 (2017).
Sekine, Y. et al. High-temperature water-rock interactions and hydrothermal environments in the chondrite-like core of Enceladus. Nat. Commun. 6, 8604 (2015).
Hsu, H.-W. et al. Ongoing hydrothermal activities within Enceladus. Nature 519, 207–210 (2015).
Mayhew, L. E., Ellison, E. T., McCollom, T. M., Trainor, T. P. & Templeton, A. S. Hydrogen generation from low-temperature water-rock reactions. Nat. Geosci. 6, 478–484 (2013).
Tian, H. et al. The terrestrial biosphere as a net source of greenhouse gases to the atmosphere. Nature 531, 225–228 (2016).
McKay, C. P., Porco, C. C., Altheide, T., Davis, W. L. & Kral, T. A. The possible origin and persistence of life on Enceladus and detection of biomarkers in the plume. Astrobiology 8, 909–919 (2008).
Liu, Y. & Whitman, W. B. Metabolic, phylogenetic, and ecological diversity of the methanogenic archaea. Ann. N. Y. Acad. Sci. 1125, 171–189 (2008).
Taubner, R.-S., Schleper, C., Firneis, M. G. & Rittmann, S. K.-M. R. Assessing the ecophysiology of methanogens in the context of recent astrobiological and planetological studies. Life 5, 1652–1686 (2015).
Glein, C. R., Baross, J. A. & Waite, J. H. The pH of Enceladus' ocean. Geochim. Cosmochim. Acta 162, 202–219 (2015).
Schink, B. Inhibition of methanogenesis by ethylene and other unsaturated hydrocarbons. FEMS Microbiol. Lett. 31, 63–68 (1985).
Kato, S., Sasaki, K., Watanabe, K., Yumoto, I. & Kamagata, Y. Physiological and transcriptomic analyses of the thermophilic, aceticlastic methanogen methanosaeta thermophila responding to ammonia stress. Microbes Environ. 29, 162–167 (2014).
Daniels, L., Fuchs, G., Thauer, R. K. & Zeikus, J. G. Carbon monoxide oxidation by methanogenic bacteria. J. Bacteriol. 132, 118–126 (1977).
Takai, K., Inoue, A. & Horikoshi, K. Methanothermococcus okinawensis sp. nov., a thermophilic, methane-producing archaeon isolated from a western Pacific deep-sea hydrothermal vent system. Int. J. Syst. Evol. Microbiol. 52, 1089–1095 (2002).
Schönheit, P., Moll, J. & Thauer, R. K. Growth parameters (Ks, μmax, Ys) of Methanobacterium thermoautotrophicum. Arch. Microbiol. 127, 59–65 (1980).
Bellack, A., Huber, H., Rachel, R., Wanner, G. & Wirth, R. Methanocaldococcus villosus sp. nov., a heavily flagellated archaeon that adheres to surfaces and forms cell–cell contacts. Int. J. Syst. Evol. Microbiol. 61, 1239–1245 (2011).
Martin, W., Baross, J., Kelley, D. & Russell, M. J. Hydrothermal vents and the origin of life. Nat. Rev. Microbiol. 6, 805–814 (2008).
Taubner, R.-S. & Rittmann, S. K.-M. R. Method for indirect quantification of CH4 production via H2O production using hydrogenotrophic methanogens. Front. Microbiol. 7, 532 (2016).
Thauer, R. K., Kaster, A.-K., Seedorf, H., Buckel, W. & Hedderich, R. Methanogenic archaea: ecologically relevant differences in energy conservation. Nat. Rev. Microbiol. 6, 579–591 (2008).
Rittmann, S. K.-M. R., Seifert, A. & Herwig, C. Essential prerequisites for successful bioprocess development of biological CH4 production from CO2 and H2. Crit. Rev. Biotechnol. 35, 141–151 (2015).
Waite, J. H. et al. Enceladus plume composition. In EPSC-DPS Joint Meeting 2011 (2011).
Perry, M. E. et al. Inside Enceladus' plumes: the view from Cassini's mass spectrometer. In American Astronomical Society, DPS Meeting 47, abstr. 410.04 (2015).
Parkhurst, D. L. & Appelo, C. A. J. User's Guide to PHREEQC (Version 2): A Computer Program for Speciation, Batch-Reaction, One-Dimensional Transport, and Inverse Geochemical Calculations. Water-Resources Investigations Report 99–4259 (USGS, 1999).
Steel, E. L., Davila, A. & McKay, C. P. Abiotic and biotic formation of amino acids in the Enceladus ocean. Astrobiology 17, 862–875 (2017).
Bach, W. Some compositional and kinetic controls on the bioenergetic landscapes in oceanic basement. Front. Microbiol. 7, 107 (2016).
McCollom, T. M. Abiotic methane formation during experimental serpentinization of olivine. Proc. Natl Acad. Sci. USA 113, 13965–13970 (2016).
Cavicchioli, R. Cold-adapted archaea. Nat. Rev. Microbiol. 4, 331–343 (2006).
Duboc, P., Schill, N., Menoud, L., Van Gulik, W. & Von Stockar, U. Measurements of sulfur, phosphorus and other ions in microbial biomass: influence on correct determination of elemental composition and degree of reduction. J. Biotechnol. 43, 145–158 (1995).
Cameron, V., Vance, D., Archer, C. & House, C. H. A biomarker based on the stable isotopes of nickel. Proc. Natl Acad. Sci. USA 106, 10944–10948 (2009).
Porco, C. C., Dones, L. & Mitchell, C. Could it be snowing microbes on Enceladus? Assessing conditions in its plume and implications for future missions. Astrobiology 17, 876–901 (2017).
Horita, J. & Berndt, M. E. Abiogenic methane formation and isotopic fractionation under hydrothermal conditions. Science 285, 1055–1057 (1999).
Takai, K. et al. Cell proliferation at 122 °C and isotopically heavy CH4 production by a hyperthermophilic methanogen under high-pressure cultivation. Proc. Natl Acad. Sci. USA 105, 10949–10954 (2008).
Wang, D. T. et al. Nonequilibrium clumped isotope signals in microbial methane. Science 348, 428–431 (2015).
Whiticar, M. J. Carbon and hydrogen isotope systematics of bacterial formation and oxidation of methane. Chem. Geol. 161, 291–314 (1999).
Proskurowski, G. et al. Abiogenic hydrocarbon production at Lost City hydrothermal field. Science 319, 604–607 (2008).
Etiope, G. & Sherwood Lollar, B. Abiotic methane on Earth. Rev. Geophys. 51, 276–299 (2013).
NASA. Enceladus: by the Numbers. https://solarsystem.nasa.gov/planets/enceladus/facts (2017).
Glein, C. R., Zolotov, M. Y. & Shock, E. L. Liquid water vs. hydrogen cyanide on Enceladus. In American Geophysical Union, Fall Meeting, abstract #P23B-1365 (2008).
Sanchez, R. A., Ferris, J. P. & Orgel, L. E. Studies in prebiotic synthesis: II. Synthesis of purine precursors and amino acids from aqueous hydrogen cyanide. J. Mol. Biol. 30, 223–253 (1967).
Miyakawa, S., James Cleaves, H. & Miller, S. L. The cold origin of life: A. Implications based on the hydrolytic stabilities of hydrogen cyanide and formamide. Orig. Life Evol. Biosph. 32, 195–208 (2002).
Roosen, C., Ansorge-Schumacher, M., Mang, T., Leitner, W. & Greiner, L. Gaining pH-control in water/carbon dioxide biphasic systems. Green Chem. 9, 455 (2007).
Postberg, F. et al. Sodium salts in E-ring ice grains from an ocean below the surface of Enceladus. Nature 459, 1–4 (2009).
McCollom, T. M. & Bach, W. Thermodynamic constraints on hydrogen generation during serpentinization of ultramafic rocks. Geochim. Cosmochim. Acta 73, 856–875 (2009).
Demidova, S. I., Nazarov, M. A., Ntaflos, T. & Brandstätter, F. Possible serpentine relicts in lunar meteorites. Petrology 23, 116–126 (2015).
Kurat, G., Koeberl, C., Presper, T., Brandstätter, F. & Maurette, M. Petrology and geochemistry of Antarctic micrometeorites. Geochim. Cosmochim. Acta 58, 3879–3904 (1994).
Lisse, C. M. et al. Comparison of the composition of the Tempel 1 ejecta to the dust in Comet C/Hale–Bopp 1995 and YSO HD 100546. Icarus 187, 69–86 (2007).
Cruikshank, D. P. & Hartmann, W. K. The meteorite-asteroid connection: two olivine-rich asteroids. Science 223, 281–283 (1984).
Buseck, P. R. & Goldstein, J. I. Olivine compositions and cooling rates of pallasitic meteorites. Bull. Geol. Soc. Am. 80, 2141–2158 (1969).
Chizmadia, L. J., Rubin, A. E. & Wasson, J. T. Mineralogy and petrology of amoeboid olivine inclusions in CO3 chondrites: relationship to parent-body aqueous alteration. Meteorit. Planet. Sci. 37, 1781–1796 (2002).
Dodd, R. T. The petrology of chondrules in the sharps meteorite. Contrib. Mineral. Petrol. 31, 201–227 (1971).
Hua, X. & Buseck, P. R. Fayalite in the Kaba and Mokoia carbonaceous chondrites. Geochim. Cosmochim. Acta 59, 563–578 (1995).
Wogelius, R. A. & Walther, J. V. Olivine dissolution at 25 °C: effects of pH, CO2, and organic acids. Geochim. Cosmochim. Acta 55, 943–954 (1991).
Wogelius, R. A. & Walther, J. V. Olivine dissolution kinetics at near-surface conditions. Chem. Geol. 97, 101–112 (1992).
Daval, D. et al. The effect of silica coatings on the weathering rates of wollastonite (CaSiO3) and forsterite (Mg2SiO4): an apparent paradox? In Water Rock Interaction - WRI-13 Proc. 13th International Conference on Water Rock Interaction (eds Birkle, P. & Torres-Alvarado, I. S.) 713–717 (Taylor & Francis, 2010).
Oelkers, E. H. & Schott, J. An experimental study of enstatite dissolution rates as a function of pH, temperature, and aqueous Mg and Si concentration, and the mechanism of pyroxene/pyroxenoid dissolution. Geochim. Cosmochim. Acta 65, 1219–1231 (2001).
Knauss, K. G., Nguyen, S. N. & Weed, H. C. Diopside dissolution kinetics as a function of pH, CO2, temperature, and time. Geochim. Cosmochim. Acta 57, 285–294 (1993).
Golubev, S. V., Pokrovsky, O. S. & Schott, J. Experimental determination of the effect of dissolved CO2 on the dissolution kinetics of Mg and Ca silicates at 25 °C. Chem. Geol. 217, 227–238 (2005).
Bland, P. A. et al. Why aqueous alteration in asteroids was isochemical: high porosity ≠ high permeability. Earth Planet. Sci. Lett. 287, 559–568 (2009).
Barbara Reischl, MSc and Annalisa Abdel Azim, MSc are gratefully acknowledged for expert technical assistance during the closed batch experiments. We thank Dr. Jessica Koslowski for proofreading and comments on the manuscript. R.-S.T. would like to thank Mark Perry and Britney Schmidt for discussions. We also thank David Parkhurst for helping with the PHREEQC code and Silas Boye Nissen for his support in the initial attempts of the serpentinization modelling. Financial support was obtained from the Österreichische Forschungsförderungsgesellschaft (FFG) with the Klimafonds Energieforschungprogramm in the frame of the BioHyMe project (grant 853615). R.-S.T. was financed by the University of Vienna (FPF-234) and a fellowship of L'Oréal Österreich.
Archaea Biology and Ecogenomics Division, Department of Ecogenomics and Systems Biology, Universität Wien, 1090, Vienna, Austria
Ruth-Sophie Taubner, Christian Pruckner, Philipp Kolar, Christa Schleper & Simon K.-M. R. Rittmann
Department of Astrophysics, Universität Wien, 1180, Vienna, Austria
Ruth-Sophie Taubner & Maria G. Firneis
Institute for Chemical Technology of Organic Materials, Johannes Kepler Universität Linz, 4040, Linz, Austria
Patricia Pappenreiter & Christian Paulik
Department of Geodynamics and Sedimentology, Center for Earth Sciences, Universität Wien, 1090, Vienna, Austria
Jennifer Zwicker, Daniel Smrzka & Jörn Peckmann
Krajete GmbH, 4020, Linz, Austria
Sébastien Bernacchi, Arne H. Seifert & Alexander Krajete
Geoscience Department, Universität Bremen, 28359, Bremen, Germany
Wolfgang Bach
Institute for Geology, Center for Earth System Research and Sustainability, Universität Hamburg, 20146, Hamburg, Germany
Jörn Peckmann
R.-S.T., P.P., C.Pr., P.K., and S.K.-M.R.R. performed the experiments. R.-S.T., P.P., S.B., A.H.S., C.Pa., and S.K.-M.R.R. designed the experiments. D.S. and J.Z. designed and performed PHREEQC modelling. W.B. supervised the PHREEQC modelling. R.-S.T., P.P., J.Z., D.S., S.B., A.H.S., A.K., W.B., J.P., C.Pa, M.G.F., C.S., and S.K.-M.R.R. discussed the data. R.-S.T., J.Z., D.S., W.B., J.P., C.S., and S.K.-M.R.R. wrote the manuscript.
Correspondence to Simon K.-M. R. Rittmann.
R.-S.T., P.P., D.S., J.Z., C.Pr., P.K., W.B., J.P., C.Pa., M.F., C.S., and S.K.-M.R.R. declare no competing financial interests. Due to an engagement in the Krajete GmbH, A.H.S., S.B., and A.K. declare competing financial interests.
Taubner, RS., Pappenreiter, P., Zwicker, J. et al. Biological methane production under putative Enceladus-like conditions. Nat Commun 9, 748 (2018). https://doi.org/10.1038/s41467-018-02876-y
BMC Ecology and Evolution
Admixture with indigenous people helps local adaptation: admixture-enabled selection in Polynesians
Mariko Isshiki ORCID: orcid.org/0000-0002-6565-69701,
Izumi Naka1,
Ryosuke Kimura2,
Nao Nishida3,
Takuro Furusawa4,
Kazumi Natsuhara5,
Taro Yamauchi6,
Minato Nakazawa7,
Takafumi Ishida1,
Tsukasa Inaoka8,
Yasuhiro Matsumura9,
Ryutaro Ohtsuka10 &
Jun Ohashi ORCID: orcid.org/0000-0003-1765-12831
BMC Ecology and Evolution volume 21, Article number: 179 (2021) Cite this article
Homo sapiens have experienced admixture many times in the last few thousand years. To examine how admixture affects local adaptation, we investigated genomes of modern Polynesians, who are shaped through admixture between Austronesian-speaking people from Southeast Asia (Asian-related ancestors) and indigenous people in Near Oceania (Papuan-related ancestors).
In this study local ancestry was estimated across the genome in Polynesians (23 Tongan subjects) to find the candidate regions of admixture-enabled selection contributed by Papuan-related ancestors.
The mean proportion of Papuan-related ancestry across the Polynesian genome was estimated as 24.6% (SD = 8.63%), and two genomic regions, the extended major histocompatibility complex (xMHC) region on chromosome 6 and the ATP-binding cassette transporter sub-family C member 11 (ABCC11) gene on chromosome 16, showed proportions of Papuan-related ancestry more than 5 SD greater than the mean (> 67.8%). The coalescent simulation under the assumption of selective neutrality suggested that such signals of Papuan-related ancestry enrichment were caused by positive selection after admixture (false discovery rate = 0.045). The ABCC11 harbors a nonsynonymous SNP, rs17822931, which affects apocrine secretory cell function. The approximate Bayesian computation indicated that, in Polynesian ancestors, a strong positive selection (s = 0.0217) acted on the ancestral allele of rs17822931 derived from Papuan-related ancestors.
Our results suggest that admixture with Papuan-related ancestors contributed to the rapid local adaptation of Polynesian ancestors. Considering frequent admixture events in human evolution history, the acceleration of local adaptation through admixture should be a common event in humans.
The human occupation of Oceania began approximately 47,000 years ago [38]. The first immigrants settled Sahul, a continent that comprised the land masses of present-day Australia, New Guinea, and the surrounding small islands. They are considered the ancestors of modern Papuans and Aboriginal Australians. They colonized the islands of New Britain and New Ireland, reaching the Solomon Islands by 28,000 years ago [65]. This region of initial colonization is known as Near Oceania. Probably due to the large expanse of ocean to the east of Near Oceania, Remote Oceania, which includes the eastern part of the Solomon Islands, Vanuatu, Fiji and all the islands of Polynesia, remained unoccupied until the Late Holocene.
Austronesian (AN)-speaking people from Southeast Asia who possessed the advanced navigation skills necessary for a long-distance voyage first colonized Remote Oceania. They are called the Lapita people after their culture, Lapita, which is characterized by pottery decorated with distinctive motifs. Remains of their characteristic pottery suggest that they originated in Taiwan and arrived in the Bismarck Archipelago about 3500 years ago [6, 28, 57]. Lapita people then expanded into Remote Oceania using their advanced navigation skills. They reached western Polynesia, Tonga and Samoa, 2900‒2700 years ago [9, 52, 53], and finally Hawaii, Easter Island and New Zealand by 1200–800 years ago [4, 17, 19]. They are considered the direct ancestors of modern Polynesians.
Several genetic studies have found that about 20–30% of the modern Polynesian genome was derived from Papuan-related ancestry, and the rest was derived from Asian-related ancestry [24, 27, 56, 66], indicating that the Asian-related Polynesian ancestors admixed with indigenous people in Near Oceania, the Papuan-related ancestors, during their expansion from Near Oceania to Polynesia. The admixture between Asian- and Papuan-related ancestors has been estimated to have occurred about 3000 years ago [47, 66].
Homo sapiens populations have experienced admixture many times during the species' expansion and adaptation all over the world [16]. Admixture with populations from different genetic backgrounds would lead to the introduction of adaptive genetic variants at intermediate frequencies into the gene pool, and thus can enable rapid adaptation to local environments. Such admixture-mediated adaptation must have played an important role in human evolution. In the case of Polynesians, their Papuan-related ancestors, who had inhabited Oceania for tens of thousands of years, might already have had some genetic components adapted to the Oceanian environment at the time of admixture. Therefore, it is expected that if the Polynesian ancestors had acquired genomic regions adaptive to the Oceanian environment through admixture, the frequency of those regions would increase by natural selection, and the regions would contribute to the adaptation of Polynesians. The phenomenon whereby genome regions introduced through admixture increase the fitness of admixed populations is called "adaptive introgression". This phenomenon is well-studied in the case of the admixture between modern humans and archaic hominins, such as Neanderthals and Denisovans [50], and adaptive introgression between modern human populations, often called admixture-enabled selection, has also been studied in a few specific groups [7, 8, 12, 15, 21, 22, 37, 42, 44, 54, 60, 68]. Recently, African ancestry enrichment around the human leukocyte antigen (HLA) region and signals of polygenic selection on immune function were observed in Latin American populations, suggesting that admixture drove rapid adaptive evolution in human populations [37]. Investigating the effect of admixture on local adaptation in Polynesian genomes is therefore important for deepening our understanding of human evolution.
Previously, we detected the signatures of local selective sweeps in Polynesian genomes by comparing the haplotype variation between Tongans and reference populations [27]. In this study, local ancestry was estimated across the genome in the same Polynesian subjects to reveal the effect of admixture on their local adaptation. Genomic regions with particularly high levels of Papuan-related ancestry in Polynesian genomes could have undergone admixture-enabled selection, based on the principle that the proportions of local ancestry are expected to be similar across the genome, unless affected by natural selection. We detected the signatures of admixture-enabled selection from Papuan-related ancestors in two genomic regions (chromosomes 6 and 16) in Polynesians. One of the regions harbored the ATP-binding cassette transporter sub-family C member 11 (ABCC11), and the ABCC11 allele (rs17822931-C), which determines wet earwax, was likely to have experienced a strong positive selection in Polynesians after the admixture. Our results suggest that the admixture with Papuan-related ancestors contributed to the rapid adaptation of Polynesian ancestors to the environment of Oceania.
Clustering analysis and Admixture proportion
Principal component analysis (PCA) and ADMIXTURE analysis [2] were performed on 179 individuals from five Oceanian and three Asian populations (Fig. 1). The percentages of variance explained were 7.18% for PC1 and 3.40% for PC2 (Fig. 1b). AN-speaking admixed populations (i.e. Munda, Rawaki and Tongans) were plotted between Papuans (i.e. Gidra) and Asians (i.e. CHB, Ami and Atayal) as expected from their population histories. The two Tongan populations obtained from different studies clustered together. Figure 1c illustrates individual ancestry proportions inferred by ADMIXTURE analysis for numbers of postulated ancestral populations (K) ranging from two to six. K = 5 provided the lowest cross-validation error (Additional file 1: Fig. S1). Assuming red and blue components for K = 2 as Asian- and Papuan-related ancestries, respectively, the proportion of Asian-related ancestry for Tongans was estimated as 71.4% (SD = 2.21%).
PCA and ADMIXTURE analysis of eight populations. a A map of eight populations analyzed in this study. AN: AN-speaking population. NAN: NAN-speaking population. b PCA plot for the eight populations. c Results of ADMIXTURE analysis for K ranging from 2 to 6. The lowest cross-validation error was obtained for K = 5. The map depicted in Fig. 1 (a) was taken from FREEWORLDMAP.NET (https://www.freeworldmaps.net/). 1 Tongans obtained from Kimura et al. [27]. 2Tongans obtained from Qin and Stoneking [48] and Pugach et al. [46]
The f3 statistics for Tongans were estimated using Gidra as a proxy for Papuan-related ancestors and the Han Chinese in Beijing (CHB) [61] or Aboriginal Taiwanese [30, 46, 48] as proxies for Asian-related ancestors, to examine whether Tongans descended from a mixture of the two ancestral populations. Concordant with previous studies, negative f3 statistics were observed regardless of which population was assumed as a proxy for the Asian-related ancestors, indicating that Tongans, or Polynesians, are descendants of a mixture of Papuan- and Asian-related ancestors (Table 1).
Table 1 Results of 3-Population test for Tongans
The proportion of Asian-related ancestry in Polynesian genomes was estimated using the f4 ratio test, assuming the phylogeny shown in Additional file 1: Fig. S2. The proportion of Asian-related ancestry was estimated as 67.4% (SE = 1.22%, Z = 55.2).
Natural selection acted on the genomic regions derived from Papuan-related ancestors in Polynesians
The contributions of Asian- and Papuan-related ancestry across the Polynesian genome were measured using the Effective Local Ancestry Inference (ELAI) algorithm [15], with CHB and Gidra as proxies for Asian- and Papuan-related ancestors, respectively. Figure 2 shows the mean proportion of Papuan-related ancestry across the Polynesian genome, estimated by the ELAI program with 100 admixture generations. The mean proportion of Papuan-related ancestry was estimated as 24.6% (SD = 8.63%), consistent with previous studies [24, 66]. Two genomic regions (chr6:26471596–28011652 and chr16:48226479–48266831) displayed proportions of Papuan-related ancestry more than 5 SD greater than the mean (Fig. 2). The detected regions on chromosomes 6 and 16 contained 51 SNPs and 4 SNPs, respectively. The high Papuan-related ancestry region on chromosome 6 (Papuan-related ancestry proportion: 67.72–69.37%) was located within the extended major histocompatibility complex (xMHC) region, in which 43 genes, including histone protein genes and DNA-binding protein genes, are clustered. Since this region contained 51 SNPs in strong LD, candidate genes could not be narrowed down from the current dataset. The high Papuan-related ancestry region on chromosome 16 (Papuan-related ancestry proportion: 67.85–69.03%) contained only the ABCC11 gene.
Proportion of Papuan-related ancestry across the Polynesian genome. Each color represents a different chromosome. The red dashed line represents the genome-wide mean. Blue, orange, and green dashed lines represent 2 SD, 4 SD, and 5 SD from the mean, respectively. Proportions of Papuan-related ancestry deviated from the mean by more than 5 SD in genomic regions on chromosomes 6 (xMHC region) and 16 (ABCC11 gene)
Since local ancestry inference can be very sensitive to the choice of source populations, we repeated the ELAI analysis assuming Aboriginal Taiwanese, who are considered to be closer to the Asian-related ancestors of Polynesians than CHB are, as a proxy for Asian-related ancestors. The two genomic regions were also detected in this analysis (Additional file 1: Fig. S3), suggesting that the high Papuan-related ancestry regions did not arise from differentiation between CHB and the Asian-related ancestors of Polynesians.
A recent study demonstrated that Polynesian genomes contain European and Native American ancestries [20]. Native American ancestry was observed in Eastern Polynesians, whereas European ancestry was observed in all Polynesian populations analyzed in that study. We therefore conducted ELAI analysis with a three-way admixture model, assuming CEU from the 1000 Genomes Project Phase 3 [1] as a proxy for European-related ancestors. Although a relatively small contribution from European-related ancestors was observed, the highest degrees of Papuan-related ancestry were again detected in the two genomic regions described above (Additional file 1: Fig. S4).
Coalescent simulations
To examine whether genetic drift alone could cause genomic regions to display proportions of Papuan-related ancestry more than 5 SD greater than the mean, ELAI analysis was conducted on whole-genome data generated by coalescent-based simulations assuming selective neutrality (see Methods for details). As shown in Additional file 1: Fig. S5, the mean and SD of Papuan-related ancestry estimated from the simulated data were similar to those of the real data. To evaluate the false discovery rate (FDR) of our approach, we performed 100 independent runs of coalescent-based simulation and subsequent ELAI analysis, and counted the number of independent genomic regions showing Papuan-related ancestry more than 5 SD above the mean in each run. In 91 of the 100 runs, no genomic region exceeded this threshold, and in each of the remaining 9 runs the excess was detected in a single genomic region. The threshold (mean + 5 SD) used in this study therefore corresponds to a family-wise error rate (FWER) of approximately 0.09. Since about 0.09 false-positive regions are thus expected per genome scan under neutrality, and two regions exceeded the threshold in the real genotype data, this corresponds to an FDR of approximately 0.045 (= 0.09/2). Thus, although the xMHC and ABCC11 regions could in principle have been shaped by genetic drift, they are more likely to have been shaped by positive selection since the admixture of the Papuan- and Asian-related ancestors of modern Polynesians.
Selection acted on the earwax-associated SNP (rs17822931) on ABCC11
To evaluate the intensity of positive selection on the ABCC11 gene, we focused on a nonsynonymous SNP (G180R), rs17822931. This SNP is known to affect apocrine secretory cell function and to determine phenotypes such as earwax type and body odor [34, 67]. The derived allele of rs17822931, rs17822931-T or 180R, which is associated with dry-type earwax and reduced body odor, is frequently observed in Northeast and East Asia [34, 67]. The allele frequencies in four Oceanian populations and in 40 individuals of HapMap CHB are shown in Fig. 3 and Table 2. The ancestral allele, rs17822931-C or 180G, was the major allele in the Oceanian populations.
Distribution of rs17822931 in Oceanian populations. Allele frequencies of rs17822931-C (ABCC11 allele for wet-type earwax) and -T (ABCC11 allele for dry-type earwax) for the four Oceanian populations genotyped in this study. The allele frequency in 40 individuals from HapMap CHB is shown for comparison. The map depicted in Fig. 3 was taken from FREEWORLDMAP.NET (https://www.freeworldmaps.net/)
Table 2 Frequency and LD statistics of rs12445647-T and rs17822931-C
Approximate Bayesian computation with a forward-time simulation was conducted to estimate the selection coefficient, s, for rs17822931-C in Tongans. Only simulation runs in which the final allele frequency resembled that observed in Tongans were accepted. The distribution of s in 10,000 accepted runs is shown in Additional file 1: Fig. S6. The mean of s was 0.0217, and the 95% credible interval was 0.0124–0.0309.
Origin of rs17822931-C in Polynesians
Next, to examine whether the rs17822931-C allele in Tongans, or Polynesians, originated from Papuan-related ancestors, rs12445647 was genotyped as a tag SNP for rs17822931-C alleles derived from Papuan-related ancestors. As shown in Table 2, rs12445647-T was observed at a higher frequency in Oceanian populations than in the other populations. More importantly, the r2 value, a measure of linkage disequilibrium (LD), between rs12445647-T and rs17822931-C was higher in Polynesian and Papuan populations than in Asian populations. This high LD, shared by modern Polynesian and Papuan populations, implies that the haplotype harboring rs12445647-T and rs17822931-C in Polynesians originated mainly from Papuan-related ancestors.
Haplotype and expression level of ABCC11
The rapid increase in the frequency of rs17822931-C in Polynesian ancestors might be explained if the haplotype harboring rs12445647-T and rs17822931-C, which is thought to have been introgressed from Papuan-related ancestors into the Polynesian ancestors, were functionally different from the haplotype harboring rs12445647-G and rs17822931-C. Since the amino acid at position 180 of the ABCC11 protein is glycine in both haplotypes, there should be no difference in protein function. We therefore examined the possibility that the ABCC11 mRNA expression level differs between the haplotype harboring rs12445647-T and rs17822931-C and the haplotype harboring rs12445647-G and rs17822931-C, using publicly available data: genotype data of the 1000 Genomes Project Phase 3 populations [1] and microarray data of the HapMap3 populations [13, 18, 59] obtained from the ArrayExpress database at EMBL-EBI. Since no significant association of rs12445647 with the ABCC11 expression level was observed in 217 unrelated subjects with the rs17822931-CC genotype (Additional file 1: Fig. S7), the haplotype harboring rs12445647-T and rs17822931-C, which originated from Papuan-related ancestors, does not appear to be a special haplotype that affects the expression level of the ABCC11 gene.
The results of PCA and ADMIXTURE were consistent with previous studies, indicating that the largest fraction of Polynesian genomes derives from Asian-related ancestry [24, 27, 46, 56, 66]. The f3 statistics revealed that Polynesians experienced admixture between Papuan- and Asian-related ancestors (Table 1). The proportion of Asian-related ancestry was estimated as 71.4% and 67.4% by ADMIXTURE (K = 2) and the f4 ratio test, respectively, and the mean proportion of Asian-related ancestry across the Polynesian genome was estimated as 75.4% (SD = 8.63%) in the ELAI analysis. These estimates are equivalent to the proportions estimated by STRUCTURE analysis [45] in our previous study [27]. Based on these results, the proportion of Asian-related ancestry in the Polynesian genome is expected to fall within 70–80%.
Two genomic regions, xMHC and ABCC11, with proportions of Papuan-related ancestry greater than 5 SD above the mean were detected in this study (Fig. 2). To examine the possibility that the excess of Papuan-related ancestry was an artifact of the assumed reference populations, we repeated the analysis with other reference populations. The xMHC and ABCC11 regions were detected even when Aboriginal Taiwanese were used as a proxy for Asian-related ancestry and when recent European contact was modeled (Additional file 1: Figs. S3 and S4). It is therefore plausible that the detected regions are not an artifact of the assumed reference populations. However, we cannot exclude the possibility that the excess of Papuan-related ancestry reflects differentiation between the Lapita people, the direct Asian-related ancestors of Polynesians, and extant East Asian populations, since the two were genetically distinct [56]. Coalescent simulations assuming selective neutrality suggested that these regions are likely to have been subjected to admixture-enabled selection (i.e., admixture with Papuan-related ancestors contributed to the rapid local adaptation of Polynesian ancestors). One of the two candidate regions of admixture-enabled selection was located in the xMHC region. This region overlapped the candidate regions for selective sweeps identified in our previous study using the same Tongan genotype data [27]. Genes involved in various biological processes, such as the immune response and the epigenetic regulation of gene expression, are clustered within this region. However, a single candidate polymorphism under positive selection was difficult to identify among the many SNPs in the region because of the strong LD between them. In this study, we therefore focused on the other region, in which ABCC11 is the only protein-coding gene. This region was not detected in our previous scan for positive selection using LD-based methods [27]. A number of LD-based methods, such as REHH [55], iHS [63], and rMHH and rHH [26], have been developed to detect signatures of recent positive selection acting on beneficial derived alleles. In these methods, the ratio of the degree of LD (e.g., extended haplotype homozygosity in REHH) between the derived and ancestral alleles is used as a test statistic; the statistic for a beneficial derived allele is expected to be larger than that for a neutral derived allele at the same frequency, since the former exhibits a higher degree of LD. LD-based methods are thus suited to detecting recent positive selection acting on derived alleles, but have low statistical power for selection on an ancestral allele in an admixed population: haplotypes harboring the ancestral allele have already undergone many recombination events in the ancestral populations before the admixture, so no extended LD is observed even if the ancestral allele rapidly increases in frequency after the admixture. This is thought to be the main reason why the positive selection acting on the ABCC11 gene was not detected in our previous study [27].
A nonsynonymous SNP of ABCC11, rs17822931, which affects apocrine secretory cell function and determines the type of earwax and body odor an individual produces, exhibited a large difference in frequency between East Asian and Oceanian populations (Fig. 3). Analysis of the tag SNP, rs12445647, suggested that the majority of rs17822931-C alleles in Polynesians originated from Papuan-related ancestors rather than from Asian-related ancestors (Table 2). The frequency of rs17822931-C was therefore low in Polynesian ancestors at the time of admixture and has rapidly increased due to positive selection since then. The derived allele, rs17822931-T, has been shown to have experienced positive selection in East Asians as an adaptation to a colder climate, with a selection coefficient estimated at approximately 0.01 [26, 39]. The selection coefficient of rs17822931-C in Polynesians was estimated to be 0.0217 (Additional file 1: Fig. S6), indicating that the positive selection acting on the ancestral allele in Polynesian ancestors was stronger than that on the derived allele in East Asians. To the best of our knowledge, this study is the first report of positive selection having acted on rs17822931-C, the ancestral allele associated with the production of wet earwax.
The frequency of rs17822931-T shows a north–south gradient in East Asia [67]. Considering that rs17822931-T experienced positive selection in Northeast Asia [39], the observed latitudinal frequency gradient may reflect a shifting balance in which allele was favored more strongly in each environment (e.g., climatic conditions, temperature, and pathogen prevalence). However, the evolutionary significance of rs17822931 is still unknown. One possible explanation for positive selection acting on rs17822931-C in Polynesian ancestors is the association of rs17822931 with the amount of apocrine colostrum secretion [36]. Colostrum has an important role in the development of the immune system in newborns [62]. Women with the rs17822931-C allele are significantly less likely to lack colostrum and produce significantly more colostrum than women without rs17822931-C [36]. Since various pathogens are present in tropical regions, colostrum may have been particularly important before the development of modern medical technologies. Genes involved in immune function often show strong signatures of selection, and admixture-enabled selection on immune-related genes has been reported in admixed populations [7, 8, 12, 15, 21, 22, 42, 44, 49, 54, 60, 68]. Consistently, as the xMHC region also contains genes involved in immune responses against pathogens, infectious disease is a possible driver of the selection in these regions. However, in addition to its role in apocrine secretion, the wild-type ABCC11 protein transports various substrates such as bile acids, conjugated steroids, and cyclic nucleotides [11]. Since these substrates are involved in various physiological processes, further investigation is necessary to clarify the driving force of the positive selection that acted on ABCC11 in Polynesian ancestors.
Two recent ancient DNA studies suggested that the first immigrants into Remote Oceania almost entirely lacked Papuan-related ancestry components [31, 56]. It is therefore possible that the admixture occurred after the settlement of Polynesian ancestors in Remote Oceania, in contrast to the population history postulated in this study, in which Polynesian ancestors admixed with Papuan-related ancestors before the expansion into Remote Oceania. Although the location and timing of the admixture event require further investigation, there is no doubt that admixture-enabled selection occurred in Polynesian ancestors after the admixture with Papuan-related ancestors.
In this study, we detected two genomic regions subjected to admixture-enabled selection in Polynesians. De novo mutations adaptive to the environment generally take a much longer time to reach high frequencies. For the Asian-related ancestors of Polynesians, it would therefore have been advantageous to acquire pre-existing genetic variants through admixture with Papuan-related ancestors, who had already adapted to the Oceanian environment over tens of thousands of years. A similar acceleration of adaptation has been observed in Latin American populations [37]. As Homo sapiens have experienced admixture many times in the last few thousand years [16], admixture-enabled selection should be a common event in humans.
Subjects and data
A genome-wide SNP dataset was used that comprised 24 Tongan individuals, AN-speaking Polynesians living on Ha'apai Island and in Nuku'alofa of the Kingdom of Tonga; 24 individuals from Gidra, NAN-speaking Melanesians (Papuans) in the lowlands of Western Province, Papua New Guinea [27]; 21 individuals from Munda, AN-speaking Melanesians in the New Georgia Islands in the western part of the Solomon Islands [21]; and 24 individuals from Rawaki, AN-speaking Micronesians who migrated from the overpopulated Gilbert Islands (Kiribati) to the New Georgia Islands in the 1960s. All individuals were genotyped using the Affymetrix GeneChip® Human Mapping 250 K Nsp SNP array. After merging with the HapMap genotype data of 45 unrelated CHB individuals [61], the dataset consisted of 231,049 autosomal SNPs.
Since the Lapita people, the direct ancestors of Polynesians, are thought to have originated in Taiwan, a second dataset including Aboriginal Taiwanese was also prepared. Thirty-five Aboriginal Taiwanese (16 Atayal and 19 Ami individuals) and six Tongans [30, 46, 48] were added to the above dataset. After SNPs with a genotyping rate lower than 0.95 were filtered out, 179 individuals and 49,523 SNPs remained.
PCA and ADMIXTURE
Principal component analysis (PCA) and ADMIXTURE analysis were performed on the second dataset comprising 179 individuals and 49,523 autosomal SNPs. PCA was conducted using PLINK v1.90b5.2 [10]. ADMIXTURE analysis was carried out using ADMIXTURE version 1.3.0 [2] for values of K from 2 through 6. The cross-validation procedure implemented in the ADMIXTURE package was used to find the best value of K. The results were plotted using the pophelper R package v2.3.0 [14].
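For illustration, the plotting step could look like the minimal R sketch below. The directory name admixture_out and the log-file naming are assumptions, not the authors' actual paths; readQ() and plotQ() are the standard pophelper functions for reading ADMIXTURE .Q files and drawing stacked bar plots, and the cross-validation errors are taken from the "CV error" lines that ADMIXTURE writes to its log files.

```r
library(pophelper)

# read ADMIXTURE ancestry-proportion files (*.Q) for K = 2..6
# "admixture_out" is a hypothetical output directory
qfiles <- list.files("admixture_out", pattern = "\\.Q$", full.names = TRUE)
qlist  <- readQ(qfiles)

# draw stacked bar plots of individual ancestry proportions, one panel per K
plotQ(qlist, imgoutput = "join", exportpath = getwd())

# pick the best K from the cross-validation errors in the ADMIXTURE log files
logs <- list.files("admixture_out", pattern = "\\.log$", full.names = TRUE)
cv   <- sapply(logs, function(f) {
  line <- grep("CV error", readLines(f), value = TRUE)
  as.numeric(sub(".*: ", "", line))
})
cv
```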
3-population test and f4 ratio test
The 3-population test and the f4 ratio test were conducted on the dataset containing Aboriginal Taiwanese using the AdmixTools package version 4.1 [43]. As an outgroup for the f4 ratio test, the HapMap data of 60 unrelated individuals from the Yoruba in Ibadan, Nigeria (YRI) [61] were also merged. After merging all the datasets, SNPs with a genotyping rate lower than 0.95 were filtered out, leaving 49,523 SNPs.
The f3(C; A, B) statistic is expected to be negative if population C descends from a mixture of populations A and B. Assuming CHB or Aboriginal Taiwanese as a proxy for Asian-related ancestors and Gidra as a proxy for Papuan-related ancestors, f3(Tonga; CHB or Taiwanese, Gidra) was calculated to test whether admixture occurred in the ancestors of the Tongan population.
An f4 ratio test, assuming the population relationships shown in Additional file 1: Fig. S2, was performed to estimate the admixture proportions. The proportions of Asian-related and Papuan-related ancestry, α and 1 − α, respectively, were estimated by computing the ratio of two f4 statistics:
α = f4(CHB, YRI; Tonga, Gidra)/f4(CHB, YRI; Taiwan, Gidra) [43, 51].
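As a rough illustration of what these statistics measure, the sketch below computes naive f3 and f4 values and the f4 ratio directly from population allele frequencies. It omits the small-sample bias correction and the block-jackknife standard errors that AdmixTools applies, and the input matrix freq is an assumed placeholder (SNPs in rows, one column of derived-allele frequencies per population), not data from this study.

```r
# freq: assumed matrix of derived-allele frequencies, rows = SNPs,
# columns = c("Tonga", "Gidra", "CHB", "Taiwan", "YRI")

f3 <- function(freq, C, A, B) {
  # f3(C; A, B) = E[(c - a)(c - b)]; negative values indicate admixture in C
  mean((freq[, C] - freq[, A]) * (freq[, C] - freq[, B]), na.rm = TRUE)
}

f4 <- function(freq, A, B, C, D) {
  # f4(A, B; C, D) = E[(a - b)(c - d)]
  mean((freq[, A] - freq[, B]) * (freq[, C] - freq[, D]), na.rm = TRUE)
}

# 3-population test for admixture in Tongans
f3(freq, "Tonga", "CHB", "Gidra")

# f4 ratio estimate of the Asian-related ancestry proportion alpha
alpha <- f4(freq, "CHB", "YRI", "Tonga", "Gidra") /
         f4(freq, "CHB", "YRI", "Taiwan", "Gidra")
alpha
```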
Genome-wide scan for natural selection
To detect signals of natural selection acting on genomic regions derived from Papuan-related ancestors in Polynesian genomes, ELAI analysis [15] was performed across the genome for 23 Tongan subjects, assuming CHB and Gidra as proxies for their Asian- and Papuan-related ancestors, respectively. The ELAI method computes the expected ancestry dosage at each marker for each individual using a two-layer hidden Markov model. To exclude close relatives, the identity-by-descent (IBD) value of each pair of individuals was checked. IBD values were calculated after LD pruning with PLINK v1.90b5.2 [10] using the following settings, which define the window size, step, and r2 threshold: --indep-pairwise 50 5 0.5. Since one pair of individuals showed an IBD value higher than 0.125 (IBD value = 0.5185), one individual from this pair was excluded from the following analyses.
A total of 162,358 autosomal SNPs with a genotyping rate higher than 0.95 that were polymorphic in each population were used for the ELAI analysis. The analysis was performed with ELAI version 1.00, setting the number of EM steps to 20, the upper-layer number of clusters to 2, and the lower-layer number of clusters to 10, in accordance with the manual [15]. Since the ancestors of Polynesians are considered to have reached Oceania about 3000 years ago, the number of admixture generations was set to 100, consistent with the admixture dates for Polynesian populations estimated in previous studies: 83 generations (95% CI 66–112) [46], 90 generations (95% CI 77–131) [47], and 99 generations (95% CI 19–267) [66]. Statistical analysis was conducted using R version 3.5.3 (https://www.R-project.org/). The mean Papuan-related ancestry proportion at each SNP among the Tongan subjects was calculated and plotted across the genome using the R package "ggplot2" version 3.1.1 [64]. The list of NCBI RefSeq genes in the detected regions was downloaded from the UCSC Table Browser, implemented in the UCSC Genome Browser [23, 25].
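The scan itself reduces to averaging the ELAI ancestry dosages per SNP and flagging SNPs beyond the genome-wide mean + 5 SD. A minimal sketch is given below; it assumes the Papuan-related dosages (values between 0 and 2 per individual) have already been read into a matrix dosage and that snp_info holds the chromosome and position of each SNP. Both object names are hypothetical placeholders, not ELAI's actual output format.

```r
library(ggplot2)

# dosage: assumed matrix (individuals x SNPs) of Papuan-related ancestry dosages from ELAI (0..2)
# snp_info: assumed data frame with columns chr and pos, one row per SNP
prop_papuan <- colMeans(dosage) / 2          # per-SNP mean Papuan-related ancestry proportion
m   <- mean(prop_papuan)
thr <- m + 5 * sd(prop_papuan)               # outlier threshold used in this study

scan <- data.frame(snp_info, prop = prop_papuan,
                   outlier = prop_papuan > thr)
subset(scan, outlier)                        # candidate SNPs of admixture-enabled selection

# Manhattan-style plot of Papuan-related ancestry across the genome
ggplot(scan, aes(x = pos, y = prop, colour = factor(chr))) +
  geom_point(size = 0.3) +
  geom_hline(yintercept = c(m, thr), linetype = "dashed") +
  facet_grid(. ~ chr, scales = "free_x", space = "free_x") +
  theme(legend.position = "none")
```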
The ELAI analysis was also conducted assuming Aboriginal Taiwanese (Ami and Atayal, n = 35) [30, 46, 48] as a proxy for Asian-related ancestors. This analysis was performed on the dataset consisting of 49,523 autosomal SNPs with the same parameters as above. In addition, to examine the effect of recent European contact, the ELAI analysis was conducted with a three-way admixture model assuming CHB, Gidra, and CEU (n = 40) as proxies for Asian-, Papuan-, and European-related ancestry, respectively. This analysis was performed on the dataset consisting of 198,803 autosomal SNPs with the same parameters as above.
Coalescent-based simulation
Coalescent simulations under the assumption of selective neutrality were performed to assess whether genetic drift alone could produce patterns of admixture across the genome similar to those observed in the Tongan subjects. The simulations were performed using the R package "scrm" version 1.7.3.1 [58]. To reproduce the population history of Gidra, CHB, and Tonga, a simple population history based on a single-dispersal model into Asia [33, 35, 41] was assumed, as described below. First, two subpopulations (Anc1 and Anc2) diverged from one ancestral population 1667 generations ago, corresponding to 50,000 years ago with a generation time of 30 years. Next, subpopulations diverged from Anc1 and Anc2, respectively, and admixed with each other 100 generations (3000 years) ago. The descendants of Anc1 and Anc2 were regarded as the Gidra and CHB populations, respectively, and the admixed population was regarded as the Tongan (Polynesian) population. Segregating sites within a 1 Mb-long sequence were sampled 3000 times (i.e., 3 Gb in total) for 48, 46, and 90 chromosomes from the hypothetical Gidra, Tongan, and CHB populations, respectively. The mutation rate and recombination rate were set to 1.2 × 10−8 and 1.3 × 10−8 per base per generation, respectively [3, 29]. The admixture rate in the simulation was set to the mean proportions of Papuan-related and Asian-related ancestry estimated in the ELAI analysis. Genotype data for 24, 23, and 45 individuals from the hypothetical Gidra, Tongan, and CHB populations were generated from the simulated sequences. To account for the SNP ascertainment bias observed in the real data, 162,358 SNPs exhibiting a distribution of minor allele frequencies similar to that of the real data were randomly extracted from the simulated genotype data. The coalescent simulations were performed for various population sizes (N = 250, 500, 750, 1,000, 5,000 and 10,000 for each population). Since the mean and SD of Papuan-related ancestry estimated from the simulated data for N = 1,000 were the most similar to those of the real data (mean = 24.0%, SD = 10.3%), a population size of 1,000 was used for each population in the following analysis. The R code for the coalescent simulation under the assumption of selective neutrality is provided in Additional file 2.
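The following sketch illustrates how such a neutral history could be set up with scrm's ms-style command line; it is not the authors' Additional file 2 code, and it assumes that scrm accepts the ms-style -es/-ej options for the admixture and population-join events. Times are in units of 4N generations (N = 1,000), the admixture fractions follow the ELAI estimates, and the tiny time offset for the joins is an assumption made so that the split is processed before the joins.

```r
library(scrm)

N      <- 1000                       # diploid size of each hypothetical population
L      <- 1000000                    # locus length (1 Mb)
theta  <- 4 * N * 1.2e-8 * L         # population mutation rate per locus
rho    <- 4 * N * 1.3e-8 * L         # population recombination rate per locus
t_adm  <- 100  / (4 * N)             # admixture, 100 generations ago (coalescent time units)
t_div  <- 1667 / (4 * N)             # Anc1/Anc2 divergence, 1667 generations ago
eps    <- 1e-6                       # joins placed just after the admixture split

# pop 1 = Gidra (48 chrom.), pop 2 = Tonga (46), pop 3 = CHB (90)
cmd <- paste(
  "184 3000",                                   # 184 chromosomes, 3000 independent 1-Mb loci
  "-t", theta, "-r", rho, format(L, scientific = FALSE),
  "-I 3 48 46 90",
  "-es", t_adm, "2 0.754",                      # Tonga splits: 75.4% Asian-derived lineages stay in pop 2
  "-ej", t_adm + eps, "2 3",                    # Asian-derived lineages join the CHB ancestor
  "-ej", t_adm + eps, "4 1",                    # Papuan-derived lineages (new pop 4) join the Gidra ancestor
  "-ej", t_div, "3 1"                           # Anc2 joins Anc1
)

# reduce the number of loci for a quick test; the full run is computationally heavy
sim <- scrm(cmd)                                # sim$seg_sites: one haplotype matrix per locus
length(sim$seg_sites)
```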
To evaluate the family-wise error rate (FWER) and false discovery rate (FDR) of our approach, we performed 100 independent runs of coalescent-based simulations and subsequent ELAI analyses with the same settings as described above. The number of independent genomic regions that exceeded 5 SD from the mean was counted for each simulation run. Here, the mean and SD of Papuan-related ancestry were determined in each run.
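Counting outlier regions across the neutral replicates then gives the FWER and FDR directly. In the sketch below, sim_scans is an assumed list with one per-SNP Papuan-ancestry vector (ordered along the genome) per simulation run; contiguous runs of SNPs above the threshold are collapsed into single regions with rle().

```r
# sim_scans: assumed list of 100 numeric vectors, each the per-SNP Papuan-related
# ancestry proportion from one neutral simulation run (SNPs in genomic order)
count_regions <- function(prop) {
  above <- prop > mean(prop) + 5 * sd(prop)     # threshold recomputed within each run
  r <- rle(above)
  sum(r$values)                                 # number of contiguous outlier regions
}

n_regions  <- sapply(sim_scans, count_regions)
fwer       <- mean(n_regions >= 1)              # 9/100 = 0.09 in this study
n_observed <- 2                                 # xMHC and ABCC11 regions in the real data
fdr        <- mean(n_regions) / n_observed      # expected false positives per observed region
c(FWER = fwer, FDR = fdr)
```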
Linkage disequilibrium analysis
To identify a tag SNP for rs17822931-C alleles derived from Papuan-related ancestors, the LD (D' and r2) of rs17822931 with other SNPs in the flanking region of ABCC11 was evaluated in 14 Papuans from the Simons Genome Diversity Project (SGDP) [33] using Haploview 4.1 [5]. An SNP, rs12445647, which was in strong LD with rs17822931 in the SGDP Papuans (D' = 1 and r2 = 1) and was observed at high frequency in Papuans but at low frequency in the 1000 Genomes Project Phase 3 populations [1], was selected as the tag SNP. The LD of rs17822931 with rs12445647 was evaluated in each of the YRI, CEU, CHB, JPT, CHS, CDX, and KHV populations of the 1000 Genomes Project Phase 3 [1] using LDlink [32].
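For reference, D' and r2 for a pair of biallelic SNPs can be computed from phased haplotype frequencies as in the short sketch below; the haplotype counts hap_counts are a made-up example, not data from this study.

```r
ld_stats <- function(pAB, pA, pB) {
  # pAB: frequency of the A-B haplotype; pA, pB: allele frequencies at the two loci
  D <- pAB - pA * pB
  Dmax <- if (D >= 0) min(pA * (1 - pB), (1 - pA) * pB) else min(pA * pB, (1 - pA) * (1 - pB))
  c(D_prime = D / Dmax,
    r2      = D^2 / (pA * (1 - pA) * pB * (1 - pB)))
}

# hypothetical phased haplotype counts for (rs12445647, rs17822931): T-C, T-T, G-C, G-T
hap_counts <- c(TC = 20, TT = 0, GC = 4, GT = 24)
p <- hap_counts / sum(hap_counts)
ld_stats(pAB = p[["TC"]],
         pA  = p[["TC"]] + p[["TT"]],   # frequency of rs12445647-T
         pB  = p[["TC"]] + p[["GC"]])   # frequency of rs17822931-C
```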
Genotyping rs17822931 and rs12445647 in Oceanian populations
Two SNPs, rs17822931 and rs12445647, were genotyped by TaqMan assays in a total of 616 adult subjects (18 years old or older) from four Oceanian populations: Tongans (n = 174), Munda (n = 170), Gidra (n = 165), and Rawaki (n = 107). The Munda are AN-speaking Melanesians in the New Georgia Islands in the western part of the Solomon Islands. Rawaki village is also located in the Solomon Islands, but its inhabitants are regarded as AN-speaking Micronesians, as they migrated there from the overpopulated Gilbert Islands (Kiribati) in the 1960s [40]. Blood sampling was conducted after obtaining informed consent from each subject. Genomic DNA was extracted from peripheral blood using a QIAamp Blood Kit (Qiagen, Hilden, Germany). The LD of rs17822931 with rs12445647 was evaluated in each Oceanian population using Haploview 4.1 [5].
Approximate Bayesian computation for estimation of selection coefficient
Approximate Bayesian computation was used to estimate the selection coefficient (s) for rs17822931-C. We used a forward-time simulation assuming the relative fitnesses of the CC, CT, and TT genotypes at rs17822931 to be 1, 1 − s, and 1 − 2s, respectively. The change in the frequency of rs17822931-C was modeled as follows: the expected allele frequency of rs17822931-C at generation t + 1 is given by
$$p_{t + 1} = \frac{p_{t}^{2} + p_{t}\left(1 - p_{t}\right)\left(1 - s\right)}{p_{t}^{2} + 2p_{t}\left(1 - p_{t}\right)\left(1 - s\right) + \left(1 - p_{t}\right)^{2}\left(1 - 2s\right)},$$
where pt is the allele frequency of rs17822931-C in a Polynesian population at generation t since the admixture of Papuan-related and Asian-related ancestors. Assuming that the population size, N, is constant, pt is expressed as it/2N, where it is the number of copies of rs17822931-C at generation t. The number of copies of rs17822931-C at generation t + 1 follows the binomial distribution:
$$\Pr\left( i_{t + 1} \mid p_{t} \right) = \binom{2N}{i_{t + 1}} \, p_{t + 1}^{\, i_{t + 1}} \left( 1 - p_{t + 1} \right)^{2N - i_{t + 1}}$$
In the computer simulation, it+1 was generated as a random number from this binomial distribution. The initial allele frequency of rs17822931-C in a Polynesian population soon after admixture was derived from the present allele frequencies in the Papuan (Gidra) and Asian (CHB) populations (0.915 and 0.049, respectively) and the admixture proportions estimated by the ELAI analysis (0.246 for Papuan-related ancestry and 0.754 for Asian-related ancestry): 0.915 × 0.246 + 0.049 × 0.754 ≈ 0.26. The population size, N, was set to 1,000. In each simulation run, the value of s was randomly drawn from a uniform distribution in the range (0, 1). The value of s was recorded only when the allele frequency of rs17822931-C after 100 generations, corresponding to 3000 years, fell within ±5% of the observed allele frequency in the present Tongan population (i.e., 0.722 to 0.798). The mean and 95% credible interval of s were calculated from 10,000 successful runs. The computer simulation was implemented in R 3.5.3. The R code for the approximate Bayesian computation with forward simulation is provided in Additional file 3.
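A compact version of this rejection-sampling scheme is sketched below. It is not the authors' Additional file 3 code; all numerical values are taken from the description above, and the naive accept/reject loop is slow because the acceptance window is narrow.

```r
set.seed(1)

simulate_final_freq <- function(s, p0 = 0.26, N = 1000, n_gen = 100) {
  p <- p0
  for (t in seq_len(n_gen)) {
    # deterministic change due to selection (fitnesses: CC = 1, CT = 1 - s, TT = 1 - 2s)
    w_bar <- p^2 + 2 * p * (1 - p) * (1 - s) + (1 - p)^2 * (1 - 2 * s)
    p_sel <- (p^2 + p * (1 - p) * (1 - s)) / w_bar
    # genetic drift: binomial sampling of 2N allele copies
    p <- rbinom(1, 2 * N, p_sel) / (2 * N)
  }
  p
}

accepted <- numeric(0)
while (length(accepted) < 10000) {
  s <- runif(1, 0, 1)                         # uniform prior on the selection coefficient
  p_final <- simulate_final_freq(s)
  if (p_final >= 0.722 && p_final <= 0.798)   # within +/-5% of the observed Tongan frequency
    accepted <- c(accepted, s)
}

mean(accepted)                                # posterior mean of s
quantile(accepted, c(0.025, 0.975))           # 95% credible interval
```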
Comparison of the expression level of ABCC11 between haplotypes
The expression level of ABCC11 was compared between the haplotype harboring rs12445647-T and rs17822931-C and the haplotype harboring rs12445647-G and rs17822931-C using publicly available data: genotype data of the 1000 Genomes Project Phase 3 populations [1] and microarray data of the HapMap3 populations [13, 18, 59] obtained from the ArrayExpress database at EMBL-EBI (www.ebi.ac.uk/arrayexpress) under accession number E-MTAB-264. A total of 217 unrelated subjects with the rs17822931-CC genotype included in both datasets were used for a regression analysis, in which the independent variable for rs12445647 was coded as the number of copies of rs12445647-T (i.e., GG = 0, GT = 1, TT = 2). The 217 subjects belonged to the CHB, GIH, JPT, LWK, MEX, MKK, and YRI populations of the 1000 Genomes Project.
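The regression itself is an ordinary additive-model fit; a minimal sketch is shown below, where expr_df is an assumed data frame joining the ABCC11 probe intensities with the rs12445647 genotypes of the 217 rs17822931-CC subjects (the column names are hypothetical).

```r
# expr_df: assumed data frame with columns
#   expr       - normalized ABCC11 microarray expression value
#   rs12445647 - number of copies of the T allele (GG = 0, GT = 1, TT = 2)
#   population - 1000 Genomes population label (CHB, GIH, JPT, LWK, MEX, MKK, YRI)

fit <- lm(expr ~ rs12445647, data = expr_df)       # additive genotype effect on expression
summary(fit)$coefficients["rs12445647", ]          # effect size, SE, t value, p value
```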
The data newly created in this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
xMHC:
Extended major histocompatibility complex
ABCC11:
ATP-binding cassette transporter sub-family C member 11
HLA:
Human leukocyte antigen
PCA:
Principal component analysis
CHB:
Han Chinese population of Beijing
ELAI:
Effective Local Ancestry Inference
FDR:
False discovery rate
FWER:
Family-wise error rate
LD:
Linkage disequilibrium
1000 Genomes Project Consortium, Auton A, Brooks LD, Durbin RM, Garrison EP, Kang HM, Korbel JO, Marchini JL, McCarthy S, McVean GA, et al. A global reference for human genetic variation. Nature. 2015;526:68–74. https://doi.org/10.1038/nature15393.
Alexander DH, Novembre J, Lange K. Fast model-based estimation of ancestry in unrelated individuals. Genome Res. 2009;19:1655–64. https://doi.org/10.1101/gr.094052.109.
Altshuler DL, Durbin RM, Abecasis GR, Bentley DR, Chakravarti A, Clark AG, Collins FS, De La Vega FM, Donnelly P, Egholm M, et al. A map of human genome variation from population-scale sequencing. Nature. 2010;467:1061–73. https://doi.org/10.1038/nature09534.
Athens JS, Toggle HD, Ward JV, Welch DJ. Avifaunal extinctions, vegetation change, and Polynesian impacts in prehistoric Hawai'i. Archaeol Ocean. 2002;37:57–78. https://doi.org/10.1002/j.1834-4453.2002.tb00507.x.
Barrett JC, Fry B, Maller J, Daly MJ. Haploview: analysis and visualization of LD and haplotype maps. Bioinformatics. 2005. https://doi.org/10.1093/bioinformatics/bth457.
Bellwood P, Burns E. The Batanes archaeological project and the "out-of-Taiwan" hypothesis for Austronesian dispersal. J Austronesian Stud. 2005;1:1–36.
Bhatia G, Tandon A, Patterson N, Aldrich MC, Ambrosone CB, Amos C, Bandera EV, Berndt SI, Bernstein L, Blot WJ, et al. Genome-wide scan of 29,141 African Americans finds no evidence of directional selection since admixture. Am J Hum Genet. 2014;95:437–44. https://doi.org/10.1016/j.ajhg.2014.08.011.
Brisbin A, Bryc K, Byrnes J, Zakharia F, Omberg L, Degenhardt J, Reynolds A, Ostrer H, Mezey JG, Bustamante CD. PCAdmix: principal components-based assignment of ancestry along each chromosome in individuals with admixed ancestry from two or more populations. Hum Biol. 2012;84:343–64. https://doi.org/10.3378/027.084.0401.
Burley DV, Dickinson WR, Barton A, Shutler R. Lapita on the Periphery. New data on old problems in the Kingdom of Tonga. Archaeol Ocean. 2001;36:89–104. https://doi.org/10.1002/j.1834-4453.2001.tb00481.x.
Chang CC, Chow CC, Tellier LCAM, Vattikuti S, Purcell SM, Lee JJ. Second-generation PLINK: rising to the challenge of larger and richer datasets. Gigascience. 2015;4:7. https://doi.org/10.1186/s13742-015-0047-8.
Chen ZS, Guo Y, Belinsky MG, Kotova E, Kruh GD. Transport of bile acids, sulfated steroids, estradiol 17-β-D- glucuronide, and leukotriene C4 by human multidrug resistance protein 8 (ABCC11). Mol Pharmacol. 2005;67:545–57. https://doi.org/10.1124/mol.104.007138.
Deng L, Ruiz-Linares A, Xu S, Wang S. Ancestry variation and footprints of natural selection along the genome in Latin American populations. Sci Rep. 2016;6:1–7. https://doi.org/10.1038/srep21766.
Dimas AS, Nica AC, Montgomery SB, Stranger BE, Raj T, Buil A, Giger T, Lappalainen T, Gutierrez-Arcelus M, McCarthy MI, et al. Sex-biased genetic effects on gene regulation in humans. Genome Res. 2012. https://doi.org/10.1101/gr.134981.111.
Francis RM. POPHELPER: an R package and web app to analyse and visualise population structure. Mol Ecol Resour. 2017;17:27–32. https://doi.org/10.1111/1755-0998.12509.
Guan Y. Detecting structure of haplotypes and local ancestry. Genetics. 2014;196:625–42. https://doi.org/10.1534/genetics.113.160697.
Hellenthal G, Busby GBJ, Band G, Wilson JF, Capelli C, Falush D, Myers S. A genetic atlas of human admixture history. Science. 2014;343:747–51. https://doi.org/10.1126/science.1243518.
Hogg AG, Higham TFG, Lowe DJ, Palmer JG, Reimer PJ, Newnham RM. A wiggle-match date for Polynesian settlement of New Zealand. Antiquity. 2003;77:116–25. https://doi.org/10.1017/S0003598X00061408.
Houldcroft CJ, Petrova V, Liu JZ, Frampton D, Anderson CA, Gall A, Kellam P. Host genetic variants and gene expression patterns associated with epstein-barr virus copy number in lymphoblastoid cell lines. PLoS ONE. 2014. https://doi.org/10.1371/journal.pone.0108384.
Hunt TL, Lipo CP. Late colonization of Easter Island. Science. 2006;311:1603–6. https://doi.org/10.1126/science.1121879.
Ioannidis AG, Blanco-Portillo J, Sandoval K, Hagelberg E, Miquel-Poblete JF, Moreno-Mayar JV, Rodríguez-Rodríguez JE, Quinto-Cortés CD, Auckland K, Parks T, et al. Native American gene flow into Polynesia predating Easter Island settlement. Nature. 2020. https://doi.org/10.1038/s41586-020-2487-2.
Isshiki M, Naka I, Watanabe Y, Nishida N, Kimura R, Furusawa T, Natsuhara K, Yamauchi T, Nakazawa M, Ishida T, et al. Admixture and natural selection shaped genomes of an Austronesian-speaking population in the Solomon Islands. Sci Rep. 2020;10:6872. https://doi.org/10.1038/s41598-020-62866-3.
Johnson NA, Coram MA, Shriver MD, Romieu I, Barsh GS, London SJ, Tang H. Ancestral Components of Admixed Genomes in a Mexican Cohort. Copenhaver GP, ed. PLoS Genet. 2011; 7:e1002410. https://doi.org/10.1371/journal.pgen.1002410.
Karolchik D. The UCSC table browser data retrieval tool. Nucleic Acids Res. 2004;32:D493–D496. https://doi.org/10.1093/nar/gkh103.
Kayser M, Lao O, Saar K, Brauer S, Wang X, Nürnberg P, Trent RJ, Stoneking M. Genome-wide analysis indicates more Asian than Melanesian Ancestry of Polynesians. Am J Hum Genet. 2008;82:194–8. https://doi.org/10.1016/j.ajhg.2007.09.010.
Kent JW, Sugnet CW, Furey TS, Roskin KM, Pringle TH, Zahler AM, Haussler D. The human genome browser at UCSC. Genome Res. 2002;12:996–1006. https://doi.org/10.1101/gr.229102.
Kimura R, Fujimoto A, Tokunaga K, Ohashi J. A practical genome scan for population-specific strong selective sweeps that have reached fixation. Harpending H, ed. PLoS One. 2007;2:e286. https://doi.org/10.1371/journal.pone.0000286.
Kimura R, Ohashi J, Matsumura Y, Nakazawa M, Inaoka T, Ohtsuka R, Osawa M, Tokunaga K. Gene flow and natural selection in oceanic human populations inferred from genome-wide SNP typing. Mol Biol Evol. 2008;25:1750–61. https://doi.org/10.1093/molbev/msn128.
Kirch PV, Hunt TL. Radiocarbon dates from the Mussau Islands and the Lapita Colonization of the Southwestern Pacific. Radiocarbon. 1988;30:161–9. https://doi.org/10.1017/S0033822200044106.
Kong A, Frigge ML, Masson G, Besenbacher S, Sulem P, Magnusson G, Gudjonsson SA, Sigurdsson A, Aslaug J, Adalbjorg J, et al. Rate of de novo mutations and the importance of father's age to disease risk. Nature. 2012;488:471–5. https://doi.org/10.1038/nature11396.
Lazaridis I, Patterson N, Mittnik A, Renaud G, Mallick S, Kirsanow K, Sudmant PH, Schraiber JG, Castellano S, Lipson M, et al. Ancient human genomes suggest three ancestral populations for present-day Europeans. Nature. 2014;513:409–13. https://doi.org/10.1038/nature13673.
Lipson M, Skoglund P, Spriggs M, Valentin F, Bedford S, Shing R, Buckley H, Phillip I, Ward GK, Mallick S, et al. Population turnover in remote oceania shortly after initial settlement. Curr Biol. 2018;28:1157-1165.e7. https://doi.org/10.1016/j.cub.2018.02.051.
Machiela MJ, Chanock SJ. LDlink: A web-based application for exploring population-specific haplotype structure and linking correlated alleles of possible functional variants. Bioinformatics. 2015;31:3555–7. https://doi.org/10.1093/bioinformatics/btv402.
Mallick S, Li H, Lipson M, Mathieson I, Gymrek M, Racimo F, Zhao M, Chennagiri N, Nordenfelt S, Tandon A, et al. The simons genome diversity project: 300 genomes from 142 diverse populations. Nature. 2016;538:201–6. https://doi.org/10.1038/nature18964.
Martin A, Saathoff M, Kuhn F, Max H, Terstegen L, Natsch A. A functional ABCC11 allele is essential in the biochemical formation of human axillary odor. J Invest Dermatol. 2010;130:529–40. https://doi.org/10.1038/jid.2009.254.
Mellars P, Gori KC, Carr M, Soares PA, Richards MB. Genetic and archaeological perspectives on the initial modern human colonization of southern Asia. Proc Natl Acad Sci U S A. 2013;110:10699–704. https://doi.org/10.1073/pnas.1306043110.
Miura K, Yoshiura K, Miura S, Shimada T, Yamasaki K, Yoshida A, Nakayama D, Shibata Y, Niikawa N, Masuzaki H. A strong association between human earwax-type and apocrine colostrum secretion from the mammary gland. Hum Genet. 2007;121:631–3. https://doi.org/10.1007/s00439-007-0356-9.
Norris ET, Rishishwar L, Chande AT, Conley AB, Ye K, Valderrama-Aguirre A, Jordan IK. Admixture-enabled selection for rapid adaptive evolution in the Americas. Genome Biol. 2020;21:29. https://doi.org/10.1186/s13059-020-1946-2.
O'Connell JF, Allen J. The process, biotic impact, and global implications of the human colonization of Sahul about 47,000 years ago. J Archaeol Sci. 2015;56:73–84. https://doi.org/10.1016/j.jas.2015.02.020.
Ohashi J, Naka I, Tsuchiya N. The impact of natural selection on an ABCC11 SNP determining earwax type. Mol Biol Evol. 2011;28:849–57. https://doi.org/10.1093/molbev/msq264.
Ohashi J, Naka I, Kimura R, Tokunaga K, Yamauchi T, Natsuhara K, Furusawa T, Yamamoto R, Nakazawa M, Ishida T, et al. Polymorphisms in the ABO blood group gene in three populations in the New Georgia group of the Solomon Islands. J Hum Genet. 2006;51:407–11. https://doi.org/10.1007/s10038-006-0375-8.
Oppenheimer S. Out-of-Africa, the peopling of continents and islands: tracing uniparental gene trees across the map. Philos Trans R Soc B Biol Sci. 2012;367:770–84. https://doi.org/10.1098/rstb.2011.0306.
Patin E, Lopez M, Grollemund R, Verdu P, Harmant C, Quach H, Laval G, Perry GH, Barreiro LB, Froment A, et al. Dispersals and genetic adaptation of Bantu-speaking populations in Africa and North America. Science. 2017;356:543–6. https://doi.org/10.1126/science.aal1988.
Patterson N, Moorjani P, Luo Y, Mallick S, Rohland N, Zhan Y, Genschoreck T, Webster T, Reich D. Ancient admixture in human history. Genetics. 2012;192:1065–93. https://doi.org/10.1534/genetics.112.145037.
Pierron D, Heiske M, Razafindrazaka H, Pereda-Loth V, Sanchez J, Alva O, Arachiche A, Boland A, Olaso R, Deleuze JF, et al. Strong selection during the last millennium for African ancestry in the admixed population of Madagascar. Nat Commun. 2018;9:1–9. https://doi.org/10.1038/s41467-018-03342-5.
Pritchard JK, Stephens M, Donnelly P. Inference of population structure using multilocus genotype data. Genetics. 2000;155:945–59.
Pugach I, Duggan AT, Merriwether DA, Friedlaender FR, Friedlaender JS, Stoneking M. The gateway from near into remote oceania: new insights from genome-wide data. Mol Biol Evol. 2018;35:871–86. https://doi.org/10.1093/molbev/msx333.
Pugach I, Matveyev R, Wollstein A, Kayser M, Stoneking M. Dating the age of admixture via wavelet transform analysis of genome-wide data. Genome Biol. 2011;12:R19. https://doi.org/10.1186/gb-2011-12-2-r19.
Qin P, Stoneking M. Denisovan ancestry in East Eurasian and Native American populations. Mol Biol Evol. 2015;32:2665–74. https://doi.org/10.1093/molbev/msv141.
Quintana-Murci L. Human immunology through the lens of evolutionary genetics. Cell. 2019;177:184–99. https://doi.org/10.1016/j.cell.2019.02.033.
Racimo F, Sankararaman S, Nielsen R, Huerta-Sánchez E. Evidence for archaic adaptive introgression in humans. Nat Rev Genet. 2015;16:359–71. https://doi.org/10.1038/nrg3936.
Reich D, Thangaraj K, Patterson N, Price AL, Singh L. Reconstructing Indian population history. Nature. 2009;461:489–94. https://doi.org/10.1038/nature08365.
Rieth T, Cochrane EE. 2015. The chronology of colonization in Remote Oceania. In: Cochrane EE, Hunt TL, eds. Oxford University Press.
Rieth TM, Hunt TL. A radiocarbon chronology for Sāmoan prehistory. J Archaeol Sci. 2008;35:1901–27. https://doi.org/10.1016/j.jas.2007.12.001.
Rishishwar L, Conley AB, Wigington CH, Wang L, Valderrama-Aguirre A, King JI. Ancestry, admixture and fitness in Colombian genomes. Sci Rep. 2015;5:12376. https://doi.org/10.1038/srep12376.
Sabeti PC, Reich DE, Higgins JM, Levine HZP, Richter DJ, Schaffner SF, Gabriel SB, Platko JV, Patterson NJ, McDonald GJ, et al. Detecting recent positive selection in the human genome from haplotype structure. Nature. 2002;419:832–7. https://doi.org/10.1038/nature01140.
Skoglund P, Posth C, Sirak K, Spriggs M, Valentin F, Bedford S, Clark GR, Reepmeyer C, Petchey F, Fernandes D, et al. Genomic insights into the peopling of the Southwest Pacific. Nature. 2016;538:510–3. https://doi.org/10.1038/nature19844.
Spriggs M. Chronology of the neolithic transition in Island Southeast Asia and the Western Pacific: a view from 2003. Rev Archaeol. 2003;24:57–80.
Staab PR, Zhu S, Metzler D, Lunter G. Scrm: Efficiently simulating long sequences using the approximated coalescent with recombination. Bioinformatics. 2015;31:1680–2. https://doi.org/10.1093/bioinformatics/btu861.
Stranger BE, Montgomery SB, Dimas AS, Parts L, Stegle O, Ingle CE, Sekowska M, Smith GD, Evans D, Gutierrez-Arcelus M, et al. 2012. Patterns of Cis Regulatory Variation in Diverse Human Populations. Barsh GS, ed. PLoS Genet. 8:e1002639. https://doi.org/10.1371/journal.pgen.1002639.
Tang H, Choudhry S, Mei R, Morgan M, Rodriguez-Cintron W, Burchard EG, Risch NJ. Recent genetic selection in the ancestral admixture of Puerto Ricans. Am J Hum Genet. 2007;81:626–33. https://doi.org/10.1086/520769.
The International HapMap Consortium. A haplotype map of the human genome. Nature. 2005;437:1299–320. https://doi.org/10.1038/nature04226.
Uruakpa F, Ismond MA, Akobundu EN. Colostrum and its benefits: a review. Nutr Res. 2002;22:755–67. https://doi.org/10.1016/S0271-5317(02)00373-1.
Voight BF, Kudaravalli S, Wen X, Pritchard JK. 2006. A Map of Recent Positive Selection in the Human Genome. Hurst L, ed. PLoS Biol. 4:e72. https://doi.org/10.1371/journal.pbio.0040072.
Wickham H. ggplot2: Elegant Graphics for Data Analysis. New York, NY: Springer New York; 2016.
Wickler S, Spriggs M. Pleistocene human occupation of the Solomon Islands, Melanesia. Antiquity. 1988;62:703–6. https://doi.org/10.1017/S0003598X00075104.
Wollstein A, Lao O, Becker C, Brauer S, Trent RJ, Nürnberg P, Stoneking M, Kayser M. Demographic history of oceania inferred from genome-wide data. Curr Biol. 2010;20:1983–92. https://doi.org/10.1016/j.cub.2010.10.040.
Yoshiura KI, Kinoshita A, Ishida T, Ninokata A, Ishikawa T, Kaname T, Bannai M, Tokunaga K, Sonoda S, Komaki R, et al. A SNP in the ABCC11 gene is the determinant of human earwax type. Nat Genet. 2006;38:324–30. https://doi.org/10.1038/ng1733.
Zhou Q, Zhao L, Guan Y. Strong selection at MHC in Mexicans since Admixture. PLoS Genet. 2016;12:1–17. https://doi.org/10.1371/journal.pgen.1005847.
We are deeply grateful to people from the Kingdom of Tonga for their kind cooperation in providing blood samples for genotyping. We thank Drs. Taniela Palu (Ministry of Health, Kingdom of Tonga), Viliami Tangi (Diabetes Clinic, Kingdom of Tonga), and Kazumichi Katayama (Primate Research Institute, Kyoto University) for research on the Tongan population. We also wish to acknowledge Dr. Irina Pugach and Prof. Mark Stoneking, who kindly provided the genotype data of Taiwanese populations. This study was partly supported by a Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science, and Technology of Japan.
This work was partly supported by JSPS KAKENHI Grant Number 25291103, JSPS KAKENHI Grant Number 21H02570, and Grant-in-Aid for JSPS Fellows Grant Number 19J12435. The funding bodies played no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript.
Department of Biological Sciences, Graduate School of Science, The University of Tokyo, Tokyo, 113-0033, Japan
Mariko Isshiki, Izumi Naka, Takafumi Ishida & Jun Ohashi
Department of Human Biology and Anatomy, Graduate School of Medicine, University of the Ryukyus, Nishihara, 903-0125, Japan
Ryosuke Kimura
Genome Medical Science Project, Research Center for Hepatitis and Immunology, National Center for Global Health and Medicine, Chiba, 272-8516, Japan
Nao Nishida
Graduate School of Asian and African Area Studies, Kyoto University, Kyoto, 606-8501, Japan
Takuro Furusawa
Department of International Health and Nursing, Faculty of Nursing, Toho University, Tokyo, 143-0015, Japan
Kazumi Natsuhara
Faculty of Health Sciences, Hokkaido University, Sapporo, 060-0812, Japan
Taro Yamauchi
Graduate School of Health Sciences, Kobe University, Kobe, 654-0142, Japan
Minato Nakazawa
Department of Human Ecology, Faculty of Agriculture, Saga University, Saga, 840-8502, Japan
Tsukasa Inaoka
Faculty of Health and Nutrition, Bunkyo University, Chigasaki, 253-8550, Japan
Yasuhiro Matsumura
Japan Wildlife Research Center, Tokyo, 130-8606, Japan
Ryutaro Ohtsuka
MI conceived the study and designed the analyses. RK, TF, KN, TY, MN, TI, RE, TI, YM, RO, and JO collected the samples. RK and IN extracted DNA from blood samples. MI conducted the SNP genotyping experiments and carried out the statistical analyses and computer simulations. MI wrote the manuscript with support from JO. JO supervised the project. All authors read and approved the final manuscript.
Correspondence to Jun Ohashi.
This study was approved by the National Health Ethics & Research Committee of Tonga, and the Research Ethics Committees of The University of Tokyo and of the Faculty of Medicine, The University of Tokyo. A written informed consent was obtained from each participant.
Additional figures.
R code for coalescent simulation under the assumption of selective neutrality.
R code for approximate Bayesian computation with forward simulation.
Isshiki, M., Naka, I., Kimura, R. et al. Admixture with indigenous people helps local adaptation: admixture-enabled selection in Polynesians. BMC Ecol Evo 21, 179 (2021). https://doi.org/10.1186/s12862-021-01900-y
Positive selection
Admixture-enabled selection
Rapid adaptation
Genetic ancestry
communications physics
Electronic pair alignment and roton feature in the warm dense electron gas
Tobias Dornheim ORCID: orcid.org/0000-0001-7293-66151,2,
Zhandos Moldabekov ORCID: orcid.org/0000-0002-9725-92081,2,
Jan Vorberger ORCID: orcid.org/0000-0001-5926-91922,
Hanno Kählert3 &
Michael Bonitz3
Communications Physics volume 5, Article number: 304 (2022) Cite this article
Electronic properties and materials
The study of matter under extreme densities and temperatures as they occur, for example, in astrophysical objects and nuclear fusion applications has emerged as one of the most active frontiers in physics, material science, and related disciplines. In this context, a key quantity is given by the dynamic structure factor S(q, ω), which is probed in scattering experiments—the most widely used method of diagnostics at these extreme conditions. In addition to its importance for the study of warm dense matter, the modelling of such dynamic properties of correlated quantum many-body systems constitutes an important theoretical challenge. Here, we report a roton feature in the dynamic structure factor S(q, ω) of the warm dense electron gas, and introduce a microscopic explanation in terms of an electronic pair alignment model. Our results will have direct impact on the interpretation of scattering experiments and may provide insights into the dynamics of a number of correlated quantum many-body systems such as ultracold helium, dipolar supersolids, and bilayer heterostructures.
Matter at extreme densities and temperatures is ubiquitous throughout our universe1 and naturally occurs in astrophysical objects such as giant planet interiors2, brown dwarfs3, and neutron stars4. In addition, such warm dense matter (WDM) conditions are relevant for technological applications such as the discovery of novel materials5,6, hot-electron chemistry7, and inertial confinement fusion8,9. Consequently, WDM is nowadays routinely realized in experiments in large research facilities around the globe such as the National Ignition Facility10 in the USA, the European XFEL in Germany11, and SACLA12 in Japan. Indeed, the advent of new experimental techniques for the study of WDM13 has facilitated a number of spectacular achievements5,14,15,16 and has opened up new possibilities for the exciting field of laboratory astrophysics.
One of the central practical obstacles regarding the study of WDM is given by the lack of reliable diagnostics. The extreme conditions prevent the straightforward measurement even of basic system parameters like the electronic temperature, which have to be inferred indirectly from other observations. In this situation, the X-ray Thomson scattering (XRTS) technique17 has emerged as a widely used method of diagnostics. In particular, an XRTS measurement gives one access to the dynamic structure factor (DSF) S(q,ω) describing the full spectrum of density fluctuations in the system. The task at hand is then to match the experimental observation with a suitable theoretical model, thereby inferring important system parameters like the electronic temperature T, density ρ, or charge state Z. Yet, the rigorous theoretical modelling of WDM18, in general, and of an XRTS signal19, in particular, constitutes a difficult challenge. Indeed, the physical properties of WDM are characterized by the intriguingly intricate interplay of a number of effects such as the Coulomb interaction between charged electrons and ions, partial ionization and the formation of atoms and molecules, quantum effects like Pauli blocking and diffraction, and strong thermal excitations out of the ground state.
Very recently, Dornheim et al.20 have presented accurate results for the DSF of the uniform electron gas (UEG)21 in the WDM regime based on ab initio path integral Monte Carlo (PIMC) simulations. In particular, the UEG assumes a homogeneous neutralizing ionic background, and, therefore, allows us to exclusively focus on the rich effects inherent to the electrons. The accurate treatment of electron–ion interactions is beyond the scope of the present work and the interested reader is referred e.g. to refs. 22,23. The UEG constitutes one of the most fundamental model systems in physics, quantum chemistry, and related disciplines24, and constitutes the basis for a number of important developments, such as the success of density functional theory in the description of real materials. From a practical perspective, accurate results for the DSF of the UEG are indispensable for the interpretation of WDM experiments, and directly enter models such as the widely used Chihara decomposition17.
While the availability of highly accurate results for S(q,ω) constitutes an important step towards our understanding of the dynamics of correlated electronic matter, their theoretical interpretation has remained unclear. For example, the exact calculations by Dornheim et al.20 have uncovered a negative dispersion in the UEG that closely resembles the roton feature in quantum liquids such as 4He25 and 3He26,27. Despite speculations about a possible excitonic interpretation of this effect28,29, its precise nature is hitherto unknown. This reflects the notorious difficulty to describe the dynamics of correlated quantum many-body systems, which constitutes a challenge in a number of research fields. In the present work, we introduce a new paradigm—the structural alignment of pairs of electrons. It allows us to understand and accurately capture both the roton feature in the strongly coupled UEG and the XC-induced red-shift of the DSF at metallic densities. Therefore, it is of substantial importance for the description and diagnostics of WDM.
Spectrum of density fluctuations
In Fig. 1a, we show results for the corresponding spectrum of density fluctuations ω(q) that we estimate from the maximum in the DSF at the electronic Fermi temperature θ = kBT/EF = 1 (with EF being the Fermi energy) and the density parameter rs = ā/aB = 10 (with ā being half the average distance to the nearest neighbour and aB the first Bohr radius). We note that the entire depicted q-range is easily accessible in XRTS experiments at modern free electron laser facilities such as the European XFEL11. The dotted green curve shows the ubiquitous random phase approximation (RPA), which entails a mean-field description of the electronic density response to an external perturbation; see the 'Methods' section for details. The dash-dotted blue curve shows exact PIMC results that have been obtained on the basis of the full frequency-dependent local field correction G(q, ω), which contains the complete wave-vector and frequency resolved information about electronic exchange-correlation effects. Finally, the dashed black curve corresponds to the static approximation, i.e., by setting G(q, ω) = G(q, 0); see Dornheim et al.20 for a detailed explanation of the PIMC calculations.
Fig. 1: Spectrum of density fluctuations of the uniform electron gas.
a ω(q) at the electronic Fermi temperature (θ = 1) at rs = 10. Dotted green: random phase approximation; dash-dotted blue: exact path integral Monte Carlo (PIMC) results20; dashed black: static approximation G(q, ω) ≈ G(q, 0). For small wave numbers, the spectrum features a single sharp plasmon excitation. This collective regime, where the wave length λ = 2π/q is much larger than the average interparticle distance d, λ ≫ d, is well described by the random phase approximation (RPA). Upon entering the pair continuum (shaded grey), S(q, ω) becomes substantially broadened. The regime with λ ~ d (shaded red) features a hitherto unexplained pronounced red-shift Δωxc compared to RPA, which eventually resembles the roton feature known from ultracold helium26,27,35. Finally, the single-particle regime with q ≫ qF and λ ≪ d is dominated by a broad peak with ω(q) ~ q2. b ω-dependence of S(q, ω) at q ≈ 2.1qF, rs = 10, and θ = 1. The static approximation entails an effective average over the less trivial structure of the full PIMC curve, with the shaded blue area being a measure for the uncertainty in the latter.
Let us next discuss the different physical regimes shown in Fig. 1. For small q, i.e., in the collective regime where the wave length is much larger than the average interparticle distance (λ ≫ d ~ 2rs) the spectrum consists of a single, sharp plasmon peak that is exactly captured by all three theories. For completeness, we note that the plasmon is replaced by an acoustic mode in quantum liquids, and the slope is then determined by the sound speed c. Indeed, an ion acoustic mode can also be observed in realistic WDM systems, but for the (quantum) OCP there is no acoustic branch even in the limit of large rs. Upon increasing q, we enter the pair continuum, where the plasmon decays into a multitude of other excitations and ceases to be a sharp feature24. From a comparison to the simulations, it is evident that the RPA breaks down in this regime and does not capture the intriguing non-monotonous behaviour of the exact PIMC data. Indeed, the latter exhibit a pronounced minimum in ω(q) around q = 2qF, which closely resembles the well-known roton feature in both 4He25 and 3He26,27. We stress that this is a real physical trend, which has been observed experimentally for electrons in alkali metals30. In the present work, we demonstrate that this red-shift Δωxc compared to RPA is a direct consequence of the alignment of pairs of electrons where λ ~ d, and show that it can be understood and accurately quantified in terms of the microscopic spatial structure of the system. Finally, a further increase of q eventually brings us into the single-particle regime (λ ≪ d), where ω(q) is known to increase quadratically with q.
The static approximation leads to a substantial improvement over the RPA, and qualitatively reproduces the pronounced red-shift and even exhibits a shallow minimum in ω(q) at the correct position. A more detailed insight is given in Fig. 1b, where we show the full ω-dependence of S(q, ω) in the vicinity of the roton feature. The exact PIMC curve shows a nontrivial shape consisting of a pronounced peak at ω(q) and an additional shoulder around ωRPA(q). In contrast, the dashed black curve features a single broad peak that is located between both aforementioned features. In fact, the static approximation can be understood as a kind of frequency-averaged description of the actual spectrum of density fluctuations, and, therefore, reproduces frequency-averaged properties like the static structure factor S(q)31 with high accuracy. Moreover, it does capture the correlation-induced shift in S(q, ω) towards lower frequencies, which is the root cause of the roton feature in the UEG that is studied in the present work.
Electronic pair alignment
To understand the physical origin of the latter, it is worthwhile to explore possible analogies to other systems such as 4He25 and 3He26,27. In addition, we mention the extensively explored negative dispersion in the classical one-component plasma (OCP)32,33,34. In both cases, the effect has been explained by the onset of spatial localization of the particles35, and the roton feature can then be quantified in terms of S(q), e.g. via the Feynman ansatz36 for He. In stark contrast, such structural arguments do not apply to the present case of the warm dense UEG. Indeed, the maximum in S(q) does not exceed 1.02 even at rs = 10 (see below) and the system is largely disordered.
To explain the physical mechanism behind the red-shift and eventual roton feature in the warm dense UEG, we explore the nature of the excitations of density fluctuations in this regime in Fig. 2. More specifically, the green bead depicts an arbitrary reference particle, and the blue beads other electrons which are, on average, disordered; this can be seen by the absence of pronounced features in S(q). In addition, we note that the depiction of the electrons as point particles in Fig. 2 does not constitute a simplification. All quantum mechanical delocalization effects are inherently included in the evaluation of the thermodynamic expectation values within PIMC, which, in our model, enter both the effective potential and the pair correlation function in Eq. (3) below. From a mathematical perspective, the dynamic structure factor entails the same information as the density response function that describes the response to an external harmonic perturbation24. The latter is depicted by the black sinusoidal line and induces the leftmost blue particle to follow the perturbation, i.e., the blue arrow. This reaction of the system is associated with a change in the potential energy by an amount ΔW. In the case of λ ~ d, the particles will actually align themselves to the minima of the effective potential energy (shaded green area), which leads to a lowering of the interaction energy compared to the unperturbed case. Equivalently, we can say that a density fluctuation contains comparably less energy when λ ~ d as it coincides with a spatial pattern that minimizes the potential energy landscape. This electronic pair alignment is highly sensitive to an accurate treatment of electronic XC-effects and becomes more pronounced with increasing rs. It should be noted that the main purpose of Fig. 2 is the illustration of the spatial geometry of this effect; no actual perturbation of the system is assumed in our model. The exact spectrum of density fluctuations can be expressed as
$$\omega(q) = \omega_{\rm RPA}(q) - \Delta\omega_{\rm xc}(q) = \omega_{\rm RPA}(q) - \alpha(q)\,\Delta W_{\rm xc}(q),$$
where we have assumed in the second equality that the kinetic contribution to ω(q) is accurately treated within RPA. The corresponding absence of XC-effects on the momentum distribution n(q) is demonstrated in the 'Methods' section. The screening coefficient37 α(q) = χ(q)/χ0(q) is given by the ratio of the full and noninteracting static density response functions and takes into account the fact that, on the static level (i.e., in the limit of ω → 0), the UEG does not react to an external perturbation in the limit of λ ≫ d. A more detailed discussion of the role of α(q) in our model is given below. The exchange–correlation correction to the potential part of the excitation energy of a density fluctuation of wavenumber q is given by
$$\Delta W_{\rm xc}(q) = \Delta W(q) - \Delta W_{\rm RPA}(q),$$
where ΔW(q) denotes the actual change in the interaction energy. Equation (1) implies that the observed red-shift in ω(q) is a direct consequence of the insufficient description of the electronic pair alignment within RPA. To quantitatively evaluate this effect, we express the energy shift ΔW as
$$\Delta W(q) = n\int {\rm d}{\bf r}\, g(r)\left[\phi(r) - \phi(r_q)\right],$$
where ϕ(r) denotes the effective potential between two electrons in the presence of the electron gas38. From a physical perspective, Eq. (3) can be interpreted as follows: a reference particle at r = 0 is located in the minimum of an external sinusoidal perturbation, cf. Fig. 2. On average, it will encounter a second particle at r with the probability P(r) = ng(r), where g(r) is the usual radial distribution function. Finally, we have to evaluate the difference in the effective potential between r, and the position of the nearest minimum in the external potential, which we denote rq. Without loss of generality, we assume a perturbation along the x-direction. For λ ≫ d, this physical picture breaks down as the translation of the second particle to rq will be increasingly blocked by other electrons in the system. This is a direct manifestation of screening in our model, and is taken into account by the coefficient α(q) in Eq. (1). A relation of the energy shift Eq. (3) to the XC-contribution to the self energy known from Green functions theory is given in the 'Methods' section.
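To make this construction concrete, the sketch below evaluates Eqs. (1) and (3) numerically under one possible reading of the geometry described above: the perturbation is taken along the x-direction, the second particle is shifted along x to the nearest minimum while its transverse coordinate is kept fixed, and the inputs g(r), ϕ(r), ω_RPA(q), α(q), and ΔW_xc(q) are assumed to be supplied by the user (e.g., from PIMC data). The grid sizes and cutoffs are arbitrary choices made here for illustration.

```python
import numpy as np

def delta_W(q, g, phi, n, x_max=30.0, rho_max=30.0, num=400):
    """Sketch of the energy shift of Eq. (3) in cylindrical coordinates.

    The reference particle sits at the origin (a minimum of the external
    cosine perturbation along x with wave length lam = 2*pi/q).  A second
    particle at (x, rho) is moved along x to the nearest minimum x_q, and
    the change of the effective pair potential phi is averaged with the
    probability density n*g(r).  g and phi must accept numpy arrays.
    """
    lam = 2.0 * np.pi / q
    x = np.linspace(-x_max, x_max, num)
    rho = np.linspace(1e-3, rho_max, num)
    X, R = np.meshgrid(x, rho, indexing="ij")
    r_before = np.sqrt(X**2 + R**2)
    X_q = lam * np.round(X / lam)               # nearest minimum along x
    r_after = np.sqrt(X_q**2 + R**2)
    integrand = g(r_before) * (phi(r_before) - phi(r_after)) * 2.0 * np.pi * R
    dx, drho = x[1] - x[0], rho[1] - rho[0]
    return n * np.sum(integrand) * dx * drho

def omega_pair_alignment(q, omega_rpa, alpha, dW_xc):
    """Eq. (1): pair-alignment correction to the RPA dispersion.
    omega_rpa, alpha, dW_xc are user-supplied callables of q."""
    return omega_rpa(q) - alpha(q) * dW_xc(q)
```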
Fig. 2: Illustration of the alignment of electron pairs.
Let the green bead be a fixed reference particle. Without an external perturbation [ϕext, dark grey sinusoidal line], the system is disordered on average in the WDM regime, see the blue beads. In order to follow ϕext, the particles have to re-align themselves to the minima of the latter, see blue arrow + red bead. In the process, they change the potential energy of the green reference particle by an amount of ΔW, see Eq. (3). In the regime of electronic pair alignment, λ ~ d, the energy shift ΔW is substantially negative. In other words, a density fluctuation with a corresponding q = 2π/λ contains comparably less energy due to its alignment to the potential energy landscape of the system (shaded green area). The random phase approximation (RPA) substantially underestimates this effect and, therefore, does not capture this correlation-induced red-shift.
Effective electron–electron potential
The appropriate effective potential between two electrons has to (1) take into account all effects of the surrounding medium and (2) not include any XC-effects between the electrons themselves. This is a crucial difference from the effective interaction of two test charges in a medium, where one can simply use dielectric theories24. In the present context, the appropriate potential has been derived by Kukkonen and Overhauser39 (KO) within linear-response theory, and is given by a functional of the local field correction (LFC), ϕ(r) = ϕ[G(q, 0)](r). Here we use exact PIMC results40 for G(q, 0) and perform a Fourier transform to obtain the corresponding KO potential ϕ(r). The results are shown in Fig. 3 for the metallic density of rs = 4 (a) and the more strongly coupled case of rs = 10 (b), with the solid red, dotted green, and dashed blue lines depicting the KO potential with the LFC, the KO potential in RPA, and the bare Coulomb potential, respectively. The impact of the medium vanishes for r → 0, and all curves converge. In addition, both KO potentials quickly decay for r ≳ 2rs and do not exhibit the long Coulombic tail. For completeness, we note that a subsequent direct PIMC study38 of the effective potential has revealed excellent agreement with the KO expression, which further confirms the accuracy of the present results.
Fig. 3: Effective Kukkonen Overhauser (KO) potential between a pair of electrons surrounded by the electronic medium.
Dotted green: random phase approximation (RPA); solid red: full KO potential using exact PIMC data for the local field correction (LFC) G(q, 0)40; dashed blue: bare Coulomb. The inset shows the contribution ΔWr(q) to the full shift ΔW(q) for q ≈ 2qF within LFC and RPA as a function of the distance between the particles r. a For rs = 4, positive and negative contributions to ΔW(q) approximately cancel within the RPA, and the red-shift in ω(q) is mainly due to the reduction of the interaction energy described by the exact path integral Monte Carlo (PIMC) results. b For rs = 10, the RPA even predicts an unphysical increase of W, whereas our PIMC results again correctly describe the minimization of the interaction energy for λ ~ d.
The insets in Fig. 3 show the respective contributions to ΔW(2qF) [Eq. (3)], and the vertical yellow line depicts the corresponding wave length λ. For rs = 4, the positive and negative contributions to ΔWRPA nearly cancel. Consequently, the observed XC-induced red-shift in Fig. 4a is predominantly due to the lowering of the interaction energy, i.e., ΔW(q), in the regime of electronic pair alignment. For rs = 10, the situation is more subtle, and the RPA predicts a substantial increase in ω(q) for q ~ 2qF. This is a direct consequence of the pair correlation function gRPA(r), which is known to be strongly negative for small r at these conditions. The exact PIMC results indicate a similar trend as for the metallic density and again indicate a lowering of W due to the electronic pair alignment. Therefore, it is the combination of (1) removing the spurious RPA prediction for W and (2) further adding the correct decrease in W quantified by our PIMC simulations that leads to the large down-shift of the actual ω(q) compared to RPA.
Fig. 4: Electronic pair alignment model for the spectrum of density fluctuations.
Shown is the dispersion ω(q) [i.e., position of the maximum in S(q, ω)] of the warm dense electron gas at the electronic Fermi temperature for a rs = 4 and b rs = 10. Dotted green: random phase approximation (RPA); dash-dotted blue: exact path integral Monte Carlo (PIMC) results20 using the full G(q, ω); dashed black: static approximation, i.e., setting Gstatic(q, ω) = G(q, 0); solid yellow: classical one-component plasma (OCP) at Γ = 1.92 (rs = 4) and at Γ = 4.8 (rs = 10); solid red: electronic pair alignment model [Eq. (1)] introduced in this work; the red arrows indicate the corresponding average red-shift compared to RPA. We can distinguish 3 distinct physical regimes: (I) λ ≫ d [collective], (II) λ ~ d [electronic pair alignment, two-particle excitations], and (III) λ ≪ d [single-particle]. The insets show PIMC and RPA results for the static structure factor S(q) and illustrate the absence of spatial structure at these conditions.
Red-shift and roton feature
Let us now apply these insights to the spectrum of density fluctuations depicted in Fig. 4. Specifically, the solid red lines show our present model Eq. (1), which reproduces the correct behaviour of ω(q) at both densities. We note that it follows the static approximation rather than the full dynamic PIMC data at rs = 10. This is expected, as Eq. (3) constitutes an average over changes in the effective potential ϕ for different initial positions r. Therefore, it gives us the average change in ω(q), i.e., the location of the peak of the broad dashed black curve in Fig. 1, and not the actual position of the sharper roton peak. The predictive capability of our model is demonstrated over a broad range of densities and temperatures in the 'Methods' section.
In combination, our results provide a complete description of ω(q) over the full q-range, and allow us to give a simple and physically intuitive explanation of both the XC-induced red-shift at metallic density and the roton feature at stronger coupling. For small q, the main feature of ω(q) is given by the sharp plasmon peak. In particular, the plasmon is a collective excitation and involves all particles in the system. Upon entering the pair continuum, the DSF broadens, and we find an initial increase of ω(q) with q; this is a well-known kinetic effect due to quantum delocalization and is qualitatively reproduced by the RPA. Correspondingly, it is not present in ω(q) of the classical OCP (yellow curves in Fig. 4) at the same conditions. In the vicinity of qF, the potential contribution to ω(q) starts to be shaped by the electronic pair alignment, and the corresponding lowering of the interaction energy W leads to the observed red-shift. In other words, the non-monotonic roton feature at rs = 10 is a consequence of two competing trends: (1) the delocalization-induced quadratic increase in ω(q) and (2) the decrease of the interaction contribution due to electronic pair alignment.
An additional insight into the physical origin of the excitations in this regime comes from the effective potential shown in Fig. 3. Specifically, ϕ(r) vanishes for r ≳ 2rs, which means that the change in the interaction energy W is of a distinctly local nature. This, in turn, strongly implies that the roton feature is due to two-particle excitations—the alignment of electron pairs. In this light, we can even explain the nature of the nontrivial structure of the full S(q, ω) shown in Fig. 1b: the large ΔW leading to the actual roton peak in the blue curve must be a result of configurations where two particles are initially separated by a short distance r < rs. Only in this case will the corresponding change in ϕ(r) be sufficient for the observed lowering in ω(q). The additional shoulder in S(q, ω) is then due to transitions where the particles have even in their initial configuration been effectively separated, so that the change in ϕ(r) and, hence, the resulting ΔW(q) are comparably small.
Going back to Fig. 4, we note that a further increase in q again changes the nature of the excitations. In particular, ω(q) is exclusively shaped by single-particle effects when λ ≪ rs.
We are convinced that our findings for the spectrum of density fluctuations of the UEG, one of the most fundamental quantum systems in the literature, will open up many avenues for future research in a diverse array of fields. First and foremost, the archetypical nature of the UEG has allowed us to clearly isolate the rich interplay of the correlated electrons with each other, which constitutes an indispensable basis for the study of realistic systems. The roton feature has already been experimentally observed for electrons in metals at ambient conditions28,30, and understanding the interplay of the electronic pair alignment with the presence of the ions will be an important next step. We expect that the observation of a non-monotonous ω(q) will also be possible in the WDM regime for real materials such as hydrogen, since the presence of bound states leads to an effectively reduced electronic density41 and, therefore, an increased effective Wigner-Seitz radius \(r_s^*\). Indeed, preliminary simulation results confirm that the presence of mobile ions does not weaken the roton feature but even leads to a stabilization.
Let us now return to the alternative interpretation of the roton as an excitonic mode29. Common to both interpretations is the governing role of short range correlations—of electron pairs or electron-hole pairs (excitons), respectively. However, in contrast to the ground state results of Takada29, we do not find zeroes of the retarded dielectric function in the range of the roton minimum at the considered temperatures42. This rules out an interpretation in terms of collective modes.
Our improved microscopic theory for the DSF in the regime of λ ~ d will be particularly relevant for the interpretation of XRTS measurements17, which constitute the key diagnostics of state-of-the-art WDM experiments.
Going beyond the study of WDM, we stress that the proposed concept of electronic pair alignment is very general, and will likely help to shed light on the mechanism behind the spectrum of density fluctuations and elementary excitations in a number of correlated quantum systems. This includes the improved understanding of the "original" roton mode in liquid 4He25 (and the corresponding emergence of a sharp quasiparticle peak with the onset of superfluidity) and 3He26,27. In addition, we mention the transition from the liquid regime to a highly ordered Wigner crystal43 in strongly coupled bilayer heterostructures44, and the impact of supersolidity45 onto the DSF of strongly coupled dipole systems46. Specifically, quantum dipole systems are known to exhibit a roton feature in the DSF35,47 and, in addition to their well-known physical realization with ultracold atoms48, naturally emerge in electron-hole bilayer systems in the case of small layer separation.
Dynamic structure factor and density response
The dynamic structure factor S(q, ω) is directly connected to the linear density response function by the well-known fluctuation–dissipation theorem24,
$$S({\bf q},\omega) = -\frac{{\rm Im}\,\chi({\bf q},\omega)}{\pi n\left(1-e^{-\beta\omega}\right)}.$$
It is very convenient to express the latter as
$$\chi({\bf q},\omega) = \frac{\chi_0({\bf q},\omega)}{1-\frac{4\pi}{q^2}\left[1-G({\bf q},\omega)\right]\chi_0({\bf q},\omega)},$$
where χ0(q, ω) describes the density response of a noninteracting system at the same conditions and can be readily computed. As we have mentioned in the main text, the dynamic LFC37 G(q, ω) contains all electronic XC-effects; setting G(q, ω) ≡ 0 thus corresponds to the RPA, which entails a mean-field description (i.e., Hartree) of the electronic density response. The DSF within the RPA, within the static approximation Gstatic(q, ω) ≡ G(q, 0), or with the full PIMC results for G(q, ω)20 is then obtained by inserting the corresponding χ(q, ω) into Eq. (4). The DSF of the classical OCP is obtained from molecular dynamics simulations using the LAMMPS code49.
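As an illustration of how Eqs. (4) and (5) are combined in practice, the sketch below builds S(q, ω) from a user-supplied ideal response χ0(q, ω) and a local field correction G. Hartree atomic units are assumed, and the treatment of the ω → 0 limit (where the detailed-balance factor vanishes) is left to the caller; the function names are ours.

```python
import numpy as np

def chi_with_lfc(q, omega, chi0, G=0.0):
    """Eq. (5): density response from the ideal (complex) response chi0(q, omega)
    and a local field correction G.  G = 0 recovers the RPA; a static G(q, 0)
    gives the 'static approximation' discussed in the text."""
    v_q = 4.0 * np.pi / q**2          # Coulomb interaction in Hartree atomic units
    return chi0 / (1.0 - v_q * (1.0 - G) * chi0)

def dsf_from_chi(chi, omega, n, beta):
    """Eq. (4): fluctuation-dissipation theorem, including detailed balance.
    Valid for omega != 0; the caller must handle the static limit separately."""
    return -np.imag(chi) / (np.pi * n * (1.0 - np.exp(-beta * omega)))
```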
Definition of the effective potential
The effective potential described in the main text has been first derived by Kukkonen and Overhauser39, and is given by
$$\Phi^{\rm KO}({\bf q}) = \frac{4\pi}{q^2} + {\left[\frac{4\pi}{q^2}\left(1-G({\bf q},0)\right)\right]}^{2}\chi({\bf q},0);$$
see also the excellent discussion by Giuliani and Vignale24. It is exclusively a functional of the LFC, and, therefore, highly sensitive to electronic XC-effects. In the present work, we use the accurate neural net representation of G(q,ω = 0; rs, θ) by Dornheim et al.40 that is based on exact PIMC simulation data. The results for ϕ(r) in coordinate space are then obtained via a simple one-dimensional Fourier transform, which we solve numerically.
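One possible numerical realization of this step is sketched below: the induced (beyond-Coulomb) part of Φ^KO(q) from Eq. (6) is transformed with the standard radial Fourier integral, while the bare Coulomb term is added analytically as 1/r to keep the quadrature well behaved. Hartree atomic units are assumed, and G0 and chi0_static stand for user-supplied static data (e.g., from the machine-learning representation of ref. 40); the analytic treatment of the Coulomb term is a numerical choice made here, not a statement about the original implementation.

```python
import numpy as np

def ko_potential_r(r, q_grid, G0, chi0_static):
    """Sketch of the Kukkonen-Overhauser potential in coordinate space.

    Phi_KO(q) = 4*pi/q**2 + (4*pi/q**2 * (1 - G(q,0)))**2 * chi(q, 0) is
    transformed with phi(r) = 1/(2*pi**2*r) * int dq q * Phi(q) * sin(q*r).
    r must be an array of strictly positive distances; q_grid, G0 and
    chi0_static are arrays of equal length.
    """
    v_q = 4.0 * np.pi / q_grid**2
    chi_static = chi0_static / (1.0 - v_q * (1.0 - G0) * chi0_static)   # Eq. (5) at omega = 0
    phi_q = (v_q * (1.0 - G0))**2 * chi_static      # induced (beyond-Coulomb) part of Eq. (6)
    phi_r = np.array([np.trapz(q_grid * phi_q * np.sin(q_grid * ri), q_grid)
                      / (2.0 * np.pi**2 * ri) for ri in r])
    return phi_r + 1.0 / r                          # bare Coulomb term added analytically
```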
Spectral representation of the DSF
An additional motivation for the present electronic pair alignment model is given by the exact spectral representation of the DSF24,
$$S({\bf q},\omega) = \sum_{m,l} P_m {\left\Vert n_{ml}({\bf q})\right\Vert}^{2}\,\delta(\omega-\omega_{lm}).$$
Here l and m denote the eigenstates of the full N-body Hamiltonian, ωlm = (El − Em)/ℏ is the energy difference, and nml is the usual transition element from state m to l induced by the density operator \(\hat{n}({\bf q})\). Equation (7) implies that S(q, ω) is fully defined by the possible transitions between the (time-independent) eigenstates; the full frequency dependence comes from the corresponding energy differences. In other words, no time propagation is needed. The translation of our electronic pair alignment model and the corresponding impact of ΔW(q) on ω(q) into the language of Eq. (7) is then straightforward. Firstly, we assume a continuous distribution of eigenstates, which we examine in coordinate space. In the regime of λ ~ d, the excitations primarily involve only two particles, as the effective potential ϕ(r) decays rapidly with r. The probability P(r) = ng(r) thus plays the role of Pm in Eq. (7), and the energy shift can be expressed as ωlm = ΔWlm + ΔKlm. The kinetic contribution is accurately captured by the RPA as we demonstrate in the next section. Finally, we note that Eq. (7) even gives us some insight into the nontrivial shape of the exact PIMC results for S(q,ω) shown in Fig. 1 in the main text. In particular, the roton peak around ωp must be due to transitions where the down-shift ΔW is substantial. This is only the case for electron pairs that have been separated by r < rs in the initial state. The substantial reduction in the interaction energy of such a pair due to an excitation with λ ~ d is thus the microscopic explanation for the observed roton feature.
Momentum distribution of the correlated electron gas
In Fig. 5, we show the momentum distribution function n(k) of the UEG at the electronic Fermi temperature θ = 1. Specifically, the symbols show exact PIMC results50 for different values of rs, and the dashed black line the Fermi distribution describing a noninteracting Fermi gas at the same conditions. Clearly, n(k) is hardly influenced by the Coulomb interaction for both rs = 4 (blue diamonds) and rs = 10 (green crosses); correlation effects only manifest for much larger rs, cf. the yellow triangles that have been obtained for a strongly coupled electron liquid (rs = 50). This is a strong indication that the main error in ωRPA(q) is due to ΔW and not the kinetic part.
Fig. 5: Momentum distribution n(k) of the uniform electron gas at the electronic Fermi temperature θ = 1.
The symbols show exact path integral Monte Carlo (PIMC) results for rs = 2 (red circles), rs = 4 (blue diamonds), rs = 10 (green crosses), and rs = 50 (yellow triangles); the dashed black line shows the Fermi distribution function describing a noninteracting Fermi gas. Taken from Dornheim et al.50, and reproduced with the permission of the American Physical Society.
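The dashed reference curve in Fig. 5 is simply the occupation of an ideal Fermi gas. For completeness, a short sketch of how such a curve can be generated is given below, with the reduced chemical potential fixed numerically by the density normalization; the integration cutoff and bracketing interval are pragmatic choices made here.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def fermi_occupation(theta, k_over_kF):
    """Ideal Fermi gas occupation n(k) at reduced temperature theta = T/T_F.

    The reduced chemical potential eta = mu/E_F is fixed by the normalization
    int_0^inf dx x^2 n(x) = 1/3 with x = k/k_F; the integral is truncated at
    x = 10, beyond which the integrand is negligible for the theta of interest.
    """
    def norm(eta):
        val, _ = quad(lambda x: x**2 / (np.exp((x**2 - eta) / theta) + 1.0), 0.0, 10.0)
        return val - 1.0 / 3.0
    eta = brentq(norm, -20.0, 20.0)
    x = np.asarray(k_over_kF)
    return 1.0 / (np.exp((x**2 - eta) / theta) + 1.0)

# occupation at the Fermi temperature, evaluated on a few wave numbers
print(fermi_occupation(1.0, [0.0, 0.5, 1.0, 1.5, 2.0]))
```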
Results for other temperatures and densities
In the main text, we have restricted ourselves to the representative cases of rs = 4 (metallic density) and rs = 10 (boundary to the electron liquid regime21) at the electronic Fermi temperature, θ = kBT/EF. The validity of our electronic pair alignment model is demonstrated for a vast range of densities and temperatures in Fig. 6.
Fig. 6: Wavenumber dependence of the spectrum of density fluctuations ω(q) at different conditions.
Dotted green: random phase approximation (RPA); dashed black: static approximation; solid blue: exact path integral Monte Carlo (PIMC) results20; solid red: electronic pair alignment model [Eq. (1)] introduced in this work. a Results for different values of the density parameter rs at θ = 1; b results for different values of the temperature parameter θ = kBT/EF for rs = 10.
Connection of the electronic pair alignment model to Green functions theory
In the following, we connect the shift of the plasmon dispersion to the energy change of a test particle, ΔWxc. Kwong and Bonitz51 have derived a direct relation between the DSF and the single-particle nonequilibrium Green function (NEGF) δG< that is produced by a short monochromatic field pulse, \(U(t,q)={U}_{0}(t)\cos qx\), and is valid in case of a weak excitation (linear response). Here we rewrite this in terms of the spectral function of the occupied states, δA,
$$S(\omega,q) = \frac{1}{\pi n_0 \hslash U_0(\omega)}\sum_{p}\delta A(p,\omega,U),$$
$$\delta A(p,\omega,U) = A(p,\omega,U) - A(p,\omega,0),$$
where the argument U comprises the dependencies on U0 and q. Note that δA is proportional to U0, cancelling its appearance in the denominator. Thus, in linear response there is a direct linear relation between the frequency dependencies of the DSF and the field-induced correction to the single-particle spectral function. Now the question is how the peak position of the DSF, ω(q), that is discussed in the main part of the paper, is related to the peak position δE(p, U) of the spectral function δA.
To answer this question we follow the approach by Kwong and Bonitz51 and outline the main steps. First the Keldysh-Kadanoff-Baym equations (KBE) are solved for the NEGF, G(t1,t2), in the two-time plane, in the presence of the field U. The spectral information is then contained in the dependence of A(p, τ, U) on the difference time, τ = t1 − t2, and the numerical result can be expressed as a Fourier series
$$A(p,\tau,U) = \sum_{a} C_a\, e^{i\frac{E_a(p)}{\hslash}\tau}\, e^{-\Gamma_a(p)\tau}.$$
The exponential damping ansatz is known to be a poor approximation for small τ, and can be straightforwardly improved; however, for the present discussion the ansatz (10) is sufficient.
In the field-free case, U0 → 0, and for a given exchange-correlation self-energy of the uniform electron gas, Σxc(p, τ) [the Hartree term vanishes for the UEG], the sum (10) contains only a small number s of terms. For example, in the quasiparticle approximation, there is only one term, s = 1, with \(E_1(p)=\frac{p^2}{2m}+{\rm Re}\,\Sigma_{xc}(p,\tau)\) and \(\hslash\Gamma_1(p)={\rm Im}\,\Sigma_{xc}(p,\tau)\). Since the system is stationary, there is no dependence on the center-of-mass time, T = (t1 + t2)/2. Now, when the field U is turned on, it excites plasma oscillations with wavenumber q, which give rise to one additional contribution to the sum with (within linear response)
$$E_2(p,q) = \delta\Sigma_H(p,q) + {\rm Re}\,\delta\Sigma_{xc}(p,q),$$
$$\hslash\Gamma_2(p,q) = {\rm Im}\,\delta\Sigma_{xc}(p,q),$$
where δΣH and δΣxc are the linear perturbations of the self-energies due to the external field (time dependencies are suppressed). The specific contribution to the single-particle spectrum that is caused by plasmons can be isolated by considering the difference δA, where the s field-free terms cancel. Now, Fourier transforming with respect to τ yields a Lorentzian in frequency space with the peak position of δA(p, ω) given by E2(p, q) = δE(p, q),
$$\delta E(p,q) \approx \delta\Sigma_H(p,q) + {\rm Re}\,\delta\Sigma_{xc}(p,q),$$
with the lifetime Γ2. If the KBE are solved in the presence of the field on the mean-field level (Σxc = 0), the second term in Eq. (13) vanishes, which is known to yield the plasmon spectrum (peak of the DSF) at the RPA level, ωRPA(q)51. If exchange-correlation effects are restored, the plasmon spectrum and the energy spectrum δE undergo synchronous changes,
$$\omega_{\rm RPA}(q) \to \omega(q), \qquad \delta\Sigma_H(p,q) \to \delta\Sigma_H(p,q) + {\rm Re}\,\delta\Sigma_{xc}(p,q).$$
Equivalently, we may subtract the terms on the left-hand side. This yields, on one hand, the frequency change, Δωxc = ω(q) − ωRPA(q), that is caused by exchange-correlation effects. On the other hand, this leads to the xc-induced difference of energy dispersions
$$\Delta\delta E_{\rm xc}(p,q) \approx {\rm Re}\,\delta\Sigma_{\rm xc}(p,q).$$
Thus, we have established a direct link between the two exchange-correlation energy effects, Δωxc and \({\rm Re}\,\delta\Sigma_{xc}(p,q)\). Taking into account the linear relation (8) and subtracting the mean field (RPA) expressions, we expect a proportionality also for the peak positions,
$$\Delta\omega_{\rm xc}(q) \sim \sum_{p}{\rm Re}\,\delta\Sigma_{\rm xc}(p,q).$$
While the KBE procedure has been successfully demonstrated for the computation of the plasmon spectrum51, the change of the single-particle energy, Eq. (15), is presently not available. Physically, δΣxc(p, q) has the meaning of the field-induced change of the energy of a test particle that is related to its interaction with the medium41. Since ΔWxc has exactly this meaning (an approximation to it), we conclude that
$$\sum_{p}\Delta\delta E_{\rm xc}(p,q) \sim \Delta W_{\rm xc}(q).$$
Together with the proportionality (16) this gives a connection to Eq. (1) of the main text.
The neural network representation of the static local field correction is available in Dornheim et al.40. All further data that support the findings of this study are available from the corresponding author upon reasonable request.
Fortov, V. E. Extreme states of matter on earth and in space. Phys.-Usp 52, 615–647 (2009).
Benuzzi-Mounaix, A. et al. Progress in warm dense matter study with applications to planetology. Phys. Scripta T161, 014060 (2014).
Becker, A. et al. Ab initio equations of state for hydrogen (H-REOS. 3) and helium (He-REOS. 3) and their implications for the interior of brown dwarfs. Astrophys. J. Suppl. Ser 215, 21 (2014).
Haensel, P., Potekhin, A. Y. & Yakovlev, D. G. (eds). Equilibrium Plasma Properties. Outer Envelopes 53–114 (Springer New York, 2007).
Kraus, D. et al. Nanosecond formation of diamond and lonsdaleite by shock compression of graphite. Nat. Commun. 7, 10970 (2016).
Lazicki, A. et al. Metastability of diamond ramp-compressed to 2 terapascals. Nature 589, 532–535 (2021).
Brongersma, M. L., Halas, N. J. & Nordlander, P. Plasmon-induced hot carrier science and technology. Nat. Nanotechnol. 10, 25–34 (2015).
Hu, S. X., Militzer, B., Goncharov, V. N. & Skupsky, S. First-principles equation-of-state table of deuterium for inertial confinement fusion applications. Phys. Rev. B 84, 224109 (2011).
Betti, R. & Hurricane, O. A. Inertial-confinement fusion with lasers. Nat. Phys. 12, 435–448 (2016).
Zylstra, A. B. et al. Burning plasma achieved in inertial fusion. Nature 601, 542–548 (2022).
Tschentscher, T. et al. Photon beam transport and scientific instruments at the European XFEL. Appl. Sci. 7, 592 (2017).
Pile, D. First light from SACLA. Nat. Photon. 5, 456–457 (2011).
Falk, K. Experimental methods for warm dense matter research. High Power Laser Sci. Eng 6, e59 (2018).
Fletcher, L. B. et al. Ultrabright x-ray laser scattering for dynamic warm dense matter physics. Nat. Photon. 9, 274–279 (2015).
Kraus, D. et al. Formation of diamonds in laser-compressed hydrocarbons at planetary interior conditions. Nat. Astron. 1, 606–611 (2017).
Knudson, M. D. et al. Direct observation of an abrupt insulator-to-metal transition in dense liquid deuterium. Science 348, 1455–1460 (2015).
Glenzer, S. H. & Redmer, R. X-ray Thomson scattering in high energy density plasmas. Rev. Mod. Phys 81, 1625 (2009).
Graziani, F., Desjarlais, M. P., Redmer, R. & Trickey, S. B. (eds). Frontiers and Challenges in Warm Dense Matter (Springer, International Publishing, 2014).
Mo, C. et al. First-principles method for x-ray Thomson scattering including both elastic and inelastic features in warm dense matter. Phys. Rev. B 102, 195127 (2020).
Dornheim, T., Groth, S., Vorberger, J. & Bonitz, M. Ab initio path integral Monte Carlo results for the dynamic structure factor of correlated electrons: from the electron liquid to warm dense matter. Phys. Rev. Lett. 121, 255001 (2018).
Dornheim, T., Groth, S. & Bonitz, M. The uniform electron gas at warm dense matter conditions. Phys. Reports 744, 1–86 (2018).
Fortmann, C., Wierling, A. & Röpke, G. Influence of local-field corrections on Thomson scattering in collision-dominated two-component plasmas. Phys. Rev. E 81, 026405 (2010).
Simoni, J. & Daligault, J. First-principles determination of electron-ion couplings in the warm dense matter regime. Phys. Rev. Lett. 122, 205001 (2019).
Giuliani, G. & Vignale, G. Quantum Theory of the Electron Liquid (Cambridge University Press, 2008).
Griffin, A. et al. Excitations in a Bose-condensed Liquid. Cambridge Studies in Low Temperature Physics (Cambridge University Press, 1993).
Godfrin, H. et al. Observation of a roton collective mode in a two-dimensional Fermi liquid. Nature 483, 576–579 (2012).
Dornheim, T., Moldabekov, Z. A., Vorberger, J. & Militzer, B. Path integral Monte Carlo approach to the structural properties and collective excitations of liquid 3he without fixed nodes. Sci. Rep. 12, 708 (2022).
Takada, Y. & Yasuhara, H. Dynamical structure factor of the homogeneous electron liquid: its accurate shape and the interpretation of experiments on aluminum. Phys. Rev. Lett. 89, 216402 (2002).
Takada, Y. Emergence of an excitonic collective mode in the dilute electron gas. Phys. Rev. B 94, 245106 (2016).
vom Felde, A., Sprösser-Prou, J. & Fink, J. Valence-electron excitations in the alkali metals. Phys. Rev. B 40, 10181–10193 (1989).
Dornheim, T. et al. Effective static approximation: a fast and reliable tool for warm-dense matter theory. Phys. Rev. Lett. 125, 235001 (2020).
Mithen, J. P., Daligault, J. & Gregori, G. Onset of negative dispersion in the one-component plasma. in AIP Conference Proceedings Vol. 1421, 68–72 (2012).
Arkhipov, Y. V. et al. Direct determination of dynamic properties of coulomb and Yukawa classical one-component plasmas. Phys. Rev. Lett. 119, 045001 (2017).
Arkhipov, Y. V. et al. Dynamic characteristics of three-dimensional strongly coupled plasmas. Phys. Rev. E 102, 053215 (2020).
Kalman, G. J., Hartmann, P., Golden, K. I., Filinov, A. & Donkó, Z. Correlational origin of the roton minimum. Europhys. Lett. 90, 55002 (2010).
Feynman, R. P. & Cohen, M. Energy spectrum of the excitations in liquid helium. Phys. Rev. 102, 1189–1204 (1956).
Kugler, A. A. Theory of the local field correction in an electron gas. J. Stat. Phys 12, 35 (1975).
Dornheim, T., Tolias, P., Moldabekov, Z. A., Cangi, A. & Vorberger, J. Effective electronic forces and potentials from ab initio path integral Monte Carlo simulations. J. Chem. Phys. 156, 244113 (2022).
Kukkonen, C. A. & Overhauser, A. W. Electron-electron interaction in simple metals. Phys. Rev. B 20, 550–557 (1979).
Dornheim, T. et al. The static local field correction of the warm dense electron gas: an ab initio path integral Monte Carlo study and machine learning representation. J. Chem. Phys 151, 194104 (2019).
Kremp, D., Schlanges, M. & Kraeft, W.-D. Quantum Statistics of Nonideal Plasmas (Springer, 2005).
Hamann, P., Vorberger, J., Dornheim, T., Moldabekov, Z. A. & Bonitz, M. Ab initio results for the plasmon dispersion and damping of the warm dense electron gas. Contrib. Plasma Phys. 60, e202000147 (2020).
Zhou, Y. et al. Bilayer Wigner crystals in a transition metal dichalcogenide heterostructure. Nature 595, 48–52 (2021).
Du, L. et al. Engineering symmetry breaking in 2d layered materials. Nature Reviews Physics 3, 193–206 (2021).
Saccani, S., Moroni, S. & Boninsegni, M. Excitation spectrum of a supersolid. Phys. Rev. Lett. 108, 175301 (2012).
Navon, N., Smith, R. P. & Hadzibabic, Z. Quantum gases in optical boxes. Nature Physics 17, 1334–1341 (2021).
Filinov, A., Prokof'ev, N. V. & Bonitz, M. Berezinskii-Kosterlitz-Thouless transition in two-dimensional dipole systems. Phys. Rev. Lett. 105, 070401 (2010).
Ni, K. K., Ospelkaus, S., Nesbitt, D. J., Ye, J. & Jin, D. S. A dipolar gas of ultracold molecules. Phys. Chem. Chem. Phys. 11, 9626–9639 (2009).
Plimpton, S. Fast parallel algorithms for short-range molecular dynamics. J. Comput. Phys. 117, 1–19 (1995).
Dornheim, T., Böhme, M., Militzer, B. & Vorberger, J. Ab initio path integral Monte Carlo approach to the momentum distribution of the uniform electron gas at finite temperature without fixed nodes. Phys. Rev. B 103, 205142 (2021).
Kwong, N.-H. & Bonitz, M. Real-time Kadanoff-Baym approach to plasma oscillations in a correlated electron gas. Phys. Rev. Lett. 84, 1768–1771 (2000).
This work was partly funded by the Center for Advanced Systems Understanding (CASUS) which is financed by Germany's Federal Ministry of Education and Research (BMBF) and by the Saxon Ministry for Science, Culture and Tourism (SMWK) with tax funds on the basis of the budget approved by the Saxon State Parliament. M.B. acknowledges support by the DFG via project BO1366/15. The PIMC calculations were carried out at the Norddeutscher Verbund für Hoch- und Höchstleistungsrechnen (HLRN) under grant shp00026 and on a Bull Cluster at the Center for Information Services and High Performance Computing (ZIH) at Technische Universität Dresden.
Open Access funding enabled and organized by Projekt DEAL.
Center for Advanced Systems Understanding (CASUS), Untermarkt 20, Görlitz, D-02826, Germany
Tobias Dornheim & Zhandos Moldabekov
Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Bautzner Landstraße 400, Dresden, D-01328, Germany
Tobias Dornheim, Zhandos Moldabekov & Jan Vorberger
Institut für Theoretische Physik und Astrophysik, Christian-Albrechts-Universität, Leibnizstraße 15, Kiel, D-24098, Germany
Hanno Kählert & Michael Bonitz
Tobias Dornheim
Zhandos Moldabekov
Jan Vorberger
Hanno Kählert
Michael Bonitz
T.D. developed the original idea, produced all graphics, and substantially contributed to writing the manuscript. Z.M. and J.V. contributed to the analysis and to writing the manuscript. H.K. carried out classical MD simulations and contributed to writing the manuscript. M.B. developed the Green functions theory, and contributed to the analysis and to writing the manuscript.
Correspondence to Tobias Dornheim.
Communications Physics thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Dornheim, T., Moldabekov, Z., Vorberger, J. et al. Electronic pair alignment and roton feature in the warm dense electron gas. Commun Phys 5, 304 (2022). https://doi.org/10.1038/s42005-022-01078-9
Methodology article
Feature selection and causal analysis for microbiome studies in the presence of confounding using standardization
Emily Goren ORCID: orcid.org/0000-0002-0202-58561,
Chong Wang1,2,
Zhulin He1,
Amy M. Sheflin3,
Dawn Chiniquy4,
Jessica E. Prenni3,
Susannah Tringe4,
Daniel P. Schachtman5 &
Peng Liu ORCID: orcid.org/0000-0002-2093-80181
Microbiome studies have uncovered associations between microbes and human, animal, and plant health outcomes. This has led to an interest in developing microbial interventions for treatment of disease and optimization of crop yields which requires identification of microbiome features that impact the outcome in the population of interest. That task is challenging because of the high dimensionality of microbiome data and the confounding that results from the complex and dynamic interactions among host, environment, and microbiome. In the presence of such confounding, variable selection and estimation procedures may have unsatisfactory performance in identifying microbial features with an effect on the outcome.
In this manuscript, we aim to estimate population-level effects of individual microbiome features while controlling for confounding by a categorical variable. Due to the high dimensionality and confounding-induced correlation between features, we propose feature screening, selection, and estimation conditional on each stratum of the confounder followed by a standardization approach to estimation of population-level effects of individual features. Comprehensive simulation studies demonstrate the advantages of our approach in recovering relevant features. Utilizing a potential-outcomes framework, we outline assumptions required to ascribe causal, rather than associational, interpretations to the identified microbiome effects. We conducted an agricultural study of the rhizosphere microbiome of sorghum in which nitrogen fertilizer application is a confounding variable. In this study, the proposed approach identified microbial taxa that are consistent with biological understanding of potential plant-microbe interactions.
Standardization enables more accurate identification of individual microbiome features with an effect on the outcome of interest compared to other variable selection and estimation procedures when there is confounding by a categorical variable.
Advancements in next-generation sequencing (NGS) technologies have recently allowed for unprecedented examination of the community of microorganisms in a host or site of interest, referred to as a microbiome [29]. Early cultivation-dependent methods only allowed for detection of a small fraction of the total microbial species present. In contrast, NGS technologies can rapidly detect thousands of microbes in each sample by determining the nucleotide sequences of short microbial DNA fragments. These fragments may either correspond to targets of a specific genetic marker, commonly the 16S ribosomal RNA gene for taxonomic identification of bacteria as in amplicon sequencing, or result from shearing all the DNA in a sample as in shotgun metagenome sequencing [40]. For each fragment, the corresponding nucleotide sequence is referred to as a "read," the length of which is dependent on the specific NGS system [33].
Both amplicon-based and shotgun metagenomic approaches can enumerate the relative abundance of thousands of microbial features per sample. Use of amplicon sequencing for microbial enumeration is more common than shotgun metagenome sequencing due to reduced cost and complexity. For this reason, we focus on amplicon-based microbiome data here, and refer the reader to Sharpton [47] for detailed coverage of metagenomic sequencing and Knight et al. [28] for a thorough comparison of the two approaches. In order to enumerate microbes, amplicon reads are typically clustered into operational taxonomic units (OTUs) according to a fixed level of sequence similarity (e.g., 97%) [62], or as advocated by Callahan et al. [8], enumerated on the basis of denoised sequences termed exact amplicon sequence variants (ASVs). Both OTUs and ASVs may be classified into known taxa [44]. The resulting microbiome data for each sample are high-dimensional nonnegative integer counts across potentially thousands of features (taxa, OTUs, or ASVs). These counts represent relative, not absolute, numbers for each sample due to varying library sizes, a technical limitation of NGS approaches. Consequently, microbiome data must be normalized, rarefied, or treated as compositional in order to make comparisons across samples and it is unresolved which method is optimal for a particular research question and data set [17, 35, 61].
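As one concrete illustration of the normalization issue (without advocating a particular choice), the sketch below applies a centered log-ratio transform with a pseudocount to a toy count table; the pseudocount value and the function name are illustrative assumptions made here, not a recommendation from the text.

```python
import numpy as np

def clr_transform(counts, pseudocount=1.0):
    """Centered log-ratio transform of a samples-by-features count matrix.

    A pseudocount is added so that zero counts do not produce -inf; each
    sample is then expressed relative to its own geometric mean, which
    removes the dependence on library size.
    """
    X = np.asarray(counts, dtype=float) + pseudocount
    log_X = np.log(X)
    return log_X - log_X.mean(axis=1, keepdims=True)

# toy example: 3 samples, 4 features with very different library sizes
counts = np.array([[10, 0, 5, 100],
                   [200, 3, 40, 1500],
                   [1, 1, 1, 10]])
print(clr_transform(counts))
```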
Microbiome studies have uncovered associations between microbes and human, animal, and plant health outcomes. Randomized clinical trials have been performed to determine the causal effect of fecal microbiota transplantation [9], but these do not provide causal inference on the contribution of individual microbiome features. It is important to identify individual microbiome features with a causal effect on the outcome because such discoveries may lead to development of microbial interventions for treatment of disease or optimization of crop yields. A recent review highlights the importance of identifying individual taxa with biologically relevant roles in microbiome studies [3].
Recently, there has been interest in causal inference in microbiome studies [65]. The gold standard for causal inference is to randomly assign treatments (here, microbiome interventions) and estimate the causal effect. However, this is challenging in microbiome studies since many microorganisms cannot be directly cultured [53], and random assignment of microbiomes to units is often not possible. To date, causal inference in microbiome studies has been primarily limited to causal mediation analysis that determines if a causal effect of treatment is transmitted through the microbiome [52, 58, 71]. Software has been developed to apply Granger causality [19] to microbiome time series [2], but the performance of such an approach has not been thoroughly evaluated using simulation studies.
In this work, we aim to identify individual microbial features with a causal effect on an outcome in a population of interest using causal inference. Here, the microbiome features are considered to be multivariate exposures, and are often of much higher dimension than the sample size. Previous work on high-dimensional causal inference is typically limited to settings with high-dimensional confounders rather than exposures (e.g., Schneeweiss et al. [45]) or directed graphical modeling [38]. Recently, Nandy et al. [36] considered directed graphical modeling for estimation of joint simultaneous interventions. However, their approach requires linearity and Gaussianity assumptions for high-dimensional inference, which are inappropriate for microbiome count data. There are proposed approaches for causal inference for multivariate exposures or treatments using the potential-outcomes framework, and such approaches often rely on the generalized propensity score [24]. Siddique et al. [50] compared inverse probability of treatment weighting, propensity score adjustment, and targeted maximum likelihood approaches for multivariate exposures. Wilson et al. [64] proposed Bayesian model averaging over different sets of confounders when the set of true confounding variables is unknown. When the exposures are time-varying, Taubman et al. [54] considered g-estimation and Hernán et al. [21] proposed a marginal structural model. However, in all of these studies with multivariate exposures, the exposure dimensionality is smaller than the sample size.
In addition to the high dimensionality, causal inference for microbiome studies is complicated by potentially complex interactions among host, environment, and microbiome. For example, there could be categorical confounding variables that affect both the outcome and some of the microbiome features. To overcome the challenges of the high dimensionality and presence of categorical confounding variables in microbiome studies, we propose standardization on the confounder and use the potential-outcomes framework for causal inference [26]. The potential-outcomes framework [22, 37, 42] conceptually frames causal inference as a missing data problem: the outcome can only be measured under the exposure actually received, making the outcome unobservable under all other possible values of the exposure. We refer the reader to Hernán and Robins [20] for a more detailed introduction. To deal with high-dimensionality of the microbiome exposure and categorical confounding variables, we propose variable screening, selection, and estimation of microbiome effects conditional on the confounder (i.e., stratification), followed by standardization to obtain estimates of effects in the population of interest. Conditioning on the confounder for microbiome feature screening, selection, and estimation avoids complications due to high marginal confounder-induced correlation between features. Further, conditional estimation naturally allows for effect modification (i.e., interaction between the confounder and microbiome features), affording flexibility to capture host-environment-microbiome interactions. Standardization allows for estimation and ranking of microbiome feature effects in the target population, which has policy and epidemiological relevance. Even if conditions for causal inference do not hold, avoiding such marginal correlation allows for superior identification of associational microbiome effects.
In this manuscript we begin by defining the estimands of interest and outlining conditions required for causal inference in "Model and assumptions" section. We then propose our estimation approach with standardization in "Methods" section. Next, we demonstrate the feasibility of our approach through simulation studies in "Simulation studies" section and present a real data application using an agricultural microbiome study in "Real data analysis" section. This paper ends with a discussion and conclusion.
Model and assumptions
Notation, microbiome effects, confounding
Consider a study with n samples (indexed by \(i=1,\dots ,n\)) aimed at identifying the population effect \({\varvec{\beta }} = (\beta _1, \dots , \beta _p)'\) of p microbiome features (e.g., taxa, ASVs, OTUs) \({\varvec{A}}_i = (A_{i1}, \dots , A_{ip})'\) on an outcome \(Y_i \in {\mathbb {R}}\), such as a health response of interest. For formulating the estimand, we assume that \({\varvec{A}}_i\) has been appropriately normalized. Importantly, \(Y_i\) represents the observed outcome for sample i, which differs from the notion of a potential outcome [42]. Define the potential outcome \(Y_i^{\varvec{a}}\) as the value the outcome would take under the (possibly counterfactual) microbiome value \({\varvec{a}} = (a_1, \dots , a_p)'\). Assume that the expected potential outcome is related to the population effect \({\varvec{\beta }}\) through a linear function of the microbiome features as
$$\begin{aligned} {{\,\mathrm{E}\,}}\left( Y_i^{\varvec{a}}\right) = \beta _0 + \sum _{j=1}^p \beta _j a_j, \end{aligned}$$
where for each j, \(\beta _j\) represents the effect of the jth microbiome feature in the population and \(a_j\) is the potential or counterfactual value of the jth microbiome feature. In terms of (1), identifying which microbiome features have a causal effect on the response corresponds to estimation and inference for \(\beta _j\) (\(1\le j\le p\)). For generality, the formulation of (1) ignores possible microbe-microbe interactions and any constraints of carrying capacity.
Note that the model in (1) is defined for the potential outcomes, not the observed data, and is thus a marginal structural model [21]. In the presence of a confounding variable \(L_i\) that affects both \({\varvec{A}}_i\) and \(Y_i\), this model generally does not hold for the observed data because confounding implies \({{\,\mathrm{E}\,}}\left( Y_i^{\varvec{a}}\right) \ne {{\,\mathrm{E}\,}}\left( Y_i \,\vert \,{\varvec{A}}_i = {\varvec{a}} \right)\). Consequently, specific assumptions and methodology are required to obtain an estimator \(\hat{\varvec{\beta }}\) of \({\varvec{\beta }}\) that has causal, rather than merely associational, interpretation. In the next sub-section, we address the assumptions required for such a causal interpretation. We restrict our attention to the case where the confounder \(L_i\) is categorical with a finite number of levels, each represented sufficiently in the study of n samples.
Assumptions for causal inference
Under the potential-outcomes framework, ascribing a causal interpretation to an estimate of \({\varvec{\beta }}\) requires three assumptions: positivity, conditional exchangeability, and consistency [20]. Positivity requires positive probability for each possible microbiome level, conditional on the confounder. To formalize this, let \({\mathcal {A}}\) denote the set of all possible microbiome values in the population. The positivity condition holds if \({{\,\mathrm{Pr}\,}}\left( {\varvec{A}}_i = {\varvec{a}} \,\vert \,L_i = l\right) > 0\) for all \({\varvec{a}} \in {\mathcal {A}}\) and all levels l of confounder \(L_i\) such that \({{\,\mathrm{Pr}\,}}(L_i = l) \ne 0\) in the population of interest, henceforth denoted by the set \({\mathcal {L}}\). Clearly, if a given microbe is either absent or below the limit of detection across all samples, its effect on the response cannot be determined. Hence, this assumption requires a large enough sequencing depth in order to sufficiently enumerate any present microbes with a causal effect. Practical considerations for evaluating the positivity assumption are covered by Westreich and Cole [63].
To meet the conditional exchangeability requirement, the data-generating mechanism for each possible microbiome must depend only on the confounder, formalized as \(Y_i^{\varvec{a}} \perp \!\!\! \perp {\varvec{A}}_i \,\vert \,L_i = l\) for all \(l \in {\mathcal {L}}\), where \(\perp \!\!\! \perp\) denotes statistical independence. Conditional exchangeability requires no unmeasured confounding. This assumption is most justifiable in experiments where the confounder is randomly assigned, as in our motivating study described later in "Real data analysis" section, where agricultural plots are randomized to either low or high nitrogen fertilizer.
The consistency criterion is met if the observed outcome for each unit is the potential outcome under the observed microbiome, formally stated as \({\varvec{A}}_i = {\varvec{a}} \implies Y_i^{\varvec{a}} = Y_i\). For microbiome data, this necessitates appropriate normalization. Since NGS-based technologies enumerate based on genetic material, the resulting counts can arise from both viable and non-viable microbes [6]. In order to meet the consistency assumption, relevant microbes with the same normalized count cannot have disparate effects due to differential viability. When there is concern that this assumption may be violated, it is possible to restrict amplification of RNA target genes to only viable bacterial cells [41]. We note that even if these three conditions cannot be verified, our proposed method has utility in estimation of associational, rather than causal, effects.
Our goal is to estimate the population microbiome effects \({\varvec{\beta }}\) of (1) and infer which microbiome features are relevant to the response, that is, \(\{1 \le j \le p: \beta _j \ne 0 \}\). We propose computing an estimate \(\hat{\varvec{\beta }}^l\) for each stratum \(l \in {\mathcal {L}}\) of the confounder, followed by standardization to the confounder distribution, thereby obtaining a population-level estimate \(\hat{\varvec{\beta }}\). Under the assumptions stated in "Assumptions for causal inference" section, there is no confounding within each stratum l of the confounder. Beyond elimination of confounding, conditioning on a stratum of the confounder avoids marginal correlation between features induced by the relationship with the confounder that can hinder feature selection performance. Figure S7 in the Additional file 1 shows microbiome data from an agricultural study described in "Real data analysis" section where many features are highly correlated when considered marginally, but are relatively uncorrelated within each level of a fertilizer confounder. Combining the assumptions of "Assumptions for causal inference" section with the model in (1) and allowing for effect modification, we have
$$\begin{aligned} {{\,\mathrm{E}\,}}\left( Y_i \,\vert \,{\varvec{A}}_i = {\varvec{a}}, L_i = l \right) = \beta _0^l + \sum _{j=1}^p \beta _j^l a_j, \end{aligned}$$
where \({\varvec{\beta }}^l = (\beta _1^l, \dots , \beta _p^l)'\) is the corresponding stratum-specific effect. There is effect modification if \({\varvec{\beta }}^l \ne {\varvec{\beta }}^{l'}\) for some \(l \ne l' \in {\mathcal {L}}\).
Standardizing the stratum-specific mean outcomes to the confounder distribution produces the population mean outcome function
$$\begin{aligned} {{\,\mathrm{E}\,}}\left( Y_i \,\vert \,{\varvec{A}}_i\right) = \sum _{l \in {\mathcal {L}} } \left( \beta _0^l + \sum _{j=1}^p A_{ij} \beta _j^l\right) {{\,\mathrm{Pr}\,}}\left( L_i = l\right) . \end{aligned}$$
By linearity, the effect in the population corresponding to a one-unit increase in the jth microbiome feature, controlling for all others, is represented by \(\beta _j = \sum _{l\in {\mathcal {L}}}\beta _j^l {{\,\mathrm{Pr}\,}}\left( L_i = l\right)\) for \(j = 1, \dots , p\). Given a suitable estimator \(\hat{\varvec{\beta }}^l\) of \({\varvec{\beta }}^l\) for all \(l \in {\mathcal {L}}\), the resulting population-standardized estimate of \(\beta _j\) is
$$\begin{aligned} {\hat{\beta }}_j = \sum _{l\in {\mathcal {L}}} {\hat{\beta }}_j^l \, {{\,\mathrm{Pr}\,}}\left( L_i = l\right) . \end{aligned}$$
Thus, the population-level estimate is a weighted average of the stratum-specific estimates, with each stratum weighted by the prevalence \({{\,\mathrm{Pr}\,}}\left( L_i = l\right)\) of that confounder level in the target population.
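For concreteness, a minimal R sketch of this standardization step is given below; the function and object names are hypothetical, and the stratum-specific estimates are assumed to be available already.

```r
# Population-standardized effects (Eq. 4): a prevalence-weighted average of
# stratum-specific estimates. 'beta_hat_by_stratum' is a list of length-p
# coefficient vectors (one per confounder level); 'prevalence' holds Pr(L = l).
standardize_effects <- function(beta_hat_by_stratum, prevalence) {
  stopifnot(length(beta_hat_by_stratum) == length(prevalence),
            abs(sum(prevalence) - 1) < 1e-8)
  Reduce(`+`, Map(function(b, w) w * b, beta_hat_by_stratum, prevalence))
}

# Example for a binary confounder observed in equal proportions:
# beta_hat <- standardize_effects(list(beta_hat_l0, beta_hat_l1), c(0.5, 0.5))
```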
Feature selection and estimation
In this section, we propose a feature selection and estimation procedure for the stratum-specific coefficients \({\varvec{\beta }}^l\), performed independently for each confounder level \(l \in {\mathcal {L}}\). Within each stratum, we make a sparsity assumption that only a few microbiome features affect the response, so that most entries of \({\varvec{\beta }}^l\) are zero, and we further assume that the outcome is normally distributed with constant variance. Commonly, \(n \ll p\) for microbiome data summarized at the level of species (and perhaps genera), OTUs, or ASVs. Consequently, we suggest penalized least squares estimation that induces shrinkage towards zero via a penalty function \(p_{\lambda }\), where \(\lambda\) is a tuning parameter controlling the amount of shrinkage. We suggest choosing \(\lambda\) using the Bayesian information criterion (BIC) [46] because of its consistency in selecting the true features in certain settings [59], in contrast to the lack of selection consistency of prediction-accuracy criteria such as cross-validation [30]. Possible choices for penalties that perform variable selection through shrinkage-induced sparsity include the least absolute shrinkage and selection operator (LASSO) [55] and smoothly clipped absolute deviation (SCAD) [13], among others [69].
Due to the high dimensionality of microbiome data, variable screening in conjunction with penalized estimation may improve accuracy and algorithmic stability [14]. The sure independence screening (SIS) of Fan and Lv [14] retains features attaining the highest marginal correlation with the response, which may lead to poor performance when irrelevant features are more highly correlated with the response, marginally, than relevant ones. Since this is likely the case for microbiome data, we instead consider using the iterative sure independence screening procedure proposed by Fan and Lv [14] and implemented by Saldana and Feng [43] that avoids such a drawback by performing iterative feature recruitment and deletion based on a given penalty \(p_{\lambda }\). Since features that are constant across all (or nearly all) samples are collinear with the model intercept, we recommend removing features with very low abundances such as those that are zero for most samples (e.g., Xiao et al. [67]).
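To make the per-stratum estimation step concrete, the following R sketch fits a LASSO path within a single confounder stratum and selects \(\lambda\) by BIC. It is only an illustration: it relies on the glmnet package and omits the iterative SIS screening step, so it does not reproduce the pipeline used in our analyses (which uses the implementation of Saldana and Feng [43]); object names are hypothetical.

```r
library(glmnet)

# Fit a LASSO path within one confounder stratum and select lambda by BIC.
# 'A' is the n_l x p matrix of (centered and scaled) microbiome features for
# that stratum; 'y' holds the matching responses.
fit_stratum_lasso_bic <- function(A, y) {
  fit  <- glmnet(A, y, family = "gaussian", alpha = 1, standardize = FALSE)
  rss  <- colSums((y - predict(fit, newx = A))^2)   # RSS at every lambda on the path
  n    <- length(y)
  bic  <- n * log(rss / n) + log(n) * fit$df        # df = number of nonzero coefficients
  best <- which.min(bic)
  list(lambda = fit$lambda[best],
       beta   = as.numeric(as.matrix(coef(fit, s = fit$lambda[best])))[-1])  # drop intercept
}
```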
Post-selection inference and error rate control
Inference on which microbiome features have a population-level effect, conducted by testing the null hypothesis \(H_{0j}: \beta _j = 0\) for the jth feature (\(1\le j \le p\)), is challenging under penalized least squares estimation. For example, the asymptotic distribution of the LASSO may not be continuous and is difficult to characterize in high-dimensional settings [27]. Many approaches for error rate control after variable selection with penalized regression rely on data splitting techniques [7, 12] but have low power for the small sample sizes common to microbiome studies. For these reasons, we propose performing inference with the debiased, also known as desparsified, LASSO [56, 70] applied to the estimate \(\hat{\varvec{\beta }}^l\) obtained using the LASSO penalty with the iterative SIS procedure. To make the computation tractable, we only apply the debiasing procedure to the features not screened out by the iterative SIS procedure and let \(\hat{\varvec{b}}^l\) denote the resulting estimate. Under regularity assumptions and appropriate penalization, the debiased LASSO estimator has a limiting normal distribution [12].
For the jth feature, the standardized debiased iterative SIS-LASSO estimate \({\hat{b}}_j\) and its standard error are given by
$$\begin{aligned} {\hat{b}}_j = \sum _{l \in {\mathcal {L}}} {\hat{b}}_j^l\Pr (L_i = l) , \quad {\text {se}}({\hat{b}}_j ) = \sqrt{\sum _{l \in {\mathcal {L}}} \left[ {\text {se}}({\hat{b}}_j^l)\Pr (L_i = l) \right] ^2}, \end{aligned}$$
respectively, where the standard error formula follows from the independence of the strata. To estimate the standard error, we plug in the estimate \(\widehat{\text {se}}_j^l\) of \({\text {se}}({\hat{b}}_j^l)\) given by Dezeure et al. [11] under homoscedastic errors whenever the jth feature was not removed by screening in the lth confounder stratum. For each feature j (\(j = 1, \dots , p\)) that survived screening in at least one confounder stratum, we compute a p-value for testing \(H_{0j}: \beta _j = 0\) versus \(H_{1j}: \beta _j \ne 0\) as \(p_j = 2[1 - \Phi ( |{\hat{b}}_j |/ \widehat{\text {se}}_j )]\), where \(\Phi (\cdot )\) denotes the standard normal cdf. To control the false discovery rate (FDR), we apply the Benjamini–Hochberg (BH) adjustment across all p features [4], thereby accounting for multiplicity over all features, including those that were removed from every stratum.
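A minimal R sketch of the combination and testing step follows, assuming the stratum-specific debiased estimates and standard errors (with zeros for features screened out of a given stratum) have already been obtained, for instance with the hdi package; names are hypothetical.

```r
# 'b_hat' and 'se_hat' are |L| x p matrices of stratum-specific debiased
# estimates and standard errors (rows = confounder levels); 'prevalence'
# holds Pr(L = l) for the target population.
combine_and_test <- function(b_hat, se_hat, prevalence, fdr = 0.05) {
  b_pop  <- as.numeric(prevalence %*% b_hat)              # prevalence-weighted average
  se_pop <- sqrt(as.numeric(prevalence^2 %*% se_hat^2))   # strata are independent
  pval   <- 2 * (1 - pnorm(abs(b_pop) / se_pop))          # two-sided normal test
  # Features screened out of every stratum have se_pop = 0 (p-value NaN) and are
  # treated as non-discoveries; BH adjustment is applied across all p features.
  qval   <- p.adjust(pval, method = "BH", n = length(pval))
  data.frame(estimate = b_pop, se = se_pop, p = pval, q = qval,
             discovered = !is.na(qval) & qval < fdr)
}
```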
Simulation studies
Here, we evaluate our proposed standardization method using simulation studies. The simulation settings were designed to mimic microbiome studies seen in practice. To emulate species-level data, we consider \(p = 2000\) microbiome features. To reflect data summarized at the genus level, we also consider \(p = 50\). We consider sample sizes of \(n = 50\) and \(n=100\), and assume the confounder is a binary indicator that takes the value one for \(i = 1, \dots , n/2\) and zero for \(i = n/2 + 1, \dots , n\).
Data-generating model for microbiome features
Conditional on the confounder \(L_i = 0\), the count data for the jth microbiome feature were drawn independently from a negative binomial distribution with mean \(\gamma _{0j}\) and dispersion \(\phi _j\), parameterized such that \({{\,\mathrm{Var}\,}}( A_{ij} ) = \gamma _{0j} + \phi _j(\gamma _{0j})^2\). That is, when \(L_i = 0\), the baseline mean for feature j is \(\gamma _{0j}\). When the confounder is present (\(L_i = 1\)), the microbiome feature counts were drawn independently from a negative binomial distribution with mean \(\gamma _{0j}\gamma _{1j}\) and dispersion \(\phi _j\). Hence, \(\gamma _{1j}\) represents the multiplicative change in the mean relative to when the confounder is absent: feature j is affected by the confounder if and only if \(\gamma _{1j} \ne 1\), and for unaffected features we set \(\gamma _{1j} = 1\). The first \(30\%\) of features were set to be affected by the confounder (differentially abundant between condition \(L_i = 0\) and condition \(L_i = 1\)). More specifically, we simulated the parameters \(\gamma _{0j}\) and \(\gamma _{1j}\) from the following distributions for \(j = 1,\dots , p\):
$$\begin{aligned}&\gamma _{0j} {\mathop {\sim }\limits ^{\text {ind}}}\left\{ \begin{array}{ll} \log {\mathcal {N}}\left( 1/2,~ 9/4 \right) &{}{\hbox { if}}\ \beta _j = 0 \\ \delta _{\{5\}} &{}{\hbox { if}}\ \beta _j \ne 0 \\ \end{array} \right. \\&\gamma _{1j} {\mathop {\sim }\limits ^{\text {ind}}}\left\{ \begin{array}{ll} \log {\mathcal {N}}\left( \pm 1/4,~ 9/4 \right) &{}{\hbox { if feature}}\; j \hbox { is affected by } L_i \\ \delta _{\{1\}} &{} {\text{ otherwise }} \\ \end{array} \right. \end{aligned}$$
where \(\delta _{\{x\}}\) represents a point mass at x. Our rationale for setting the baseline mean to five for relevant features \((\beta _j \ne 0)\) was to ensure that they were sufficiently abundant for feature selection. We set the dispersions \(\phi _j = 10^{-1}\) for all features \(j = 1, \dots , p\) and simulated the microbiome count data \({\varvec{A}}_i\) with negative binomial distributions. In addition, we conducted a second set of simulations with \(\phi _j = 10^{-6}\), which approximates a Poisson distribution.
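The count model above can be simulated directly with rnbinom; a sketch follows, using the parameter values described in this subsection, with the relevant features all lying in the confounded set (the 100% confounded scenario) and the sign of the \(\log {\mathcal {N}}\) location for \(\gamma _{1j}\) drawn at random. Object names are illustrative.

```r
set.seed(1)
n <- 100; p <- 50
L <- rep(c(1, 0), each = n / 2)                    # binary confounder
phi <- 1e-1                                        # dispersion: Var = mu + phi * mu^2
relevant   <- 1:5                                  # features with beta_j != 0
confounded <- 1:(0.3 * p)                          # first 30% affected by the confounder

gamma0 <- rlnorm(p, meanlog = 1/2, sdlog = 3/2)    # baseline means, log-normal(1/2, 9/4)
gamma0[relevant] <- 5                              # point mass at 5 for relevant features
gamma1 <- rep(1, p)                                # multiplicative confounder effect
gamma1[confounded] <- rlnorm(length(confounded),
                             meanlog = sample(c(-1, 1), length(confounded), TRUE) / 4,
                             sdlog = 3/2)

# Negative binomial counts: mean gamma0_j when L = 0 and gamma0_j * gamma1_j when L = 1
A <- sapply(1:p, function(j) rnbinom(n, size = 1 / phi, mu = gamma0[j] * gamma1[j]^L))
```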
Data-generating model for response
Given the confounder and the microbiome features \({\varvec{A}}_i\) simulated as in the above subsection, the responses were drawn independently from a normal distribution with mean \(\mu _i(\tilde{\varvec{A}}_i, L_i)\) and variance \(\sigma ^2\), where \(\tilde{\varvec{A}}_i\) represents \({\varvec{A}}_i\) after centering and scaling (to mean zero and variance one within strata) and
$$\begin{aligned} \mu _i(\tilde{\varvec{A}}_i, L_i) = \left\{ \begin{array}{ll} \beta _0 + \sum _{j=1}^p {\tilde{A}}_{ij} \beta _j &{}{\hbox { if}}\ L_i = 0 \\ \beta _0 + \beta _\ell + \sum _{j=1}^p {\tilde{A}}_{ij} \delta \beta _j &{}{\hbox { if}}\ L_i = 1 \\ \end{array} \right. \end{aligned}$$
for \(i = 1, \dots , n\). To allow a more intuitive comparison of effect modification sizes, model (6) has an additive effect \(\beta _\ell\) for the intercept and a multiplicative effect \(\delta\) for the microbiome feature effects when \(L_i = 1\) compared with \(L_i = 0\). In particular, \(\beta _\ell\) represents the direct confounder effect and \(\delta\) is an effect modification parameter. Our simulation considers the case of no effect modification (\(\delta = 1\)) as well as strong effect modification (\(\delta = -0.9\)), where the relevant microbiome effects are large within each level of the confounder but small overall in the population. The response variability was set to \(\sigma ^2 = 1/16\) for all scenarios. A total of \(s = 5\) features were set to be relevant, with the non-zero elements of \({\varvec{\beta }}\) set to \((3,-3,3,-3,3)\). Our motivation for setting \(\left| \beta _j \right| = 3\) for all relevant j is to ensure that the \(\beta _{\min }\) property for model selection consistency is met within all strata for all simulation scenarios [7]. The choice of \(s = 5\) yields sparsity such that \(s < n_l / \log (p)\) for most, but not all, simulation scenarios. Three scenarios covering differing proportions of the relevant features set to be confounded (\(\beta _j \ne 0\) and \(\gamma _{1j} \ne 1\)) were considered: either all (100% confounded), the first three (60% confounded), or none (0% confounded).
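Continuing the sketch from the previous subsection, responses under model (6) with strong effect modification can be generated as follows; the intercept and direct confounder effect are set to arbitrary illustrative values.

```r
beta <- numeric(p); beta[relevant] <- c(3, -3, 3, -3, 3)
beta0 <- 0; beta_L <- 1            # intercept and direct confounder effect (illustrative)
delta <- -0.9                      # effect modification (delta = 1 means none)
sigma <- sqrt(1 / 16)

# Center and scale each feature within each confounder stratum
A_tilde <- A
for (l in unique(L)) A_tilde[L == l, ] <- scale(A[L == l, ])
A_tilde[!is.finite(A_tilde)] <- 0  # guard: features constant within a stratum scale to NaN

xb0 <- drop(A_tilde %*% beta)            # linear predictor when L = 0
xb1 <- drop(A_tilde %*% (delta * beta))  # linear predictor when L = 1
mu  <- ifelse(L == 0, beta0 + xb0, beta0 + beta_L + xb1)
y   <- rnorm(n, mean = mu, sd = sigma)
```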
To summarize our simulation settings, we considered two dimensions of microbiome features: \(p=2000\) and \(p=50\); two sample sizes: \(n=50\) and \(n=100\); two distributions of microbiome count data: negative binomial and Poisson; effect modification: none or strong; and three different proportions of confounded relevant features: 100%, 60%, and 0%. Hence, in total, we examined 48 different simulation settings. For each simulation setting, a total of 100 data sets were simulated.
Screening, penalization, and comparison models
We denote our proposed approach of estimation conditional on each stratum followed by standardization as "Conditional Std". We investigate the performance of variable selection using the LASSO and SCAD penalties for \(p_\lambda\), both with and without screening, as well as the proposed inference procedure using the debiased LASSO with iterative SIS described in the "Post-selection inference and error rate control" section.
We compare our approach with existing penalized regression models applied to the pooled data set, as opposed to conditionally on each stratum. A total of six comparison models are constructed based on three inclusion strategies for the confounder effect \(\beta _\ell\) of Eq. (6) and two possibilities for modeling effect modification. The confounder effect is either subject to screening and variable selection ("Select L"), forced to be included without penalization ("Require L"), or removed from the model entirely ("Ignore L"). We either model each microbiome feature effect as common across all confounder strata (corresponding to models with the aforementioned names) or allow for effect modification through stratum-specific microbiome feature effects, denoted with the suffix "EffMod." For each of the six models under comparison, we also investigate the performance of variable selection using the LASSO and SCAD penalties for \(p_\lambda\), both with and without screening, as well as the proposed inference procedure using the debiased LASSO with iterative SIS.
Table 1 presents the objective function for our proposed "Conditional Std" approach and the other six models under comparison. For the proposed approach "Conditional Std," screening is based on iterative SIS recommended defaults applied to each stratum, whereas for all other approaches it is applied to the entire data set to correspond with the assumed model, resulting in different maximum model sizes shown in Table 2. The variables considered in the iterative SIS procedure for each model detailed in Table 2 correspond to those penalized in the objective function in Table 1. For "Conditional Std" and models allowing effect modification (suffix "EffMod"), the population estimates are computed according to Eq. (4). These models center and scale each microbiome feature within each stratum, denoted by \({\tilde{A}}_{ij}\). For models that do not allow for effect modification, the microbiome features are centered and scaled to have mean zero and variance one across all observations, regardless of stratum, denoted by \({\dot{A}}_{ij}\).
Simulation performance was summarized across all 100 simulated data sets for each scenario, model, and variable selection method considered using the true positive rate (TPR) and false positive rate (FPR). Given the selected variables, TPR measures the proportion of relevant features detected, while FPR measures the proportion of irrelevant features declared to be relevant, and these are computed here at the population-level by
$$\begin{aligned} {\text {TPR}}= & {} \frac{\sum _{j=1}^pI({\hat{\beta }}_j \ne 0)I(\beta _j \ne 0)}{\sum _{j=1}^p I(\beta _j \ne 0)}, \end{aligned}$$
$$\begin{aligned} {\text {FPR}}= & {} \frac{\sum _{j=1}^pI({\hat{\beta }}_j \ne 0)I(\beta _j = 0)}{\sum _{j=1}^p I(\beta _j = 0)} \end{aligned}$$
for all methods except the debiased LASSO inference procedure, where \(I({\hat{\beta }}_j \ne 0)\) is replaced with the decision rule induced by the corresponding hypothesis test with FDR control at 0.05. An ideal method would attain (TPR, FPR) values of (1, 0). Additional file 1: Table S1 shows the average TPR and FPR across the 100 simulated data sets for the 12 simulation settings with Poisson distributed features and \(n = 100\). The table lists the results for our proposed "Conditional Std" model and the other six models under comparison across the different variable selection methods. Generally, the proposed "Conditional Std" model performed better than the other models applied to the entire data set across the variable selection methods considered. When effect modification is present, the proposed approach has the highest mean TPR and lowest mean FPR for both the LASSO and SCAD penalties, both with and without screening, often achieving perfect rates on average. For the debiased LASSO applied after iterative SIS with the BH procedure and FDR control set to 0.05 (denoted by "iterSIS-dbLASSO-BH"), the proposed approach has the highest TPR and among the lowest FPR under strong effect modification across variable selection methods. The only exception arises when no effect modification is present, dimensionality is high (\(p = 2000\)), and not all relevant features are confounded.
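For reference, the population-level TPR and FPR defined above reduce to two one-line helpers in R (names are hypothetical):

```r
# Selection accuracy of an estimated population effect vector 'beta_hat'
# against the true coefficient vector 'beta'.
tpr <- function(beta_hat, beta) sum(beta_hat != 0 & beta != 0) / sum(beta != 0)
fpr <- function(beta_hat, beta) sum(beta_hat != 0 & beta == 0) / sum(beta == 0)
```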
For post-selection inference based on the debiased LASSO following screening with iterative SIS, we evaluated the area under the receiver operating characteristic curve (AUC) using the p-values for testing \(H_{0j}: \beta _j = 0\) as the classifiers. AUC aggregates classification performance of TPR versus FPR across different classification thresholds, taking the value 1 for perfect prediction, 0.5 for random guessing, and 0 for always wrong prediction. Box plots of the AUC across 100 data sets for each model are shown in Fig. 1 for 12 simulation settings with \(n = 100\) and Poisson features (results for \(n = 50\) and negative binomial features are presented in Additional file 1: Figs. S1–S3). The proposed approach has near perfect ranking under low dimensionality (\(p = 50\)) for all settings and under high dimensionality (\(p=2000\)) when all relevant features are impacted by the confounder. Similar to the results in Additional file 1: Table S1, the proposed approach performs best out of all models considered except when effect modification is not present and at least some relevant features are not confounded. Among the models that do not use a standardization approach, those that allow for effect modification (labeled with "EffMod") perform better when there is an effect modifier in the data generation, whereas those that do not allow for effect modification perform better when there is no effect modifier in the data generation. For both cases, the proposed standardization approach is superior or competitive.
To evaluate false discovery rate (FDR) control for varying thresholds \(\alpha = (0.01, 0.02, \dots , 0.10)\) commonly used in practice, we computed the false discovery proportion (FDP) at a given \(\alpha\) value for debiased LASSO inference according to
$$\begin{aligned} {\text {FDP}}(\alpha ) = \frac{\sum _{j=1}^pI(q_{j}< \alpha )I(\beta _j = 0)}{\sum _{j=1}^p I(q_{j} < \alpha )}, \end{aligned}$$
where \(q_j\) is the BH-adjusted p-value (or q-value) for feature j. A well performing model will have \({\text {FDP}}(\alpha ) \le \alpha\). For \(n = 100\) and Poisson features, Fig. 2 shows that the proposed "Conditional Std" model appropriately controls FDR under low dimensionality (\(p = 50\)). For high dimensionality (\(p = 2000\)), the proposed approach does not control FDR when at least some relevant features are not confounded, though the observed mean FDP does not exceed the nominal level greatly when compared to other competing models applied to the pooled data. The FDR control for the other six models under comparison is either very conservative or highly liberal. Similar results were seen for \(n = 50\) and negative binomial features, though lack of FDR control was more common for the \(n = 50\) case (Additional file 1: Figs. S4–S6).
Real data analysis
We conducted a microbiome study to investigate the effect of the rhizosphere microbiome of the cereal crop sorghum (Sorghum bicolor) on root production of the phenotype 12-oxo-phytodienoic acid (OPDA). Sorghum root production of OPDA is of primary interest because OPDA has independent plant defense functions and is an important precursor of jasmonic acid, which functions in plant immune responses induced by beneficial bacteria [57, 60]. The study analyzed here is part of an experiment described by Sheflin et al. [48]; we analyze the subset of \(n = 34\) samples collected in September across high and low nitrogen fertilizer conditions. Rhizosphere microbiome data were collected using 16S amplicon sequencing and clustered at 97% sequence identity. The resulting 5584 OTUs were rarefied to 20,000 reads per observation, and low abundance OTUs (fewer than 4 non-zero observations out of 34) were excluded [67], leaving a total of 4244 OTUs.
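A short R sketch of the low-abundance filtering step described above (the rarefaction itself would typically be carried out with a dedicated package such as vegan or phyloseq; object names are hypothetical):

```r
# 'otu' is the samples x OTU count matrix after rarefying to 20,000 reads per sample.
# Keep OTUs that are non-zero in at least 4 of the 34 samples.
keep <- colSums(otu > 0) >= 4
otu_filtered <- otu[, keep]
```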
Pairwise Spearman's correlations for the feature counts are shown in Figure S7 of Additional file 1 for the 150 largest marginal correlations (pooling samples over nitrogen fertilizer levels), which contrast with the small correlations within each nitrogen stratum. Using our proposed procedure of testing the standardized feature effects with the debiased LASSO following iterative SIS applied to each nitrogen level, a total of four microbiome features with an effect on root OPDA production were identified while FDR was controlled at 0.05 with BH adjustment (Table 3). Nitrogen stratum-specific residuals did not indicate any violation of the assumptions of constant variance or normality (Figs. S8–S9 of Additional file 1).
Each microbiome feature effect identified at the study population-level was only identified in one nitrogen condition, though abundance did not differ greatly between the two nitrogen strata (Table 3). Specifically, only one feature was estimated to be more abundant under low nitrogen, and this feature was classified as belonging to the Rhodospirillaceae family (nonsulfur photosynthetic bacteria), of which nearly all members have the capacity to fix molecular nitrogen [34]. Various strains of Rhodospirillaceae have shown potential to promote plant growth in the grass species Brachiaria brizantha [51]. Consequently, the increased levels of root OPDA content may have been the result of bacterial synthesis [15]. While less is known about the three additional significant features, the overall findings are in alignment with biological understanding of potential plant-microbe interactions.
Fig. 1 Simulation results: box plots of the area under the curve (AUC) from 100 simulation replications for \(n = 100\) and Poisson features, using p-values based on the debiased LASSO estimate following iterative sure independence screening (iterative SIS)
Fig. 2 Simulation results: mean estimated false discovery proportion (FDP) for \(n = 100\) and Poisson features at varying nominal false discovery rate (FDR) values, using Benjamini–Hochberg adjusted p-values based on the debiased LASSO estimate following iterative sure independence screening (iterative SIS). The \(y = x\) line is shown in black; any values above this line indicate lack of FDR control
Table 1 Models considered in simulation studies using penalized regression (with penalty \(p_{\lambda }\)) for a binary confounder \(L_i \in \{0,1\}\)
Table 2 Variable screening and selection for models considered in simulation studies for a binary confounder \(L_i \in \{0,1\}\)
Table 3 Sorghum study analysis results: features with a significant effect on sorghum root OPDA production in the study population with FDR control at the 0.05 level using the Benjamini–Hochberg (BH) procedure on the debiased LASSO estimate following iterative sure independence screening (iterative SIS)
We have proposed and evaluated methodology for causal inference for individual features in high-dimensional microbiome data using standardization. These techniques are typically employed in epidemiology and use the potential-outcomes framework, in contrast to graphical models, which are a more common approach for high-dimensional causal inference but usually require Gaussian assumptions for inference that are often violated by microbiome data [38]. Instead, our approach conditions on the confounder and shows favorable results for Poisson and negative binomial microbiome features. Compared to estimation methods applied to the entire data set, the proposed standardization approach typically demonstrated superior recovery of relevant microbiome effects across multiple variable screening and selection procedures.
Association and causation are not equivalent even for a one-dimensional treatment or exposure, and the challenges of causal analysis are exacerbated for high-dimensional exposures. Caution must be taken in interpreting causal effects when the assumptions needed for causal inference, such as no unmeasured confounding or consistency, cannot be verified. Consequently, any microbiome features identified should be either validated in experimental studies if possible, or more closely scrutinized according to guidelines for evidence of causation. However, even if conditions for causal inference do not hold, our method may provide better recovery of associational microbiome effects as compared to models applied to the pooled data, when there are features impacted by the confounder.
Some have advocated that microbiome data must be treated as compositional [17]. Due to the sum to library size constraint, which is not removed by rarefying but rather made constant across all samples, microbiome data technically lie in a simplex space [1]. One goal of our funded project is to identify microbial features that can be intervened upon to produce a favorable outcome. Hence we analyze count data, not compositional data where it is impossible to alter a feature without changing at least one other so as to retain the same total sum across features. When microbiome features are high dimensional, and in particular there is no dominating feature, the impact of this issue may be minimal. Moreover, microbiome data often exhibit many zeros and the popular centered log-ratio approach for compositional data applies log transformation after adding an arbitrary pseudocount, the choice of which may impact the analysis [10]. In cases when compositional analysis is preferred, such as when taxa are summarized at the level of genus or higher typically leading to \(p < n\) with a lower prevalence of zeros, our strategy of standardization could be altered in a straightforward way by replacing penalized least squares with a regularized method for compositional covariates [31, 49].
Depending on the underlying biology, the taxonomic structure or phylogeny may be important in the relationship between the microbiome and outcome. If so, higher power may be achieved by using a different penalty that leverages such information. The group LASSO selects groups of features [68] and modifications have been developed for microbiome applications incorporating multiple levels of taxonomic hierarchy [16]. Other options include a phylogeny-based penalty that penalizes coefficients along a supplied phylogenetic tree [66] or a kernel-based penalty incorporating a desired ecological distance [39]. To increase power and address the challenge of FDR control, the hierarchical taxonomic structure could be utilized in a multi-stage FDR controlling approach [23]. Applications of these methods require the taxa assignments and phylogenetic tree, which may be incompletely elucidated for novel microbial species, or measured with error [18, 32].
While simulation studies showed our proposed approach had higher power and better control of FDR at the nominal level compared to other approaches for most scenarios considered, use of the BH procedure with the debiased LASSO and the iterative SIS procedure failed to control FDR for some cases under high dimensionality. Recently, Javanmard and Javadi [25] showed that the BH procedure may fail to control FDR using the debiased LASSO due to correlation between estimates, but we found little indication of highly correlated estimates in our simulation studies. Correspondingly, applying the Benjamini–Yekutieli adjustment [5] did not result in better FDR control. Instead, it appears our sample sizes were too small to achieve a high enough probability of the sure screening property, leading to relevant features being screened out by the iterative SIS procedure. While additional methodological advancement is needed for valid inference following both variable screening and selection when sample sizes are small, our method performed competitively in recovering relevant features.
We have addressed the problem of selecting microbiome features relevant to an outcome of interest under confounding by a categorical variable. Our results indicate that standardization enables more accurate identification of individual microbiome features with an effect on the outcome of interest compared to other variable selection and estimation procedures.
R and R markdown code for simulation studies and data analysis are provided in Additional file 2: R Code. Sequence data can be found in the NCBI SRA submission library under the following accession numbers: sequencing project IDs #1095844, #1095845, #1095846; SRA identifier #SRP165130.
Aitchison J. The statistical analysis of compositional data. J R Stat Soc Ser B (Methodol). 1982;44:139–77.
Baksi KD, Kuntal BK, Mande SS. TIME: a web application for obtaining insights into microbial ecology using longitudinal microbiome data. Front Microbiol. 2018;9:36.
Banerjee S, Schlaeppi K, van der Heijden MGA. Keystone taxa as drivers of microbiome structure and functioning. Nat Rev Microbiol. 2018;16(9):567–76.
Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B (Methodol). 1995;57(1):289–300.
Benjamini Y, Yekutieli D. The control of the false discovery rate in multiple testing under dependency. Ann Stat. 2001;29(4):1165–88.
Boers SA, Jansen R, Hays JP. Suddenly everyone is a microbiota specialist. Clin Microbiol Infect. 2016;22(7):581–2.
Bühlmann P, Kalisch M, Meier L. High-dimensional statistics with a view toward applications in biology. Annu Rev Stat Appl. 2014;1(1):255–78.
Callahan BJ, McMurdie PJ, Holmes SP. Exact sequence variants should replace operational taxonomic units in marker-gene data analysis. ISME J. 2017;11:2639–43.
Camacho-Ortiz A, Gutiérrez-Delgado EM, Garcia-Mazcorro JF, Mendoza-Olazarán S, Martínez-Meléndez A, Palau-Davila L, Baines SD, Maldonado-Garza H, Garza-González E. Randomized clinical trial to evaluate the effect of fecal microbiota transplant for initial Clostridium difficile infection in intestinal microbiome. PLoS ONE. 2017;12:e0189768.
Costea PI, Zeller G, Sunagawa S, Bork P. A fair comparison. Nat Methods. 2014;11(4):359.
Dezeure R, Bühlmann P, Zhang C-H. High-dimensional simultaneous inference with the bootstrap. TEST. 2017;26(4):685–719.
Dezeure R, Bühlmann P, Meier L, Meinshausen N. High-dimensional inference: confidence intervals, p-values and R-software hdi. Stat Sci. 2015;30:533–58.
Fan J, Li R. Variable selection via nonconcave penalized likelihood and its oracle properties. J Am Stat Assoc. 2001;96(456):1348–60.
Fan J, Lv J. Sure independence screening for ultrahigh dimensional feature space. J R Stat Soc Ser (Stat Methodol). 2008;70(5):849–911.
Forchetti G, Masciarelli O, Alemano S, Alvarez D, Abdala G. Endophytic bacteria in sunflower (Helianthus annuus l.): isolation, characterization, and production of jasmonates and abscisic acid in culture medium. Appl Microbiol Biotechnol. 2007;76(5):1145–52.
Garcia TP, Müller S, Carroll RJ, Walzem RL. Identification of important regressor groups, subgroups and individuals via regularization methods: application to gut microbiome data. Bioinformatics. 2014;30(6):831–7.
Gloor GB, Macklaim JM, Pawlowsky-Glahn V, Egozcue JJ. Microbiome datasets are compositional: And this is not optional. Front Microbiol. 2017;8:2224.
Golob JL, Margolis E, Hoffman NG, Fredricks DN. Evaluating the accuracy of amplicon-based microbiome computational pipelines on simulated human gut microbial communities. BMC Bioinform. 2017;18(1):283.
Granger CWJ. Investigating causal relations by econometric models and cross-spectral methods. Econometrica. 1969;37(3):424–38.
Hernán MA, Robins JM. Causal inference. Boca Raton: Chapman & Hall/CRC; 2019.
Hernán MA, Brumback B, Robins JM. Marginal structural models to estimate the joint causal effect of nonrandomized treatments. J Am Stat Assoc. 2001;96(454):440–8.
Holland PW. Causal inference, path analysis, and recursive structural equations models. Sociol Methodol. 1988;1988:449–84.
Hu J, Koh H, He L, Liu M, Blaser MJ, Li H. A two-stage microbial association mapping framework with advanced FDR control. Microbiome. 2018;6(1):131.
Imai K, Van Dyk DA. Causal inference with general treatment regimes: generalizing the propensity score. J Am Stat Assoc. 2004;99(467):854–66.
Javanmard A, Javadi H. False discovery rate control via debiased lasso. Electron J Stat. 2019;13(1):1212–53.
Keiding N, Clayton D. Standardization and control for confounding in observational studies: a historical perspective. Stat Sci. 2014;29(4):529–58.
Knight K, Fu W. Asymptotics for lasso-type estimators. Ann Stat. 2000;28(5):1356–78.
Knight R, Vrbanac A, Taylor BC, Aksenov A, Callewaert C, Debelius J, Gonzalez A, Kosciolek T, McCall L-I, McDonald D, et al. Best practices for analysing microbiomes. Nat Rev Microbiol. 2018;16:410–22.
Lederberg J, McCray AT. 'Ome Sweet 'Omics: a genealogical treasury of words. Scientist. 2001;15(7):8.
Leng C, Lin Y, Wahba G. A note on the lasso and related procedures in model selection. Stat Sin. 2006;16:1273–84.
Lin W, Shi P, Feng R, Li H. Variable selection in regression with compositional covariates. Biometrika. 2014;101(4):785–97.
Lindgreen S, Adair KL, Gardner PP. An evaluation of the accuracy and speed of metagenome analysis tools. Sci Rep. 2016;6:19233.
Liu L, Li Y, Li S, Hu N, He Y, Pong R, Lin D, Lu L, Law M. Comparison of next-generation sequencing systems. J Biomed Biotechnol. 2012.
Madigan M, Cox SS, Stegeman RA. Nitrogen fixation and nitrogenase activities in members of the family Rhodospirillaceae. J Bacteriol. 1984;157(1):73–8.
McMurdie PJ, Holmes S. Waste not, want not: why rarefying microbiome data is inadmissible. PLoS Comput Biol. 2014;10:1–12.
Nandy P, Maathuis MH, Richardson TS. Estimating the effect of joint interventions from observational data in sparse high-dimensional settings. Ann Stat. 2017;45(2):647–74.
Neyman J. On the application of probability theory to agricultural experiments. Essay on principles. Section 9. Stat Sci. 1923;5(4):465–80.
Pearl J. Causality models: reasoning and inference. 2nd ed. Cambridge: Cambridge University Press; 2009.
Randolph TW, Zhao S, Copeland W, Hullar M, Shojaie A. Kernel-penalized regression for analysis of microbiome data. Ann Appl Stat. 2018;12(1):540–66.
Riesenfeld CS, Schloss PD, Handelsman J. Metagenomics: genomic analysis of microbial communities. Annu Rev Genet. 2004;38(1):525–52.
Rogers GB, Stressmann FA, Koller G, Daniels T, Carroll MP, Bruce KD. Assessing the diagnostic importance of nonviable bacterial cells in respiratory infections. Diagn Microbiol Infect Dis. 2008;62(2):133–41.
Rubin DB. Estimating causal effects of treatments in randomized and nonrandomized studies. J Educ Psychol. 1974;66(5):688–701.
Saldana D, Feng Y. SIS: an R package for sure independence screening in ultrahigh-dimensional statistical models. J Stat Softw. 2018;83(2):1–25.
Schloss PD, Westcott SL. Assessing and improving methods used in operational taxonomic unit-based approaches for 16S rRNA gene sequence analysis. Appl Environ Microbiol. 2011;77(10):3219–26.
Schneeweiss S, Rassen JA, Glynn RJ, Avorn J, Mogun H, Brookhart MA. High-dimensional propensity score adjustment in studies of treatment effects using health care claims data. Epidemiology (Cambridge, Mass). 2009;20(4):512.
Schwarz G. Estimating the dimension of a model. Ann Stat. 1978;6(2):461–4.
Sharpton TJ. An introduction to the analysis of shotgun metagenomic data. Front Plant Sci. 2014;5:209.
Sheflin AM, Chiniquy D, Yuan C, Goren E, Kumar I, Braud M, Brutnell T, Eveland AL, Tringe S, Liu P, Kresovich S, Marsh EL, Schachtman DP, Prenni JE. Metabolomics of sorghum roots during nitrogen stress reveals compromised metabolic capacity for salicylic acid biosynthesis. Plant Direct. 2019;3(3):e00122.
Shi P, Zhang A, Li H. Regression analysis for microbiome compositional data. Ann Appl Stat. 2016;10(2):1019–40.
Siddique AA, Schnitzer ME, Bahamyirou A, Wang G, Holtz TH, Migliori GB, Sotgiu G, Gandhi NR, Vargas MH, Menzies D, et al. Causal inference with multiple concurrent medications: a comparison of methods and an application in multidrug-resistant tuberculosis. Stat Methods Med Res. 2018;28:3534–49.
Silva MCP, Figueiredo AF, Andreote FD, Cardoso EJBN. Plant growth promoting bacteria in Brachiaria brizantha. World J Microbiol Biotechnol. 2013;29(1):163–71.
Sohn MB, Li H, et al. Compositional mediation analysis for microbiome studies. Ann Appl Stat. 2019;13(1):661–81.
Stewart EJ. Growing unculturable bacteria. J Bacteriol. 2012;194:4151–60.
Taubman SL, Robins JM, Mittleman MA, Hernán MA. Intervening on risk factors for coronary heart disease: an application of the parametric g-formula. Int J Epidemiol. 2009;38(6):1599–611.
Tibshirani R. Regression shrinkage and selection via the lasso. J R Stat Soc Ser B (Methodol). 1996;58(1):267–88.
van de Geer S, Bühlmann P, Ritov Y, Dezeure R. On asymptotically optimal confidence regions and tests for high-dimensional models. Ann Stat. 2014;42(3):1166–202.
Van der Ent S, Van Wees SC, Pieterse CM. Jasmonate signaling in plant interactions with resistance-inducing beneficial microbes. Phytochemistry. 2009;70(13–14):1581–8.
Wang C, Hu J, Blaser MJ, Li H. Estimating and testing the microbial causal mediation effect with high-dimensional and compositional microbiome data. Bioinformatics. 2019;36:347–55.
Wang H, Li R, Tsai C-L. Tuning parameter selectors for the smoothly clipped absolute deviation method. Biometrika. 2007;94(3):553–68.
Wasternack C. Action of jasmonates in plant stress responses and development-applied aspects. Biotechnol Adv. 2014;32(1):31–9.
Weiss S, Xu ZZ, Peddada S, Amir A, Bittinger K, Gonzalez A, Lozupone C, Zaneveld JR, Vázquez-Baeza Y, Birmingham A, Hyde ER, Knight R. Normalization and microbial differential abundance strategies depend upon data characteristics. Microbiome. 2017;5(1):27.
Westcott SL, Schloss PD. De novo clustering methods outperform reference-based methods for assigning 16S rRNA gene sequences to operational taxonomic units. PeerJ. 2015;3:e1487.
Westreich D, Cole SR. Invited commentary: positivity in practice. Am J Epidemiol. 2010;171(6):674–7.
Wilson A, Zigler CM, Patel CJ, Dominici F. Model-averaged confounder adjustment for estimating multivariate exposure effects with linear regression. Biometrics. 2018;74(3):1034–44.
Xia Y, Sun J. Hypothesis testing and statistical analysis of microbiome. Genes Dis. 2017;4(3):138–48.
Xian J, Chen L, Yu Y, Zhang X, Chen J. A phylogeny-regularized sparse regression model for predictive modeling of microbial community data. Front Microbiol. 2018;9:3112.
Xiao J, Chen L, Johnson S, Yu Y, Zhang X, Chen J. Predictive modeling of microbiome data using a phylogeny-regularized generalized linear mixed model. Front Microbiol. 2018;9:1391.
Yuan M, Lin Y. Model selection and estimation in regression with grouped variables. J R Stat Soc Ser B (Stat Methodol). 2006;68(1):49–67.
Zhang C-H, et al. Nearly unbiased variable selection under minimax concave penalty. Ann Stat. 2010;38(2):894–942.
Zhang C-H, Zhang SS. Confidence intervals for low dimensional parameters in high dimensional linear models. J R Stat Soc Ser B (Stat Methodol). 2014;76(1):217–42.
Zhang J, Wei Z, Chen J. A distance-based approach for testing the mediation effect of the human microbiome. Bioinformatics. 2018;34(11):1875–83.
The authors would like to thank Chaohui Yuan for assistance with R code.
This research was supported by the Office of Science (BER), US Department of Energy (DE-SC0014395).
Department of Statistics, Iowa State University, 2438 Osborn Dr, Ames, IA, 50011, USA
Emily Goren, Chong Wang, Zhulin He & Peng Liu
Department of Veterinary Diagnostic and Production Animal Medicine, Iowa State University, 2203 Lloyd Veterinary Medical Center, Ames, IA, 50011, USA
Chong Wang
Department of Horticulture and Landscape Architecture, Colorado State University, 301 University Ave, Fort Collins, CO, 80523, USA
Amy M. Sheflin & Jessica E. Prenni
Department of Energy, Joint Genome Institute, 2800 Mitchell Dr, Walnut Creek, CA, 94598, USA
Dawn Chiniquy & Susannah Tringe
Department of Agronomy and Horticulture, University of Nebraska, 1825 N 38th St, Lincoln, NE, 68583, USA
Daniel P. Schachtman
EG, CW, ZH and PL contributed to the methods development and simulation study design. DPS performed field experiments. AMS and DC performed laboratory experiments. EG performed simulation studies and analyzed results. EG wrote the paper with contributions from CW, ZH, AMS, JEP, ST, DPS, and PL. All authors read and approved the final manuscript.
Correspondence to Peng Liu.
Additional file 1. Additional simulation results and data analysis.
Additional file 2. R and R markdown code for all simulation studies and data analysis.
Goren, E., Wang, C., He, Z. et al. Feature selection and causal analysis for microbiome studies in the presence of confounding using standardization. BMC Bioinformatics 22, 362 (2021). https://doi.org/10.1186/s12859-021-04232-2
High-dimensional feature selection
Microbiome analysis
Analysis and modelling of complex systems
Networks & Heterogeneous Media
March 2018, Volume 13, Issue 1
Derivation of a rod theory from lattice systems with interactions beyond nearest neighbours
Roberto Alicandro, Giuliano Lazzaroni and Mariapia Palombaro
2018, 13(1): 1-26. doi: 10.3934/nhm.2018001
We study continuum limits of discrete models for (possibly heterogeneous) nanowires. The lattice energy includes at least nearest and next-to-nearest neighbour interactions: the latter have the role of penalising changes of orientation. In the heterogeneous case, we obtain an estimate on the minimal energy spent to match different equilibria. This gives insight into the nucleation of dislocations in epitaxially grown heterostructured nanowires.
Stochastic homogenization of maximal monotone relations and applications
Luca Lussardi, Stefano Marini and Marco Veneroni
2018, 13(1): 27-45. doi: 10.3934/nhm.2018002
We study the homogenization of a stationary random maximal monotone operator on a probability space equipped with an ergodic dynamical system. The proof relies on Fitzpatrick's variational formulation of monotone relations, on Visintin's scale integration/disintegration theory and on Tartar-Murat's compensated compactness. We provide applications to systems of PDEs with random coefficients arising in electromagnetism and in nonlinear elasticity.
Stationary solutions and asymptotic behaviour for a chemotaxis hyperbolic model on a network
Francesca R. Guarguaglini
This paper approaches the question of existence and uniqueness of stationary solutions to a semilinear hyperbolic-parabolic system and the study of the asymptotic behaviour of global solutions. The system is a model for some biological phenomena evolving on a network composed by a finite number of nodes and oriented arcs. The transmission conditions for the unknowns, set at each inner node, are crucial features of the model.
Francesca R. Guarguaglini. Stationary solutions and asymptotic behaviour for a chemotaxis hyperbolic model on a network. Networks & Heterogeneous Media, 2018, 13(1): 47-67. doi: 10.3934/nhm.2018003.
On a vorticity-based formulation for reaction-diffusion-Brinkman systems
Verónica Anaya, Mostafa Bendahmane, David Mora and Ricardo Ruiz Baier
2018, 13(1): 69-94. doi: 10.3934/nhm.2018004
We are interested in modelling the interaction of bacteria and certain nutrient concentration within a porous medium admitting viscous flow. The governing equations in primal-mixed form consist of an advection-reaction-diffusion system representing the bacteria-chemical mass exchange, coupled to the Brinkman problem written in terms of fluid vorticity, velocity and pressure, and describing the flow patterns driven by an external source depending on the local distribution of the chemical species. A priori stability bounds are derived for the uncoupled problems, and the solvability of the full system is analysed using a fixed-point approach. We introduce a primal-mixed finite element method to numerically solve the model equations, employing a primal scheme with piecewise linear approximation of the reaction-diffusion unknowns, while the discrete flow problem uses a mixed approach based on Raviart-Thomas elements for velocity, Nédélec elements for vorticity, and piecewise constant pressure approximations. In particular, this choice produces exactly divergence-free velocity approximations. We establish existence of discrete solutions and show their convergence to the weak solution of the continuous coupled problem. Finally, we report several numerical experiments illustrating the behaviour of the proposed scheme.
On Lennard-Jones systems with finite range interactions and their asymptotic analysis
Mathias Schäffner and Anja Schlömerkemper
2018, 13(1): 95-118. doi: 10.3934/nhm.2018005
The aim of this work is to provide further insight into the qualitative behavior of mechanical systems that are well described by Lennard-Jones type interactions on an atomistic scale. By means of $Γ$-convergence techniques, we study the continuum limit of one-dimensional chains of atoms with finite range interactions of Lennard-Jones type, including the classical Lennard-Jones potentials. So far, explicit formula for the continuum limit were only available for the case of nearest and next-to-nearest neighbour interactions. In this work, we provide an explicit expression for the continuum limit in the case of finite range interactions. The obtained homogenization formula is given by the convexification of a Cauchy-Born energy density.
Furthermore, we study rescaled energies in which bulk and surface contributions scale in the same way. The related discrete-to-continuum limit yields a rigorous derivation of a one-dimensional version of Griffith's fracture energy and thus generalizes earlier derivations for nearest and next-to-nearest neighbours to the case of finite range interactions.
A crucial ingredient to our proofs is a novel decomposition of the energy that allows for refined estimates.
Fisher-KPP equations and applications to a model in medical sciences
Benjamin Contri
2018, 13(1): 119-153. doi: 10.3934/nhm.2018006
This paper is devoted to a class of reaction-diffusion equations with nonlinearities depending on time modeling a cancerous process with chemotherapy. We begin by considering nonlinearities periodic in time. For these functions, we investigate equilibrium states, and we deduce the large time behavior of the solutions, spreading properties and the existence of pulsating fronts. Next, we study nonlinearities asymptotically periodic in time with perturbation. We show that the large time behavior and the spreading properties can still be determined in this case.
Green's function for elliptic systems: Moment bounds
Peter Bella and Arianna Giunti
We study estimates of the Green's function in $\mathbb{R}^d$ with $d ≥ 2$, for the linear second order elliptic equation in divergence form with variable uniformly elliptic coefficients. In the case $d ≥ 3$, we obtain estimates on the Green's function, its gradient, and the second mixed derivatives which scale optimally in space, in terms of the "minimal radius" $r_*$ introduced in [Gloria, Neukamm, and Otto: A regularity theory for random elliptic operators; ArXiv e-prints (2014)]. As an application, our result implies optimal stochastic Gaussian bounds on the Green's function and its derivatives in the realm of homogenization of equations with random coefficient fields with finite range of dependence. In two dimensions, where in general the Green's function does not exist, we construct its gradient and show the corresponding estimates on the gradient and mixed second derivatives. Since we do not use any scalar methods in the argument, the result holds in the case of uniformly elliptic systems as well.
Peter Bella, Arianna Giunti. Green's function for elliptic systems: Moment bounds. Networks & Heterogeneous Media, 2018, 13(1): 155-176. doi: 10.3934/nhm.2018007.
Entropy-preserving coupling conditions for one-dimensional Euler systems at junctions
Jens Lang and Pascal Mindt
This paper is concerned with a set of novel coupling conditions for the 3× 3 one-dimensional Euler system with source terms at a junction of pipes with possibly different cross-sectional areas. Beside conservation of mass, we require the equality of the total enthalpy at the junction and that the specific entropy for pipes with outgoing flow equals the convex combination of all entropies that belong to pipes with incoming flow. Previously used coupling conditions include equality of pressure or dynamic pressure. They are restricted to the special case of a junction having only one pipe with outgoing flow direction. Recently, Reigstad [SIAM J. Appl. Math., 75:679-702,2015] showed that such pressure-based coupling conditions can produce non-physical solutions for isothermal flows through the production of mechanical energy. Our new coupling conditions ensure energy as well as entropy conservation and also apply to junctions connecting an arbitrary number of pipes with flexible flow directions. We prove the existence and uniqueness of solutions to the generalised Riemann problem at a junction in the neighbourhood of constant stationary states which belong to the subsonic region. This provides the basis for the well-posedness of the homogeneous and inhomogeneous Cauchy problems for initial data with sufficiently small total variation.
Jens Lang, Pascal Mindt. Entropy-preserving coupling conditions for one-dimensional Euler systems at junctions. Networks & Heterogeneous Media, 2018, 13(1): 177-190. doi: 10.3934/nhm.2018008.
Discrete & Continuous Dynamical Systems - B
March 2015, Volume 20, Issue 2
An immersed interface method for Pennes bioheat transfer equation
Champike Attanayake and So-Hsiang Chou
2015, 20(2): 323-337. doi: 10.3934/dcdsb.2015.20.323
We consider an immersed finite element method for solving one dimensional Pennes bioheat transfer equation with discontinuous coefficients and nonhomogenous flux jump condition. Convergence properties of the semidiscrete and fully discrete schemes are investigated in the $L^{2}$ and energy norms. By using the computed solution from the immerse finite element method, an inexpensive and effective flux recovery technique is employed to approximate flux over the whole domain. Optimal order convergence is proved for the immersed finite element approximation and its flux. Results of the simulation confirm the convergence analysis.
On the Boltzmann equation for charged particle beams under the effect of strong magnetic fields
Mihai Bostan
The subject matter of this paper concerns the paraxial approximation for the transport of charged particles. We focus on the magnetic confinement properties of charged particle beams. The collisions between particles are taken into account through the Boltzmann kernel. We derive the magnetic high field limit and we emphasize the main properties of the averaged Boltzmann collision kernel, together with its equilibria.
Mihai Bostan. On the Boltzmann equation for charged particle beams under the effect of strong magnetic fields. Discrete & Continuous Dynamical Systems - B, 2015, 20(2): 339-371. doi: 10.3934/dcdsb.2015.20.339.
Chaos control in a pendulum system with excitations
Xianwei Chen, Zhujun Jing and Xiangling Fu
This paper is devoted to investigate the problem of controlling chaos for a pendulum system with parametric and external excitations. By using Melnikov methods, the criteria of controlling chaos are obtained. Numerical simulations are given to illustrate the effect of the chaos control for this system, suppression of homoclinic chaos is more effective than suppression of heteroclinic chaos, and the chaotic motions can be suppressed to period-motions by adjusting parameters of chaos-suppressing excitation. Finally, we calculate the maximum Lyapunov exponents (LE) in parameter-plane and observe the frequency of chaos-suppressing excitation also play an important role in the process of chaos control.
Xianwei Chen, Zhujun Jing, Xiangling Fu. Chaos control in a pendulum system with excitations. Discrete & Continuous Dynamical Systems - B, 2015, 20(2): 373-383. doi: 10.3934/dcdsb.2015.20.373.
Asymptotic behavior for a reaction-diffusion population model with delay
Keng Deng and Yixiang Wu
In this paper, we study a reaction-diffusion population model with time delay. We establish a comparison principle for coupled upper/lower solutions and prove the existence/uniqueness result for the model. We then show the global asymptotic behavior of the model.
Keng Deng, Yixiang Wu. Asymptotic behavior for a reaction-diffusion population model with delay. Discrete & Continuous Dynamical Systems - B, 2015, 20(2): 385-395. doi: 10.3934/dcdsb.2015.20.385.
A phase field $\alpha$-Navier-Stokes vesicle-fluid interaction model: Existence and uniqueness of solutions
Ariane Piovezan Entringer and José Luiz Boldrini
In this work we analyze a system of nonlinear evolution partial differential equations modeling the fluid-structure interaction associated to the dynamics of an elastic vesicle immersed in a moving incompressible viscous fluid. This system of equations couples an equation for a phase field variable, used to determine the position of vesicle membrane deformed by the action of the fluid, to the $\alpha$-Navier- Stokes equations with an extra nonlinear interaction term. We prove global in time existence and uniqueness of solutions for this system in suitable functional spaces even in the three-dimensional case.
Ariane Piovezan Entringer, José Luiz Boldrini. A phase field $\alpha$-Navier-Stokes vesicle-fluid interaction model: Existence and uniqueness of solutions. Discrete & Continuous Dynamical Systems - B, 2015, 20(2): 397-422. doi: 10.3934/dcdsb.2015.20.397.
Dynamical complexity of a prey-predator model with nonlinear predator harvesting
R. P. Gupta, Peeyush Chandra and Malay Banerjee
2015, 20(2): 423-443. doi: 10.3934/dcdsb.2015.20.423
The objective of this paper is to study systematically the dynamical properties of a predator-prey model with nonlinear predator harvesting. We show the different types of system behaviors for various parameter values. The results developed in this article reveal far richer dynamics compared to the model without harvesting. The occurrence of change of structure or bifurcation in a system with parameters is a way to predict global dynamics of the system. It has been observed that the model has at most two interior equilibria and can exhibit numerous kinds of bifurcations (e.g. saddle-node, transcritical, Hopf-Andronov and Bogdanov-Takens bifurcation). The stability (direction) of the Hopf-bifurcating periodic solutions has been obtained by computing the first Lyapunov number. The emergence of homoclinic loop has been shown through numerical simulation when the limit cycle arising though Hopf-bifurcation collides with a saddle point. Numerical simulations using MATLAB are carried out as supporting evidences of our analytical findings. The main purpose of the present work is to offer a complete mathematical analysis for the model.
R. P. Gupta, Peeyush Chandra, Malay Banerjee. Dynamical complexity of a prey-predator model with nonlinear predator harvesting. Discrete & Continuous Dynamical Systems - B, 2015, 20(2): 423-443. doi: 10.3934/dcdsb.2015.20.423.
Efficient resolution of metastatic tumor growth models by reformulation into integral equations
Niklas Hartung
The McKendrick/Von Foerster equation is a transport equation with a non-local boundary condition that appears frequently in structured population models. A variant of this equation with a size structure has been proposed as a metastatic growth model by Iwata et al.
Here we will show how a family of metastatic models with 1D or 2D structuring variables, based on the Iwata model, can be reformulated into an integral equation counterpart, a Volterra equation of convolution type, for which a rich numerical and analytical theory exists. Furthermore, we will point out the potential of this reformulation by addressing questions arising in the modelling of metastatic tumour growth. We will show how this approach makes it possible to reduce the computational cost of the numerical resolution and to prove structural identifiability.
Niklas Hartung. Efficient resolution of metastatic tumor growth models by reformulation into integral equations. Discrete & Continuous Dynamical Systems - B, 2015, 20(2): 445-467. doi: 10.3934/dcdsb.2015.20.445.
Influence of a spatial structure on the long time behavior of a competitive Lotka-Volterra type system
Hélène Leman, Sylvie Méléard and Sepideh Mirrahimi
To describe population dynamics, it is crucial to take evolution mechanisms and spatial motion into account jointly. However, models that include both of these aspects are still not well understood. Can we extend the existing results on type-structured populations to models of populations structured by type and space, considering diffusion and nonlocal competition between individuals?
We study a nonlocal competitive Lotka-Volterra type system, describing a spatially structured population which can be either monomorphic or dimorphic. Considering spatial diffusion, intrinsic death and birth rates, together with death rates due to intraspecific and interspecific competition between the individuals, leading to some integral terms, we analyze the long time behavior of the solutions. We first prove existence of steady states and next determine the long time limits, depending on the competition rates and the principal eigenvalues of some operators, corresponding somehow to the strength of traits. Numerical computations illustrate that the introduction of a new mutant population can lead to the long time evolution of the spatial niche.
Hélène Leman, Sylvie Méléard, Sepideh Mirrahimi. Influence of a spatial structure on the long time behavior of a competitive Lotka-Volterra type system. Discrete & Continuous Dynamical Systems - B, 2015, 20(2): 469-493. doi: 10.3934/dcdsb.2015.20.469.
Gradient superconvergence post-processing of the tensor-product quadratic pentahedral finite element
Jinghong Liu and Yinsuo Jia
In this article, using the well-known Superconvergent Patch Recovery (SPR) method, we present a gradient superconvergence post-processing scheme for the tensor-product quadratic pentahedral finite element approximation to the solution of a general second-order elliptic boundary value problem in three dimensions over fully uniform meshes. The supercloseness property of the gradients between the finite element solution $u_h$ and the tensor-product quadratic interpolation $\Pi u$ is first given. Then we show that the gradient recovered from the finite element solution by using the SPR method is superconvergent to $\nabla u$ at interior vertices.
Jinghong Liu, Yinsuo Jia. Gradient superconvergence post-processing of the tensor-product quadratic pentahedral finite element. Discrete & Continuous Dynamical Systems - B, 2015, 20(2): 495-504. doi: 10.3934/dcdsb.2015.20.495.
Asymptotic spreading of a three dimensional Lotka-Volterra cooperative-competitive system
Yubin Liu and Peixuan Weng
This paper is concerned with a three dimensional diffusive Lotka-Volterra system which is combined with cooperative-competitive interactions between the three species. By using the method of super-sub solutions and comparison principle with cross iteration, some results on the asymptotic spreading speed of the system are established under certain assumptions on the parameters appearing in the system.
Yubin Liu, Peixuan Weng. Asymptotic spreading of a three dimensional Lotka-Volterra cooperative-competitive system. Discrete & Continuous Dynamical Systems - B, 2015, 20(2): 505-518. doi: 10.3934/dcdsb.2015.20.505.
Mode structure of a semiconductor laser with feedback from two external filters
Piotr Słowiński, Bernd Krauskopf and Sebastian Wieczorek
We investigate the solution structure and stability of a semiconductor laser receiving time-delayed and frequency-filtered optical feedback from two external filters. This system is referred to as the 2FOF laser, and it has been used as a pump laser in optical telecommunication and as a light source in sensor applications. The underlying idea is that the two filter loops provide a means of stabilizing and controlling the laser output. The mathematical model takes the form of delay differential equations for the (real-valued) population inversion of the laser active medium and for the (complex-valued) electric fields of the laser cavity and of the two filters. There are two time delays, which are the travel times of the light from the laser to each of the filters and back.
Our analysis of the 2FOF laser focuses on the basic solutions, known as continuous waves or external filtered modes (EFMs), which correspond to laser output with steady amplitude and frequency. Specifically, we consider the EFM-surface in the $(\omega_s,\,N_s,\,dC_p)$-space of steady frequency $\omega_s$, the corresponding steady population inversion $N_s$, and the feedback phase difference $dC_p$. This surface emerges as the natural object for the study of the 2FOF laser because it conveniently catalogues information about available frequency ranges of the EFMs. We identify five transitions, through four different singularities and a cubic tangency, which change the type of the EFM-surface locally and determine the EFM-surface bifurcation diagram in the $(\Delta_1,\,\Delta_2)$-plane. In this way, we classify the possible types of the EFM-surface, which consist of a combination of bands (covering the entire $dC_p$-range) and islands (covering only a finite range of $dC_p$).
We also investigate the stability of the EFMs, where we focus on saddle-node and Hopf bifurcation curves that bound regions of stable EFMs on the EFM-surface. It is shown how these stability regions evolve when parameters are changed along a chosen path in the $(\Delta_1,\,\Delta_2)$-plane. From a viewpoint of practical interests, we find various bands and islands of stability on the EFM-surface that may be accessible experimentally.
Beyond their relevance for the 2FOF laser system, the results presented here also showcase how advanced tools from bifurcation theory and singularity theory can be employed to uncover and represent the complex solution structure of a delay differential equation model that depends on a considerable number of input parameters, including two time delays.
Piotr Słowiński, Bernd Krauskopf, Sebastian Wieczorek. Mode structure of a semiconductor laser with feedback from two external filters. Discrete & Continuous Dynamical Systems - B, 2015, 20(2): 519-586. doi: 10.3934/dcdsb.2015.20.519.
Concentration phenomenon in a nonlocal equation modeling phytoplankton growth
Linfeng Mei, Wei Dong and Changhe Guo
We study a nonlocal reaction-diffusion-advection equation arising from the study of a single phytoplankton species competing for light in a poorly mixed water column. When the diffusion coefficient is very small, the phytoplankton population concentrates around certain zeros of the advection function. The corresponding phytoplankton distribution approaches a $\delta$-like function centered at those zeros.
Linfeng Mei, Wei Dong, Changhe Guo. Concentration phenomenon in a nonlocal equation modeling phytoplankton growth. Discrete & Continuous Dynamical Systems - B, 2015, 20(2): 587-597. doi: 10.3934/dcdsb.2015.20.587.
Exponential decay for linear damped porous thermoelastic systems with second sound
Salim A. Messaoudi and Abdelfeteh Fareh
In this paper, we investigate two problems in porous thermoelasticity where the heat conduction is given by Cattaneo's law and prove exponential decay results in the presence of both macro- and micro-dissipations.
Salim A. Messaoudi, Abdelfeteh Fareh. Exponential decay for linear damped porous thermoelastic systems with second sound. Discrete & Continuous Dynamical Systems - B, 2015, 20(2): 599-612. doi: 10.3934/dcdsb.2015.20.599.
Uniform controllability of semidiscrete approximations for parabolic systems in Banach spaces
Thuy N. T. Nguyen
We address in this work the minimization of the $L^q$-norm $(q>2)$ of semidiscrete controls for parabolic equation. As shown in [15], under the main approximation assumptions that the discretized semigroup is uniformly analytic and that the degree of unboundedness of control operator is lower than 1/2, uniform controllability is achieved in $L^2$ for semidiscrete approximations for the parabolic systems. The main goal of this paper is to overcome the limitation of [15] about the order 1/2 of unboundedness of the control operator. Namely, we show that the uniform controllability property also holds in $L^q \ (q>2)$ even in the case of a degree of unboundedness greater than 1/2. Moreover, a minimization procedure to compute the approximation controls in $L^q\ (q>2)$ is provided. An example of application is implemented for the one-dimensional heat equation with Dirichlet boundary control.
Thuy N. T. Nguyen. Uniform controllability of semidiscrete approximations for parabolic systems in Banach spaces. Discrete & Continuous Dynamical Systems - B, 2015, 20(2): 613-640. doi: 10.3934/dcdsb.2015.20.613.
Global estimates and blow-up criteria for the generalized Hunter-Saxton system
Alejandro Sarria
The generalized, two-component Hunter-Saxton system comprises several well-known models of fluid dynamics and serves as a tool for the study of one-dimensional fluid convection and stretching. In this article a general representation formula for periodic solutions to the system, which is valid for arbitrary values of parameters $(\lambda,\kappa) \in \mathbb{R} \times \mathbb{R}$, is derived. This allows us to examine in great detail qualitative properties of blow-up as well as the asymptotic behaviour of solutions, including convergence to steady states in finite or infinite time.
Alejandro Sarria. Global estimates and blow-up criteria for the generalized Hunter-Saxton system. Discrete & Continuous Dynamical Systems - B, 2015, 20(2): 641-673. doi: 10.3934/dcdsb.2015.20.641.
Boundary layer separation of 2-D incompressible Dirichlet flows
Quan Wang, Hong Luo and Tian Ma
In this paper, the solutions of Navier-Stokes equations governing 2-D incompressible flows with the Dirichlet boundary condition are analyzed. We derive a condition for boundary layer separation, and the condition is determined by initial values and external forces. More importantly, the condition can predict when and where the boundary layer separation occurs directly. In addition, we also get an algebraic equation for the separation point and the separation time. The algebraic equation can tell us where the boundary layer separation does not occur in a short period of time. The main technical tool is the geometric theory of incompressible flows developed by T. Ma and S. Wang in [15].
Quan Wang, Hong Luo, Tian Ma. Boundary layer separation of 2-D incompressible Dirichlet flows. Discrete & Continuous Dynamical Systems - B, 2015, 20(2): 675-682. doi: 10.3934/dcdsb.2015.20.675.
Optimal harvesting for a stochastic N-dimensional competitive Lotka-Volterra model with jumps
Xiaoling Zou and Ke Wang
An optimization problem for a stochastic N-dimensional competitive Lotka-Volterra system is studied in this paper. The considered system is driven by both white noise and jumping noise, and the jumping noise is modeled by a stochastic integral with respect to a Poisson counting measure generated by a Poisson point process. For two types of objective functions, namely, time-averaged yield and sustained yield, the optimal harvesting efforts as well as the corresponding maximum yields are given respectively. Moreover, almost sure equivalence between these two objective functions is proved by an ergodic method. This paper provides a new approach to the stochastic optimal harvesting problem with sustained yield, and this approach can be extended to other stochastic systems.
Xiaoling Zou, Ke Wang. Optimal harvesting for a stochastic N-dimensional competitive Lotka-Volterra model with jumps. Discrete & Continuous Dynamical Systems - B, 2015, 20(2): 683-701. doi: 10.3934/dcdsb.2015.20.683.
|
CommonCrawl
|
Did the COVID-19 Pandemic Have Immediate Impacts on the Socio-Emotional and Digital Skills of Japanese Children?
Yusuke Moriguchi, Chifumi Sakata, Xianwei Meng, Naoya Todo
Background: A novel coronavirus, SARS-CoV-2, has spread widely throughout the world. To reduce the spread of infection, children are prevented from going to school and have fewer opportunities for in-person communication. Although the pandemic has impacted the everyday lives of children, its impact on their development is unknown. This cross-sectional study compared Japanese children's socio-emotional behaviors and skills for operating digital devices before and during the pandemic.
Methods: Parents completed a web-based questionnaire before and during the pandemic for children ages 0-9. Children's socio-emotional development in an everyday context was assessed using the Strengths and Difficulties Questionnaire (SDQ). Children's basic touch interaction skills to operate digital devices and skills to use functions of digital devices were also measured.
Results: The results indicated that during the pandemic, children were more prosocial, experienced more problems in their peer relationships, and had better digital skills, but no differences were found in emotional symptoms, conduct problems, or hyperactivity between before and during the pandemic. The differences in digital skills were explained by the duration of children's media use.
Conclusions: Overall, our results suggest the pandemic may have had an immediate impact on children's socio-emotional behaviors and digital skills.
Trial registration: We pre-registered our hypotheses, method, primary analyses, and sample size (https://osf.io/c7p6b)
socio-emotional skill
digital skill
A novel coronavirus, SARS-CoV-2, has now spread widely throughout the world. Due to the virus' high transmission rate, relatively long incubation period, and increased mortality rate in people with certain conditions (e.g., older people), the World Health Organization (WHO) has provided guidelines to help prevent the public from becoming infected with the virus (1). Common strategies include asking or ordering people to stay at home, avoid crowds or large gatherings, and practice social distancing. Consequently, people in several countries have been prevented from going to work or school including kindergarten and have fewer opportunities for in-person communication with others. The clinical course of the coronavirus disease, COVID-19, appears to be relatively mild in children compared to other populations (2, 3), although infants were found to be at high risk of becoming severely or critically ill (4). Nevertheless, the effects of the societal changes implemented to decrease the likelihood of SARS-CoV-2 infection on children's cognitive, social, and emotional development are unknown. According to Bronfenbrenner (5), child development is a function of the interaction between several systems and includes culture, parental occupations, schooling, peer relationships, and parenting. Thus, changes in one system can directly or indirectly affect children's development. In the case of COVID-19, the pandemic can affect parents' work, children's schooling, and media use by both children and adults, which may in turn have significant effects on children's cognitive, social, and emotional development.
Although several psychological studies have examined the effect of the COVID-19 pandemic on children's mental health (6–9), it remains unclear whether the pandemic affects children's development. One study in Italy and Spain reported that children experienced emotional and behavioral problems (e.g., difficulty concentrating) after the pandemic outbreak (8), but this study did not assess the population before the pandemic, and thus changes in emotional problems could not be evaluated. Therefore, the present study examined differences in children's socio-emotional behaviors before and during the pandemic.
Moreover, although researchers, educators, and parents recognize the negative impacts of the pandemic, this study also considers the possibility that the pandemic provided children with an opportunity to learn specific skills as a result of the measures implemented to reduce the transmission of COVID-19. Specifically, many children cannot go to school due to the pandemic, and therefore they may have increased opportunities to use digital devices, for instance, to receive online education (10). Consequently, children's skills in using digital devices may improve as they gain experience in their use. Previous studies have shown that children can operate digital devices beginning in early childhood (11, 12). Specifically, children have the specific touch skills required to operate digital devices, such as pinch and tap gestures, and know how to utilize many of the functions of digital devices, such as video-calling (13). Therefore, in this pre-registered study, in addition to examining differences in children's socio-emotional behaviors, we assessed whether children showed differences in their skills for operating digital devices before and during the pandemic.
We assessed whether children's socio-emotional behaviors (emotional symptoms, conduct problems, hyperactivity, peer problems, and prosocial behavior) differed by their experience of the pandemic by comparing their socio-emotional behaviors before and during the pandemic. In terms of prosocial behavior, a previous study reported that experiencing a natural disaster (an 8.0 magnitude earthquake in China) affected children's prosocial behaviors. The researchers compared the prosocial behaviors of two groups: a group of 6- and 9-year-old Chinese children who lived near the epicenter of the earthquake and were assessed before the disaster, and a second group of Chinese children matched by age and school of attendance who were assessed after the earthquake. The results suggested that 6-year-old children became more selfish whereas 9-year-old children became more prosocial immediately after the disaster. Although the COVID-19 pandemic may differ from the earthquake in several ways, e.g., children may feel more anxiety about being infected by the virus, the results of the previous study suggest that experiencing an adversity can have differential effects by age on children's prosocial behaviors.
Thus, we hypothesized that children would experience differences in their social relationships during the pandemic that would be reflected in differences in their socio-emotional behaviors. Moreover, based on the previous study, we expected children's age to moderate the effect of the pandemic on prosocial behaviors. To assess the differences in social relationships, we assessed the durations of children's schooling, outside play, and lessons (e.g., music, dance).
In Japan, the first person infected with SARS-CoV-2 was identified in January 2020, and the number of infected people has since increased, although the growth rate was lower than in many other countries (14). On 27th February 2020, the government asked all schools across the country to close until March 2020, and the vast majority of schools complied (but nursery schools did not). Schools started to re-open at the beginning of April 2020, but the government declared a state of emergency covering seven prefectures including Tokyo and Osaka on 7th April 2020. Thus, most of the schools in the seven prefectures closed, whereas about 80% of kindergartens and half of elementary schools in the other prefectures started to open on 10th April 2020 (15). Subsequently, the declaration to close schools was extended to all regions on 16th April 2020, and most of the schools in all prefectures closed until 6th May 2020. Thus, data for the During-pandemic sample were collected when most schools were closed and children had less time for schooling and meeting with friends. For this study, we preregistered our hypotheses, method, primary analyses, and sample size (https://osf.io/c7p6b).
We conducted two cross-sectional studies in which we administered an internet-based survey to parents at two time periods, before and during the pandemic. Our Before-pandemic sample comprised primary caregivers of children ages 0–9 who were randomly selected from the population of a database (Cross Marketing Inc., Tokyo, Japan). The Before-pandemic sample completed the survey on 26–30 September 2019. A total of 1215 participants completed the questionnaire, but 293 participants were excluded: 255 participants incorrectly answered trap questions and 38 participants answered questions inappropriately (e.g., participants who chose "1" for a series of questions). Out of the remaining 922 participants, we assigned the first 70 participants in each age group (4 to 9 years of age) to Study 1 (for a total of 420 participants). Sample characteristics are presented in Tables 1 and S1.
Our During-pandemic sample was selected in the same way as our Before-pandemic sample. No participants were the same between the Before- and During-pandemic phases. The During-pandemic sample completed the survey on 28–30 April 2020. During recruitment, 1045 participants completed the questionnaire, but 152 participants were excluded: 81 participants incorrectly answered trap questions and 71 participants answered inappropriately. After assigning parents to the studies, we had a total of 420 parents in Study 1.
Stimuli and procedure.
The online questionnaire consisted of two parts. In the first part, parents were asked to complete background information about themselves and their children. In the second part, parents were given a questionnaire about their children's socio-emotional development and their social life.
Background information. In the first part, parents answered questions about their background. Background information included parental age, parental education, family size, children's age, children's sex, and children's sleep hours (when children get up and go to sleep). Parental education level was assigned a value from 1 to 5 (1 = less than high school, 2 = high school, 3 = some college, 4 = undergraduate degree, 5 = graduate level).
Socio-emotional behaviors. In the second part, parents answered questions about their children's socio-emotional behaviors. Children's socio-emotional behaviors in an everyday context were assessed using the SDQ (Strengths and Difficulties Questionnaire) (16, 17, 18). The SDQ is a screening measure of social, emotional, and behavioral functioning. The 25-item SDQ is divided into five subscales, namely, emotional symptoms, conduct problems, hyperactivity, peer problems, and prosocial behavior. Emotional symptoms include five items, such as "Often complains of headaches, stomach-aches or sickness." Conduct problems include five items, such as "Often fights with other children or bullies them." Hyperactivity includes five items, such as "Restless, overactive, cannot stay still for long." Peer problems include five items, such as "Has at least one good friend." Prosocial behavior includes five items, such as "Shares readily with other children, for example, toys, treats, pencils." Parents answered whether each item applied to their child on a three-point scale from 0 "not true" to 2 "certainly true."
Social life. To assess the differences in children's social lives, questions were asked regarding the durations of children's schooling, outside play, and lessons (e.g., music, dance). We asked for the number of days of children's schooling per week, and the average hours of outside play and lessons per day.
Analytic plan
Analyses were conducted in R (version 3.6.1). We conducted two analyses. First, we examined dependent variables that may have differed before and during the pandemic. In our preregistration for the study, we planned to assess whether period and children's age affected their social life and socio-emotional behaviors using a MANOVA. The analysis included period (Before-pandemic vs. During-pandemic) and age (0 to 9) as independent variables and the durations of children's schooling, outside play, and lessons, along with the sub-scale scores for the SDQ, as dependent variables. However, not all dependent variables were normally distributed, and we could not conduct the planned MANOVA. Instead, we conducted the MANOVA within the framework of structural equation modelling (SEM). That is, we applied the MANOVA model to the data and estimated the parameters corresponding to the main effects using maximum likelihood estimation with robust (Huber-White) standard errors and a scaled test statistic that is (asymptotically) equal to the Yuan-Bentler test statistic, using the "lavaan" package (19).
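As an illustration only, the following is a minimal sketch of how such a MANOVA-style model could be fitted as a SEM in lavaan with the robust (MLR) estimator; it is not the authors' code, the data frame `dat` and all column names are hypothetical, and only a subset of the dependent variables is shown.

```r
# Minimal sketch (hypothetical data frame and column names, not the authors' code):
# a MANOVA-like model fitted as a SEM in lavaan, using maximum likelihood with
# robust (Huber-White) standard errors and a Yuan-Bentler scaled test statistic.
library(lavaan)

manova_model <- '
  # each dependent variable regressed on period and age (main effects)
  schooling     ~ period + age
  outside_play  ~ period + age
  peer_problems ~ period + age
  prosocial     ~ period + age

  # residual covariances between dependent variables, as in a MANOVA
  # (a full analogue would specify all pairwise residual covariances)
  schooling ~~ outside_play
  peer_problems ~~ prosocial
'

fit <- sem(manova_model, data = dat, estimator = "MLR")
summary(fit, standardized = TRUE)
```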
Second, we conducted a planned SEM analysis to assess the relationships between period and SDQ sub-scale scores, as mediated by parents' and children's social lives. Specifically, we used variables indicative of children's and parents' social lives if we found significant main effects of period in the preceding analyses. We used background information as control variables for the analyses if we found significant differences between the Before-pandemic and During-pandemic samples.
The descriptive data are reported in Table 1. Children's age in months, parental age, sex ratio (ratio of boys to girls), the number of family members, and parental education did not differ by period (Before-pandemic vs. During-pandemic). Children's sleeping time was longer During-pandemic than Before-pandemic (t (838) = -3.453, p = .001, d = .24). Thus, the Before-pandemic and During-pandemic samples were generally matched. We included demographic variables as control variables in our subsequent analyses.
First, we assessed whether period and children's age impacted children's socio-emotional behaviors and social lives. Period was significantly associated with children's peer problems (β = 0.264, p = .033) as well as the durations of children's schooling (β = -4.233, p < .001), children's outside play (β = 0.185, p = .001), and children's lessons (β = -0.052, p = .032) (positive values represent increases during the pandemic relative to before the pandemic). We found a significant interaction between period and age in prosocial behavior (β = 0.080, p = .001). The effects of period were significant in 5- (β = 1.000, p = .009) and 7-year-old (β = 0.843, p = .036) children. Children's prosocial behavior and peer problems are displayed as a function of age in Figure 1.
Next, we conducted SEM analyses to assess whether the effects of period and children's age on peer problems and prosocial behavior were mediated by differences in children's social lives. Specifically, we used the durations of children's schooling, outside play, and lessons as mediation variables and children's sleeping time as a control variable (Figure 2). We selected the model that included direct paths between period and peer problems and between the interaction and prosocial behavior (χ2 = 62.304, RMSEA = .047, CFI = .966), because fit indices indicated that it provided a better fit to the data than a model without the direct paths (χ2 = 80.494, RMSEA = .053, CFI = .952). In this model, period was positively (β = 0.491, p = .022) and negatively (β = -2.843, p < .001) associated with the durations of outside play and schooling, respectively. The interaction between age and period was negatively associated with schooling (β = -0.214, p < .001). In addition, the duration of outside play was negatively associated with peer problems (β = -0.261, p = .001). However, schooling was not significantly associated with prosocial behavior (β = 0.113, p = .070).
Finally, we evaluated the mediation effect of outside play on the relationship between period and peer problems using Sobel tests. The estimated mediation effect of the duration of outside play was significant (β = -0.051, p = .014, 95% CI [-0.092, -0.010]). The estimated direct effect of period on peer problems was also statistically significant (β = 0.316, p = .011, 95% CI [0.073, 0.558]).
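For reference, a Sobel test for an indirect effect a·b can be computed by hand as sketched below; the path estimates and standard errors passed in are made-up illustrative values (the study's standard errors are not reported here), so the output is not the study's result.

```r
# Hand-computed Sobel test for an indirect effect a*b (illustrative numbers only):
# a = path from period to outside play, b = path from outside play to peer problems.
sobel_test <- function(a, b, se_a, se_b) {
  ab <- a * b                                # indirect (mediated) effect
  se <- sqrt(b^2 * se_a^2 + a^2 * se_b^2)    # Sobel standard error
  z  <- ab / se
  c(indirect = ab, se = se, z = z, p = 2 * pnorm(-abs(z)))
}

sobel_test(a = 0.49, b = -0.26, se_a = 0.21, se_b = 0.08)  # hypothetical values
```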
The results revealed that peer problems and prosocial behavior, but not emotional symptoms, conduct problems, or hyperactivity, differed between before and during the pandemic. Although there were no mediation effects on the relationship between period and prosocial behavior, we found an interaction effect between the pandemic and age in prosocial behavior. The results were partially consistent with our hypothesis that age may modulate the effect of the pandemic on prosocial behavior. Specifically, 4-year-old children scored equally before and during the pandemic, but older children showed more prosocial behavior during the pandemic compared to those before it. One possible interpretation of the increase in prosocial behavior is in-group favoritism. Items used to assess prosocial behavior included children's behavior towards in-group members, such as parents, siblings, or peers. Research on the behavioral immune system suggests that a pathogen infection can induce in-group favoritism and out-group aversion (20). The behavioral immune system refers to a motivational system that helps minimize infection risk by changing cognition, affect, and behavior to avoid infection with a pathogen. It has been consistently reported that the behavioral immune system in individuals at risk of infection facilitates stereotypes and prejudicial attitudes toward out-group members and increases in-group favoritism, such as greater conformity to social norms and increased collectivism (21, 22). Such in-group favoritism may motivate cognitions and behaviors for the avoidance of novel parasites carried by out-groups and for the management of local infectious disease (22, 23). Thus, children in this study may have increased their prosocial behavior toward in-group members when at risk of pathogen infection, in order to avoid infection from out-group members.
Children also showed more problems in their peer relationships during than before the pandemic. Although we found a mediation effect of outside play, the direct effect between the pandemic and peer problems was larger. Other factors, such as children's stress levels, may mediate the relationship between the pandemic and peer problems. Nevertheless, we need to be careful about the interpretation of these results, because scores on some peer-problem items (e.g., "tends to play alone") could increase during the pandemic simply because children could not play with peers as much as they used to, unless they played with siblings instead; such increased scores do not necessarily mean that children were having trouble with peers. Taken together, our results showed that some of children's socio-emotional behaviors differed before and during the pandemic.
In Study 2, we assessed whether children's digital skills and media use, such as their use of traditional media (TV and video), portable digital media (tablet PC and smartphone), and non-portable digital media (personal computer and gaming), differed before and during the pandemic. We focused on two digital skills that may be improved during early childhood: basic touch interaction skills to operate digital devices (hereafter, referred to as touch skills) and skills to use functions of digital devices (hereafter, function skills). Previous research reported that children develop the touch skills for tablet PC or smartphone, such as tap and drag, during early childhood (13, 24). For this study, we selected the following touch skills that may show age-related changes between ages 0 to 9: tap, double tap, one-hand drag, two-hand drag, enlarge/reduce screen, drawing with fingers, and drawing with digital pens. For function skills, we selected nine functions that previous studies (25) suggested that could be used during early childhood. These nine functions include viewing a picture, watching a video, taking a picture, recording a video, watching YouTube, enjoying apps, calling, video calling and listening to music.
Participants.
The participants were generally the same as in Study 1, except that, both before and during the pandemic, we also assigned the first 70 participants in each age group for children aged 0 to 3, in addition to the participants in Study 1 (for a total of 1400 participants).
In addition to the background information collected in Study 1, parents were asked to complete a questionnaire about their own and their children's media use and their children's skills for operating digital devices.
Media use. Parents responded to questions about their own and their children's media use. The questions were based on previous surveys of children's media use in Japan (25). First, parents completed questions about their children's media use. Specifically, parents reported the frequency of media use per week (e.g., how often does your child use (MEDIA NAME) per week?) and per day (e.g., how many hours does your child use (MEDIA NAME) per day?). Media included TV, video, tablet computer, smartphone, non-portable computer, and non-portable games. Thereafter, parents completed the questions about their own media use; the questions were generally the same for parents and children. If the parents had partners (e.g., a husband), they reported the average frequency of their own and their partners' media use.
To assess the separate effect of each media on children's skills, we classified media use into traditional media (TV and video), portable digital media (tablet computer and smartphone) and non-portable digital media (non-portable computer and non-portable game). We calculated the average hours per day of children's use of each type of media for the analyses.
Skills to operate digital devices. Parents responded to questions about their children's skills for operating digital devices. We focused on two skills that may develop during early childhood: basic touch interaction skills to operate digital devices (hereafter, touch skills) and skills to use functions of digital devices (hereafter, function skills). In terms of touch skills, previous research reported that children develop the touch skills required for a tablet PC or smartphone, such as tap and drag, during early childhood (13, 24). From these skills, we selected touch skills that may show age-related changes between ages 0 to 9: tap, double tap, one-hand drag, two-hand drag, enlarge/reduce screen, drawing with fingers, and drawing with digital pens. Parents were asked to evaluate each of their children's skills using four-point scales (1. not at all, 2. very little, 3. somewhat, 4. to a great extent). In terms of function skills, we selected nine functions that can be used during early childhood based on previous studies (25). The nine functions included viewing a picture, watching their own video, taking a picture, recording a video, watching YouTube, enjoying apps, calling, video-calling, and listening to music. Parents were asked to evaluate each of their children's skills using four-point scales (1. not at all, 2. very little, 3. somewhat, 4. to a great extent).
Analyses were conducted in R (version 3.6.1) using the irtoys (26) and lavaan packages. We conducted three analyses. First, we conducted item response theory analyses based on a two-parameter logistic model. That is, we converted the four-point scale data to two-point scale data (we regarded 1 and 2 as zero, and 3 and 4 as one) and estimated item and ability parameters to separate the difficulty of each item in touch skills and function skills from each participant's ability in touch skills and function skills. Under the two-parameter logistic model, the probability of a correct response by participant $i$, whose ability is $\theta_i$, to item $j$, denoted $P_j(\theta_i)$, is modeled by the following equation:
$$P_j(\theta_i) = \frac{1}{1 + e^{-\alpha_j(\theta_i - \beta_j)}}$$
Here, $\alpha_j$ and $\beta_j$ are the parameters of item $j$ representing its ability to discriminate between the $\theta_i$ (discrimination) and its difficulty, respectively.
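A small sketch of this model in R is shown below: the item response function exactly as written above, plus a commented, hypothetical call to the irtoys package named in the text (the response matrix `resp` and the chosen estimation engine are assumptions, not the authors' settings).

```r
# Two-parameter logistic (2PL) item response function from the equation above
p_2pl <- function(theta, alpha, beta) {
  1 / (1 + exp(-alpha * (theta - beta)))
}

p_2pl(theta = 0, alpha = 4.2, beta = -0.35)  # e.g., an easy, highly discriminating item

# Hypothetical estimation sketch with irtoys, assuming `resp` is a 0/1 response
# matrix (rows = children, columns = touch- or function-skill items):
# library(irtoys)
# item_pars <- est(resp, model = "2PL", engine = "ltm")    # item parameters
# abilities <- eap(resp, item_pars$est, qu = normal.qu())  # EAP ability estimates
```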
Second, in our preregistration, we planned to conduct a MANOVA. The analysis included period (Before-pandemic vs. During-pandemic) and age (0 to 9) as independent variables and the durations of digital device use for children and parents, along with the touch skills and function skills, as dependent variables. However, as in Study 1, many dependent variables were not normally distributed, and we could not conduct the planned MANOVA. Instead, we conducted SEM analyses accounting for the non-normal distributions, using maximum likelihood estimation, to assess whether period and age affected the durations of digital device use for children and parents and the touch and function skills.
Finally, we conducted an SEM analysis to assess the relationship between period and skills for digital devices, mediated by the duration of digital media use. Specifically, we used variables assessing the duration of digital media use for the mediation analyses if we found significant main effects of period in the second analysis. We used demographic variables as control variables for the analysis.
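A minimal lavaan sketch of such a mediation model is given below; the variable names, covariates, and path labels are hypothetical, and the actual model included further media-use mediators and both skill outcomes.

```r
# Sketch of a mediation model (hypothetical variable names, not the authors' code):
# period -> children's portable digital media use -> touch skills,
# controlling for age, sleeping time, and respondent status.
library(lavaan)

mediation_model <- '
  portable_media ~ a * period + age + sleep + respondent
  touch_skill    ~ b * portable_media + age + sleep + respondent

  indirect := a * b      # mediated effect of period via portable media use
'

med_fit <- sem(mediation_model, data = dat, estimator = "MLR")
summary(med_fit, standardized = TRUE)
```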
The descriptive statistics are reported in Table 2. Children's age in months, parental age, sex ratio (ratio of boys to girls), the number of family members, and parental level of education did not differ between the Before-pandemic and During-pandemic samples. In terms of respondent's status (mother vs. father), the ratio of fathers to mothers was higher in the During-pandemic than the Before-pandemic sample (χ2 (1, N = 1400) = 6.116, p = .013). Children's sleeping time was also greater in the During-pandemic sample than Before-pandemic sample (t (1398) = -2.412, p = .016, d = .13). There were no other significant differences in the Before-pandemic and During-pandemic samples. Thus, the Before-pandemic and During-pandemic samples were generally matched. We included demographic variables as covariates in our subsequent analyses.
First, we conducted item response theory (IRT) analyses using a two-parameter logistic model to distinguish between the difficulty of each digital skill and a participant's ability. We reported the difficulty parameter and discrimination parameter of each item in touch and function skills in Table 3. Among the touch skills, tap was the easiest and the two-hand drag was the most difficult for children. In addition, viewing a picture was the easiest and video calling was the most difficult of the function skills.
Then, we assessed whether period and children's age affected each variable in children's media use and digital skills using the MANOVA model fitted within the SEM framework (as in Study 1). The results revealed that period was significantly associated with touch skills (β = 0.235, p < .001) and function skills (β = 0.201, p < .001), as well as children's use of traditional media (β = 0.304, p < .001), children's use of portable digital media (β = 0.238, p < .001), children's use of non-portable digital media (β = 0.113, p = .005), parents' use of portable digital media (β = 0.229, p = .007), and parents' use of non-portable digital media (β = 0.164, p = .025). In addition, age was positively associated with touch skills (β = 0.223, p < .001) and function skills (β = 0.178, p < .001) (Fig. 3).
Next, we conducted SEM analyses to assess whether the effects of period and age on touch and function skills were mediated by the durations of children's and parents' media use, controlling for respondent status and children's sleeping time (Fig. 4). The fit indices indicated that the model that included the direct paths between period and touch and function skills (χ2 = 2719.748, RMSEA = .074, CFI = .936) was slightly worse than the model that excluded the direct paths (χ2 = 2692.540, RMSEA = .073, CFI = .937). Therefore, we selected the latter model as our final model. This model showed that period was positively associated with children's use of traditional media (β = 0.384, p < .001), children's use of portable digital media (β = 0.485, p < .001), children's use of non-portable digital media (β = 0.125, p = .003), and parents' use of portable digital media (β = 0.231, p = .007). Moreover, touch skills were positively related to children's use of traditional media (β = 0.045, p = .016) and children's use of portable digital media (β = 0.210, p < .001). Function skills were also positively related to children's use of traditional media (β = 0.038, p = .031) and children's use of portable digital media (β = 0.207, p < .001).
Finally, we evaluated the mediation effects between period and digital skills using Sobel tests. For touch skills, the estimated mediation effects through children's use of traditional media (β = 0.023, p = .004, 95% CI [0.008, 0.039]) and children's use of portable digital media (β = 0.094, p < .001, 95% CI [0.058, 0.138]) were significant. Moreover, for function skills, the estimated mediation effects through children's use of traditional media (β = 0.020, p = .006, 95% CI [0.006, 0.034]) and children's use of portable digital media (β = 0.095, p < .001, 95% CI [0.051, 0.139]) were significant.
There were several findings regarding children's digital skills. The IRT analyses revealed the developmental sequence of touch and function skills, which was generally consistent with the results of previous behavioral studies (13, 24). In addition, the main analyses revealed that the durations of children's use of traditional and digital media were longer during than before the pandemic. On the other hand, the duration of parental use of digital media, but not traditional media, was longer during than before the pandemic. In addition, both touch skills and function skills were better during than before the pandemic. These results suggest that the pandemic may have accelerated the development of children's skills for operating digital devices. Indeed, the touch and function skills of 5-year-olds during the pandemic exceeded those of 6-year-olds before the pandemic.
Moreover, our mediation analyses revealed that the relationship between the pandemic and children's digital skills was mediated by children's use of portable digital media, but not by parental use of digital media. Due to the pandemic, children could not go to school or kindergarten, and therefore children in this situation had more opportunities to use portable digital devices, for instance, to receive online education (10). Consequently, children's skills in operating digital devices may have been higher during the pandemic. However, parental use of devices did not have a significant impact on children's touch and function skills. Based on these results, we suggest that the pandemic gave children time to use digital devices, and children used such devices on their own or with their family, which may have facilitated the development of their touch and function skills.
Our results showed that children exhibited both better and worse socio-emotional behaviors during the COVID-19 pandemic compared to before the pandemic. Moreover, children showed better digital skills during the pandemic. Researchers, educators, and parents have understandably focused on the negative impacts of the pandemic, but, in this sample, the pandemic may not only have impaired but also accelerated aspects of child development. To our knowledge, this is one of the first studies to conduct a pre- and post-assessment of the impact of the COVID-19 pandemic on children's behaviors.
One could argue that the parent questionnaire we used to assess children's digital skills may not be valid. However, our IRT analyses suggested that the questionnaire may be valid for assessing children's digital skills. Moreover, under circumstances in which COVID-19 is prevalent, it would be difficult to directly assess children's behaviors through experimental methodologies, especially with a large sample size. Although we assessed children's behavior using an online questionnaire, most of the available research did not use the same method of assessment both before and during the pandemic, which makes it difficult to form a valid comparison group. It is possible that parents' answers to the surveys reflect differences in parents rather than in children, and we need to be careful about the interpretation of the results. Nevertheless, we believe that web-based surveys may be one of the best methods for addressing the effects of the pandemic on child development.
Another limitation of this study is that we compared different samples before and during the pandemic. We matched the samples on several background characteristics that may affect socio-emotional behaviors and digital skills, but longitudinal research is needed to examine how children's behaviors change across different time points. Moreover, it remains unclear whether the results from this population can be generalized to other populations, because the growth rate in the number of infected persons and deaths in Japan was lower than in other countries (14). Moreover, children's social-emotional development could be more severely impaired by the pandemic, particularly if this difficult situation continues for a long period. Future research should address these issues.
The study was conducted in accordance with the principles of the Declaration of Helsinki and the procedure of the study was approved by the local ethics committee. Written informed consent (including study purpose, methodology, risks, right to withdraw, duration of the experiment, handling of personal information, and voluntary nature of participation) was obtained from all participating parents prior to administering the survey.
This manuscript does not include any individual person's data.
The datasets supporting this article have been uploaded as part of the supplementary material.
This research was supported by grants from JSPS to the first author.
YM and NT developed the study concept. All authors contributed to the study design. Data collection was performed by YM. All authors performed the data analysis and interpretation. YM drafted the manuscript. CS, XM, and NT revised the manuscript. All authors approved the final version of the manuscript for submission.
We thank Chika Harada and Nobuhiro Mihune for the helpful comments on an earlier version of the manuscript.
World Health Organization. Coronavirus disease (COVID-19) advice for the public. Geneva: WHO; 2020. https://www.who.int/emergencies/diseases/novel-coronavirus-2019/advice-for-public. Accessed 29 Apr 2020.
Pathak EB, Salemi JL, Sobers N, Menard J, Hambleton IR. COVID-19 in children in the United States: Intensive care admissions, estimated total infected, and projected numbers of severe pediatric cases in 2020. J Public Health Manag Prac. Preprint at https://pubmed.ncbi.nlm.nih.gov/32282440/ (2020).
Wu Z, McGoogan JM. Characteristics of and important lessons from the coronavirus disease 2019 (COVID-19) outbreak in China: Summary of a report of 72 314 cases from the Chinese Center for Disease Control and Prevention. JAMA. 2020;323(13):1239–42.
Dong Y, Mo X, Hu Y, Qi X, Jiang F, Jiang Z, Tong S. Epidemiology of COVID-19 among children in China. Pediatrics. 2020;145:e20200702.
Bronfenbrenner U. The Ecology of Human Development. Harvard Univ. Press; 1979.
Golberstein E, Wen H, Miller BF. Coronavirus disease 2019 (COVID-19) and mental health for children and adolescents. JAMA Pediatrics. Preprint at https://jamanetwork.com/journals/jamapediatrics/fullarticle/2764730 (2020).
Xie X, et al. Mental health status among children in home confinement during the coronavirus disease 2019 outbreak in Hubei Province, China. JAMA Pediatrics. Preprint at https://jamanetwork.com/journals/jamapediatrics/fullarticle/2765196, (2020).
Orgilés M, et al. Immediate psychological effects of the COVID-19 quarantine in youth from Italy and Spain. PsyArXiv. Preprint at https://psyarxiv.com/5bpfz/ (2020).
Pisano L, Galimi D, Cerniglia L. A qualitative report on exploratory data on the possible emotional/behavioral correlates of COVID-19 lockdown in 4–10 years children in Italy. PsyArXiv. Preprint at https://psyarxiv.com/stwbn/. Accessed 13 Apr 2020.
Zhou L, Wu S, Zhou M, Li F. 'School's out, but class' on', The largest online education in the world today: Taking China's practical exploration during the COVID-19 epidemic prevention and control as an example. Best Evid Chin Edu. 2020;4(2):501–19.
McClure ER, Chentsova-Dutton YE, Holochwost SJ, Parrott WG, Barr R. Look at that! Video chat and joint visual attention development among babies and toddlers. Child Dev. 2018;89(1):27–36.
Huber B, Highfield K, Kaufman J. Detailing the digital experience: Parent reports of children's media use in the home learning environment. Br J Educ Technol. 2018;49(5):821–33.
Vatavu RD, Cramariuc G, Schipor DM. Touch interaction for children aged 3 to 6 years: Experimental findings and relationship to motor skills. Int J Hum Comp St. 2015;74:54–76.
Worldometer. COVID-19 coronavirus pandemic. https://www.worldometers.info/coronavirus/. Accessed 14 May 2020.
Ministry of Education, Culture, Sports, Science and Technology. Report of opening schools in the wake of the novel coronavirus pandemic in Japan. Available from https://www.mext.go.jp/content/20200413-mxt_kouhou01-000006421_1.pdf (2020).
Goodman R. The Strengths and Difficulties Questionnaire: A research note. J Child Psychol Psychiatr. 1997;38(5):581–6.
Matsuishi T, Nagano M, Araki Y, Tanaka Y, Iwasaki M, Yamashita Y, et al. Scale properties of the Japanese version of the Strengths and Difficulties Questionnaire (SDQ): a study of infant and school children in community samples. Brain Dev. 2008;30(6):410–5.
Moriguchi Y, Shinohara I, Todo N, Meng X. Prosocial behavior is related to later executive function during early childhood: A longitudinal study. European Journal of Dev Psychol. 2020;17(3):352–64.
Rosseel Y. lavaan: An R package for structural equation modeling. J Stat Softw. 2012;48(2):1–36.
Ackerman JM, Hill SE, Murray DR. The behavioural immune system: Current concerns and future directions. Soc Personal Psychol Compass. 2018;12(2):e12371.
Murray DR, Schaller M. Threat(s) and conformity deconstructed: Perceived threat of infectious disease and its implications for conformist attitudes and behavior. Eur J Soc Psychol. 2012;42(2):180–8.
Wu BP, Chang L. The social impact of pathogen threat: How disease salience influences conformity. Pers Individ Dif. 2012;53(1):50–4.
Thornhill R, Fincher CL. The parasite-stress theory of sociality, the behavioural immune system, and human social and cognitive uniqueness. Evol Behav Sci. 2014;8(4):257–64.
McKnight L, Cassidy B. Children's interaction with mobile touch-screen devices: Experiences and guidelines for design. Int J Mob Hum Comput Interaction (IJMHCI). 2010;2(2):1–18.
Takaoka J. Actual situation and attitude of media use of parents and children. Child Sci. 2017;14:6–10.
Partchev I, Maris G. irtoys: A collection of functions related to item response theory (IRT). R package version 0.2;2017.
Table 1. Descriptive Statistics for Study 1
Measure BEFORE (N= 420) DURING (N= 420)
Mean SD Mean SD
Parent Measure
Parent's age 39.74 5.81 40.44 5.38
Number of family members 3.99 0.94 4.04 1.05
Parental level of education 3.22 0.89 3.17 0.91
Children Measure
Children's age in months 83.37 20.37 83.95 20.90
Sleeping hours 9.35 0.81 9.54 0.77
Days of schooling per week 4.95 0.49 0.72 1.73
Hours of outside play per day 0.64 0.67 0.83 0.89
Hours of lessons per day 0.26 0.35 0.21 0.37
Conduct problems 2.45 1.73 2.37 1.82
Emotional symptoms 2.10 2.11 2.19 2.14
Hyperactivity 4.03 2.19 4.07 2.42
Peer problems 2.24 1.75 2.50 1.84
Prosocial behavior 5.38 2.45 5.90 2.43
Categorical Measure % %
Children's sex (ratio of girls) 50% 50%
Respondent (ratio of mother) 91% 88%
Hour of traditional media per day 2.81 1.67 2.94 1.79
Hour of portable digital media per day 2.06 1.50 2.29 1.66
Hour of non-portable digital media per day 1.24 1.11 1.41 1.45
Touch skill (θ) -0.09 0.89 0.09 0.93
Function skill (θ) -0.1 0.90 0.09 0.96
Note. θ values represent participants' abilities estimated in the item response theory analyses.
Table 3. Difficulty and discrimination parameters of each item using two-parameter logistic model.
Touch skill β (Difficulty) α (Discrimination)
Tap -0.354 4.153
Screen -0.066 3.166
Drawing by pens 0.023 2.372
Drawing by fingers 0.052 2.725
Double tap 0.089 4.388
One-hand drag 0.143 5.392
Two-hands drag 0.321 6.646
Function skills
Viewing a picture -0.377 5.490
Watching a video -0.346 5.107
Taking a picture -0.334 5.118
Watching YouTube 0.001 1.998
Recording a video 0.165 3.336
Enjoying apps 0.189 2.569
Calling 0.344 2.392
Listening to Music 0.398 2.125
Video-calling 0.739 2.268
Note: α and β are parameters of item j, representing item j's ability to discriminate between participants' abilities and its difficulty, respectively.
coveringletter.doc
SupTable.docx
data2.csv
|
CommonCrawl
|
Confused about internal hinges for calculating reactions
I'm really confused about internal moments.
Regarding image 1,
When taking moments about D (the hinge), why does the book only consider the right side? Could you technically consider the left side (-Va × (L1+L2) - Vb × L2 = 0) as well? By technically I mean unrealistically, because I understand that considering the left side would introduce unwanted unknowns into the 4th equation (the first 3 being the sums of forces in x and y and the moment about A), as the equation would become:
Vc × L3 - P(L3+L4) - Va × (L1+L2) - Vb × L2 = 0. Am I right?
Why must the triangular distributed load be split as shown to calculate Ma? I got the question wrong when I assumed the distributed load was concentrated at its centroid (to the right of the hinge) as a single force. I feel this contradicts image 1 (3), because Vc is also a single force on the right-hand side of the hinge. My understanding is that normally (without a hinge) the whole triangular load would act at the centroid of the whole triangle?
I do understand that internal hinges don't restrict moments and that only shear forces are transferred across but this doesn't make sense.
Thank you, and sorry for the long question.
structural-engineering structural-analysis
mdkbear
An extended discussion on internal vs. external forces
We usually like to describe hinges as "places where the moment is always zero."
But, wait a minute, the moment is always zero anywhere in a stable structure. Don't believe me? Let's take a look at the most trivial structure ever, a simply-supported beam with a uniform load:
So, it has a span of 8 m and a load of 1 kN/m. Obviously, the reactions are 4 kN each.
We also know that the bending moment at the midspan is given by $M = \dfrac{qL^2}{8}$, which in this case gives us 8 kNm, as shown in the diagram above.
But let's calculate the moment at midspan by hand, using our trusty sum of bending moments approach:
$$\begin{align} \sum M_{\text{midspan}} &= M_{\text{left reaction}} + M_{\text{right reaction}} + M_{\text{load}} \\ &= -(4\cdot4) + (4\cdot4) + (1\cdot8\cdot0) \\ &= 0\text{ kNm} \end{align}$$
So... what's going on here? What's that 8 kNm at midspan if we actually obviously have zero moment there?
Well, what's going on is that internal moment is one thing, external is another, and we can't get them mixed up.
For a more obvious distinction between the two, let's instead look at axial loads: imagine you have a sugar cube between your fingers and you start squeezing it. What'll happen? Well, if you look at the applied forces generally, you'll conclude that nothing will happen: you are applying equal forces in opposite directions, so the net force on the sugar cube is zero! It doesn't matter whether you are squeezing the cube with barely any force at all or if you're putting all your strength into it: you'll always be applying two forces in opposite directions and the net force will always be zero.
But we know that's not how this works; if you squeeze the cube enough, it'll crumble between your fingers. Because the net external force may be zero, but the cube is under extreme internal forces. And the value of this internal force is equal to the force applied by one of your fingers (if you squeeze the cube with a force of 1 kN from each side, the internal force in the cube will be of 1 kN).
The definition of a stable structure is that the external forces are all balanced in every point in space (it doesn't even need to be on the beam. You could calculate the moment at $x = 1000\text{ m}$ and you'll still get zero moment). And when we calculate moment as we did above, what we're calculating is the external moment. Hinges are in no way special with regards to external moment (indeed, nowhere is).
So external moment (and other forces) is useful to know whether the structure is stable: if the external moment were non-zero, that'd mean we're actually dealing with a mechanism that will accelerate over time.
Internal moment (and other forces), however, is useful to know whether the structure can withstand the applied load. And just as the internal force in the cube is equal to the load applied to one side of the sugar cube, so is the internal bending moment equal to the bending moment on one side of the point of interest.
So, if we recalculate the bending moment at midspan looking just at the load to its right, we get:
$$\begin{align} \sum M_{\text{midspan}}^+ &= M_{\text{right reaction}} + M_{\text{load}} \\ &= (4\cdot4) - (1\cdot4\cdot\dfrac{4}{2}) \\ &= 8\text{ kNm} \end{align}$$
Notice that since we are calculating the moment to the right of the midspan, I had to "break up" the uniform load so that I'm considering the effect of its right half. After all, that uniform load is arbitrary. We drew it as one uniform load along the entire span, but should we expect a different result if we instead had two uniform loads of equal value, one on each side of the midspan? Of course not!
So just because we drew the uniform load as covering the entire span (and with a centroid at the midspan), that doesn't mean we can ignore its effect on the internal bending moment calculated when looking at only one side of the beam.
We can also calculate this from the left-hand side, but then we need to remember the sign convention that, for left-hand internal moments, clockwise is positive:
$$\begin{align} \sum M_{\text{midspan}}^- &= M_{\text{left reaction}} + M_{\text{load}} \\ &= (4\cdot4) - (1\cdot4\cdot\dfrac{4}{2}) \\ &= 8\text{ kNm} \end{align}$$
That these two calculations should have equal results is obvious: if the external bending moment is zero at that point, then the internal bending moment must be equal (and opposite, but we dropped the sign for the left-hand side because of the convention) on both sides of that point.
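If it helps, here's a quick numeric sanity check of both calculations (a small Python sketch, using the 8 m span and 1 kN/m load from the example above):

```python
# Simply-supported beam: 8 m span, 1 kN/m uniform load, reactions of 4 kN each.
span = 8.0          # m
q = 1.0             # kN/m
R_left = R_right = q * span / 2   # 4 kN each
x = span / 2        # midspan

# External moment about the midspan: every reaction and load contributes.
# The full UDL resultant (q*span) acts at the midspan itself, so its lever arm is zero.
M_external = -R_left * x + R_right * (span - x) + q * span * 0.0
print(M_external)   # 0.0 kNm -> equilibrium, as for any point of a stable structure

# Internal moment at the midspan: only the loads on ONE side count,
# and the UDL must be "broken up" at the section.
M_internal = R_right * (span - x) - q * (span - x) * ((span - x) / 2)
print(M_internal)   # 8.0 kNm -> matches qL^2/8
```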
As a refresher, here's the sign convention for internal forces:
Now to answer your actual question
What we really mean when we say that hinges always have zero moment is that they always have zero internal moment.
As such, when calculating this zero-moment at the hinge, we need to do so by only looking at the loads to one side of the hinge. So for your triangular load in the second example, you need to calculate the moment at the hinge by considering the part of the load that's visible on the side you're calculating.
Wasabi ♦
Hi! Thank you so much for the thorough response! I'm however still a bit confused about why, in the first image (3), the forces and reactions on the right side of the hinge contribute to the moment about A, but for the triangular load the force cannot be concentrated at the centroid as a point force? I understand everything you explained about internal moments, so thank you so much again!
– mdkbear
@mdkbear I don't think I understood what you're asking here... if you're looking at the second beam with the triangular load and doing an external moment calculation around A, then you CAN simplify the triangular load to a point force at its centroid. You only need to do the decomposition scribbled over the original image if you're doing the moment around the hinge; in that case you need to do a sum of the load to the left of the hinge (a simple triangular load which can itself be simplified into a point force) OR to the right (with two loads, one triangular and one uniform).
– Wasabi ♦
It is important to understand what an internal hinge is, its behavior, and the assumptions around it.
An internal hinge represents a discontinuity in the beam. It carries a pair of equal but opposite forces, one on each side of the hinge, carried over from the beam segments it supports/connects to, so the sum of Fy is zero. Another important assumption is that the hinge can't restrain rotation; therefore, the sum of M must be zero.
On your questions: the force to the right side of the hinge is the reaction (an unknown) from the right-side beam segment, and, as stated above, there exists an equal but opposite force to the left, which becomes an applied load on the left-side support system. The sketch below shows the force couple and unknown support reactions (in red), with the available structural equilibrium equations and unknowns indicated below. You should be able to figure out why the solution starts from the right side.
Below is my earlier response, which shows you step by step why the solution starts from the right side rather than the left.
After separating at the internal hinge at "D", this leaves 2 unknowns (V_C & V_D) on the right-side structure, and you have 2 available equations (sum Fy=0 & sum M=0) to solve for the unknowns. On the other hand, the left-side structure has 3 unknowns (V_A, V_B & V_D), but there are only two equations available, so it is not workable.
Same as above, you must break up the beam at the internal hinge (point B), and start from the right side with the load directly positioned on it to find the internal force V_B, then reverse the direction and apply V_B across the hinge as an applied load, then work out the reactions at support A in conjunction with the remaining triangle load on the left-side beam segment.
Hope both of the above help.
Hi! Thank you for confirming my hypothesis on image 1, it seems like I was right! For 2: what technically is "stopping" me from concentrating the UDL at 1/3 of the beam from the right end and taking moments about A? I fully understand your explanation for why I must split the beam into 2 - because of shear transfers and Newton's 3rd law which allows this to happen. But (3) in image 1 allows for Vc to be taken directly to A! It's definitely something that doesn't allow me to simplify the UDL into a point force.
I've added additional sketches at the bottom of my response. You should be able to figure it out and answer your questions. But keep asking if anything is unclear to you. (Note: without getting V_B first, working directly on the left-side beam segment will lead to erroneous results.)
– r13
Add'l notes. For (2), on the right-side beam segment, without V_B the beam is "unstable", so V_B must be pointing up and acting as a support. On the left-side beam segment, the reactions at A will be wrong because the effect from V_B is missing. The V_B on the left side is in the opposite direction to the V_B on the right side, so the sum of Fy about the hinge point B equals zero.
You answered yourself. No matter how much moment is calculated on the left side of D, it has zero effect on the right side of D.
The moment does not pass through the hinge and in this case, no shear transfers from the left to the right either because the two supports A and B have taken care of the vertical forces.
Regarding image two, it's just an easier way of bookkeeping the areas and their CGs.
And the site policy is one question at a time, please.
Hi, thank you for answering. Which part of what I said was correct? I'm also confused as to what you mean by "passing through"; does this relate to Newton's third law?
Where you say "I do understand that internal hinges don't restrict moments and that only shear forces are transferred across", you are right.
Uplink resource allocation in cooperative OFDMA with multiplexing mobile relays
Salma Hamda ORCID: orcid.org/0000-0003-2864-85061,2,
Mylene Pischella1,
Daniel Roviras1 &
Ridha Bouallegue2
Cooperative relaying is an important feature of fourth generation wireless systems to improve system performance. Mobile relays can offer better results than fixed relays without any additional infrastructure cost. However, efficient cooperation decisions as well as resource allocation are critical to satisfy model constraints such as the required quality of service (QoS). In this work, simple mobile users with advantageous channels can act as potential relays for cell edge users in an uplink transmission. They multiplex, in the frequency domain, their own data with that of the relayed sources, with the objective for both relays and sources to reach a target data rate. An optimal joint resource block (RB) allocation and power allocation scheme under a required data rate constraint per user is proposed. The optimization problem is formulated to minimize the total system power. Dual decomposition and the subgradient method are used to solve the optimization problem after dividing it into independent subproblems of lower complexity to find the optimal solution. The cooperation decision and the source-relay association is either performed as a first step of resource allocation, or jointly optimized with RB and power allocation. Simulation results show that the proposed algorithms both reduce the system's power consumption while ensuring the required QoS. Joint optimization of relay selection, RB, and power allocation provides a higher power consumption decrease, but requires higher complexity and overhead.
Meeting quality of service requirements for ever more data-hungry applications remains an important challenge for wireless cellular networks. Technical constraints push researchers and operators to provide solutions allowing users to obtain high performance independently of their geographical distance from the Base Station (BS). In addition to the orthogonal frequency division multiple access (OFDMA) technology, relays are among the principal features of fourth generation (4G) wireless systems. Relaying technologies, inspired by ad hoc multihop networks, are currently receiving much attention to improve cellular network performance where bandwidth and power are limited. Instead of deploying BSs, relay stations become a solution to reduce high deployment costs and can provide capacity and coverage comparable to small cells. Relaying data aims to improve user performance, especially at the cell border where users suffer from large signal attenuation. Relaying topology and behavior are standardized in both Long Term Evolution (LTE) Advanced [1] and International Mobile Telecommunication Advanced (IMT-Advanced). In these standards, relays have to be fixed at positions planned beforehand by the operators and become part of the fixed access network. Each relay is then attached to a designated BS in a static topology. Moreover, relaying data can be considered for a single hop or for multiple hops using one or multiple relays to transmit information from source to destination. In this context, the LTE Advanced standard allows only two hops, whereas the IEEE 802.11s standard offers a multihop relaying scheme [2].
Many relay transmission schemes have been proposed to relay information from source to destination in two time intervals [3, 4]. A relay can use the decode and forward (DF) scheme, where it decodes the received signal in the first transmit time interval (TTI), re-encodes it, and then forwards it to the destination in the second TTI [5]. A relay may also use the amplify and forward (AF) scheme, where it simply forwards the received signal with an amplification factor. It is proven in [3] that the DF scheme can achieve better performance than the AF scheme, but it is more complex. Several solutions using relays have been proposed in the literature. We can differentiate relays used as virtual multiple input multiple output (MIMO) to exploit spatial diversity [5, 6], which need combining techniques at the destination, from relays used as repeaters where the source has no direct link to the destination [7].
While only fixed relay architectures are optimized in the standards [8], mobile relays are studied to offer a dynamic relaying topology. Mobile relaying has been investigated in the Wireless World Initiative New Radio (WINNER) project [9], contributing to the development and assessment of the 3GPP LTE and IEEE 802.16 (WiMAX) [10] standards, and in the Advanced Radio Interface Technologies for 4G Systems (ARTIST4G) project providing innovative concepts for cellular mobile radio communications [11]. Mobile relays can be considered a serious candidate for 5G wireless systems. A mobile relay can have the same technical characteristics as a fixed relay, but its location changes dynamically. In [12], relaying use cases are studied to demonstrate the improvement achievable with mobile relays. Some examples of this type of mobile relay are relays placed on transportation vehicles such as buses or trains. These relays can be placed to serve users traveling in these vehicles or to serve users in the street. Another type of mobile relay uses simple user terminals as relays. Users can have advantageous locations and channel conditions to relay some cell border users. This type of mobile relay can improve system performance without any additional infrastructure cost. An unpredictable, dynamic topology is offered, depending on source and relay mobility [13].
Resource allocation for cooperative networks has been actively studied in the literature for both downlink and uplink. The principal features to consider are relay selection, subcarrier allocation, and power allocation, which can be treated separately or jointly. The selection of relay partners is an important element of a successful cooperative strategy [4]. The pairing step may be realized as a centralized process, where the BS collects the necessary channel and location information from users and relays and then decides to attach users to appropriate relays. Relay selection may also be established in a distributed manner, where users or relays decide to form cooperative pairs [3]. It can be made before transmission with the objective of achieving some required level of performance [4]. It can also occur during transmission, either as a proactive selection or as an on-demand relay selection when the channel quality of the direct link to the destination decreases. We note that for multihop relaying, a path selection from the source to the BS can be initially defined, involving all potential relays [13, 14].
Depending on the system objective and the constraints to respect, resource allocation for a system with relays is generally formulated as an optimization problem. The resource allocation problems are then solved via mathematical tools or heuristics to find the optimal or suboptimal solutions. In [7, 15], the authors formulate an optimization problem to maximize the total system throughput with one source, one destination, and a set of fixed AF and DF relays, respectively, where the source may use one or multiple relays to transmit data to the destination. In [16], resource allocation for an uplink relaying system with one destination, several sources, and several fixed relays is studied to maximize system throughput using AF and DF schemes with a minimum data rate constraint per user. In [17], joint power allocation, relay selection, and subcarrier assignment with a minimum data rate per user is discussed for a downlink system model with fixed relays. Downlink energy-efficiency maximization under proportional rate constraints is investigated in [18]. Resource allocation for the multiple access relay channel, with successive interference cancellation at the relay, is studied in [19]. In [20], joint resource allocation is considered for an uplink system where relays are fixed. It is solved via an iterative algorithm based on dual decomposition theory. The dual resolution method is adopted, after problem adaptations, to solve the optimization problems in [7, 13, 16]. Dual decomposition [21] consists in dividing the global problem into subproblems to be solved independently. It is a resolution method for convex problems [22] and can be adopted for non-convex problems [23] with some adaptations to the initial problem.
In this work, we propose a new resource allocation algorithm for an uplink multiuser OFDMA relay network in the context of green communications, where we aim to save battery life by minimizing the consumed transmit power. We consider a relaying system model where DF relays are simple users with advantageous positions to relay cell-edge users. The main novelty of this work is that relays forward the relayed data to the BS and multiplex the relayed data with their own data in different RBs. Multiplexing in the frequency domain allows all mobile users to fulfill their QoS constraints, even though some users help others through relaying. In the literature, fixed relays without data of their own to transmit are generally investigated. The major contribution of this work is that relays are mobile users that have their own information to transmit. To the best of our knowledge, this is the first work studying this system model where mobile relays multiplex their own data with the relayed data.
Two different strategies are studied regarding relay selection. In the first one, it is performed before resource allocation, depending on average channel gains. In this case, relayed sources are cell edge users, and a relayed source chooses only one relay. In the second method, relay selection is dynamically performed in each RB, depending on its channel gain. Then, any user may become a relayed source or a relay, and a relayed source may choose different relays on different RBs. The RB and power allocation problem is formulated as an optimization problem that aims to minimize the total consumed power, while achieving a target data rate for all users, whether they are relayed sources, relays, or non-relayed sources. Lagrange dual decomposition is adopted for the theoretical resolution, and an iterative algorithm is proposed to find the optimal solution.
To summarize, the main contributions of this paper are as follows:
A cooperative relaying model is proposed, where mobile users may serve as relays to other users, while still transmitting their own data to the BS.
The corresponding RB and power allocation algorithm, aiming at minimizing the total consumed power, is determined using Lagrange dual decomposition.
Two relay selection algorithms are proposed: a fixed relay selection strategy, where a source uses the same relay on all RB, and an adaptive strategy where relay selection is jointly optimized with resource allocation. In this case, a source may use different relays on different RB, and may also directly transmit to the BS on some other RB.
The complexity and overhead of the two algorithm variants are evaluated, and several simulation results are provided to assess their performance.
This paper is organized as follows. Section 2 describes the adopted system model and the constraints to respect, formulates the associated optimization problem, and provides the proposed resolution algorithm. Section 3 details the resolution steps of the optimization problem. Section 4 presents simulation results. Finally, Section 5 concludes the paper.
System model and problem formulation
In this section, we present the adopted system model and assumptions. Then, we formulate the optimization problem and the associated constraints. We finally enumerate the steps of the proposed resolution algorithm.
Relaying is used in this work to improve uplink system performance from users to the BS. Simple mobile users are used as relays, and transmission can follow two possible schemes: direct transmission, where each user directly transmits to the BS (Fig. 1 a), or a cooperative scheme, where a user R can relay a source S in addition to its own data (Fig. 1 b), thanks to its position approximately halfway between S and the BS. We consider a single cell uplink OFDMA transmission system with one BS with an omnidirectional antenna, K users, and N RBs. The channel is assumed to be a frequency-selective Rayleigh fading channel with slow fading, and the noise is additive white Gaussian (AWGN). The users are uniformly distributed in the cell and experience pathloss and log-normal shadowing.
Example 1: transmission schemes
Our model is a cooperative system where some users can be relays for other users while still transmitting their own data. Source's relayed data and relay's data are then multiplexed by the relay and transmitted to the BS. Users are divided into three groups: Not Relayed Sources (NRS), Relays (R), and Relayed Sources (RS) (Fig. 2). These groups are either defined in a first step, if relay selection is fixed, or determined by the joint relay selection and resource allocation algorithm. These strategies are detailed in Sections 2.3.1 and 2.3.2, respectively.
System model - initialized example, fixed relay selection
Mobile users are assumed half-duplex and thus cannot transmit and receive during the same TTI. Full duplex transmission would require received and transmitted data to use distant RBs, to avoid inter-RB interference; we did not consider that case in this work. The transmission process then takes two phases: in the first TTI, NRS transmit to the BS and RS transmit to their relays, while relays are listening. In the second TTI, RS are silent, and NRS and R transmit to the BS. R transmit at the same time their own data and the data of their RS, thanks to multi-carrier transmission. For relayed data, the DF method is adopted at the relay.
The objective of our model is to outperform the system without cooperation by minimizing the whole system transmit power, subject to a constraint of minimal rate per user. Energy consumption has been chosen as the optimization objective in order to reduce the overall environmental effects.
We consider the average data rate and power per TTI. The user rate for user k and RB j, with DF if relaying is used, can be expressed as follows:
$$\begin{array}{*{20}l} R_{k}^{(j)} &= \log_{2}\left(1 + P_{k}^{(j)} \gamma_{k,k}^{(j)}\right) \text{, if}\ {k}\ \text{is a not relayed source} \end{array} $$
(1a)
$$\begin{array}{*{20}l} R_{k}^{(j)}&= \frac{1}{2} \ \min \left\{\log_{2}\left(1 + P_{k}^{(j)} \gamma_{k,r}^{(j)}\right); \log_{2}\left(1 + P_{r}^{(j)} \gamma_{r,r}^{(j)}\right)\right\},\\ &\quad\qquad\quad\qquad\text{if}\ {k}\ \text{is a relayed source with relay}\ {r} \end{array} $$
(1b)
$$\begin{array}{*{20}l} R_{k}^{(j)} &= \frac{1}{2} \ \log_{2}\left(1 + P_{k}^{(j)} \gamma_{k,k}^{(j)}\right) \text{, if}\ {k}\ \text{is a relay} \end{array} $$
(1c)
where \(P_{k}^{(j)}\) is the transmit power of user k in RB j and \(\gamma _{k,k^{\prime }}^{(j)}\) is the channel coefficient gain expressed as:
$$ \gamma_{k,k^{\prime}}^{(j)}= \frac{g_{k,k^{\prime}}^{(j)}}{L_{k,k^{\prime}} \ S_{k,k^{\prime}} \ N_{rb}} $$
\(g_{k,k^{\prime }}^{(j)}\) is the squared Rayleigh fading gain in RB j between user k and user k ′ if k≠k ′, or between user k and the BS if k=k ′. \(L_{k,k^{\prime }}\) and \(S_{k,k^{\prime }}\) are respectively the pathloss and the shadowing experienced by user k, considering the direct link when k=k ′ and the indirect link via user k ′ when k ′≠k. \(N_{rb}\) is the noise power per RB.
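As an illustration only, the channel coefficient of Eq. (2) and the cooperative rate of Eq. (1b) could be evaluated as in the following sketch (the function names and sample values below are ours and purely illustrative, not part of the model):

```python
import numpy as np

def channel_gain(g, L, S, N_rb):
    """Eq. (2): squared Rayleigh fading divided by pathloss, shadowing and noise power per RB."""
    return g / (L * S * N_rb)

def rate_relayed_source(P_s, P_r, gamma_sr, gamma_rB):
    """Eq. (1b): DF rate of a relayed source s helped by relay r; the 1/2 factor
    accounts for the two TTIs used by the cooperative transmission."""
    return 0.5 * min(np.log2(1 + P_s * gamma_sr), np.log2(1 + P_r * gamma_rB))

# Arbitrary illustrative values
gamma_sr = channel_gain(g=1.2, L=1e9, S=2.0, N_rb=1e-13)   # source -> relay link
gamma_rB = channel_gain(g=0.8, L=5e8, S=1.5, N_rb=1e-13)   # relay -> BS link
print(rate_relayed_source(P_s=0.05, P_r=0.05, gamma_sr=gamma_sr, gamma_rB=gamma_rB))
```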
Problem formulation
Our objective is to minimize the whole system transmit power subject to several constraints. If we consider one NRS, one RS and one R having RBs j, j ′ and j ′′, respectively, Table 1 details the consumed transmit power per user per TTI:
Table 1 Power expended per user per TTI
Let \(\mathcal {S}_{K} =\left \{1,..,K\right \}\) be the set of K users and \(\mathcal {S}_{N} =\left \{1,..,N\right \}\) be the set of N RBs. The general optimization problem is expressed as:
$$\begin{array}{*{20}l} \underset{\mathbf{a, b, P}}{\text{minimize}} & \quad \sum\limits_{k=1}^{K} \sum\limits_{j=1}^{N}\left(1 - \frac{b_{k}}{2}\right) a_{k,k}^{(j)} P_{k}^{(j)} + \frac{1}{2} \sum\limits_{k=1}^{K}\sum_{r\neq k} \sum\limits_{j=1}^{N} { b_{k} a_{k,r}^{(j)} \left(P_{k}^{(j)} + P_{r}^{(j)}\right)} \end{array} $$
(3a)
$$\begin{array}{*{20}l} \text{subject to} & \quad \sum\limits_{k=1}^{K} \sum\limits_{r=1}^{K} a_{k,r}^{(j)} \leq 1 \ \forall j \in \mathcal{S}_{N} \end{array} $$
(3b)
$$\begin{array}{*{20}l} & \quad \sum\limits_{r=1}^{K} \sum\limits_{j=1}^{N} a_{k,r}^{(j)} R_{k}^{(j)} \geq R_{t} \ \forall k \in \mathcal{S}_{K} \end{array} $$
(3c)
$$\begin{array}{*{20}l} & \quad a_{k,r}^{(j)} \in \{ 0, 1\} \ \forall \left(k, r, j\right) \in \mathcal{S}_{K} \times \mathcal{S}_{K} \times \mathcal{S}_{N} \end{array} $$
(3d)
$$\begin{array}{*{20}l} & \quad b_{k} \in \{ 0, 1\} \ \forall k \in \mathcal{S}_{K} \end{array} $$
(3e)
$$\begin{array}{*{20}l} & \quad P_{k}^{(j)} \geq 0 \ \forall (k, j) \in \mathcal{S}_{K} \times \mathcal{S}_{N} \end{array} $$
(3f)
\(\mathbf{b} = [b_{1}, b_{2},...., b_{K}]^{T}\) is the vector of users' cooperation decisions. \(b_{k} = 1\) if k is a R or a RS, and \(b_{k} = 0\) otherwise. Please note that in the joint relay selection strategy, a user is considered a RS if its data is relayed in at least one RB. Similarly, a user is considered a R if it relays some data in at least one RB.
P is the power matrix per user in each RB:
$$ \mathbf{P} = \left(\begin{array}{llll} P_{1}^{(1)} & P_{1}^{(2)}&....&P_{1}^{(N)}\\ P_{2}^{(1)} & P_{2}^{(2)}&....&P_{2}^{(N)}\\.&.&.&.\\.&.&.&.\\ P_{K}^{(1)} & P_{K}^{(2)}&....&P_{K}^{(N)} \end{array} \right) $$
a is the RB allocation matrix per couple of (source, relay) and each RB j:
$$ \mathbf{a} = \left(\begin{array}{llllll} a_{1,1}^{(1)} &..& a_{1,K}^{(1)}& a_{1,1}^{(2)}&..&a_{1,K}^{(N)}\\ a_{2,1}^{(1)} &..& a_{2,K}^{(1)}& a_{2,1}^{(2)}&..&a_{2,K}^{(N)}\\.&.&.&.&.&.\\.&.&.&.&.&.\\ a_{K,1}^{(1)} &..& a_{K,K}^{(1)}& a_{K,1}^{(2)}&..&a_{K,K}^{(N)} \end{array} \right) $$
Constraint (3e) represents the cooperative decision for user k, b k =1 if user k is involved in a cooperative manner (k is a RS or a R), b k =0 otherwise.
Constraints (3b) and (3d) represent the RB allocation constraints, \(a_{k,k}^{(j)} = 1\) means that RB j is assigned to the transmission of user k towards the BS. \(a_{k,r}^{(j)} = 1\) with k≠r means that RB j is assigned to the transmission of user k towards relay r in the first TTI and transmission of relayed data from r to BS in the second TTI. If there exists at least one subcarrier j such that \(a_{k,r}^{(j)} = 1\), then b k =1 and b r =1.
Constraint (3c) indicates the required target data rate per user R t .
Constraint (3f) ensures that all powers are positive.
The first term of the objective (3a) represents both the transmit power of a NRS over two TTIs and the transmit power of a relay for its own data during only one TTI (expressed by the \(\frac {1}{2}\) factor). The second term represents the transmit power consumed to transmit relayed data.
The different natures of the constraints make the problem difficult to solve. Having both continuous and boolean variables makes the problem a combinatorial optimization problem with excessive computational complexity to find the global optimal solution. To put our problem in a resolvable form, we relax the boolean variable \(a_{k,r}^{j}\) to be continuous in [0,1], based on the time sharing process. A RB is then shared by several users that can use the same RB j, but not at the same moment. It is proved that relaxing the optimization problem leads to an upper bound solution of the primal optimization problem [24]. It is also proved in [23, 25] that the duality gap of an optimization problem is considered insignificant if the number of subcarriers is high1.
To solve our optimization problem, we propose a suboptimal heuristic based on the dual method [23], which consists in iteratively finding the optimal solution of the two following subproblems:
The optimal power allocation subproblem
The optimal resource block allocation subproblem (and relay selection if relay selection is not fixed)
Algorithm 1 presents the proposed iterative procedure; each step is detailed in Sections 3.1 and 3.2.
Relay selection strategy
Two different relay selection strategies are proposed: a sub-optimal heuristic, and a relay selection that is jointly performed with resource allocation.
Fixed relay selection
The fixed relay selection strategy aims at decreasing the computational complexity of the resource allocation algorithm, as well as the overhead due to information exchange between RS and R. With this strategy, potential RS are paired with potential R in a first step, before resource allocation. In this case, the value of \(b_{k}\) is fixed in the resource allocation algorithm. In order to simplify the relay search, with \(d_{k}\) denoting the distance of user k to the BS, the following rules are applied:
Users with distance \(d_{k} < \frac {R}{3}\) will not have any advantage in being relayed because of their low distance to the BS. Furthermore, they are far from cell border users, so they are not seen as potential relays. Users with \(d_{k} < \frac {R}{3}\) will thus be non relayed sources and will not act as potential relays.
Users with distance \(d_{k} > \frac {2R}{3}\) are in the cell border and will take advantage of being relayed if a user at mid distance from them and the BS exists. Users with \(d_{k} > \frac {2R}{3}\) are thus potential relayed sources.
Users with distance \(\frac {R}{3} < d_{k} < \frac {2R}{3}\) can act as potential relays for users with \(d_{k} > \frac {2R}{3}\). Because of their relatively low distance from the BS, these users will not be relayed.
A mobile user with \(d_{k} > \frac {2R}{3}\) can have only one associated relay in order to lower signaling.
First, each user at the border finds its potential best relay and compares the data rate that it can achieve with the indirect link using this relay to the data rate of the direct link to the BS. It then decides between the direct and indirect links. If the user chooses the direct link, it becomes a NRS even if it is at the cell border. A potential relay not used by any RS also becomes a NRS. At the end of this first step, we have initialized the sets of NRS, R, and RS depending on the users' cooperation decisions (see example 1 in Fig. 2). We assume that a relay can support one or more RS, but a RS can have only one relay.
The relaying decision consists in comparing the direct link to the BS with the best available indirect link. For this, a potential RS s first chooses its best relay r ∗ as follows:
$$ r^{*} = \underset{r}{\arg\max}\,{\min}\left(\tilde{\gamma}_{s,r},\tilde{\gamma}_{r,r}\right) $$
with \(\tilde {\gamma }_{s,r}\) the average channel coefficient gain between s and r defined as follows:
$$ \tilde{\gamma}_{k,k^{\prime}}= \frac{1}{L_{k,k^{\prime}} \ S_{k,k^{\prime}} \ N_{rb}} $$
Once r ∗ is found, s compares it with its direct link to the BS. If \(\tilde {\gamma }_{s,s}<{\min }\left (\tilde {\gamma }_{s,r^{*}},\tilde {\gamma }_{r^{*},r^{*}}\right)\), then relaying will be advantageous for s, relaying scheme via r ∗ is then adopted. Else, relaying is considered not advantageous and s will be a NRS.
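The following sketch illustrates this fixed relay selection rule (Eqs. (4)-(5)); the function and variable names are ours, and the average gains are assumed to be precomputed:

```python
import numpy as np

def select_relay(s, potential_relays, gamma_avg):
    """Fixed relay selection for a potential relayed source s.
    gamma_avg[k, kp] is the average channel gain between users k and kp (Eq. (5)),
    and gamma_avg[k, k] the average gain of user k's direct link to the BS.
    Returns the chosen relay index, or None if the direct link is better."""
    best_r, best_val = None, -np.inf
    for r in potential_relays:
        val = min(gamma_avg[s, r], gamma_avg[r, r])   # bottleneck of the two hops, Eq. (4)
        if val > best_val:
            best_r, best_val = r, val
    # Relaying is adopted only if the best indirect link beats the direct one
    if best_r is not None and gamma_avg[s, s] < best_val:
        return best_r      # s becomes a RS, best_r a R
    return None            # s stays a NRS
```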
Joint relay selection, RB, and power allocation
The second proposed relay selection strategy includes relay selection in the resource allocation algorithm. The optimization variables are then a, P, and b. Users can transmit directly to the BS, or via relay cooperation. A relay can support one or more RS, and a RS can be relayed by one or more relays, but in different RBs. In a specific RB, only one relay is assigned for cooperation.
This implies that in Algorithm 1, the users' nature (R, RS, or NRS) is updated after RB allocation has been optimized, in step 3. This provides higher flexibility, since a RS is not compelled to transmit all its data through one relay and can choose several relays. Besides, frequency diversity is exploited in the relay selection and in the RB allocation, which cannot be done with fixed relay selection. Consequently, a higher power consumption decrease is expected. It will, however, be achieved at the expense of additional computational complexity and signalling overhead. These additional costs are detailed in Section 3.4.
The dual method is adopted to theoretically solve the optimization problem (3). Solving the hard primal problem in the dual domain begins by decomposing it into subproblems that are easier to solve. The master problem distributes to each subproblem the resources it can use and the price to pay. In turn, each subproblem returns to the master problem its solution along with the amount of resources it uses [21].
The Lagrangian function of problem (3) is written as:
$$ \begin{aligned} {}L\left(\mathbf{a, b, P,} \boldsymbol{\lambda}\right) &= \sum\limits_{k=1}^{K} \sum\limits_{j=1}^{N}\left(1 - \frac{b_{k}}{2}\right) a_{k,k}^{(j)} P_{k}^{(j)}\\& \quad+ \frac{1}{2} \sum\limits_{k=1}^{K}\sum_{r\neq k} \sum\limits_{j=1}^{N} b_{k} a_{k,r}^{(j)} \left(P_{k}^{(j)} + P_{r}^{(j)}\right) \\ &\quad- \sum\limits_{k=1}^{K}\sum\limits_{r=1}^{K}\sum\limits_{j=1}^{N} \ \lambda_{k} \ a_{k,r}^{(j)} \ R_{k}^{(j)} + \sum\limits_{k=1}^{K} \lambda_{k} R_{t} \end{aligned} $$
where λ=[λ 1,λ 2,....,λ K ]T is the vector of dual variables associated to the required data rate constraint.
The Lagrangian dual function is then expressed as:
$$ g\left(\boldsymbol{\lambda}\right)= \left\{ \begin{array}{l} {\underset{\mathbf{a, b, P}}{\min}} \ L\left(\mathbf{a, b, P, } \boldsymbol{\lambda}\right)\\ \text{subject to}\\ \sum_{k=1}^{K} \sum_{r=1}^{K} a_{k,r}^{(j)} \leq 1 \ \forall j \in \mathcal{S}_{N}\\ a_{k,r}^{(j)} \in [\!0..1] \ \forall k, r, j \in \mathcal{S}_{K} \times \mathcal{S}_{K} \times \mathcal{S}_{N}\\ b_{k} \in [\!0..1] \ \forall k \in \mathcal{S}_{K} \\ P_{k}^{(j)} \geq 0 \ \forall k, j \in \mathcal{S}_{K} \times \mathcal{S}_{N} \end{array} \right. $$
The problem can be solved by solving its dual problem as follows:
$$\begin{array}{*{20}l} {\underset{\boldsymbol{\lambda}}{\text{maximize }}}& \ g(\boldsymbol{\lambda})\\ \text{subject to } & \ \lambda_{k} \geq 0 \ \forall k \in \mathcal{S}_{K} \end{array} $$
The dual problem is solved with two levels of optimization. At the lower level, the Lagrangian (8) is decomposed into N subproblems with Lagrangian L (j)(a, b, P) at each RB that can be solved independently. They are solved with a fixed λ. Then, the obtained subproblems solutions are used to update λ. This step is detailed in Section 3.3.
The subproblem for each RB j can be expressed as:
$$ \begin{aligned} {}{\underset{\mathbf{a, b, P}}{\text{minimize}}} &= \sum\limits_{k=1}^{K} \left(1 - \frac{b_{k}}{2}\right) a_{k,k}^{(j)} P_{k}^{(j)}\\ &\quad + \frac{1}{2} \sum\limits_{k=1}^{K}\sum_{r\neq k} b_{k} a_{k,r}^{(j)} \left(P_{k}^{(j)} + P_{r}^{(j)}\right)\\ &\quad - \sum\limits_{k=1}^{K}\sum\limits_{r=1}^{K} \lambda_{k} \ a_{k,r}^{(j)} \ R_{k}^{(j)} \end{aligned} $$
Subject to:
$$\begin{array}{*{20}l} &\sum\limits_{k=1}^{K} \sum\limits_{r=1}^{K} a_{k,r}^{(j)} \leq 1 \ \forall j \in \mathcal{S}_{N} \\ &a_{k,r}^{(j)} \in\, [\!0..1] \ \forall k, r, j \in \mathcal{S}_{K} \times \mathcal{S}_{K} \times \mathcal{S}_{N} \end{array} $$
$$\begin{array}{*{20}l} &b_{k} \in\, [\!0..1] \ \forall k \in \mathcal{S}_{K} \\ & P_{k}^{(j)} \geq 0 \ \forall k, j \in \mathcal{S}_{K} \times \mathcal{S}_{N} \end{array} $$
To solve problem (11), a second decomposition is necessary to solve independently the two subproblems: optimal power allocation and optimal RB allocation (and relay selection in the second relay selection strategy).
Optimal power allocation for a given resource block allocation and relay selection
For a given RB allocation, we aim in this section to find the optimal power allocation. Assuming \(b_{k}\) and \(a_{k,r}^{(j)}\) fixed for all k, r, and j, only the positive power constraint remains (Eq. (3f)), and the optimization problem can be expressed as:
$$\begin{array}{*{20}l} {\underset{\mathbf{P}}{\text{minimize}}} & \sum\limits_{k=1}^{K}\left(1 - \frac{b_{k}}{2}\right) a_{k,k}^{(j)} P_{k}^{(j)}\\ & + \frac{1}{2} \sum\limits_{k=1}^{K}\sum_{r\neq k} b_{k} a_{k,r}^{(j)} \left(P_{k}^{(j)} + P_{r}^{(j)}\right)\\ & - \sum\limits_{k=1}^{K}\sum\limits_{r=1}^{K} \lambda_{k} \ a_{k,r}^{(j)} \ R_{k}^{(j)} \end{array} $$
(14a)
$$\begin{array}{*{20}l} \text{subject to} & \ P_{k}^{(j)} \geq 0 \ \forall k, j \in \mathcal{S}_{K} \times \mathcal{S}_{N} \end{array} $$
Since only P is a variable, this optimization problem is convex, and its Lagrangian can be written as:
$$ \begin{aligned} L^{(j)}_{bis}\left(\mathbf{P, } \boldsymbol{\lambda}\right)& = \sum\limits_{k=1}^{K} \left(1 - \frac{b_{k}}{2}\right) a_{k,k}^{(j)} P_{k}^{(j)}\\ &\quad + \frac{1}{2} \sum\limits_{k=1}^{K}\sum_{r\neq k} b_{k} a_{k,r}^{(j)} \left(P_{k}^{(j)} + P_{r}^{(j)}\right)\\ &\quad - \sum\limits_{k=1}^{K}\sum\limits_{r=1}^{K} \lambda_{k} \ a_{k,r}^{(j)} \ R_{k}^{(j)} - \sum\limits_{k=1}^{K} \nu_{k}^{(j)} \ P_{k}^{(j)} \end{aligned} $$
where \(\nu _{k}^{(j)} \) is the Lagrangian variable associated with the power constraint. Since the optimization problem (14) is convex, the Karush-Kuhn-Tucker (KKT) conditions are used to find its global optimum:
$$\begin{array}{*{20}l} \Delta L^{(j)}_{bis} &= 0 \end{array} $$
$$\begin{array}{*{20}l} \nu_{k}^{(j)} \ P_{k}^{(j)} &= 0 \ \forall k \in \mathcal{S}_{K} \end{array} $$
$$\begin{array}{*{20}l} \nu_{k}^{(j)} &\geq 0 \ \forall k \in \mathcal{S}_{K} \end{array} $$
(16c)
Considering the different types of users, we evaluate the optimal transmit power for each user in each RB. This is done by differentiating \(L^{(j)}_{bis}\) with respect to P, substituting Eq. (1) into Eq. (15) and applying the KKT conditions. Depending on the user's nature, the theoretical optimal power expressions are calculated as follows:
k is a not relayed source or a relay transmitting its own data in RB j:
$$ \mathit{P_{k}^{(j)} = \left[\frac{\lambda_{k}}{\ln(2)}- \frac{1}{\gamma_{k,k}^{(j)}} \right]^{+}} $$
with \([x]^{+} = \max \{0,x\}\).
k is a relayed source with relay r
Let us first recall the throughput expression (1b):
$$ \mathit{R_{k}^{(j)}= \frac{1}{2} {\min} \left\{\log_{2}\left(1 + P_{k}^{(j)} \gamma_{k,r}^{(j)}\right); \log_{2}\left(1 + P_{r}^{(j)} \gamma_{r,r}^{(j)}\right)\right\}} $$
In cooperative mode, the total transmit power is minimized when the source and the relay forward the same amount of data. Consequently, the rate is the minimum of the rates on the two links (see Eq. (1b)). To achieve this, we assume that:
$$ \mathit{P_{k}^{(j)} \ \gamma_{k,r}^{(j)} = P_{r}^{(j)} \ \gamma_{r,r}^{(j)}} $$
Solving problem (15) leads to the following expression for the power of the RS k in RB j:
$$ \mathit{P_{k}^{(j)} = \left[ \frac{\lambda_{k} \ \gamma_{r,r}^{(j)}}{\ln(2)\left(\gamma_{k,r}^{(j)}+\gamma_{r,r}^{(j)}\right)} - \frac{1}{\gamma_{k,r}^{(j)}} \right]^{+}} $$
From Eq. (18), we obtain that the power of the relay r for the relayed data of RS k is:
$$ \mathit{P_{r}^{(j)} = \left[\frac{\lambda_{k} \ \gamma_{k,r}^{(j)}}{\ln(2)\left(\gamma_{k,r}^{(j)}+\gamma_{r,r}^{(j)}\right)} - \frac{1}{\gamma_{r,r}^{(j)}} \right]^{+}} $$
Corresponding to the user's nature, the optimal power expressions are calculated. We can notice that a relay has different power expressions for its own data (Eq. (17)) and for the data it relays (Eq. (20)). If relay selection is fixed, the users' nature is known. But in the joint relay selection and resource allocation strategy, for each user and each RB, K power values must be computed (one for each potential source-relay pair, as well as the power if k is a NRS), although eventually only one of them will be chosen.
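A minimal sketch of these closed-form expressions (17), (19), and (20) follows; the function and variable names are ours, with gamma_kB, gamma_kr, and gamma_rB denoting the user-to-BS, source-to-relay, and relay-to-BS gains on the considered RB:

```python
import numpy as np

LN2 = np.log(2)

def power_direct(lam_k, gamma_kB):
    """Eq. (17): power of a NRS, or of a relay transmitting its own data, in RB j."""
    return max(0.0, lam_k / LN2 - 1.0 / gamma_kB)

def power_relayed_source(lam_k, gamma_kr, gamma_rB):
    """Eq. (19): power of relayed source k towards its relay r in RB j."""
    return max(0.0, lam_k * gamma_rB / (LN2 * (gamma_kr + gamma_rB)) - 1.0 / gamma_kr)

def power_relay_for_source(lam_k, gamma_kr, gamma_rB):
    """Eq. (20): power of relay r forwarding the data of relayed source k in RB j."""
    return max(0.0, lam_k * gamma_kr / (LN2 * (gamma_kr + gamma_rB)) - 1.0 / gamma_rB)
```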
Optimal resource block allocation
The second subproblem to solve is the optimal RB allocation using the optimal power allocation studied above. The Lagrangian per RB j can be written as:
$$ \begin{aligned} L^{(j)}\left(\mathbf{a,} \boldsymbol{\lambda}\right)& = \sum\limits_{k=1}^{K}\left(1 - \frac{b_{k}}{2}\right) a_{k,k}^{(j)} P_{k}^{(j)}\\ &\quad + \frac{1}{2} \sum\limits_{k=1}^{K}\sum_{r\neq k} b_{k} a_{k,r}^{(j)} \left(P_{k}^{(j)} + P_{r}^{(j)}\right)\\ &\quad- \sum\limits_{k=1}^{K}\sum\limits_{r=1}^{K} \lambda_{k} \ a_{k,r}^{(j)} \ R_{k}^{(j)} + \sum\limits_{k=1}^{K} \lambda_{k} \ R_{t} \end{aligned} $$
The objective is to minimize L (j), subject to constraints (3b), (12), and (13).
The Lagrangian dual function is written as follows:
$$ g(\boldsymbol{\lambda}) = {\underset{\mathbf{a}}{\min}}\left(L^{(j)}\right) = {\underset{\mathbf{a}}{\max}} \left(-L^{(j)}\right) $$
g(λ) can be written as:
$$ g(\boldsymbol{\lambda}) = {\underset{\mathbf{a}}{\text{maximize}}}\sum\limits_{k=1}^{K} \sum\limits_{r=1}^{K} a_{k,r}^{(j)} \ G_{k,r}^{(j)} -\sum\limits_{k=1}^{K} \ \lambda_{k} \ R_{t} $$
where \(\mathbf {G} = [G_{k,r}^{(j)}]\) is a K×K×N matrix representing the potential gain of the couple (k,r) if it earns RB j. The gain function is expressed according to the users' nature as:
if k=r and k is a not relayed source:
$$ \mathit{G_{k,r}^{(j)} = \lambda_{k} \ \log_{2}\left(1 + P_{k}^{(j)} \gamma_{k,k}^{(j)}\right) - P_{k}^{(j)}} $$
if k=r and k is a relay transmitting its own data in RB j:
$$ \mathit{G_{k,r}^{(j)} = \frac {\lambda_{k}}{2} \ \log_{2} \left(1 + P_{k}^{(j)} \gamma_{k,k}^{(j)}\right) - \frac{1}{2} P_{k}^{(j)} } $$
if k is a relayed source and k≠r:
$$ G_{k,r}^{(j)} =\frac{\lambda_{k}}{2} \log_{2}\left(1 + P_{k,r}^{(j)} \gamma_{k,r}^{(j)}\right) - \frac{1}{2} \left(P_{k,r}^{(j)} + P_{r,r}^{(j)}\right) $$
The gains are calculated for each RB j; then, j is allocated to the couple (k,r) that maximizes the gain on it:
$$ a_{k,r}^{(j)} = \left\{ \begin{array}{ll} 1 & \text{for } (k, r)^{*} = \arg\underset{(k, r)}{\max} \ G_{k,r}^{(j)}\\ 0 & \text{otherwise} \end{array}\right. $$
In the joint relay selection, RB and power allocation strategy, if k=r, then user k is a NRS, and b k is set to 0. Otherwise, if k≠r, then k and r are cooperating, which implies that b k =1 and b r =1. If there exists at least one j such that \(a_{k,r}^{(j)} =1\), then user k becomes a RS, and user r a R. Please note that a relay cannot itself be relayed by another mobile user.
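A sketch of this RB allocation step (Eqs. (24)-(27)) is given below; the data structures are our own notation: P_direct[k] is the power of Eq. (17), P_src[k, r] and P_rly[k, r] those of Eqs. (19)-(20), gamma[k, kp] the channel gain on the considered RB (kp = k meaning the link to the BS), and relays_of(k) lists the relays that k may use (empty for a user that only transmits directly):

```python
import numpy as np

def allocate_rb(lam, P_direct, P_src, P_rly, gamma, users, is_relay, relays_of):
    """Return the couple (k, r) that earns the considered RB (Eq. (27))."""
    best, best_gain = None, -np.inf
    for k in users:
        # Direct transmission (k = r): Eq. (24) for a NRS, Eq. (25) for a relay.
        rate = np.log2(1 + P_direct[k] * gamma[k, k])
        half = 0.5 if is_relay[k] else 1.0
        gain = half * (lam[k] * rate - P_direct[k])
        if gain > best_gain:
            best, best_gain = (k, k), gain
        # Cooperative transmission (k != r): Eq. (26).
        for r in relays_of(k):
            gain = 0.5 * (lam[k] * np.log2(1 + P_src[k, r] * gamma[k, r])
                          - (P_src[k, r] + P_rly[k, r]))
            if gain > best_gain:
                best, best_gain = (k, r), gain
    return best
```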
Lagrangian variable update
The last step in our algorithm is to update dual variables and to test the convergence condition for solving problem (10). Using results of current iteration t, λ for iteration t+1 is updated for each user as follows:
$$ {}\mathit{\lambda_{k}(t + 1)\! =\! \left[\!\lambda_{k}(t) + \eta_{k}(t) \left(R_{t} - \sum\limits_{r=1}^{K} \sum\limits_{j=1}^{N} a_{k,r}^{(j)}(t) R_{k}^{(j)}(t) \right)\! \right]^{+}} $$
where \(\eta_{k}(t)\) is a diminishing step size; the dual variable update is performed according to the diminishing step approach [26] for each user k. Equation (28) shows that if user k has a data rate higher than R t , it has to reduce its λ k and thus the power consumed to achieve the required data rate. On the other hand, if user k has a data rate lower than R t , the dual variable update allows it to increase its λ k and thus its power values; it can then reach R t by obtaining more RBs or by raising its consumed power.
The algorithm is considered to converge when the variation of λ k is negligible for all k as follows:
$$ \mathit{\left|\frac{\lambda_{k}(t + 1) - \lambda_{k}(t)}{\lambda_{k}(t + 1)}\right| < \epsilon \ \forall k } $$
where ε is set close to zero.
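A sketch of this subgradient update and convergence test (Eqs. (28)-(29)) is given below; the array names are ours, and rates[k] stands for the total data rate currently achieved by user k over its allocated RBs:

```python
import numpy as np

def update_duals(lam, eta, rates, R_t, eps=1e-3):
    """One subgradient step on the dual variables (Eq. (28)) and the convergence test (Eq. (29))."""
    lam_new = np.maximum(0.0, lam + eta * (R_t - rates))
    denom = np.where(lam_new != 0.0, lam_new, 1.0)      # guard against division by zero
    converged = bool(np.all(np.abs((lam_new - lam) / denom) < eps))
    return lam_new, converged
```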
Complexity and overhead comparison of the relay strategies
The complexity of Algorithm 1 depends on the number of iterations until convergence. In each iteration, step 2 requires computing N×K power values \(P_{k}^{(j)}\), one per user and RB, if the relay selection is fixed. In the joint relay selection and resource allocation strategy, \(N^{2}\times K\) power values must be computed, as explained in Section 3.1.
Similarly, step 3 of the algorithm also requires N×K computations of \(G_{k,r}^{(j)}\) if the pairs (k,r) are already known, and \(N^{2}\times K\) if they are not. Finally, the determination of the \(b_{k}\) values at the end of step 3 in the joint relay selection and resource allocation strategy does not incur any additional complexity. We can conclude that the additional complexity of the second relay selection strategy may become an issue only if the number of users is high.
The second relay selection strategy also increases the overhead, since the channel gains between any pair of users must be known by the BS. In the fixed relay strategy, only the channel gains between the fixed source-relay pairs must be known. Once the BS has determined the values of P, a, and b, one signalling message must be sent to each user, indicating which RBs it must use for its own data transmission and, if the user is a relay, which RBs it should listen to in order to perform decode and forward. Relays do not need to know which sources they are relaying, and sources transmit omnidirectionally, so they do not need to know their relays.
Performance evaluations
Simulations are presented in this section to analyze the performance of the proposed approach. We consider a single circular cell with radius R=1 km, K users, and N RBs, which we vary across simulations. We assume a total bandwidth2 B=20 MHz equally divided between the RBs. Rayleigh channels with slow fading are considered, and the power density of the AWGN noise is N 0=−174 dBm/Hz. Users are uniformly distributed in the cell and suffer from log-normal shadowing with a standard deviation equal to 6 dB and from pathloss according to the LTE model with frequency F=2.6 GHz: \(L_{dB}(d_{k,k^{\prime }}) = 128.1 + 37.6 \log _{10}(d_{k,k^{\prime }})\), where \(d_{k,k^{\prime }}\) is the distance in km from user k to user k ′. If k=k ′, \(d_{k,k^{\prime }}\) is the distance of user k to the BS.
The step size for λ k is set to \(\eta _{k} = \frac {\lambda _{k}}{\sqrt {t}}\) for t<2000 where t is the iteration index. When t exceeds 2000, η k becomes invariant. ε from Eq. (29) is set to 0.001. For classical mobile cellular networks, the transmit power of a mobile user is generally of the order of 21 dBm. Considering such emitted power and for cell radius of 1 Km, expected data rates for cell border users are lower than 2 bits/s/Hz. Based on this observation, R t is varied in the simulations in the range [0.5..1.5] bits/s/Hz. Results are averaged over 1000 simulations to get realistic results.
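For reference, the pathloss model and the per-RB noise power used above can be computed as follows (a minimal sketch; the RB count of 100 in the example is only illustrative):

```python
import numpy as np

def pathloss_db(d_km):
    """LTE pathloss model at F = 2.6 GHz, with the distance d in km."""
    return 128.1 + 37.6 * np.log10(d_km)

def noise_power_per_rb(bandwidth_hz=20e6, n_rb=100, n0_dbm_hz=-174.0):
    """Noise power per RB (in Watts) for a total bandwidth equally divided between the RBs."""
    rb_bw = bandwidth_hz / n_rb
    return 10 ** ((n0_dbm_hz - 30) / 10) * rb_bw

print(pathloss_db(1.0))        # 128.1 dB at the cell edge (R = 1 km)
print(noise_power_per_rb())    # about 8e-16 W per RB
```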
In the following, the proposed solution is first compared to the optimal exhaustive solution for a special case with a low number of users and RBs. Then, the convergence of the proposed solution is studied, and the achieved performances are presented.
Performance results with fixed relay selection strategy
Optimality Evaluation
To find the optimal solution, an exhaustive search is necessary for both RB allocation and power allocation; the best solution minimizing the system transmit power is the optimal solution. The complexity of this search is high and grows with the number of users and RBs. For a given number of users, all possible combinations of RB allocations have to be considered. Then, for each RB allocation, the optimal power allocation for all users is established, ensuring the required target data rate. The optimal solution offering the lowest total system power is finally identified. All possible source-relay pairs must be considered, which further increases the complexity of the optimal solution search.
With 2 users, where user 1 is a relay and user 2 is a relayed source, the number of possible RB allocation combinations is
$$ \mathit{M = \sum\limits_{i=1}^{N-1} {C_{N}^{i}} = 2^{N} -2 } $$
where N is the number of RBs. For N=8 RBs, we have M=254; for N=16 RBs, M=65 534; and for N=32 RBs, M exceeds \(10^{9}\) possible RB allocation combinations.
If we consider three users, where one is a not relayed source, one is a relayed source, and one is a relay, the number of possible RB allocation combinations is
$$ \mathit{L =\sum\limits_{i=1}^{N-2} {C_{N}^{i}} \sum\limits_{j=1}^{N-1-i} C_{N-i}^{j} = 3^{N} - 2^{N} - 2^{N+1} + 3} $$
For N=8 RBs, L=5 796; for N=16 RBs, L=42 850 116; and for N=32 RBs, L exceeds \(10^{15}\) possible RB allocation combinations.
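These counts are easy to verify numerically (a quick sketch; each user must receive at least one RB):

```python
from math import comb

def count_two_users(N):
    """Eq. (30): partitions of N RBs between 2 users, each getting at least one RB."""
    return sum(comb(N, i) for i in range(1, N))          # equals 2**N - 2

def count_three_users(N):
    """Eq. (31): partitions of N RBs among 3 users, each getting at least one RB."""
    return sum(comb(N, i) * comb(N - i, j)
               for i in range(1, N - 1)
               for j in range(1, N - i))                 # equals 3**N - 3 * 2**N + 3

print(count_two_users(8), 2**8 - 2)                      # 254 254
print(count_three_users(8), 3**8 - 3 * 2**8 + 3)         # 5796 5796
```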
The optimal power allocation, via the waterfilling method, is then performed for each possible RB allocation, respecting the required QoS.
Finding the optimal solution requires a high computational cost and a long computation time that cannot be afforded in realistic cellular networks. Suboptimal solutions are therefore used to approach the optimal solution. Table 2 compares the system transmit power obtained with our proposed solution and with the optimal solution for two users and eight RBs. Both system models, with and without relaying, are considered. We can remark that the proposed solution approaches the optimal solution. The difference between the proposed solution applied to the system without relaying and the corresponding optimal solution is only 1 %. For the model with a relay, the difference is 17 %. The proposed solution can reduce the system transmit power by 39 % compared to the optimal solution without relaying. Applying the proposed solution to a system model with a higher number of users, especially at the cell edge, can be very interesting in order to decrease system energy consumption3.
Table 2 System transmit power (dBm)
Convergence analysis
In this section, the convergence rate of the proposed algorithm is studied. A simulation is considered convergent if it respects the Lagrangian variable variation constraint (Eq. (29)) and the required data rate per user constraint, i.e., R k =R t ±0.1 R t ∀k. The convergence rate is studied for 18 users and different RB numbers and R t values. The minimal convergence rate is 30 % for 60 RBs and R t =1.5 bits/s/Hz, and it can reach 65 % for 192 RBs for the same R t . These convergence rates can be explained by the strict convergence constraints. If we relaxed these constraints, by expanding the admissible variation range of R t for example, the convergence rates would improve. We can also observe that the convergence rate increases when the number of RBs grows, thanks to the increase in frequency diversity. Indeed, users are more likely to find RBs with good channel gains, and thus to achieve their required data rate.
Achieved performances
The system transmit power for different numbers of users and RBs and various R t values is presented in this section. Figures 3 and 4 show the system transmit power for, respectively, 18 and 30 users. The gain offered by the proposed algorithm reaches 21 % for 18 users, 192 RBs, and R t =1.5 bits/s/Hz as a global gain, and achieves up to 28 % with 576 RBs and 30 users. Higher gains are obtained when the number of users increases, since it is then more likely that a mobile user will find another mobile user with an adequate location to efficiently serve as a relay. We can also observe that the system transmit power decreases when the number of RBs grows, thanks to frequency diversity. When a high number of RBs is available, the RB allocation step can be performed more efficiently and transmit power is saved.
System transmit power for 18 users, first relay selection strategy
Simulation results show that the proposed algorithm offers better performance compared to the model without relaying. Transmit power can be saved, especially at the cell edge. This result can be exploited to reduce the interference level in a multicell system model.
Performance results with joint relay selection, RB, and power allocation
When joint relay selection, RB, and power allocation is used, the system transmit power is further decreased, as shown by Fig. 5 for 18 users. The power gain is up to 47 % when R t =1 bit/s/Hz, 50 % when R t =1.5 bits/s/Hz, and 59 % when R t =0.5 bits/s/Hz.
System transmit power for 18 users, second relay selection strategy
The average percentages of RS, R, and NRS, as well as the average distance between each user and the BS depending on its nature, are gathered in Table 3. They are averaged over all RB values (from 60 to 192), with R t =1 bit/s/Hz. Relayed sources are mainly located at the border of the cell, whereas relays are in the second ring of the cell. This is consistent with the areas chosen in the fixed relay selection strategy, and thus justifies this choice. The main difference with the fixed relay selection strategy is that NRS can be located anywhere in the cell when relay selection is optimized. Besides, since source-relay pairs may be located anywhere in the cell with this relay selection strategy, the ratio of R and RS is high, and few users remain NRS. The average ratio per user type also shows that relays help several RS in their transmission.
Table 3 Average ratio per user type and distance to the BS, when R t =1 bit/s/Hz
Finally, Figs. 6 and 7 represent the average transmit power per user depending on its nature, as well as the average transmit power among all users when R t =1 bit/s/Hz and R t =0.5 bits/s/Hz, respectively. Relays consume more power than RS, because they have to transmit their data as well as the relayed data and are inactive half of the time. Nevertheless, the average power per relay remains lower than the average power per user if no relaying is allowed. Consequently, using relays remains beneficial, when considering all users. Besides, RS have low transmit power, even though they are located at the border of the cell. The sum transmit power decrease is only due to the RS power decrease, and the R power increase remains limited enough to achieve a global gain. We can notice that NRS have high transmit power values when R t =1 bit/s/Hz. These users may be located anywhere in the cell (as shown in Table 3), and some of them may be located at the border of cell, with no potential helpful relay. The average transmit power is high because of some NRS users with very high power values. This tendency is less important when the target data rate is low, as shown on Fig. 7.
Average transmit power per user for 18 users, with R t =1 bit/s/Hz
Average transmit power per user for 18 users, with R t =0.5 bit/s/Hz
Besides, since mobile users are moving in the cell, they are relays at some location, but will become relayed sources whenever they move towards the cell edge. The proposed cooperative scenario is based on the assumption that some mobile users accept to relay some other mobile users at some point, knowing that they will be helped through relaying by other mobile users later. The local power consumption increase when a mobile user acts as a relay is compensated for by an important power consumption decrease when the same mobile user becomes a relayed source.
In this paper, we have studied resource allocation for relayed uplink transmission in an OFDMA system. Compared to previously published results, our system model considers mobile relays that multiplex their own data with the relayed data, so that the relays as well as the relayed sources all achieve the same target data rate. Two strategies have been proposed for relay selection: it is either performed as an initialization phase by the BS, based on average channel gains, or it is jointly optimized with RB and power allocation. An iterative algorithm solving the optimization problem that aims at minimizing the total system transmit power under the target data rate constraint has been determined. The primal optimization problem has been decomposed into subproblems where resource allocation and power allocation are solved in an iterative manner. Dual decomposition and subgradient methods have been used for this purpose.
Simulation results show that the proposed algorithm is very close to the optimal solutions found by exhaustive search for a low number of users and RBs. When the number of users and RBs grows, the proposed algorithm provides valuable performance enhancements compared to solutions without relays when using the fixed relay selection strategy. With the joint relay selection strategy, power consumption is even lower. This strategy is more flexible and thus better benefits from frequency and multi-user diversity. However, its complexity is higher, and it incurs additional overhead. Comparing the average power per user type (relay, relayed source, and non-relayed source) and their locations in the cell allows us to conclude that the sub-optimal fixed relay strategy achieves a good compromise between transmit power decrease and complexity.
1 We must note that in the final step of problem resolution, \(a_{k,r}^{(j)}\) are converted to boolean variables (Eq. (27))
2 Please note that we do not use RB number compliant with the LTE standard and that the total bandwidth is fixed and does not vary for all simulations.
3 We note that gain values consider power values in mW and not in dBm.
CEDRIC/LAETITIA Laboratory, Conservatoire National des Arts et Métiers (CNAM), 292 Rue Saint-Martin, 75003 Paris, France: Salma Hamda, Mylene Pischella & Daniel Roviras
INNOVCOM Laboratory, St Raoued 2083, Ariana, Tunisia: Ridha Bouallegue
Correspondence to Salma Hamda.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Hamda, S., Pischella, M., Roviras, D. et al. Uplink resource allocation in cooperative OFDMA with multiplexing mobile relays. J Wireless Com Network 2016, 215 (2016). https://doi.org/10.1186/s13638-016-0704-3
Keywords: Multiplexing mobile relays, OFDMA, Required QoS, Optimization problem
An outbreak vector-host epidemic model with spatial structure: the 2015–2016 Zika outbreak in Rio De Janeiro
W. E. Fitzgibbon1,
J. J. Morgan1 &
G. F. Webb2
Theoretical Biology and Medical Modelling volume 14, Article number: 7 (2017) Cite this article
A deterministic model is developed for the spatial spread of an epidemic disease in a geographical setting. The disease is borne by vectors to susceptible hosts through criss-cross dynamics. The model is focused on an outbreak that arises from a small number of infected hosts imported into a subregion of the geographical setting. The goal is to understand how spatial heterogeneity of the vector and host populations influences the dynamics of the outbreak, in both the geographical spread and the final size of the epidemic.
Partial differential equations are formulated to describe the spatial interaction of the hosts and vectors. The partial differential equations have reaction-diffusion terms to describe the criss-cross interactions of hosts and vectors. The partial differential equations of the model are analyzed and proven to be well-posed. A local basic reproduction number for the epidemic is analyzed.
The epidemic outcomes of the model are correlated to the spatially dependent parameters and initial conditions of the model. The partial differential equations of the model are adapted to seasonality of the vector population, and applied to the 2015–2016 Zika seasonal outbreak in Rio de Janeiro Municipality in Brazil.
The results for the model simulations of the 2015–2016 Zika seasonal outbreak in Rio de Janeiro Municipality indicate that the spatial distribution and final size of the epidemic at the end of the season are strongly dependent on the location and magnitude of local outbreaks at the beginning of the season. The application of the model to the Rio de Janeiro Municipality Zika 2015–2016 outbreak is limited by incompleteness of the epidemic data and by uncertainties in the parametric assumptions of the model.
The Zika virus is a mosquito-borne flavivirus that was first isolated in Uganda in 1947 [1]. Subsequently, it has become prevalent in parts of Africa, Asia, and Central and South America. The geographic distribution of the virus has been steadily increasing since 2015, and its further spread to additional countries that are home to competent mosquito vectors is highly probable. As of September 15, 2016, the World Health Organization reports that local circulation of the virus has been reported by 72 countries and territories. Although there have been reports of transmission through sexual contact [2], the Zika virus appears to be spread through the human population primarily by bites from Aedes mosquitoes. The virus incubates in a human host over an asymptomatic period lasting from three to twelve days and, once fully developed, the disease persists for about a week. It is characterized by low-grade fever, rash, joint pain, and conjunctivitis (red eyes). Typically it is mild and seldom requires hospitalization. However, the virus has two severe complications that make it a menace to public health. It has been linked to an increased risk of Guillain-Barré syndrome, a severe autoimmune disorder [3]. Perhaps even more serious is its linkage to microcephaly birth defects in newborn babies [4].
Zika epidemics are both year-round and seasonal, depending upon the year-round prevalence or seasonality of the resident mosquito populations. A recent study [5] describes in detail the potential spread of Zika epidemics into African and Asian-Pacific regions through the importation of infected people. The generation of Zika epidemics by the importation of infected people into year-round or seasonal environments is a major public health concern. Recent mathematical models have been developed to address these concerns [6–13]. We develop a model that describes both year-round and seasonal host-vector epidemic population dynamics in a geographical region. The disease is borne by vectors to susceptible hosts through criss-cross dynamics in a region of spatially distributed vectors and hosts. The epidemic outbreak begins with the arrival of a small number of viremic hosts in one or more locales in which the disease is not yet present. Our goal is to aid understanding of how the introduction of a small number of infected hosts, in a specific location in a geographic region, will result in a dissipated or a sustained epidemic. The focus of the study is to examine the influence of spatial effects on these possible outcomes.
We formulate a criss-cross reaction-diffusion partial differential equations model to describe the spatial evolution of an epidemic. Criss-cross reaction-diffusion models for the circulation of disease between vectors and hosts have been used to describe the spatial spread of malaria [14], the spatial spread of Dengue outbreaks [15, 16], and the spatial spread of other diseases by many authors [17–24]. We apply our model to the 2015–2016 Zika seasonal outbreak in the urban area of Rio de Janeiro Municipality in Brazil. We numerically simulate the model to analyze varied scenarios of Zika seasonal epidemics in Rio de Janeiro, dependent upon the input of local spatial outbreaks at the beginning of the season and the time-limitation of seasonality.
The geographical region is denoted by Ω ⊂ R^2. The background population of uninfected and susceptible hosts in Ω has geographic density H_u(x,y), which is assumed unchanging in time in the demographic and epidemic context of the outbreak. Thus, the model is viewed as applicable to an early phase of the epidemic, during which the epidemic does not alter the local geographic and demographic population structure of hosts. The model consists of the following compartments:
The density of infected hosts H_i(t,x,y) at time t at (x,y) ∈ Ω, with initial condition H_i0(x,y).
The density of uninfected vectors V_u(t,x,y) at time t at (x,y) ∈ Ω, with initial condition V_u0(x,y).
The density of infected vectors V_i(t,x,y) at time t at (x,y) ∈ Ω, with initial condition V_i0(x,y).
Equations of the model
The equations of the model in the case that transmission from vectors to hosts is year-round are
$$\begin{array}{@{}rcl@{}} \frac{\partial}{\partial t} H_{i}(t,x,y) &=& \nabla \cdot \delta_{1}(x,y) \nabla H_{i}(t,x,y) - \lambda(x,y) \, H_{i}(t,x,y) + \sigma_{1}(x,y) \, H_{u}(x,y) \, V_{i}(t,x,y), \qquad (1) \\ \frac{\partial}{\partial t} V_{u} (t,x,y) &=& \nabla \cdot \delta_{2}(x,y) \nabla V_{u}(t,x,y) - \sigma_{2}(x,y) V_{u}(t,x,y) H_{i}(t,x,y) \\ &&+ \beta(x,y) \left(V_{u}(t,x,y) + V_{i}(t,x,y) \right) - \mu(x,y) \left(V_{u}(t,x,y) + V_{i}(t,x,y) \right) V_{u}(t,x,y), \qquad (2) \\ \frac{\partial}{\partial t} V_{i} (t,x,y) &=& \nabla \cdot \delta_{2}(x,y) \nabla V_{i}(t,x,y) + \sigma_{2}(x,y) V_{u}(t,x,y) H_{i}(t,x,y) \\ && - \mu(x,y) \left(V_{u}(t,x,y) + V_{i}(t,x,y) \right) V_{i}(t,x,y). \qquad (3) \end{array} $$
In addition, the following boundary and initial conditions are satisfied:
$$\frac{\partial}{\partial \eta} H_{i}(t,x,y) = 0, \, \frac{\partial}{\partial \eta} V_{u}(t,x,y) = 0, \, \frac{\partial}{\partial \eta} V_{i}(t,x,y) = 0, \, (x,y) \in \partial \Omega, \, t >0, $$
$$H_{i}(0,x,y) = H_{i0}(x,y), \, V_{u}(0,x,y) = V_{u0}(x,y), \, V_{i}(0,x,y) = V_{i0}(x,y), \, (x,y) \in \Omega. $$
The spatially dependent parameters of the model are as follows: λ(x,y) is the loss rate of the infected host population (due to recovery or other removal). β(x,y) is the breeding rate of the vector population. μ(x,y) is the loss rate of the vector population due to environmental crowding. σ_1(x,y) is the transmission rate of uninfected hosts and σ_2(x,y) is the transmission rate of uninfected vectors. The transmission terms for both hosts and vectors are assumed to be of density-dependent form, rather than frequency-dependent form [25]. A comparison of the two forms for spatially dependent models is given in [26]. Since we assume, during the early phase of the epidemic, that the populations of infected hosts (infected vectors) are relatively small fractions of the populations of uninfected hosts (uninfected vectors), the two forms are essentially the same. δ_1(x,y) and δ_2(x,y) are the diffusion rates of the infected host and vector populations, respectively.
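As a rough numerical illustration of how a criss-cross reaction–diffusion system of this form can be integrated, the sketch below applies an explicit Euler step to Eqs. (1)–(3) on a uniform grid with zero-flux boundaries. The constant coefficients, grid, time step, and initial data are placeholder assumptions, not the values used for the Rio de Janeiro simulations.

```python
import numpy as np

# Placeholder grid and constant coefficients (the model allows them to vary in space)
nx, ny, h, dt = 60, 30, 1.0, 0.01
lam, beta, mu = 1.0, 0.5, 1.5e-3
sig1, sig2 = 4.9e-7, 0.78
d1, d2 = 0.2, 0.2
Hu = np.full((ny, nx), 5000.0)

Hi = np.zeros((ny, nx)); Hi[ny // 2, 3 * nx // 4] = 10.0   # small local outbreak
Vu = np.full((ny, nx), 20.0)
Vi = np.zeros((ny, nx))

def laplacian(u):
    # 5-point Laplacian with zero-flux (Neumann) boundaries via edge padding.
    p = np.pad(u, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u) / h ** 2

for _ in range(5000):                       # 5000 steps of length dt (time in weeks)
    M = Vu + Vi
    dHi = d1 * laplacian(Hi) - lam * Hi + sig1 * Hu * Vi
    dVu = d2 * laplacian(Vu) - sig2 * Vu * Hi + beta * M - mu * M * Vu
    dVi = d2 * laplacian(Vi) + sig2 * Vu * Hi - mu * M * Vi
    Hi += dt * dHi; Vu += dt * dVu; Vi += dt * dVi

print("total infected hosts:", round(float(Hi.sum() * h ** 2), 2))
```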
In the Appendix we prove the well-posedness of the model.
The local basic reproduction number
Define the local basic reproduction number of the model (1), (2), (3) as follows:
$$R_{0}(x,y) = \frac{\sigma_{1}(x,y) \sigma_{2}(x,y) H_{u}(x,y)}{\lambda(x,y) \mu(x,y)}. $$
R_0(x,y) is interpreted as the average number of new cases generated by a single case at a given location (x,y) in Ω. An analysis of local reproduction numbers for spatially dependent models is given in [27] and in [28]. Our motivation for this definition is the basic reproduction number R_0 of the spatially independent model (Appendix). Simulations of the spatially dependent model show the following behavior: (1) If R_0(x,y) < 1 everywhere in Ω, then the populations of both infected hosts and infected vectors extinguish, and the populations converge to the disease-free equilibrium. (2) If R_0(x,y) > 1 in some subregion Ω_0 ⊂ Ω, then the populations of both infected hosts and infected vectors may converge from an initial local outbreak to an endemic equilibrium in Ω, even if the average value of R_0(x,y) in all of Ω is < 1.
Equations of the model when the vector population is seasonal
If the vector population is seasonal, then the equations of the model must be modified to account for seasonality. We assume that the vector population breeding term β(t,x,y) is dependent on time. We assume that, in addition to the vector loss parameter μ(x,y) corresponding to the carrying capacity, there is a time-independent vector loss term μ_1(x,y), corresponding to the average vector life-span 1/μ_1(x,y). The modified equations are
$$\begin{array}{@{}rcl@{}} \frac{\partial}{\partial t} V_{u} (t,x,y) &=& \nabla \cdot \delta_{2}(x,y) \nabla V_{u}(t,x,y) - \sigma_{2}(x,y) V_{u}(t,x,y) H_{i}(t,x,y) \\ &&+ \beta(t,x,y) \left(V_{u}(t,x,y) + V_{i}(t,x,y) \right) - \mu_{1}(x,y) V_{u}(t,x,y) \\ && - \mu(x,y) \left(V_{u}(t,x,y) + V_{i}(t,x,y) \right) V_{u}(t,x,y), \qquad (4) \\ \frac{\partial}{\partial t} V_{i} (t,x,y) &=& \nabla \cdot \delta_{2}(x,y) \nabla V_{i}(t,x,y) + \sigma_{2}(x,y) V_{u}(t,x,y) H_{i}(t,x,y) \\ && - \mu(x,y) \left(V_{u}(t,x,y) + V_{i}(t,x,y) \right) V_{i}(t,x,y) - \mu_{1}(x,y) V_{i}(t,x,y). \qquad (5) \end{array} $$
The 2015–2016 Zika outbreak in Rio de Janeiro municipality
We apply the model (1), (4), (5) to the 2015–2016 Zika epidemic in Rio de Janeiro, Brazil. The host population consists of the people in the Municipality, approximately 6,000,000 in 2016, in a geographical region of approximately 1,200 square kilometers (Source: Instituto Brasileiro de Geografia e Estatística). The vector population is the female Aedes aegypti mosquito. The Municipality comprises 33 sub-districts, with population densities ranging from 1,000 to 50,000 inhabitants per square kilometer (Fig. 1). The period November through July can be viewed as the seasonal Zika transmission period of the epidemic in the Municipality.
Rio de Janeiro Municipality sub-districts. The sub-district population densities range from 1,000 to 50,000 inhabitants per square kilometer. The Municipality is approximately 50 kilometers east-west by 20 kilometers north-south, with the highest population density in the eastern region. The total population is approximately 6,000,000. (Source: http://www.citypopulation.de/php/brazil-rio.php)
A small number of cases were recorded in the Municipality into the summer of 2015, with the highest number of cases in the eastern region of the Municipality [29, 30]. The Brazilian Health Ministry [31] reported that Rio de Janeiro State (population approximately 16,000,000) registered a count of 60,176 cumulative cases from January 1, 2016 to August 13, 2016 (incidence of approximately 364 cases per 100,000 inhabitants). In [32] the weekly case data for Rio de Janeiro Municipality is given from November 1, 2015 through April 10, 2016, during which time the reporting of cases became mandatory. The cumulative number of reported cases in the Municipality during this period was 25,400 [32] (incidence of approximately 423 cases per 100,000 inhabitants).
Parameterization of the Rio de Janeiro model
We simulate the model (1), (4), (5) for Rio de Janeiro Municipality with some parameters assumed. The available epidemic data used for comparison to our simulations for the Rio de Janeiro Municipality 2015–2016 Zika outbreak is very limited. Further, the number of unreported cases, necessarily unknown, is a limitation of the applicability of the model for this application. A more precise fitting of parameters μ, σ, and β requires much higher data accuracy specific to the Zika epidemic in the Municipality. Our purpose is to provide a qualitative description of a typical vector-borne epidemic spatial outbreak, and our simulation of this particular outbreak, with its limitations on parameterization, serves this purpose.
Explanations for our assumptions on specific parameter values are as follows: The time units for our simulations are weeks. The spatial units are kilometers and Ω = (−25,25) × (−12,12). The boundary conditions for Ω are a reasonable simplification of the coastal boundaries and the less populated northern boundary of Rio de Janeiro Municipality. The average length of the infectious period of infected people is approximately 1 to 2 weeks, and we set λ(x,y) = 1.0 [33, 34]. The average lifespan of female Aedes aegypti mosquitoes is approximately two weeks in an urban environment [35, 36], and we set μ_1(x,y) = 0.5. The total uninfected host population is approximately 6,000,000, with geographical density function H_u(x,y) = 50.0 + 5×10^3 (1.0 + sin(0.02πx) cos(0.03πy)) (Fig. 2 a), which corresponds approximately to the population density distribution in Fig. 1.
a The population of susceptible people H u (x,y) in Rio de Janeiro Municipality, which agrees approximately with the geographical population density in Fig. 1. b The spatially dependent mosquito loss function μ(x,y), which is higher in locations of higher population density due to mosquito control measures
We set the density-dependent mosquito loss function μ(x,y) = 0.0015(1.0 + 100 gauss(20.0,30.0,x) × gauss(0.0,30.0,y)) (Fig. 2 b), which corresponds to higher levels of mosquito control in the eastern region of the Municipality, where the population density is highest. Here gauss(m,sd,x) is the probability density function in x of the normal distribution with mean m and standard deviation sd. We set the transmission parameters σ_1(x,y) = 0.00000049 and σ_2(x,y) = 0.78 (we assume that individual mosquitoes bite multiple people, people receive multiple bites, and the probability of infection of mosquitoes is much higher than the probability of infection of people).
The diffusion terms for the infected people, uninfected mosquitoes, and infected mosquitoes in the model are understood as idealizations of the indirect spatial spread of the Zika virus infection agent. The spatial spread of the virus is dependent on the direct spread of infected people and uninfected/infected mosquitoes. The spatial movement of people in an urban setting is extremely complex, and a major challenge for epidemic modeling. We set the infected people diffusion parameter δ_1 = 0.2, which provides a simplified way of describing the movement of infected people, in the context of the epidemic, with respect to the spatial spread of the virus. We set the mosquito diffusion parameter δ_2 = 0.2, which is consistent with an estimated adult mosquito dispersal of 30−50 m per day [36].
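Assembling the spatially varying coefficients above on a grid makes it straightforward to evaluate the local reproduction number R_0(x,y) defined earlier. The sketch below is a reconstruction under the stated parameter values, with the coefficient of the sinusoidal host-density term taken as 5×10^3 so that the total population of roughly 6,000,000 and the R_0 values quoted in the examples are reproduced; it is not the authors' simulation code.

```python
import numpy as np

def gauss(m, sd, z):
    # Normal probability density in z with mean m and standard deviation sd.
    return np.exp(-0.5 * ((z - m) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

x = np.linspace(-25.0, 25.0, 201)
y = np.linspace(-12.0, 12.0, 97)
X, Y = np.meshgrid(x, y)

Hu  = 50.0 + 5.0e3 * (1.0 + np.sin(0.02 * np.pi * X) * np.cos(0.03 * np.pi * Y))
mu  = 0.0015 * (1.0 + 100.0 * gauss(20.0, 30.0, X) * gauss(0.0, 30.0, Y))
lam = 1.0
sig1, sig2 = 4.9e-7, 0.78

R0 = sig1 * sig2 * Hu / (lam * mu)

for x0, y0 in [(15.0, 0.0), (0.0, 0.0), (-10.0, 0.0)]:
    i, j = np.abs(y - y0).argmin(), np.abs(x - x0).argmin()
    print(f"R0({x0:.0f},{y0:.0f}) = {R0[i, j]:.2f}")        # ~2.27, ~1.26, ~0.52
print("total hosts ~", round(float(Hu.mean() * 50 * 24 / 1e6), 2), "million")
```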
For simplicity, we assume that the mosquito life-span is independent of spatial location, and also independent of time in the season, although for some Aedes species, in some environments, the life-span is correlated to temperature [35]. We take the time-dependent mosquito breeding function as \(\beta(t,x,y) = 300.0 \, emg(t,\bar{\mu},\bar{\sigma},\bar{\lambda})\), where emg is the shifted exponentially modified Gaussian
$$emg(t,\bar{\mu},\bar{\sigma},\bar{\lambda}) = \frac{\bar{\lambda}}{2} Exp\left(\frac{\bar{\lambda}}{2}(2 \bar{\mu} + \bar{\lambda} \bar{\sigma}^{2} - 2 \, t)\right) Erfc\left(\frac{1}{\sqrt{2} \, \bar{\sigma}} (\bar{\lambda} \bar{\sigma}^{2} + \bar{\mu} -t)\right) $$
Here Erfc is the complementary error function. The parameters are \(\bar{\mu} = -2.0\), \(\bar{\sigma} = 5.0\), \(\bar{\lambda} = 0.2\). The graph of the seasonal mosquito breeding function β(t) is given in Fig. 3. The assumptions on the parameters of the mosquito population yield a very rapidly rising population at the beginning of the season, which quickly stabilizes to a maximal capacity of approximately 14 million, and then declines gradually to very low levels from midway through the season to the end of the season. The total mosquito population, both uninfected and infected, remains mostly uniformly spatially distributed throughout the Municipality throughout the season. The infected mosquito population spatial distribution is very similar to the spatial distribution of infected people. During much of the mosquito season, the ratio of total mosquitoes to total people is approximately 2 to 1, which agrees with the ratio in [15].
The time dependent mosquito breeding function β(t) for the 2015–2016 seasonal mosquito population in Rio de Janeiro Municipality. The graph of β(t) rises rapidly in November 2015, to its maximum in early January 2016, and then falls steadily to a low value in May 2016
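A direct transcription of the seasonal breeding profile into code (scipy's erfc provides the complementary error function; the parameter values are those stated above):

```python
import numpy as np
from scipy.special import erfc

def emg(t, mu_, sigma_, lam_):
    # Shifted exponentially modified Gaussian, as defined above.
    return 0.5 * lam_ * np.exp(0.5 * lam_ * (2.0 * mu_ + lam_ * sigma_ ** 2 - 2.0 * t)) \
        * erfc((lam_ * sigma_ ** 2 + mu_ - t) / (np.sqrt(2.0) * sigma_))

def beta(t):
    # Seasonal breeding function beta(t) = 300 * emg(t, -2, 5, 0.2), constant in space.
    return 300.0 * emg(t, -2.0, 5.0, 0.2)

weeks = np.arange(0, 30)          # week 0 = November 1, 2015
print(np.round(beta(weeks), 1))   # rapid rise at the start of the season, then a slow decay
```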
We set the initial outbreaks in variable locations in the Municipality. For the initial spatial distribution of infected people we set H_i(0,x,y) = H_i0 gauss(x_0,1.0,x) × gauss(y_0,1.0,y), centered at (x_0,y_0). The initial number of infected people at the location (determined by H_i0) is viewed as small and above a threshold level capable of outbreak. It includes imported cases (first order) and possibly some cases generated by first-order cases (higher order).
Simulations of the model for Rio de Janeiro
We provide four simulations of the model with initial outbreaks in different locations in the Municipality.
Example 1. In Example 1 the outbreak begins at time 0 on November 1, 2015 in a small eastern location of the Municipality, where R_0(x,y) is very high. The total number of infected people at time 0 is 10 (H_i0 = 10), with spatial distribution centered at x_0 = 15 and y_0 = 0, R_0(15,0) ≈ 2.27. At time 0 the total number of uninfected mosquitoes is 120,000, distributed uniformly throughout the Municipality. The total number of infected mosquitoes at time 0 is 100, with spatial distribution V_i(0,x,y) = 10.0 H_i(0,x,y). The simulation of the model (1), (4), (5) over the time period November 1, 2015 to May 21, 2016 is graphed in Figs. 4, 5 and 6. The simulation agrees qualitatively with the weekly reported case data for Rio de Janeiro Municipality in [32] (Fig. 4). The spatial distribution of infected people expands from a very small number of initial cases in a small eastern subregion of the Municipality, and disperses throughout the eastern region of the Municipality (Fig. 5). In Fig. 6 we graph the total number of infected people and the total cumulative number of infected people throughout the season. The simulation agrees qualitatively with the weekly reported case data for Rio de Janeiro Municipality given in [32], with approximately 25,500 reported cases between week 44, 2015 and week 15, 2016.
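A small sketch of how the localized initial condition of Example 1 can be placed on a grid (gauss as in the parameterization above; the grid resolution is an arbitrary assumption):

```python
import numpy as np

def gauss(m, sd, z):
    return np.exp(-0.5 * ((z - m) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

x = np.linspace(-25.0, 25.0, 201)
y = np.linspace(-12.0, 12.0, 97)
X, Y = np.meshgrid(x, y)

Hi0, x0, y0 = 10.0, 15.0, 0.0                         # Example 1: 10 infected people at (15, 0)
Hi_init = Hi0 * gauss(x0, 1.0, X) * gauss(y0, 1.0, Y)
Vi_init = 10.0 * Hi_init                              # V_i(0,x,y) = 10 H_i(0,x,y)
dx, dy = x[1] - x[0], y[1] - y[0]
print("infected people at t=0:", round(float(Hi_init.sum() * dx * dy), 1))   # ~10
```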
Example 1. Simulation of the reported infected cases in the Rio de Janeiro Municipality from the beginning of the epidemic season at week 44 in 2015 to week 21 in 2016 (blue graph). The reported case values of the simulation agree qualitatively with the number of reported cases of the Brazilian Health Ministry during this period (grey bars)
Example 1. Model simulation of the spatial distributions of infected cases in the Rio de Janeiro Municipality during the 2015–2016 epidemic season. At time 0 (November 1, 2015) a very small number of cases are located in a small region in the eastern central region of the Municipality. The spatial distributions are graphed at time=1 (week 45 in 2015), time=4 (week 49 in 2015), time=10 (week 3 in 2016), time=15 (week 8 in 2016), time=20 (week 13 in 2016). The cases concentrate in the eastern central region of the Municipality. Top: Density plots. Bottom: Heatmap plots (color magnitude scaled at each time point)
Example 1. a Spatial density of infected people at time t=0 (approximately 10). b The total number of infected people as a function of time. c The cumulative total number of infected people as a function of time, which converges to approximately 26,000 at the end of the 2015–2016 season
Example 2. We repeat the simulation with the only change from Example 1 being the location of the initial outbreak. We take the initial outbreak location as the center of the Municipality, with x_0 = 0 and y_0 = 0, R_0(0,0) ≈ 1.26. The total number of infected people at time 0 is 20 (H_i0 = 20). The infected population again expands from the initial location and disperses throughout the eastern region of the Municipality, but at approximately one-tenth of the number of infected cases in Example 1 (Figs. 7 and 8). The reason is that R_0(x,y) is much lower in this initial location than in the initial location of Example 1, and the rise of the epidemic is much slower than in Example 1. In Fig. 8 (bottom) we repeat the example with the center of the outbreak location at x_0 = −10, y_0 = 0, R_0(−10,0) ≈ 0.52. The infected cases decrease rapidly to 0, because R_0(x,y) is even lower in the region of the outbreak.
Example 2. Model simulation of the spatial distributions of infected cases in the Rio de Janeiro Municipality during the 2015–2016 epidemic season (at the same time points as in Example 1). At time 0 (November 1, 2015) a very small number of cases are located in a small region in the center of the Municipality. The cases concentrate in an eastern central region of the Municipality. Top: Density plots. Bottom: Heatmap plots (color magnitude scaled at each time point)
Top: Example 2. a Spatial density of infected people at time t=0 (approximately 20). b The total number of infected people as a function of time. c The cumulative total number of infected people as a function of time, which converges to approximately 2,500 at the end of the 2015–2016 season. Bottom: Example 2 modified with the initial outbreak shifted to x=−10 and y=0. The cumulative total converges to 50.
Example 3. We again repeat the simulation with the only change from Example 1 being the location of the initial outbreak. We take the initial outbreak at two locations in the center of the Municipality with
$$\begin{array}{@{}rcl@{}} 1^{st} location: & & x_{0}=0, \, y_{0}=5, \, H_{i0}=20, R_{0}(0,5) \approx 1.26, \\ 2^{nd} location: & & x_{0}=5, \, y_{0}=-5, \, H_{i0}=10, R_{0}(5,-5) \approx 1.60. \end{array} $$
The total number of infected people at time 0 is 30. The infected population again expands from the initial locations and disperses throughout the eastern region of the Municipality, but at approximately one-third of the number of infected cases in Example 1 (Figs. 9 and 10). The reason is that R_0(x,y) is again lower in the initial outbreak locations than in the initial outbreak location of Example 1.
Example 3. Model simulation of the spatial distributions of infected cases in the Rio de Janeiro Municipality during the 2015–2016 epidemic season (at the same time points as in Example 1). At time 0 (November 1, 2015) a very small number of cases are located in two small regions in the center of the Municipality. The cases concentrate in the eastern region of the Municipality. Top: Density plots. Bottom: Heatmap plots (color magnitude scaled at each time point)
Example 3. a Spatial density of infected people at time t=0 (approximately 30). b The total number of infected people as a function of time. c The cumulative total number of infected people as a function of time, which converges to approximately 9,000 at the end of the 2015–2016 season
Example 4. We again repeat the simulation with the only change from Example 1 being the location of the initial outbreak. We take the initial outbreak at three locations in the eastern region of the Municipality with
$$\begin{array}{@{}rcl@{}} 1^{st} location: & & x_{0}=5, \, y_{0}=-5, \, H_{i0}=10, R_{0}(5,-5) \approx 1.60, \\ 2^{nd} location: & & x_{0}=0, \, y_{0}=5, \, H_{i0}=30, R_{0}(0,5) \approx 1.26, \\ 3^{rd} location: & & x_{0}=15, \, y_{0}=5, \, H_{i0}=5, R_{0}(15,5) \approx 2.16. \end{array} $$
The total number of infected people at time 0 is 45. The infected population again expands from the initial locations and disperses throughout the eastern region of the Municipality, with the number of total cumulative infected cases the same as in Example 1 (Figs. 11 and 12). The reason is that R_0(x,y) is high in the third location, as it is in the initial location of Example 1.
Example 4. Model simulation of the spatial distributions of infected cases in the Rio de Janeiro Municipality during the 2015–2016 epidemic season (at the same time points as in Example 1). At time 0 (November 1, 2015) a very small number of cases are located in three small eastern regions in the Municipality. The cases concentrate in the northeastern region of the Municipality. Top: Density plots. Bottom: Heatmap plots (color magnitude scaled at each time point)
Example 4. a Spatial density of infected people at time t=0 (approximately 30). b The total number of infected people as a function of time. c The cumulative total number of infected people as a function of time, which converges to approximately 26,000 at the end of the 2015–2016 season (the same number as in Example 1.)
Example 5. We provide a simulation of the model Eqs. (1), (2), (3) to illustrate that the solutions may approach an endemic steady state even if the average value of R_0(x,y) < 1 in the spatial domain. We use the same parameters as in Rio de Janeiro Municipality, except that σ_1(x,y) = 0.0000001, σ_2(x,y) = 0.1, μ(x,y) = 0.00005(1.0 + 100 gauss(−20.0,10.0,x) × gauss(0.0,10.0,y)), δ_1 = 0.1, δ_2 = 0.3, and β is set at the constant value 0.5 (the mosquito population is assumed to be present year-round rather than seasonally). The average value of R_0(x,y) in the whole region is ≈ 0.984. The results are illustrated in Figs. 13 and 14. For initial data in the eastern region (where R_0(x,y) > 1), the number of infected cases increases and converges to an endemic steady state. For initial data in the western region (where R_0(x,y) < 1), the number of infected cases first decreases, and then increases to the same endemic state. The simulations indicate the importance of spatial heterogeneity in epidemic models, especially for outbreak scenarios. The importation of a small number of infected cases to isolated localities may at first dissipate in sub-regions with R_0(x,y) < 1, but later rise and spread to sub-regions with R_0(x,y) > 1, and establish endemicity in the greater geographical region.
Example 5. Top: Model simulation of the spatial distributions of infected cases at time=0, 10, 40, 130, with the initial data located in the eastern region. Bottom: Model simulation of the spatial distributions of infected cases at time=0, 20, 40, 130, with the initial data located in the western region. Both simulations converge to the same limiting density, but the one with the initial data in the western region first decreases before increasing and converging
Example 5. Left: The spatially dependent net reproduction number R_0(x,y). The average value of R_0(x,y) over the whole spatial region is ≈ 0.984. Right: The cumulative total number of infected cases in the whole region as a function of time, which for both initial data in the eastern region (blue) and initial data in the western region (red) converges to ≈ 1,630.
Discussion and conclusions
The model (1), (2), (3) describes criss-cross vector-host transmission dynamics of an epidemic outbreak in a geographical region Ω, where the vector population is present year-round. The outbreak occurs with a small number of infected hosts in a small subregion of the much larger geographical region Ω. The diffusion terms describe the on-going average spatial spread of the disease microbial agent within infected vectors and infected hosts in the geographical region. The focus of the model is to describe the geographical spread from an initial localized immigration into the region, in terms of the epidemiological properties of the outbreak vector-host transmission dynamics.
We prove that the partial differential equations model (1), (2), (3) is mathematically well-posed, and compare its properties to an analogous ordinary differential equations model in the spatially independent case (Appendix). The outcomes of the model depend on the spatially distributed local reproduction number R_0(x,y). In the case of year-round vector settings, simulations indicate that the connection of R_0(x,y) to the outcome of an outbreak is as follows: if R_0(x,y) < 1 everywhere in Ω, then the epidemic will extinguish; if R_0(x,y) > 1 in some subregion of Ω, then the epidemic has the possibility to spread from an initial outbreak to an endemic equilibrium in Ω, even if the average value of R_0(x,y) < 1 throughout all of Ω.
The model Eqs. (1), (2), (3) are modified to incorporate seasonality of the vector population in Eqs. (1), (4), (5), and applied to the 2015–2016 Zika outbreak in Rio de Janeiro Municipality. Simulations of the model (Examples 1 and 4) provide qualitative agreement with the reported case data in the Municipality [32]. We argue that the assumption of an unchanging number for the susceptible population is reasonable for the Zika outbreak in Rio de Janeiro Municipality. The justification for this assumption is based on current demographic data for the Municipality [37]. Between 2010 and 2016 the population increased from approximately 6,000,000 at approximately 0.49% per year. The total number of reported cases during the 2015–2016 outbreak is less than 1% of the susceptible population, which is therefore not significantly depleted during the outbreak.
A limitation of our model is the difficulty of estimating the number of unreported cases, and in some examples of Zika epidemics the ratio of unreported cases to reported cases has been quite high. In one study, in the Federated States of Micronesia in 2007, the number of reported cases was 108 and the number of unreported cases (estimated through seroconversion testing) was estimated at 74% of the total population of 7,391 [38]. In another study, of the French Polynesia outbreak in 2013–2014, the number of reported cases was estimated at 7–17% of the total number of infections, with 94% of the total population infected [33]. The setting for Rio de Janeiro Municipality is very different, however, and the demographic changes in Rio de Janeiro Municipality in one year could offset a relatively higher ratio of unreported-to-reported cases, given that the reported cases represented approximately 0.4% of the population [31, 32]. Additionally, the probability of Zika re-infection is not yet fully known. Whether Zika could become established as an endemic disease in a larger urban population thus remains unclear [33]. Our model simulations are based on the number of reported cases, but we note that if the ratio of unreported to reported cases is significantly higher, then the parameters must be adjusted.
A limitation of our model is that it does not take into account the possibility of sexual transmission of Zika. It is noted in [2], however, that sexual transmission is a small percentage of total transmission, and may not initiate or sustain an outbreak. Another limitation of our model is that we assume the uninfected mosquito population is uniformly geographically distributed at the beginning of the season, since there is no detailed temporal geographic mosquito data available for Rio de Janeiro Municipality. We note that current investigations are developing such data for geographical regions, which could be implemented eventually for spatial models of vector borne epidemics as described by our model. One such investigation is Project Premonition [39], developed by Microsoft to autonomously locate, robotically collect, and computationally analyze mosquito populations for pathogenicity in geographical environmental regions.
The model simulation suggests that the Zika epidemic in Rio de Janeiro Municipality may rise each season from initial outbreak locations with very small numbers of infected people, and spread through a larger region of the Municipality. Although the epidemic subsides at the end of the season, the final size of the epidemic at the end of the season depends on the initial outbreak locations of infected cases in the region, when geographic heterogeneity and time-limited seasonality are taken into account. The local reproduction number R_0(x,y) indicates that the most effective interventions decrease the infection rates σ_1(x,y), σ_2(x,y), increase the isolation of infected people λ(x,y), increase the mosquito removal rate μ(x,y), and control the importation of infected people, all concentrated in regions of high population density H_u(x,y) and at the beginning of the season.
For the Zika epidemic in Rio de Janeiro Municipality the model suggests that the outbreak in the 2015–2016 season will occur again in the 2016–2017 season, and in future seasons. The importation of infected cases into the Municipality at the beginning of the season is inevitable, because of the general influx of people into this major metropolitan center of Brazil. Some of these cases will not generate a further spread of cases, but some will, with consideration of spatially variable factors. The reduction of future, and more extensive, seasonal outbreaks of Zika in the Municipality requires higher level monitoring of the people arriving in the region and higher level mosquito control measures throughout the region, again with consideration of spatially variable factors.
Well-posedness of the model
Theorem. Let Ω be a bounded domain in R^2 with smooth boundary ∂Ω such that Ω lies locally on one side of ∂Ω. Let β, μ, λ, \(\sigma_{1}, \sigma_{2}, \delta_{1}, \delta_{2} \, \in C_{+}^{0}(\overline{\Omega})\), and let \(H_{u}, H_{i0}, V_{u0}, V_{i0} \in C_{+}^{1}(\overline{\Omega})\). There exists a unique global classical solution \(\{H_{i}(t),V_{u}(t),V_{i}(t)\} \in C_{+}^{1}(\overline{\Omega}), \, t \geq 0\), to (1), (2), (3), satisfying the boundary conditions
$$\frac{\partial}{\partial \eta} H_{i}(t,x,y) = 0, \, \frac{\partial}{\partial \eta} V_{u}(t,x,y) = 0, \, \frac{\partial}{\partial \eta} V_{i}(t,x,y) = 0, \, (x,y) \in \partial \Omega, \, t >0 $$
and initial conditions
$$H_{i}(0,x,y) = H_{i0}(x,y), \, V_{u}(0,x,y) = V_{u0}(x,y), \, V_{i}(0,x,y) = V_{i0}(x,y), \, (x,y) \in \Omega. $$
Proof. We first observe that a unique classical solution {H_i(t),V_u(t),V_i(t)} exists in \(C^{1}(\overline{\Omega})\) on a maximal interval of existence [0,T_max) [40–42]. Standard arguments [42] guarantee that {H_i(t),V_u(t),V_i(t)} remain nonnegative for t ∈ [0,T_max). Moreover, the classical solution can be globally defined if we can establish uniform a priori bounds. Set M(t,x,y) = V_u(t,x,y) + V_i(t,x,y) and add Eqs. (2) and (3) to obtain
$$\begin{array}{@{}rcl@{}} \frac{\partial}{\partial t} M(t,x,y) &=& \nabla \cdot \delta_{2}(x,y) \nabla M(t,x,y) \\ &&+ \, \, \beta(x,y) M(t,x,y)\, \, - \, \, \mu(x,y) M(t,x,y)^{2}. \end{array} $$
Theorem 1 in [24] guarantees the existence of a unique global classical solution \(M(t) \in C_{+}^{1}(\overline {\Omega })\) to Eq. (6) satisfying
$$\frac{\partial}{\partial \eta} M(t,x,y) = 0, \, (x,y) \in \partial \Omega, \, t \geq 0, \, M(0,x,y) = V_{u0}(x,y) + V_{i0}(x,y), \, (x,y) \in \Omega. $$
Further, in [24] it is proved that there exists \(\overline {M} \in C_{+}^{0}(\overline {\Omega })\), \(\overline {M} \neq 0\), such that \({\lim }_{t \rightarrow \infty }M(t) = \overline {M} \in C_{+}^{0}(\overline {\Omega })\). We note that the disease free equilibrium of (1), (2), (3) is \((0,\overline {M},0)\). From [24] there exists N 1>0 such that \({max}_{t \geq 0} \| M(t) \|_{C_{+}^{0}(\overline {\Omega })} < N_{1}\), which implies \(\| V_{i}(t) \|_{C_{+}^{0}(\overline {\Omega })}, \, \| V_{u}(t) \|_{C_{+}^{0}(\overline {\Omega })} < N_{1}\). Then, since λ>0 in (1), there exists N 2>0 such that \(\| H_{i}(t) \|_{C_{+}^{0}(\overline {\Omega })} < N_{2}\). Consequently, the solution exists globally on [0,∞).
The model equations without spatial dependence
The Eqs. (1), (2), (3) without spatial dependence are
$$\begin{array}{@{}rcl@{}} \quad \frac{d}{d t} H_{i}(t) &=& - \lambda H_{i}(t) + \sigma_{1} \, V_{i}(t) \, H_{u} \end{array} $$
$$\begin{array}{@{}rcl@{}} \frac{d}{d t} V_{u} (t) &=&\beta (V_{u}(t) + V_{i}(t)) -\sigma_{2} V_{u}(t) H_{i}(t) - \mu (V_{u}(t) + V_{i}(t)) V_{u}(t) \end{array} $$
$$\begin{array}{@{}rcl@{}} \frac{d}{d t} V_{i} (t) &=& \sigma_{2} V_{u}(t) H_{i}(t) - \mu (V_{u}(t) + V_{i}(t)) V_{i}(t) \end{array} $$
with initial conditions H_i(0) = H_i0, V_u(0) = V_u0, V_i(0) = V_i0. Set the basic reproduction number R_0 = H_u σ_1 σ_2 / (λμ). We note that R_0 is independent of the vector reproduction rate β. The size of the epidemic, however, is proportional to β, as seen in the formulas below. The behavior of solutions of Eqs. (7), (8), (9) can be classified as follows:
Proposition. If R_0 < 1, then the only steady states of (7), (8), (9) in \(R_{+}^{3}\) are ss_0 = (0,0,0), which is unstable in \(R_{+}^{3}\), and ss_1 = (0,β/μ,0), which is proportional to β and locally exponentially asymptotically stable in \(R_{+}^{3}\). If R_0 < 1, H_i(0) > 0, and V_i(0) = 0, then (H_i(t),V_u(t),V_i(t)) converges to \((0,\overline{M},0)\). If R_0 > 1, then ss_0 and ss_1 are unstable in \(R_{+}^{3}\) and there is another steady state in \(R_{+}^{3}\),
$${ss}_{2} = \left(\frac{\beta (H_{u} \sigma_{1} \sigma_{2} -\lambda \mu)}{\lambda \mu \sigma_{2}}, \frac{\beta \lambda}{H_{u} \sigma_{1} \sigma_{2}}, \frac{\beta (H_{u} \sigma_{1} \sigma_{2} -\lambda \mu)}{H_{u} \mu \sigma_{1} \sigma_{2}} \right)$$
$$= \left(\frac{\beta(R_{0} -1)}{\sigma_{2}}, \frac{\beta}{R_{0} \mu}, \frac{\lambda \beta (R_{0} - 1)}{ H_{u} \sigma_{1} \sigma_{2}} \right).$$
which is proportional to β and locally exponentially asymptotically stable in \(R_{+}^{3}\).
Proof. Set M(t) = V_u(t) + V_i(t) and \(\overline{M} = \beta / \mu\). Then M′(t) = β M(t) − μ M(t)^2 and \({\lim}_{t \rightarrow \infty} M(t) = \overline{M}\). It can be verified that the steady states of (7), (8), (9) in \(R_{+}^{3}\) are ss_0, ss_1, and ss_2. The Jacobian of (7), (8), (9) at ss_0 is
$$J(0,0,0) = \left[ \begin{array}{ccc} -\lambda & 0 & H_{u} \sigma_{1} \\ 0 & \beta & \beta \\ 0 & 0 & 0 \end{array} \right]$$
with eigenvalues {−λ, β, 0}, which means that (0,0,0) is unstable. If H_i(0) > 0 and V_i(0) = 0, then (7) implies \(H_{i}^{\prime}(0) < 0\). Assume there is a smallest positive time t* such that \(H_{i}^{\prime}(t^{\ast}) = 0\). Then (7) implies H_i(t*) = (σ_1 H_u / λ) V_i(t*). If R_0 < 1, then (9) implies
$$V_{i}^{\prime}(t^{\ast}) = \frac{\sigma_{1} \sigma_{2} H_{u}}{ \lambda} \, V_{i}(t^{\ast}) (M(t^{\ast}) - V_{i}(t^{\ast})) - \mu V_{i}(t^{\ast}) M(t^{\ast}) < - \frac{\sigma_{1} \, \sigma_{2} \, H_{u}}{ \lambda} \, V_{i}(t^{\ast})^{2} < 0. $$
Then (7) implies
$$H_{i}^{\prime \prime}(t^{\ast}) = - \lambda H_{i}^{\prime}(t^{\ast}) + \sigma_{1} H_{u} V_{i}^{\prime}(t^{\ast}) < 0, $$
which implies H_i(t) is strictly decreasing at t*, yielding a contradiction. Thus, H_i(t) is strictly decreasing for all t ≥ 0. Let \(H_{i,\infty} = {\lim}_{t \rightarrow \infty} H_{i}(t) \geq 0\). Assume H_i,∞ > 0. Then (7) implies \({\lim}_{t \rightarrow \infty} V_{i}(t) = \lambda H_{i,\infty} / (\sigma_{1} H_{u}) > 0\). Equation (8) then implies \({\lim}_{t \rightarrow \infty} V_{u}(t) = \beta \overline{M} / (\sigma_{2} H_{i,\infty} + \mu \overline{M})\). Then \((H_{i,\infty}, \beta \overline{M} / (\sigma_{2} H_{i,\infty} + \mu \overline{M}), \lambda H_{i,\infty} / (\sigma_{1} H_{u}))\) is a steady state of (7), (8), (9). If R_0 < 1, then H_i,∞ = 0, yielding a contradiction. Thus, H_i,∞ = 0.
The Jacobian of (7), (8), (9) at ss_1 is
$$J(0,\beta / \mu,0) = \left[ \begin{array}{ccc} -\lambda & 0 & H_{u} \sigma_{1} \\ - \beta \sigma_{2} / \mu & - \beta &0 \\ \beta \sigma_{2} / \mu & 0 & - \beta \end{array} \right]$$
with eigenvalues
$$\{- \beta, \frac{- \beta - \lambda - \sqrt{(\beta - \lambda)^{2} + 4 R_{0} \beta \lambda}}{2}, \frac{- \beta - \lambda + \sqrt{(\beta - \lambda)^{2} + 4 R_{0} \beta \lambda}}{2}\}.$$
Thus, J(0,β/μ,0) is unstable if R 0>1 and locally exponentially asymptotically stable if R 0<1.
The Jacobian of (7), (8), (9) at ss_2 is
$$\left[ \begin{array}{ccc} - \lambda & 0 & H_{u} \sigma_{1} \\ - \frac{\beta \lambda}{H_{u} \sigma_{1}} & \beta(1- \frac{\lambda \mu}{H_{u} \sigma_{1} \sigma_{2}} - \frac{H_{u} \sigma_{1} \sigma_{2}}{\lambda \mu}) & \beta (1 - \frac{\lambda \mu}{H_{u} \sigma_{1} \sigma_{2}}) \\ \frac{\beta \lambda}{H_{u} \sigma_{1}} & \beta(- 2 + \frac{\lambda \mu}{H_{u} \sigma_{1} \sigma_{2}} +\frac{H_{u} \sigma_{1} \sigma_{2}}{\lambda \mu}) & \beta (- 2 + \frac{\lambda \mu}{H_{u} \sigma_{1} \sigma_{2}}) \end{array} \right] $$
$$= \left[ \begin{array}{ccc} - \lambda & 0 & \frac{R_{0} \lambda \mu}{\sigma_{2}} \\ - \frac{\beta \sigma_{2}}{R_{0} \mu} & \beta(1- \frac{1}{R_{0}} - R_{0}) & \beta (1 - \frac{1}{R_{0}}) \\ \frac{\beta \sigma_{2}}{R_{0} \mu} & \beta(- 2 + \frac{1}{R_{0}} + R_{0}) & \beta (- 2 + \frac{1}{R_{0}}) \end{array} \right]. $$
with eigenvalues
$$\{- \beta, \frac{- R_{0} \beta - \lambda - \sqrt{(R_{0} \beta - \lambda)^{2} + 4 \beta \lambda}}{2}, \frac{- R_{0} \beta - \lambda + \sqrt{(R_{0} \beta - \lambda)^{2} + 4 \beta \lambda}}{2}\}. $$
Since −(R_0 β + λ)^2 + (R_0 β − λ)^2 + 4βλ = −4(R_0 − 1)βλ < 0 if R_0 > 1, the eigenvalues of the Jacobian at ss_2 are strictly negative if R_0 > 1, which means that ss_2 is locally exponentially asymptotically stable if R_0 > 1.
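As a numerical check of the proposition, the spatially independent system (7)–(9) can be integrated and compared with the formula for ss_2; the parameter values below are arbitrary placeholders chosen only so that R_0 > 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder constants giving R0 > 1
Hu, lam, beta, mu, sig1, sig2 = 5000.0, 1.0, 0.5, 1.5e-3, 4.9e-7, 0.78
R0 = Hu * sig1 * sig2 / (lam * mu)

def rhs(t, z):
    Hi, Vu, Vi = z
    M = Vu + Vi
    return [-lam * Hi + sig1 * Vi * Hu,                       # Eq. (7)
            beta * M - sig2 * Vu * Hi - mu * M * Vu,          # Eq. (8)
            sig2 * Vu * Hi - mu * M * Vi]                     # Eq. (9)

sol = solve_ivp(rhs, (0.0, 400.0), [1.0, beta / mu, 0.1], rtol=1e-8, atol=1e-10)

ss2 = (beta * (R0 - 1.0) / sig2,
       beta / (R0 * mu),
       lam * beta * (R0 - 1.0) / (Hu * sig1 * sig2))
print("R0 =", round(R0, 3))
print("state at t = 400:", np.round(sol.y[:, -1], 3))
print("ss2 from formula:", np.round(ss2, 3))
```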
World Health Organization. Zika virus. 2016;Sept 16. http://www.who.int/mediacentre/factsheets/zika/en/.
Gao D, Lou Y, He D, et al. Prevention and control of Zika as a mosquito-borne and sexually transmitted disease: A mathematical modeling analysis. Sci. Rep. 2016;17(6).
Cao-Lormeau V-M, Blake A, Mons S, et al. Guillain-Barré Syndrome outbreak associated with Zika virus infection in French Polynesia: a case-control study. Lancet. 2016; 387:1531–1539.
Nishiura H, Mizumoto K, Rock KS, et al. A theoretical estimate of the risk of microcephaly during pregnancy with Zika virus infection. Epidemics. 2016; 15:66–70.
Bogoch II, Brady OJ, Kraemer MU, et al. Potential for Zika virus introduction and transmission in resource-limited countries in Africa and the Asia-Pacific region: a modelling study. Lancet Infect. Dis. 2017. (Epub ahead of print).
Zinszer K, Morrison K, Brownstein JS, et al. Reconstruction of Zika virus introduction in Brazil. Emerg. Infect. Dis. 2017. (Epub ahead of print).
Carlson CJ, Dougherty ER, Getz W. An ecological assessment of the pandemic threat of Zika virus. PLoS Negl. Trop. Dis. 2016;eCollection.
Robert CJ, Christofferson RC, Silva NJ, et al. Modeling mosquito-borne disease spread in U.S. urbanized areas: The case of Dengue in Miami. PLoS One. 2016;11(8).
Huff A, Allen T, Whiting K, et al. FLIRT-ing with Zika: A web application to predict the movement of infected travelers validated against the current Zika virus epidemic. PLoS Curr. 2016;10(8).
Chowell G, Hincapie-Palacio D, Ospina J, et al. Using phenomenological models to characterize transmissibility and forecast patterns and final burden of Zika epidemics. PLoS Curr. 2016;31(8).
Goubert C, Minard G, Vieira C, et al. Population genetics of the Asian tiger mosquito Aedes albopictus, an invasive vector of human diseases. Heredity. 2016; 117(3):125–134.
Majumder MS, Santillana M, Mekaru SR, et al. Utilizing nontraditional data sources for near real-time estimation of transmission dynamics during the 2015-2016 Colombian Zika virus disease outbreak. JMIR Public Health Surveill. 2016;1(2).
Massad E, Tan SH, Khan K, et al. Estimated Zika virus importations to Europe by travellers from Brazil. Glob Health Action. 2016;17(9).
Bailey NTJ. The Mathematical Theory of Epidemics. London: Charles Griffin and Co. Ltd; 1957.
Manore C, Hickmann S, Xu S, et al. Comparing Dengue and Chikungunya emergence and endemic transmission in A. aegypti and A. albopictus. J. Theoret. Biol. 2014; 356:174–191.
Ho SM, Speldewinde P, Cook A. Predicting arboviral disease emergence using Bayesian networks: a case study of dengue virus in Western Australia. Epidemiol. Infect. 2016; 145(1):1–13.
Capasso V. Global Solution for a diffusive nonlinear deterministic epidemic model. SIAM J. Appl. Math. 1978; 35(20):274–284.
Webb GF. A reaction-diffusion model for a deterministic diffusive epidemical model. J. Math. Anal. Appl. 1981; 84:150–161.
Fitzgibbon WE, Martin CB, Morgan J. A diffusive epidemic model with criss-cross dynamics. J. Math. Anal. Appl. 1994; 184:399–414.
Fitzgibbon WE, Parrott ME, Webb GF. Diffusion Epidemic models with incubation and crisscross dynamics. Math. Bios. 1995; 128(1-2):131–155.
Fitzgibbon WE, Langlais M, Morgan J. A reaction diffusion system on non-coincident domains modeling the circulation of a disease between two host populations. Dif. Int. Eq. 2004; 17:781–802.
Fitzgibbon WE, Langlais M, Marpeau F. Modelling the circulation of a disease between two host populations on non-coincident spatial domains. Biol. Invasions. 2005; 7:863–875.
Anita S, Fitzgibbon WE, Langlais M. Global existence and internal stabilization for a reaction diffusion system posed on non-coincident domains. Disc. Cont. Dyn. Sys.-Series B. 2009; 11(4):805–822.
Fitzgibbon WE, Langlais M. Lecture Notes in Mathematics: Biomathematics Subseries In: Magal P, Ruan S, editors. New York: Springer-Verlag: 2008. p. 115–164.
Thrall PH, Antonovies J, Hall DW. Host and pathogen coexistence in sexually transmitted and vector-borne diseases. Amer. Nat. 1993; 142:543–552.
Wu Y, Zou X. Asymptotic profiles of steady states for a diffusive SIS epidemic model with mass action infection mechanism. J. Dif. Eq. 2016; 261(8):4424–4447.
Allen LJS, Bolker BM, Lou Y, et al. Asymptotic profiles of the steady states for an SIS epidemic reaction–diffusion model. Disc. Cont. Dyn. Sys - Series B. 2008; 21:1–20.
Peng R. Asymptotic profiles of the positive steady state for an SIS epidemic reaction-diffusion model. Part I. J. Dif. Eq. 2009; 247(4-15):1096–1119.
Brasil P, Calvet GA, Siqueira AM, et al. Zika virus outbreak in Rio de Janeiro, Brazil: Clinical characterization, epidemiological and virological aspects. PLOS Neglected Tropical Diseases. 2016;20(12).
Honório N, Nogueira R, Codeco C, et al. Spatial evaluation and modeling of Dengue seroprevalence and vector density in Rio de Janeiro, Brazil. PLOS Neglected Tropical Diseases. 2009;3(11).
Ministério da Saúde, Secretaria de Vigilância em Saúde. Boletim Epidemiológico: Monitoramento dos casos de dengue, febre de chikungunya e febre pelo vírus Zika até a Semana Epidemiológica 32. 2016;47(33).
Bastos L, Villela D, de Calvalho L, et al. Assessment of basic reproductive number and its comparison with dengue. bioRxiv:055475. Posted online May 25, 2016.
Kucharski A, Funk S, Eggo R, et al. Transmission dynamics of Zika virus in island populations: A modelling analysis of the 2013–2014 French Polynesia outbreak. PLOS Neglected Tropical Diseases. 2016;10(5).
Centers for Disease Control. Zika virus. 2016. https://www.cdc.gov/zika/index.html.
Brady O, Johansson M, Guerra C, et al. Modelling adult Aedes aegypti and Aedes albopictus survival at different temperatures in laboratory settings. Parasites & Vectors. 2013;6(351).
Otero M, Schweigmann N, Solaria H. A stochastic spatial dynamical model for Aedes aegypti. Bull. Math. Biol. 2008; 70:1297–1325.
World Population. 2016. http://www.population.city/brazil/rio-de-janeiro/.
Duffy MR, Chen T-H, Hancock WT, et al. Zika virus outbreak on Yap Island, Federated States of Micronesia. N. Engl. J. Med. 2009; 360:2536–2543.
Project Premonition. 2016. http://www.microsoft.com/en-us/research/project/project-premonition/.
Martin RH. Nonlinear Operators and Differential Equations in Banach Spaces. New York: Wiley-Interscience; 1976.
Pazy A. Semigroups of Operators and Applications. New York: Springer-Verlag; 1983.
Smoller J. Shock Waves and Reaction Diffusion Equations. New York: Springer-Verlag; 1994.
No funding bodies were utilized in the design, analysis, and writing of the manuscript.
The data in the manuscript is published by the Brazilian Ministry of Health, as given in the References. The authors agree to provide upon request computer codes for the numerical simulations in the manuscript.
All authors conceived and developed the study. All authors read and approved the final manuscript.
Department of Mathematics, University of Houston, Houston, 77204, TX, USA
W. E. Fitzgibbon & J. J. Morgan
Department of Mathematics, Vanderbilt University, Nashville, 37240, TN, USA
G. F. Webb
Correspondence to G. F. Webb.
Fitzgibbon, W.E., Morgan, J.J. & Webb, G.F. An outbreak vector-host epidemic model with spatial structure: the 2015–2016 Zika outbreak in Rio De Janeiro. Theor Biol Med Model 14, 7 (2017). https://doi.org/10.1186/s12976-017-0051-z
Keywords: Criss-cross dynamics, Local reproduction number, Zika epidemic
Learning culturally situated dialogue strategies to support language learners
Victoria Abou-Khalil ORCID: orcid.org/0000-0002-9706-00711,
Toru Ishida1,
Masayuki Otani2,
Brendan Flanagan1,
Hiroaki Ogata1 &
Donghui Lin1
Research and Practice in Technology Enhanced Learning volume 13, Article number: 10 (2018) Cite this article
Successful language learning requires an understanding of the target culture in order to make valuable use of the learned language. To understand a foreign culture, language students need knowledge of its related products, as well as the skill of comparing them to those of their own culture. One way for students to understand foreign products is by making Culturally Situated Associations (CSA), i.e., relating the products they encounter to products from their own culture. In order to provide students with CSA that they can understand, we must gather information about their culture, provide them with the CSA, and make sure they understand it. In this case, a Culturally Situated Dialogue (CSD) must take place. To carry out the dialogue, dialogue systems must follow a dialogue strategy. However, previous work showed that handcrafted dialogue strategies are less effective than machine-learned dialogue strategies. In this research, we proposed a method to learn CSD strategies to support foreign students, using a reinforcement learning algorithm. Since no previous system providing CSA had been implemented, the method allowed the creation of CSD strategies when no initial data or prototype exists. The method was applied to generate three different agents: the novice agent was based on an eight-state feature space, the intermediate agent was based on a 144-state feature space, and the advanced agent was based on a 288-state feature space. Each of these agents learned a different dialogue strategy. We conducted a Wizard of Oz experiment during which the agents' role was to support the wizard in their dialogue with students by providing them with the appropriate action to take at each step. The resulting dialogue strategies were evaluated based on the quality of the strategy. The results suggest the use of the novice agent at the first stages of prototyping the dialogue system. The intermediate agent and the advanced agent could be used at later stages of the system's implementation.
Successful language learning requires more than knowledge of vocabulary and grammar skills. It is at least as important to develop an understanding of the target culture in order to make valuable use of the learned language (Wagner et al. 2017). Intercultural competence is viewed as being as important as communication and should be an integral part of the language curriculum (Stewart 2007). However, the inclusion of the cultural aspect in language teaching has been challenging for teachers (Kissau et al. 2012). An important aspect of understanding another culture is knowledge of its related products, as well as the skill of comparing them to those of one's own culture. By putting ideas or products of two cultures side by side, the student can see how each would look from the other perspective and avoid misunderstandings (Byram et al. 2002).
As a part of the language learning process, it is recommended that students visit a foreign country in order to practice the language and explore the culture (Byram et al. 2002). During these visits, food products are usually encountered immediately and are among the most obvious products that highlight cultural differences. Food, eating, food behaviors, and food social norms are intimately connected to cultural identity and deeper cultural concepts (Kanafani-Zahar 1997; Scholliers 2001). One method to explain a particular food product to a student would be to display a complete listing of the ingredients as well as a description of the food product. However, this kind of information might leave them with questions like: what does it taste like? what is the texture? and when is it used?
In a situation where providing a simple description of a product fails to deliver a complete understanding of the meaning of the product, an efficient alternative is to relate the product to a similar product in the student's culture. This means offering Culturally Situated Associations (CSA) that allow learners to understand the meaning, usage, and taste of the food product they are inquiring about. A system that supports students with CSA must deliver the associations and make sure that those associations were understood. These requirements can be fulfilled by learning Culturally Situated Dialogue (CSD) strategies that support the realization of those objectives. However, when no initial observations or system exists, learning dialogue strategies is a challenging task. In fact, developers or designers of a CSD system may not be able to predict the most appropriate action to be taken by the system at each moment and would have to invest in a time-consuming effort to predict the most appropriate action in each situation. Moreover, the number of different utterances that could occur in a dialogue system is large, and previous work showed that automatic dialogue strategies outperform handcrafted ones (Scheffler and Young 2002). Ishida (2016) highlighted the need "to model agents that can not only support a specific culture, but also recognize the differences among cultures, and differences among the understanding of cultural differences." Aligning with the need for an agent that can provide CSA and addressing the challenge of automating CSD strategies in the particular situation where no initial system exists, this research proposes a method to learn CSD strategies to support students when no data or working prototype exists.
Culturally situated associations
A variety of intercultural communication models have been proposed by researchers. However, the most influential model is attributed to Byram, because his approach provides a holistic view of intercultural competence and has defined objectives and practical derivations (Chen and Yang 2014). Byram's model defines the following five skills needed in order to accomplish successful intercultural communication: intercultural attitudes, knowledge, interpreting and relating, discovery and interaction, and critical cultural awareness (Byram 1997). Two of those skills are necessary in the initial stages of familiarizing oneself with a new culture and are essential to understand foreign concepts or products (Byram 1997):
Discovery or knowledge: knowledge about a social group and their products and practices in the foreign visitor's own country; and
Interpreting and relating: foreign visitors relate the information they get to information from their own culture.
Over the years, efforts have been made to use computer technologies to support the teaching of culture. To understand the other culture, different approaches were implemented: showing students juxtaposed texts from different cultures (Liaw 2006), using concordances of two corpora to investigate different usages of a word in different cultures (Leech and Fallon 1992), and using web-based tools (online forums, weblogs, Skype, and email) (Chen and Yang 2014). However, most previous studies focused on fostering Byram's skills of intercultural attitudes, knowledge, discovery, and interaction.
The skills of interpreting and relating have not been tackled in computer-assisted education; they consist of putting concepts or products from two or more cultures side by side and seeing how each might look from the other perspective (Chen and Yang 2014). Providing students with CSA means putting concepts or products from the student's culture and from the target culture side by side and helping the student interpret and relate the concepts that they encounter.
Dialogue strategies
In real life situations, interpreting and relating cannot be achieved in real time as CSA requires a deep knowledge about the foreign culture and information about the student's culture. In order to provide students with CSA that they can understand, we must gather information about their culture, provide them with the CSA, and make sure they understand it. In this case, a Culturally Situated Dialogue (CSD) must take place. To carry the dialogue, dialogue systems must follow a dialogue strategy.
The recent literature shows a growing interest in the implementation and use of automatic dialogue systems. The development of such dialogue systems, and more particularly the development of dialogue strategies, is challenging (Eckert et al. 1997). In order to achieve a dialogue efficiently through a series of interactions with the user, dialogue strategies are needed. By quantifying the achievement of the dialogue goal as well as the efficiency of the strategy, it is possible to describe the system as a stochastic model that can be used for learning those dialogue strategies (Levin et al. 1998). This method has many advantages, including the possibility of automating the evaluation of dialogue strategies as well as automatic design and adaptation. In previous work on dialogue systems, reinforcement learning was used to learn wizards' dialogue strategies for presenting information and to replicate them. Wizard of Oz experiments allow the learning of dialogue strategies when no initial system exists. The results showed that reinforcement learning combined with a Wizard of Oz experiment allows the development of optimal strategies when no working prototype is available (Rieser and Lemon 2008). In fact, reinforcement learning significantly outperformed supervised learning when interacting in simulation as well as with real users (Rieser and Lemon 2008). However, unlike standard dialogue systems that take into account user-related properties, the challenge in learning an optimal CSD strategy consists of learning which information about the student's culture, if any, should be inquired about and in which order.
The Wizard of Oz (WoZ) is a research experiment in which users interact with a computer system that they believe to be autonomous but that is actually operated, either partially or completely, by a human being (the wizard) (Kelley 1984). The WoZ experiment is useful in different cases. It allows the gathering of information when basic knowledge about user performance during a computer-based interaction is lacking. Moreover, the usage of WoZ allows many speech designers to participate in building that knowledge. Finally, WoZ allows an iterative design approach for building user interfaces, as it is easy to use, requires little programming, and supports rapid testing and interface modifications (Klemmer et al. 2000). The design of a WoZ experiment may contain different amounts of control, ranging from complete automation of the interaction to an interaction solely dependent on the wizard, as well as mixed-initiative interactions (Riek 2012). Green, Huttenrauch, and Eklundh (2004) set out some of the most recognized conditions for conducting a WoZ experiment: the user should have access to specific instructions, and the designers should have a behavior hypothesis as well as a specified robot behavior. The architecture requirements of a WoZ experiment were set by Fraser and Gilbert (1991, p. 81–99), who state that (1) "It must be possible to simulate the future system, given human limitations"; (2) "It must be possible to specify the future system's behavior"; (3) "It must be possible to make the simulation convincing." The implementation of WoZ experiments should use scenarios to place additional constraints on the study. Previous guidelines highlight the importance of scenario constraints for WoZ experiments (Dahlbäck et al. 1993; Fraser and Gilbert 1991; Green et al. 2004; Riek 2012). The scenario constraints give participants a task to solve that requires the use of the system and for which there is not a single way to solve the problem (Riek 2012; Dahlbäck et al. 1993). Finally, in a review paper, Riek (2012) went through 54 papers and categorized them by the type of wizard control used; 72.2% of the papers reported using the WoZ experiment to control a natural language processing component, such as having the robot engage in a dialogue and make appropriate utterances. The use of WoZ has been shown to be beneficial for designing and testing dialogue strategies when no initial data or working prototype exists (Rieser and Lemon 2008). Using WoZ is a way of collecting data before actually building a system that might need this data to be built. Moreover, it allows the testing of parts of the system without having to program and design the whole system in order to do so.
Figure 1 shows the system architecture. The WoZ experiment is used because no working prototype or initial CSD system is available. The student and the wizard communicate through Skype to allow the wizard to see the product the student is asking about. In order to provide the wizard with the optimal dialogue strategy, an agent is trained based on a reinforcement learning algorithm, and passes to the wizard the optimal strategy to take at each step. The wizard reports first their state of knowledge to the agent through a web interface (e.g., I do not have any information about the student's country yet). Once the agent receives the current state of knowledge of the system, it provides the wizard with the appropriate action to take (e.g., ask for the student's country). If the agent suggests the querying of the associated concept, the wizard retrieves the CSA from a provided database. The database contains food items as well as their related country of origin, the region of origin, the related ingredients, and their usage. The dialogue, directed by the agent and executed by the wizard, is carried out until the CSA is provided to the student and understood by them.
Fig. 1 System architecture
Identification of dialogue patterns
In order to extract the necessary components needed to build the feature space of the reinforcement learning algorithm and create the automatic dialogue strategies, we first identify common natural dialogue patterns to provide CSA to students.
To identify the possible dialogue patterns, we first conducted interviews with tourists in Nishiki Market, a traditional food market in Kyoto. We interviewed 15 tourists coming from western countries, chosen randomly during their visit to the market. We decided to interview tourists instead of students because Japanese language students might have different levels of familiarity with Japanese products depending on their language level. This difference might lead to dialogue patterns that are not representative of those a beginner language student would have. The gender breakdown was balanced, and the participants were from Europe, New Zealand, and the USA. The tourists were asked to list the questions that they would have wanted to ask if it had been possible to communicate with the shop clerks and get an answer. We received 34 questions from the participants. Table 1 shows the different questions asked by participants from different countries.
Table 1 Categorization of questions asked by tourists by country
Similar questions were put together, and the tourists' questions were categorized by question topic. The questions of the tourists were classified into three categories shown in Table 2. The first category contains the questions about the ingredients of a particular food. Questions about the taste were classified under the ingredients category as we considered that the ingredients of the food can give an idea about the taste (salty, sweet, sour, etc.). The second category includes the questions about the usage. The last category includes general questions about the composition and the usage of the food.
Table 2 Categorization of questions asked by tourists by question topic
Based on the previous questions provided by the tourists, we create typical dialogues that could happen between the shop owners and the students during their travels. During those conversations, shop owners naturally follow a CSD strategy to answer the questions of the students with CSA. We match each of the previous examples to a pattern of CSD. To further understand the CSD, we define several terms as follows:
Target concept is the concept that needs to be explained.
Associated concept is used to explain a target concept. It is a concept that belongs to a different culture than the target concept.
Common attribute is an attribute or a property that belongs to both the target and the associated concepts.
Cultural attribute, such as location or language, is a common attribute that contributes to determining a culture.
Using the previous terms, we classify culturally situated conversations into several culturally situated dialogue patterns:
Example conversation 1
Student: What is this and how does it taste?
Shop owner: It is Neri Goma. It is a paste made out of roasted sesame seeds. Where are you from?
Student: Iraq.
Shop owner: It is like Tahine.
Dialogue pattern 1: using cultural attribute as a pivot
Student: Question about the taste of the target concept.
Shop owner: Question to identify the cultural attributes of the student.
Student: Student provides the cultural attributes.
Shop owner: Finds the associated concept that possesses cultural attributes that matches the student cultural attributes and common attributes related to the taste that are identical to the common attributes of the target concept.
Example conversation 2
Student: What is this? How do we use it?
Shop owner: It is Neri Goma. It is a paste made out of roasted sesame seeds. Where are you from?
Student: Iraq.
Shop owner: It is like Tahine, but in Japan it is mainly used in sweets.
Dialogue pattern 2: comparative association
Student: Question about a target concept.
Shop owner: Question to identify the cultural attributes of the student.
Student: Student provides the cultural attributes.
Shop owner: Finds the associated concept that possesses cultural attributes that match the student's cultural attributes and common attributes related to the taste that are identical to the common attributes of the target concept. In case other common attributes differ from the target concept's common attributes, the differences are presented to the student.
Example conversation 3
Student: What is this?
Shop owner: It is Udon, noodles made out of wheat and flour. They are usually eaten in broth.
Student: What is the difference with Soba?
Shop owner: Udon is made out of wheat and Soba out of buckwheat. Where are you from?
Student: Italy
Shop owner: Udon is more like Spaghetti and Soba like Pizzoccheri
Dialogue pattern 3: intra-cultural comparison
Student: Question about the difference between two target concepts.
Shop owner: Question to identify the cultural attributes of the student
Shop owner: The difference between the two target concepts is identified by comparing all their common attributes. Based on the cultural attributes of the student, two associated concepts with the same difference in the common attributes are found.
Based on the previous dialogue patterns, we extract the components essential to conduct CSD strategies (a minimal data-structure sketch follows the list):
Target concept
Associated concept
Cultural attributes
Common attributes
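These components lend themselves to a simple data representation. The following is a minimal sketch, not taken from the paper's code, of how target concepts, associated concepts, and their attributes could be modelled; all class and field names are illustrative assumptions.

# Minimal sketch of the CSD components; names are illustrative, not the authors'.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Concept:
    name: str                            # e.g., "Neri Goma" or "Tahine"
    cultural_attributes: Dict[str, str]  # e.g., {"country": "Japan"}
    common_attributes: Dict[str, str]    # e.g., {"ingredients": "roasted sesame"}

@dataclass
class CulturallySituatedAssociation:
    target: Concept                      # concept that needs to be explained
    associated: Optional[Concept] = None # concept from the student's culture
    differences: Dict[str, str] = field(default_factory=dict)  # attributes that differ

# Example: relating Neri Goma to Tahine for a student from Iraq
neri_goma = Concept("Neri Goma", {"country": "Japan"}, {"ingredients": "roasted sesame"})
tahine = Concept("Tahine", {"country": "Iraq"}, {"ingredients": "roasted sesame"})
csa = CulturallySituatedAssociation(neri_goma, tahine, {"usage": "in Japan it is mainly used in sweets"})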
The reinforcement learning algorithm
The Markov decision process is a mathematical formalism that is used to implement the reinforcement learning algorithm. Our algorithm was based on Ng's (2000) work. The main components of this formalism and their implementation are:
State and action space: the state space of the reinforcement learning algorithm comprises all the states of knowledge that the system (here, the wizard) possesses about the internal and external resources it is interacting with (e.g., the country of the student, associated concepts). The action set of the dialogue system includes all possible actions it can accomplish. It includes interactions with the user (e.g., asking the student for their region, providing the student with an associated concept) as well as interactions with other resources (e.g., searching for the associated concepts). When the system's current state is s and an action a is taken, the state changes to s'. For example, when the system is in an initial state and the wizard does not have any information, the agent will ask the wizard to interact with the student and obtain a specific piece of information. The next state, s', will depend on whether the wizard obtained the information or not.
We identified the possible state spaces based on the components extracted from the dialogue patterns. The target concept is assumed to be known, as the wizard interacts with the student and is able to identify it. The cultural attributes are necessary in order to determine the culture of the student, and thus in which culture the associated concepts should be found. Students usually have a question that is related to a particular common attribute (e.g., usage, ingredients). The common attributes are necessary as they are the basis of the comparison between the target concept and the associated concept. The action space is directly derived from the state space. Based on the previously defined components, we created three state spaces of different granularity. The three resulting agents were named, respectively, the novice agent, the intermediate agent, and the advanced agent.
Transition probabilities: the probability of transitioning from a state s to a state s' given an action a is estimated using the observed data. The estimated transition probability, \(P_{s,a,s^{\prime }}\), is computed as follows:
$$ P_{s,a,s'}=\frac{\text{number of times we took action } a \text{ in state } s \text{ and got to } s'}{\text{number of times we took action } a \text{ in state } s} $$
In the case where an action a is never taken from a state s, we consider \(P_{s,a,s'}\) to be equal to \(\frac{1}{\text{number of states}}\), assuming that the probability is equally distributed over all states.
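As an illustration, the count-based estimate with its uniform fallback could be implemented as follows; this is a minimal sketch assuming observations arrive as (state, action, next state) tuples, and the function and variable names are our own, not identifiers from the paper's code.

# Count-based estimate of P(s, a, s') with a uniform fallback for unseen (s, a) pairs.
from collections import defaultdict

def estimate_transitions(observations, states):
    counts = defaultdict(lambda: defaultdict(int))  # (s, a) -> {s': count}
    totals = defaultdict(int)                       # (s, a) -> total count
    for s, a, s_next in observations:
        counts[(s, a)][s_next] += 1
        totals[(s, a)] += 1

    def P(s, a, s_next):
        if totals[(s, a)] == 0:
            return 1.0 / len(states)                # action never taken in s: uniform
        return counts[(s, a)][s_next] / totals[(s, a)]

    return P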
Reward: we suppose that the reward function is unknown. The expected immediate reward R(s) in a specific state is estimated as the average reward observed in state s.
Value iteration and policy: a policy is any function π that maps states to actions. A policy π is executed if, whenever we are in state s, we take the action a = π(s). The value function for a policy π is the expected sum of discounted rewards when we start in state s and take actions according to π. The value function of a policy π is given by the Bellman equation (Bellman 2013).
$$ V^{\pi}(s)=R(s)+\gamma \sum_{s'\in S}P_{s,\pi(s),s'}\,V^{\pi}\left(s'\right) $$
The Bellman equation states that the expected sum of discounted rewards Vπ(s) is given by the sum of the immediate reward and the expected sum of future rewards. We define as well the optimal value function given by
$$ V^{*}(s)=\max_{\pi} V^{\pi}(s) $$
V∗(s) is the best expected sum of discounted rewards that can be reached using any policy. Based on the previous equations, we will describe the algorithm that we used to calculate the value function and to get the best policy:
For each state s, initialize Vπ(s)=0
Repeat until convergence: for each state, update:
$$ V(s)=R(s)+\gamma \max_{a\in A} \sum_{s'\in S}P_{s,a,s'}\,V\left(s'\right) $$
The policy in state s is the \(a \in A\) which maximizes V(s).
In this algorithm, we update the estimated value function based on the Bellman equation. For every state s, we compute the new value of V(s). After a certain number of iterations, the value function converges towards V*(s).
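A compact implementation of this value-iteration loop and of the greedy policy extraction might look as follows; the discount factor, the convergence tolerance, and the dictionary-based reward estimate R are illustrative assumptions rather than choices taken from the paper.

# Value iteration and greedy policy extraction, following the update rule above.
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-6):
    V = {s: 0.0 for s in states}                    # initialize V(s) = 0
    while True:
        delta = 0.0
        for s in states:
            best = max(sum(P(s, a, s2) * V[s2] for s2 in states) for a in actions)
            new_v = R[s] + gamma * best
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < tol:                             # stop when the update is negligible
            break
    # the policy picks, in each state, the action maximizing the expected return
    policy = {
        s: max(actions, key=lambda a: sum(P(s, a, s2) * V[s2] for s2 in states))
        for s in states
    }
    return V, policy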
The novice agent
The first level feature space produces the novice agent. The state space includes only three entries that represent the mental state of the system, in other words, the current state of the wizard's knowledge:
Does not know the user's culture/knows the user's culture.
Does not know the associated concept/knows the associated concept.
Knows that the user does not understand the concept/knows that the user understands the concept.
Every entry can take either of its values, giving us a total number of eight states, including two final states. The final states are the states we want the agent to reach at the end of the dialogue. An episode of the reinforcement learning algorithm ends when a final state is reached. At the end of the dialogue, the student should get an associated concept that answers their question, and they should be able to understand the associated concept provided to them. The final states are all the combinations of states that include the two following entries: knows the associated concept and knows that the user understands the concept:
Knows the user culture/knows the associated concept/knows that the user understands the concept.
Knows the associated concept/knows that the user understands the concept.
For the first level feature space, the action space includes only three actions (a short sketch enumerating the resulting states and actions follows the list):
Identify the user's culture.
Identify the associated concept.
Ask if the user understood the concept.
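For illustration, the eight novice-agent states and the check for final states can be enumerated directly; the entry and action names below are our own shorthand, not identifiers from the paper's code.

# Enumerate the novice agent's 8 states from the three binary knowledge entries.
from itertools import product

ENTRIES = ("knows_culture", "knows_associated_concept", "knows_user_understood")
STATES = list(product([False, True], repeat=3))     # 2^3 = 8 states

ACTIONS = ("identify_culture", "identify_associated_concept", "ask_if_understood")

def is_final(state):
    # final states: associated concept known AND the user's understanding confirmed
    _, knows_concept, knows_understood = state
    return knows_concept and knows_understood

print(len(STATES), sum(is_final(s) for s in STATES))  # 8 states, 2 final states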
The intermediate agent
The second level state-action space produces the intermediate agent. The second level state space is the result of breaking down the first level state space into more precise states of knowledge. It includes six entries that represent the mental state of the system:
Does not know the user's country/knows the user's country.
Does not know the user's region/knows the user's region.
Does not know the common attributes/knows the common attributes.
Does not know if there is an associated international concept/knows that there is an associated international concept/knows that there is not an associated international concept.
Does not know the cultural associated concept/knows the cultural associated concept.
Does not know if the student understood the associated concept/knows that the student understood the associated concept/knows that the student did not understand the associated concept.
Every entry can take any of its values, with all permutations giving us a total number of 144 states, including 15 final states. To be in a final state, the agent should know the associated concept and should know that the user understood the associated concept. Moreover, the knowledge of the system should be consistent (e.g., a state in which the system knows the cultural associated concept but does not know either of the cultural attributes is not a final state).
For the second level state space, the action set includes six actions:
Identify the user's country.
Identify the user's region.
Identify the common attributes.
Identify if there is an associated international concept.
Identify if there is a cultural associated concept.
Ask if the student understood the associated concept.
The advanced agent
The third level state-action space produces the advanced agent. The third level state space is the result of breaking down the second level state space into more precise states of knowledge. It includes seven entries that represent the mental state of the system, of which the two new entries are:
Does not know the country associated concept/knows the country associated concept.
Does not know the region associated concept/knows the region associated concept.
Every entry can take any of its values, with all permutations giving us a total number of 288 states, including 17 final states. To be in a final state, the agent should know the associated concept and should know that the user understood the associated concept. Moreover, the knowledge of the system should be consistent (e.g., a state in which the system knows the associated concept but does not know either of the cultural attributes is not a final state). For the third level state space, the action set includes seven actions, of which the two new actions are listed below (a quick size check follows the list):
Identify if there is a country associated concept.
Identify if there is a region associated concept.
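As a quick sanity check of the reported state-space sizes, the entry cardinalities can be multiplied out; the breakdown below is our reading of the listed entries (four binary and two ternary entries for the intermediate agent, with the cultural-associated-concept entry split into two binary entries for the advanced agent) and is stated as an assumption.

# Quick check of the quoted state-space sizes.
intermediate = 2 * 2 * 2 * 3 * 2 * 3       # country, region, common attrs, intl concept, cultural concept, understood
advanced     = 2 * 2 * 2 * 3 * 2 * 2 * 3   # cultural concept entry split into country + region associated concepts
print(intermediate, advanced)               # 144 288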
In order to obtain a policy, we create three sets of observations, one for each agent: novice, intermediate, and advanced. Each set contains 1000 observations. The observations are designed to simulate the ones that would be noted by a wizard. During this work, the minimal number of observations was calculated by considering the case where every action is taken from every state. In order for the simulation to be representative, the observations conform to the following assumptions (a minimal simulation sketch follows the list):
The wizard cannot find the country's associated concept or region's associated concept if the user's country or region is not identified. As the associated concept is queried based on the student's cultural attributes, it will be impossible to find it in the case where this information is not provided.
The wizard cannot find the associated concept if the common attributes that the student is asking about are not identified. In fact, the comparison between a target concept and an associated concept is queried based on the property the student is asking about. If the student is asking about the usage, the associated concept will be a concept in the student's culture that has the same usage.
If wizards are searching for an international associated concept, they will find it around 20% of the time. We consider that it is infrequent for a concept to have an equivalent concept known internationally.
If wizards present an associated international concept to the student, the student will understand it around 80% of the time. We consider that if a concept has an equivalent concept known internationally, the student will probably know it. This assumption was made based on the Pareto principle (Newman 2005).
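A minimal simulation sketch consistent with these assumptions, written here for the novice agent's three-entry state, is shown below; only the 20% and 80% probabilities come from the assumptions above, while the control flow and all names are illustrative guesses rather than the authors' simulator.

# Simulate one wizard observation for the novice agent; purely illustrative.
import random

ACTIONS = ("identify_culture", "identify_associated_concept", "ask_if_understood")

def simulate_novice_observation(state):
    # state = (knows_culture, knows_concept, knows_understood)
    knows_culture, knows_concept, knows_understood = state
    action = random.choice(ACTIONS)
    if action == "identify_culture":
        state = (True, knows_concept, knows_understood)
    elif action == "identify_associated_concept":
        # assumption: a cultural associated concept is only found once the culture is
        # known; an internationally known equivalent is found ~20% of the time otherwise
        if knows_culture or random.random() < 0.2:
            state = (knows_culture, True, knows_understood)
    else:  # ask whether the student understood the presented concept
        # assumption: a presented concept is understood ~80% of the time
        if knows_concept and random.random() < 0.8:
            state = (knows_culture, knows_concept, True)
    return action, state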
The novice agent needs a small number of observations to cover all the actions that could be taken from every state (24 observations). The intermediate agent needs more observations than the novice agent to cover all the actions that could be taken from every state (864 observations). The advanced agent needs the largest number of observations to cover all the actions that could be taken from every state (2016 observations). Figure 2 plots the minimum number of observations versus the number of states.
Minimum number of observations needed versus the number of states. Figure 2 plots the minimum number of observations versus the number of states. The novice agent (8 states) needs a small number of observations to cover all the actions that could be taken from every state (24 observations). The intermediate agent (144 states) needs 864 observations to cover all the actions that could be taken from every state. The advanced agent (288 states) needs the largest number of observations to cover all the actions that could be taken from every state (2016 observations)
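If every action must be tried at least once from every state, the lower bound on the number of observations is simply the product of the state-space and action-space sizes, which reproduces the figures quoted above.

# Minimum observations = |S| x |A| for each agent.
agents = {"novice": (8, 3), "intermediate": (144, 6), "advanced": (288, 7)}
for name, (n_states, n_actions) in agents.items():
    print(name, n_states * n_actions)   # 24, 864, 2016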
Strategy evaluation
In order to evaluate the quality of the conversation strategy, we assume that the wizard follows the recommendations of the agent except when:
The agent is asking the wizard to take the same action twice or more, and the wizard knows that the action will not change the current state of knowledge and will keep the dialogue in the same state.
The agent is asking the wizard to present information to the student while the information is unavailable.
We define a score representing the quality of the policy by
$$ {\text{Score}}= \frac{n}{N} $$
where n is the average number of times the wizard followed the agent's recommendations per dialogue and N is the average number of recommendations received per dialogue. The score of the quality of the policy varies between 0 and 1.
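A direct translation of this score into code is straightforward; the variable names below are illustrative, and the example values reproduce the novice agent's single dialogue reported later (4 of 6 recommendations followed).

# Score of a policy from logged dialogues: average followed / average received.
def policy_score(followed_per_dialogue, received_per_dialogue):
    n = sum(followed_per_dialogue) / len(followed_per_dialogue)
    N = sum(received_per_dialogue) / len(received_per_dialogue)
    return n / N

print(policy_score([4], [6]))   # ~0.66 for the novice agent's dialogue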
The experiment
In order to evaluate the different policies, we prototyped a Wizard of Oz experiment set up as follows:
The participants
The experiment involved two participants:
1 wizard: PhD student in informatics, in Japan (27 years old).
1 student: a female Italian language student who arrived in Japan 2 weeks before the experiment to learn Japanese (26 years old).
The student's role was to ask about a concept she did not understand. The wizard's role was to provide CSA to the student. The two participants did not know each other previously. We will call the first participant the wizard and the second participant the student. The only prerequisite to participate in the experiment concerned the user of the system, who had to be a Japanese language student who had recently moved to Japan. We met both participants separately and provided them with the objectives and the rules of the experiment.
We met with the student before the experiment and gave her a list of food products. We explained that she had to choose an item she did not know, then ask for explanations about it through the system. We showed the student the system and explained how the interaction with the system would take place. We also explained to the student that the system would help her understand the target concept and that she was interacting with a human being through the system. We also met with the wizard before the experiment and specified the behavior to be adopted during the experiment. The wizard received training to become familiar with the objective of the dialogue, the actions that can be taken, and the database. We explained to the wizard that the dialogue strategy recommended by the system should be followed. We also provided the wizard with the two situations in which the system's recommendation can be ignored: (1) the system is asking the wizard to take the same action twice or more and the wizard knows that the action will not change the current state of knowledge and will keep the dialogue in the same state; (2) the agent is asking the wizard to present information to the student while the information is unavailable.
The setting of the experiment
The wizard and the student interacted via two computers using Skype. The wizard was typing, and the student heard the answer through the Clownfish plugin, which converts text to speech.
The wizard had access to a simple database representation from which the CSA could be extracted based on the cultural attributes and common attributes.
The wizard and the student were asked to perform the dialogue three times. The first time, the novice agent's strategy was suggested to the wizard. The second time, the intermediate agent's strategy was communicated to the wizard. The third time, the wizard was provided with the advanced agent's strategy.
Results of the experiment
While receiving the novice agent's strategy, the wizard followed the recommendations of the agent four times out of six, as shown in Table 3. The wizard reported that the recommendations of the agent were too abstract. They also reported that when the action suggested was to find the associated concept, the wizard found two associated concepts belonging to the same country. It was hard for them to present one to the student as there was no appropriate guidance for this situation.
Table 3 Dialogue between the wizard and the novice agent and wizard's compliance to recommendations ∗
While receiving the intermediate agent's strategy, the wizard followed the recommendations of the agent six times out of seven, as shown in Table 4. The wizard reported that the recommendations of the agent were helpful to guide them through the process. They reported confusion when the student did not understand the first associated concept and the recommended action did not change. The wizard had to take actions that differed from the agent's recommendations.
Table 4 Dialogue between the wizard and the intermediate agent and wizard's compliance to recommendations ∗
While receiving the advanced agent's strategy, the wizard followed the recommendations of the agent eight times out of eight, as shown in Table 5. The wizard reported that the recommendations of the agent were helpful and precise enough to guide them through the process. They reported that the process was conducted without any confusion.
Figure 3 shows the score of the quality of the policy by the number of states. For the novice agent, the quality of the policy is equivalent to 0.66 and is poor compared to the intermediate agent (0.875) and the advanced agent (1).
Score of the quality of the policy by number of states. Figure 3 shows the score of the quality of the policy by the number of states. The quality of the policy is defined by \(\text{Score}= \frac{n}{N}\), where n is the average number of times the wizard followed the agent's recommendations per dialogue and N is the average number of recommendations received per dialogue. The score of the quality of the policy varies between 0 and 1. For the novice agent, the quality of the policy is 0.66, which is poor compared to the intermediate agent (0.875) and the advanced agent (1)
Table 6 shows the summary of the evaluation as well as the recommendation as of the usage of each agent.
Table 6 Summary and recommendations
There is a need for developing and designing language tools that support a more complex view of language than the traditional formal approach and that allow users to explore meaning-related aspects of the target language. It is suggested that a user-centered and iterative design process would be a good starting point for designing such language tools (Knutsson et al. 2008). We proposed a tool that aims to support students in understanding foreign concepts and building intercultural competence. The interviews conducted led to the creation of a user-centered system.
We propose to use CSA to understand foreign concepts. CSA are based on Byram's model, a widely accepted model that defines intercultural competence, and more particularly on the skill of interpreting and relating. Byram's model proposes five skills needed to accomplish intercultural communication: (1) intercultural attitudes, (2) knowledge, (3) interpreting and relating, (4) discovery and interaction, as well as (5) critical cultural awareness (Byram 1997). Many studies used computer-mediated communication to develop second language learners' intercultural competence based on Byram's model. However, most past research focused on systems that help with developing the skills of intercultural attitudes, knowledge, and discovery and interaction (Liaw 2006). This study, unlike previous studies, aimed at using computer-supported communication to develop the skill of interpreting a concept from another culture and relating it to concepts from one's own.
The method proposed in this research allows the creation of automatic CSD strategies to support foreign students during their food shopping in Japan. The method could potentially be generalized to learn automatic dialogue strategies in any situation where CSA may be needed and where little initial data or no system exists. Technical and non-technical limitations of the system are highlighted and discussed below:
Range of application
The proposed system supports students with the understanding of foreign concepts. This system focuses on culturally specific concepts expressed through words (e.g., udon, tahine, kimono, and paella). However, this system does not support students with the understanding of culturally specific sentences such as idioms or proverbs. Future application of the proposed method to support the understanding of culturally specific sentences might lead to interesting results.
Dialogue patterns
The number of collected dialogue patterns was based on interviews conducted in Nishiki Market. Our state spaces were derived from the dialogue patterns. However, the number of dialogue patterns may not cover extensively all the culturally situated scenarios that could happen. A more extensive survey should be conducted to cover a vast majority of the questions that might be asked and thus allow the potential identification of more dialogue patterns.
State spaces
In this work, we chose three state spaces as a basis for our learning algorithm. As a result, three agents could provide the wizard with dialogue strategies varying from an abstract strategy to a precise one. The extraction of state spaces was a result of breaking down the attributes derived from the dialogue patterns. However, the attributes could be broken down into more elaborate strategies. This work explores only three state spaces and their resulting strategies. By breaking down the attributes further, we would be able to study more developed agents.
Minimal number of observations needed
The observations fed to the learning algorithm are one of the main components defining the resulting strategy. During this work, the minimal number of observations was calculated by considering the case where every state is visited by every action. However, the minimal number of observations needed in order to produce an effective strategy depends on the actions and the states visited.
The experiment
The experiment objective was to compare the three different agents' performances for the same request made by the student. As the objective is the comparison of the dialogues, one student-wizard pair might be adequate. In the experiment, the dialogue initiated by the student was chosen based on the question most asked by the tourists in the interviews (what is this?). However, it would be beneficial to broaden the range of dialogues in order to compare the different agents in different real-life situations. This could be explored later in a study about the system itself.
In this research, we propose a method to learn culturally situated dialogue strategies to support foreign students using a reinforcement learning algorithm. Since no previous system was implemented, the method allows the creation of dialogue strategies when no initial data or prototype exists. To model the possible state spaces of the reinforcement learning algorithm, we first identified common dialogue patterns that take place between students and shop owners in Nishiki Market and extracted the attributes needed to conduct Culturally Situated Dialogues. By breaking down the extracted attributes into more fine-grained attributes, we created three attribute sets with different levels of granularity. Each of these three attribute sets was mapped into a different state space, resulting in the creation of three different agents: the novice agent, the intermediate agent, and the advanced agent. Each of these agents learns a different dialogue strategy. We conducted a Wizard of Oz experiment during which the agents' role was to support the wizard in their dialogue with students by providing them with the appropriate action to take at each step. The resulting dialogue strategies were evaluated based on two criteria: the quality of the strategy and the minimum number of observations needed to result in an acceptable dialogue strategy. The quality of the dialogue strategy was defined to reflect the 'helpfulness' of the agent in supporting the wizard. The novice agent was the least effective in producing helpful dialogue strategies for the wizard; however, it could learn its strategy based on only 24 observations. The intermediate agent performed better than the novice agent but needed at least 864 observations to learn a consistent strategy. The advanced agent was able to guide the wizard effectively through all the steps until achieving the objective of the dialogue and needed a minimum of 2016 observations to produce a consistent strategy. The results suggest the use of the novice agent at the first stages of prototyping the dialogue system. The intermediate agent and the advanced agent could be used at later stages of the system's implementation. Future work could explore the possibilities of automating the process of migrating to more complex agents depending on the number of observations available at each moment. This would allow the application of this technology to a variety of situations where culturally situated information is needed and no initial system or few observations exist.
CSA:
Culturally situated association
CSD:
Culturally situated dialogue
WoZ:
Wizard of Oz
Bellman, R (2013). Dynamic programming. Courier Corporation.
Byram, M. (1997). Teaching and assessing intercultural communicative competence. Clevedon: Multilingual Matters.
Byram, M, Gribkova, B, Starkey, H. (2002). Developing the intercultural dimension in language teaching: a practical introduction for teachers. Strasbourg: Language Policy Division, Directorate of School, Out-of-School and Higher Education, Council of Europe.
Chen, JJ, & Yang, SC (2014). Fostering foreign language learning through technology-enhanced intercultural projects. Language Learning & Technology, 18(1), 57–75.
Dahlbäck, N, Jönsson, A, Ahrenberg, L (1993). Wizard of Oz studies—why and how. Knowledge-Based Systems, 6(4), 258–266.
Eckert, W, Levin, E, Pieraccini, R (1997). User modeling for spoken dialogue system evaluation. In the proceedings of Automatic Speech Recognition and Understanding. IEEE, Santa Barbara, (pp. 80–87).
Fraser, NM, & Gilbert, GN (1991). Simulating speech systems. Computer Speech & Language, 5(1), 81–99.
Green, A, Huttenrauch, H, Eklundh, KS (2004). Applying the wizard-of-oz framework to cooperative service discovery and configuration. In the proceedings of Robot and Human Interactive Communication. IEEE, Kurashiki, (pp. 575–580).
Ishida, T (2016). Intercultural collaboration and support systems: a brief history. In the proceedings of the International Conference on Principles and Practice of Multi-Agent Systems. Springer, Cham, (pp. 3–19).
Kanafani-Zahar, A (1997). "Whoever eats you is no longer hungry, whoever sees you becomes humble": bread and identity in Lebanon. Food and Foodways, 7(1), 45–71.
Kelley, JF (1984). An iterative design methodology for user-friendly natural language office information applications. ACM Transactions on Information Systems (TOIS), 2(1), 26–41.
Kissau, SP, Algozzine, B, Yon, M (2012). Similar but different: The beliefs of foreign language teachers. Foreign Language Annals, 45(4), 580–598.
Klemmer, SR, Sinha, AK, Chen, J, Landay, JA, Aboobaker, N, Wang, A (2000). Suede: a Wizard of Oz prototyping tool for speech user interfaces. In the Proceedings of the 13th annual ACM symposium on User interface software and technology. ACM, New York, (pp. 1–10).
Knutsson, O, Cerratto-Pargman, T, Karlström, P (2008). Literate tools or tools for literacy? A critical approach to language tools in second language learning. Nordic Journal of Digital Literacy, 3(02), 97–112.
Leech, G, & Fallon, R (1992). Computer corpora: what do they tell us about culture. ICAME Journal, 16, 29–50.
Levin, E, Pieraccini, R, Eckert, W (1998). Using Markov decision process for learning dialogue strategies. In the proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, Seattle, (pp. 1, 201–204).
Liaw, Ml (2006). E-learning and the development of intercultural competence. Language Learning & Technology, 10(3), 49–64.
Newman, ME (2005). Power laws, Pareto distributions and Zipf's law. Contemporary Physics, 46(5), 323–351.
Riek, LD (2012). Wizard of Oz studies in hri: a systematic review and new reporting guidelines. Journal of Human-Robot Interaction, 1(1), 119–136.
Rieser, V, & Lemon, O (2008). Learning effective multimodal dialogue strategies from Wizard-of-Oz data: Bootstrapping and evaluation. In the Proceedings of the 21st International Conference on Computational Linguistics and 46th Annual Meeting of the Association for Computational Linguistics (ACL/HLT). ACL, Columbus, (pp. 638–646).
Scheffler, K, & Young, S (2002). Automatic learning of dialogue strategy using dialogue simulation and reinforcement learning. In the proceedings of the Second International Conference on Human Language Technology Research. Morgan Kaufmann Publishers Inc., San Diego, (pp. 12–19).
Scholliers, P. (2001). Food, drink and identity: cooking, eating and drinking in Europe since the Middle Ages. New York: Berg Publisher.
Stewart, V (2007). Becoming citizens of the world. Educational Leadership, 64(7), 8–14.
Wagner, M, Perugini, DC, Byram, M. (2017). Teaching intercultural competence across the age range: from theory to practice. Clevedon: Multilingual Matters.
This research is supported by the Leading Graduates Schools Program, "Collaborative Graduate Program in Design" by the Ministry of Education, Culture, Sports, Science and Technology, Japan. This research was also partially supported by Grant-in-Aid for Scientific Research (S) from Japan Society for the Promotion of Science (24220002, 2012-2016), (17H00759, 2017-2020) and (18H03341, 2018-2020), and from Monbukagakusho (16H06304, 2016-2021).
The used code as well as the observations' data are available in the following repository https://github.com/fikutoryia/rl-csa1.
Kyoto University, Yoshida-Honmachi, Kyoto, 606-8501, Japan
Victoria Abou-Khalil, Toru Ishida, Brendan Flanagan, Hiroaki Ogata & Donghui Lin
Kindai University, Osaka, Japan
Masayuki Otani
VA and TI conceived the presented idea and developed the method. VA conducted the interviews and designed the system architecture. VA and MO performed the computations. VA carried out the experiment. TI, HO, and BF supervised the findings of this work and gave conceptual advice. BF and DL contributed to the final version of the manuscript and gave technical advice. All authors read and approved the final manuscript.
Correspondence to Victoria Abou-Khalil.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Abou-Khalil, V., Ishida, T., Otani, M. et al. Learning culturally situated dialogue strategies to support language learners. RPTEL 13, 10 (2018) doi:10.1186/s41039-018-0076-x
Accepted: 02 July 2018
DOI: https://doi.org/10.1186/s41039-018-0076-x
Automatic dialogue strategies
Culturally situated information
Multiplication and Division
Find unknowns in multiplication and division problems (5,10)
Multiplication as repeated addition (10x10)
Interpreting products (10x10)
Arrays as products (10x10)
Multiplication and division using groups (10x10)
Multiplication and division using arrays (10x10)
Find unknowns in multiplication and division problems (0,1,2,4,8)
Find unknowns in multiplication and division problems (3,6,7,9)
Find unknowns in multiplication and division problems (mixed)
Complete multiplication and division facts in a table (10x10)
Multiplication and division (turn arounds and fact families) (10x10)
Find quotients (10x10)
Number sentences and word problems (10x10)
Multiplication and division by 10
Properties of multiplication (10x10)
Multiplication and division by 10 and 100
Distributive property for multiplication
Use the distributive property
Multiply a two digit number by a small single digit number using an area model
Multiply a two digit number by a small single digit number
Multiply a single digit number by a two digit number using an area model
Multiply a single digit number by a two digit number
Multiply a single digit number by a three digit number using an area model
Multiply a single digit number by a three digit number
Multiply a single digit number by a four digit number using an area model
Multiply a single digit number by a four digit number using algorithm
Multiply 2 two digit numbers using an area model
Multiply 2 two digit numbers
Multiply a two digit number by a 3 digit number
Multiply 3 numbers together
Divide a 2 digit number by a 1 digit number using area or array model
Divide a 2 digit number by a 1 digit number
Divide a 3 digit number by a 1 digit number resulting in a remainder
Multiply various single and double digit numbers
Extend multiplicative strategies to larger numbers
Divide various numbers by single digits
Solve division problems presented within contexts
Solve multiplication and division problems involving objects or words
Multiply various single, 2 and 3 digit numbers
Divide various 4 digit numbers by 2 digit numbers
The commutative property of multiplication says that the order of numbers doesn't matter in multiplication. For example, $4\times6=6\times4$.
Use the commutative property to complete the following statements:
If $2\times4=8$, then $4\times2=\editable{}$
If $2\times5=10$, then $5\times2=\editable{}$
If $2\times\editable{}=12$, then $\editable{}\times2=12$.
Know basic multiplication and division facts.
Generalise the properties of addition and subtraction with whole numbers
Why should we care about groups at all?
Someone asked me today, "Why should we care about groups at all?" I realized that I have absolutely no idea how to respond.
One way to treat this might be to reduce "why should we care about groups" to "why should we care about pure math", but I don't think this would be a satisfying approach for many people. So here's what I'm looking for:
Are there any problems that (1) don't originate from group theory, (2) have very elegant solutions in the framework of group theory, and (3) are completely intractable (or at the very least, extremely cumbersome) without non-trivial knowledge of groups?
A non-example of what I'm looking for is the proof of Euler's theorem (because that can be done without groups).
[Edit] I take back "insolubility of the quintic" as a non-example; I also retract the condition "we're assuming group theory only, and no further knowledge of abstract algebra".
group-theory soft-question
Elliott
$\begingroup$ Many interesting facts about Rubik's cubes were proven using group theory. In particular, by treating the different permutations of the cube as a finite group and then using Lagrange's Theorem, the maximum number of moves needed to solve the cube was determined. I'll admit this may not be of any practical value, but such information would be hard to find using other methods. The value of group theory is in its generality. The definition of a group is so simple that many real-world problems can give rise to a group, and so much is known about the structure of groups. $\endgroup$ – user3180 Jan 28 '11 at 3:29
$\begingroup$ Why should the insolubility of the quintic be a non-example? Caring about exact solutions to polynomial equations is fairly concrete algebra. $\endgroup$ – user856 Jan 28 '11 at 3:34
$\begingroup$ von Neumann's mathematical work on Quantum Mechanics was based on group theory, and made it much easier to work in QM, as I recall. I'll look for more specific/explicit references tomorrow. $\endgroup$ – Arturo Magidin Jan 28 '11 at 4:04
$\begingroup$ Your condition "we're assuming group theory only, and no further knowledge of abstract algebra" is pretty absurd: the relevance of a concept in solving problems is completely uncorrelated to the knowledge of those studying the concept for the first time! $\endgroup$ – Mariano Suárez-Álvarez Jan 28 '11 at 4:29
$\begingroup$ Short answer: the consequences of the group definition are useful conceptual structures/frameworks that are used anywhere from cryptanalysis to quantum mechanics. But that is their practical use. However, their structure is worth studying in its own right for those who find group structures beautiful on their own. $\endgroup$ – jimjim Jan 28 '11 at 4:32
First of all, even phrasing the question as "I don't need group theory if every question that can be solved with group theory can also be solved without it" is misguided. It just so happens that the definition of a group is a natural thing. You might be able to circumvent it sometimes, but that doesn't mean that you should. If a concept naturally suggests itself, then why should one fight hard not to introduce it?
Now, why is it natural? Because there are loads of structures out there in the Platonic world that consist of a set and a binary operation: the integers, the rational numbers, the non-zero rational numbers, matrices, vectors, geometric symmetries (closely related to matrices), permutations,... the list is almost endless. So it makes sense to capture this common feature of so many familiar objects in a definition.
As for applications, traditionally groups were only thought of as symmetries of geometric objects. Even in this narrow context, the abstract framework is useful, e.g. to count solutions to puzzles or ways of colouring a shape. Here is a concrete example of a puzzle that could only be solved and understood in its entirety using abstract groups (since it allowed us to identify a group of symmetries of a certain object with an already familiar symmetry group). It is also mainly in this traditional function, that groups are of paramount importance to physicists.
Of course, the great insight of Galois was that the word "symmetry" shouldn't be understood too narrowly, and since then, groups have completely permeated all of mathematics. Groups describe the complexity of a polynomial, they describe the complexity of a topological space, of an algebraic variety, of a number field, etc. Given a number field, say a Galois extension of $\mathbb{Q}$, pretty much all its important invariants are groups: the Galois group, the class group, the ring of integers, the group of units in the ring of integers, etc. Similarly, given a topological space, you have its fundamental group, higher homotopy groups, homology and cohomology... You can tell your friend that if we didn't have groups, then we wouldn't know how to tell a donut from a ball!
I should probably stop here, since to give a full account of the usefulness of groups, one would have to write a compendium of all of mathematics.
Alex B.
$\begingroup$ So if I'm not convinced that groups are important, the way to convince myself would be to learn more math! $\endgroup$ – Elliott Jan 29 '11 at 6:24
$\begingroup$ @Elliott That would never be a bad idea ;-) $\endgroup$ – Alex B. Jan 29 '11 at 7:38
$\begingroup$ @Zaz Those are rings, and in particular they are groups under addition. Since your profile says that any feedback is welcome, here is some feedback: it's usually safer to ask questions when you don't understand something, than throwing confident statements out there and distributing downvotes. You might also like to check peoples' profiles before correcting them on what is or isn't a group. $\endgroup$ – Alex B. Oct 26 '15 at 12:18
$\begingroup$ @Zaz I am afraid I have no intention of bumping this post with a trivial edit that would make the post less readable. My list of examples was intended as purely illustrative, it is not a formal introduction to group theory, and there is no need to introduce heavy notation. In the over 4.5 years that this post has been up for, nobody appears to have had any problems to figure out what the group operation is in each of the examples. I can live with your downvote, I was merely satisfying your request for feedback. $\endgroup$ – Alex B. Oct 26 '15 at 14:27
$\begingroup$ @Zaz No worries. While I was cycling home, I thought of an example to illustrate this abuse of notation: strictly speaking, it is wrong to speak of "the ring of integers", since the integers are merely a set. But every mathematician would prefer to take the minute risk of ambiguity to saying "the ring whose underlying set is the set of integers, and where the structure operations are addition with 0 as the neutral element, and multiplication with 1 as the neutral element". Life is just too short. $\endgroup$ – Alex B. Oct 26 '15 at 18:53
Group theory (when physicists say this they mean representation theory) is the basis of modern physics. Via Noether's theorem it is the abstract mechanism responsible for conservation laws (e.g. conservation of energy, conservation of momentum) even in classical mechanics. In quantum mechanics, representations are even more important: the representations of a group called $\text{SU}(2)$ describe the difference between bosons and fermions (the difference being their spin, which is the physical property that makes MRI work), and the representations of $\text{SU}(2) \times \text{SU}(2)$ describe the possible orbits of an electron in a hydrogen atom. So group theory can be used, among many many other things, to predict the structure of the periodic table. It is also the foundation of the Standard Model of particle physics.
(Group theory also happens to be fundamental to many modern branches of mathematics, but I figured an application to things that people provably care about outside of mathematics would look better.)
Qiaochu Yuan
$\begingroup$ @PEV: Are you sure that "provably" wasn't what Qiaochu intended? $\endgroup$ – user856 Jan 28 '11 at 18:36
$\begingroup$ "Provably" is what I intended. I think it's safe to argue that people provably care about particle physics. $\endgroup$ – Qiaochu Yuan Jan 28 '11 at 19:46
$\begingroup$ Great example, though I wish they would emphasize this aspect more in quantum mechanics classes! $\endgroup$ – Elliott Jan 29 '11 at 6:16
See this answer by Keith Conrad. So it seems that some elementary particles were predicted by group theory before being experimentally discovered. Here is a video of Richard Feynman describing the particle (using "Strangeness minus 3").
PrimeNumber
$\begingroup$ To be precise, the physicists had discovered that there was a correspondence between the bases of irreducible representations of the Lie algebra of certain symmetry groups and certain families of particles. Gell-Mann predicted, based on this theory, the existence of the $\Omega^-$. $\endgroup$ – Zhen Lin Jan 28 '11 at 7:34
Group theory is used extensively in chemistry. It's what determines whether or not two molecules will bond (it's based on the symmetry of their orbitals). Group theory is essentially the basis of the Molecular Orbital Theory which is the basis of modern chemistry.
answered Feb 22 '11 at 9:56
$\begingroup$ Care to provide some links? I know there's Wikipedia, but this seems like a very interesting subject. $\endgroup$ – agarie Dec 5 '14 at 21:14
Check out this link http://plus.maths.org/content/power-groups
I think the subsection titled "Lonely Pursuits with Groups", where they have used the Klein 4-group to decide how many possible end locations are there for the final marble in the game of solitaire is particularly interesting. I was quite stunned at the elegance of the whole thing when I first saw it. I think it is a great example of the power of group theory. It also satisfies the three conditions you have listed.
svenkatr
$\begingroup$ That is a nice discussion. For a similar analysis of a board with a different size (fewer holes), see cut-the-knot.org/proofs/PegsAndGroups.shtml. $\endgroup$ – KCd Jan 29 '11 at 21:55
If you care about geometry/topology then you should care at least a little bit about groups. For example, the study of manifolds with a group structure (Lie groups) is a very beautiful and very applicable field. If you can put a group structure on a manifold, you get some topological information for free: e.g. abelian fundamental group, trivial tangent bundle. The applications to physics are huge (c.f. PEV's answer).
Another way group theory comes up in geometry/topology is by assigning a group to a space or an object on the space as a way of measuring something. For example, most of algebraic topology is assigning group invariants to spaces ((co)homology groups, homotopy groups) to "measure holes". This is useful because groups are often easy to distinguish while it is often difficult to directly show that two spaces are not homeomorphic. An example from differential geometry is holonomy: if we are on a manifold with a metric, the holonomy groups quantify how the space is curved.
Eric O. Korman
$\begingroup$ "Groups are often easy to distinguish" --- Handle with care! $\endgroup$ – Rasmus Jan 28 '11 at 9:43
$\begingroup$ Yes the problem of distinguishing groups can be very difficult (e.g. showing two group presentations are equivalent can be a beeyatch). However, what I meant is that showing directly that there does not exist an isomorphism between two groups is usually easier than showing directly that there is not a homemorphism between two spaces (without appealing to groups or algebraic structures attached to the spaces). $\endgroup$ – Eric O. Korman Jan 28 '11 at 19:42
In mathematics, we study structures of various sorts, and the relationships (or morphisms) between them, which we usually encode as functions of a special type. More generally, the objects and relationships or morphisms form what is called a category, which consists of the objects and morphisms, and the rule for how morphisms compose with each other.
There are many many important examples of categories, at least one for every branch of mathematics, and often quite a few. (Sets and functions, vector spaces and linear maps, topological spaces and continuous maps, metric spaces and short maps, rings and ring homomorphisms, graphs and graph homomorphisms, and so on.)
If we take any category at all, and examine a single object in it, then the morphisms from that object to itself which are invertible form a group, called the automorphism group of the object. This group abstractly measures the "symmetries" of that object, under the sort of transformations we're considering important. Symmetry has shown itself to be a powerful tool for reasoning in many circumstances -- it allows us to take an argument about one part of a structure and carry it throughout the structure to many other places where it applies.
Because essentially every object we study in mathematics has an automorphism group, it makes sense to study groups in general, so that we don't have to start afresh in each branch of mathematics when we want to consider symmetries of whatever it is that we're studying.
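To make the idea of an automorphism group concrete, here is a small illustrative sketch (not part of the original answer; the 4-cycle graph is an arbitrary choice) that enumerates the symmetries of a tiny graph by brute force:

```python
from itertools import permutations

# Adjacency of a 4-cycle on vertices 0-1-2-3-0 (illustrative example).
edges = {frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 0})}
vertices = [0, 1, 2, 3]

def is_automorphism(p):
    """A permutation p is an automorphism if it maps every edge to an edge."""
    return all(frozenset({p[u], p[v]}) in edges for u, v in (tuple(e) for e in edges))

automorphisms = [p for p in permutations(vertices) if is_automorphism(p)]
print(len(automorphisms))  # 8: the dihedral group of the square (4 rotations + 4 reflections)
```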
Cale Gibbard
To answer your question "Why should we bother about pure math" (and, in hindsight, just focus on the application stuff), I always look at results that were purely abstract a decade ago but now pop up widely in consumer electronics (like (Goppa-) codes in CD-ROMs and DVDs).
Also, when my aunt reacted to the presentation of my master's thesis with "But can you buy something at the grocery with it?", I answered: "No, but wait a hundred years and all will have changed...".
Willem Noorduin
$\begingroup$ Can you buy anything at the grocery without group theory? Bar codes involves some simple group theory. The card you pay with involves more complicated group theory. $\endgroup$ – kasperd Aug 9 '14 at 14:00
|
CommonCrawl
|
Hybrid approach based on particle swarm optimization for electricity markets participation
Ricardo Faia1,
Tiago Pinto2,
Zita Vale3 &
Juan Manuel Corchado2
Energy Informatics volume 2, Article number: 1 (2019) Cite this article
In many large-scale and time-consuming problems, the application of metaheuristics becomes essential, since these methods enable achieving solutions very close to the exact one in a much shorter time. In this work, we address the problem of portfolio optimization applied to electricity market negotiation. Since, in a market environment, decision-making is carried out in very short times, the application of metaheuristics is necessary. This work proposes a hybrid model, combining a simplified exact resolution of the problem as a means to obtain the initial solution for a Particle Swarm Optimization (PSO) approach. Results show that the presented approach is able to obtain better results in the metaheuristic search process.
This paper is an extension of work originally presented in 2017 IEEE SSCI (Faia et al. 2017).
Traditionally, existing methods for analyzing electrical systems focus on centralized solutions (Foley et al. 2010) and, to a certain extent, are based on obsolete assumptions. With the new Smart Grid (SG) paradigm, innovation and new methods (or adaptations of the existing ones) are required to study the electrical system and to provide adequate decision support. In this dynamic context, thanks to SG technologies, the appearance of new actors and roles is becoming more relevant and is leading to new operational procedures. One of the most debated cases is the role of Virtual Power Players (VPP), which combine consumers, energy storage systems (ESS) and generators at a local level. In (Burger et al. 2017), it is possible to find a review on the role of VPPs in the current electric system. This type of entity in SGs allows for electric and economic transactions in so-called local electricity markets, also referred to as micro-markets in some studies (Olivella-Rosell et al. 2018).
The diversity of market models and opportunities that have been arising in recent years forces the involved players to look for solutions that help them manage their investments and negotiation strategies. Considering the MIBEL market (MIBEL 2007), the time players have to act (submit their bids) depends on the market model. In the Spot market model, all players must submit their bids 1 h before the start of the market (e.g., in the day-ahead market the cycle starts at 12 am, and the bids for the next 24 h must be submitted by 11 am). In the Balancing market model there are six sessions per day, and in each session the players must also submit their bids 1 h before its start; bids can therefore be submitted from the end of the last session until 1 h before the start of the next one. In the real-time market model, which is the case of the PJM Energy Market (PJM 2018), the cycle is repeated every 15 min, so players have 15 min to present their bids; the time to act in this market is much smaller than in the Spot and Balancing markets of MIBEL. A crucial point is deciding in which markets to participate at each moment and what amount of energy to negotiate. This problem is commonly known in economics and finance as the portfolio optimization problem (Markowitz 1952).
The resolution of the portfolio optimization problem in electricity markets using metaheuristics has been much explored, since metaheuristics usually manage to arrive at a solution very close to the optimum in a short time. In this type of problem, time is critical because decisions have to be reached fast enough to leave the user time to plan before the bid deadline. On the other hand, metaheuristics have a random nature, which introduces variability in the results, meaning that the results can sometimes be far from the best solution. This is the gap that we address in this paper, by proposing a hybrid metaheuristic approach that decreases the variance of the results when compared with other metaheuristics.
Metaheuristic approaches are especially useful for reaching good solutions for massive computational problems in fast execution times. This is especially relevant when solving real-world problems, in which the decision time is an important factor. The participation in electricity markets is one of these problems, due to the significant changes that occurred during the last decades (Meeus and Belmans 2008). Before the liberalization of electricity markets, the system operator considered demand to be fixed and scheduled operation plans based on generation resources. This made electricity negotiations highly restricted, mostly because much of the produced energy came from fully controllable generation sources.
The recent energy policy has favored a massive introduction of renewable energy sources in electricity markets, which has greatly impacted their penetration in power systems (European Commission 2013). The intermittent nature of renewable energy has had a very large impact on the way negotiations operate, since the energy supply directly influences the market prices. At the market level, competition has increased, bringing a higher number of sellers to participate (Yoo et al. 2017). This also leads to the emergence of aggregators to represent (and even manage) groups of small players (Vale et al. 2011a), in order to increase their impact on the market. A market dimension is also being introduced into retail markets, in order to motivate consumers to change their passive attitudes. The wide-scale rollout of smart metering infrastructure is now creating the means to enable consumers to participate in competitive retail markets, by overcoming the lack of infrastructure that enables sending market signals to and from consumers (Palensky and Dietrich 2011).
With the increase in competitiveness in electricity markets (Ciarreta et al. 2017), caused by the increase of energy producers (in particular renewables), there is an increasing need for tools that can provide support to electricity market participants. Multi-Agent Simulator of Competitive Electricity Markets (MASCEM) (Vale et al. 2011b) is integrated with a decision support system that aims at providing market players with suitable suggestions on which actions should be performed at each time and in different contexts of negotiation. This system is Adaptive Decision Support for Electricity Markets Negotiations (AiD-EM). AiD-EM is itself composed of several distinct decision support systems, directed to the negotiation in different EM types; e.g., Adaptive Learning Strategic Bidding System (ALBidS) (Pinto et al. 2015).
After this introduction, section 2 presents an overview of the related work in the field, and section 3 presents the mathematical formulation of the addressed problem. Section 4 details the proposed hybrid model based on the particle swarm optimization methodology. Section 5 shows the results of the case study, and finally section 6 presents the most relevant conclusions of this work.
Markowitz proposed the Markowitz Portfolio Selection Theory in (Markowitz 1952). This theory enables combining assets in such a way that the resulting portfolio is characterized by a higher return-to-risk ratio than that provided by any single asset by itself, an effect known as diversification; i.e., the more diversified the portfolio, the lower the risk level.
The concepts of portfolio optimization and diversification are instrumental in the development and understanding of financial markets and financial decision making. In recent years, however, portfolio theory has also been applied in the electricity markets area with the purpose of supporting decision making (Denton et al. 2003). Using the portfolio optimization results, market participants can achieve a better market participation, obtaining a larger profit or a lower cost.
In (Pinto et al. 2015), the authors propose a portfolio optimization for participation in multiple electricity markets. In this methodology, players can sell the energy they produce and also buy and sell electricity in several markets. A risk management model is also considered, in which the risk originates from the price forecasting. Prices are forecast by an artificial neural network (ANN) (Pinto et al. 2012), and three different values are presented for each case; these three values are treated as risk levels. The risk is associated with the expected price because, across simulations, the ANN arrives at different values for the same case; the maximum, minimum and average values over all simulations are calculated in order to feed the optimization problem. A Particle Swarm Optimization (PSO) (Kennedy and Eberhart 1995) variant is used to solve the optimization problem.
In (Pinheiro Neto et al. 2017), the authors propose a methodology for risk analysis and portfolio optimization of power assets with hydro, wind and solar power. In the case study, the authors consider the Regulated Contracting Environment and the Mechanism for Reallocation of Energy in Brazil.
When the execution time for reaching a solution is a relevant decision factor, metaheuristic optimization approaches are often applied. Metaheuristics can be single-solution based, if they use a single starting point (e.g., local search, simulated annealing, iterated local search and tabu search), or population-based, if a population of search points is used (e.g., particle swarm, evolutionary algorithms, colony-based optimization) (Boussaïd et al. 2013). Many of these approaches are inspired by natural processes (e.g., evolutionary algorithms from biology or simulated annealing from physics).
In metaheuristic search there are two complementary ways to characterize the search: exploration of the search space (diversification) and exploitation of the best-found solutions (intensification). Exploration means diversifying the search toward different regions of the search space to obtain a better sampling of the solution space. Exploitation, on the other hand, means intensifying the search around some of the good-quality solutions in order to find an improved one. A balance between these two contradictory objectives must therefore be guaranteed (Mehdi 1981). When applying these strategies to any optimization problem, the main concern is the algorithm's capability of finding the global optimum. The desirable feature of an effective optimization method is a high probability of finding the global solution at the expense of a low computational effort. Theoretically, it is important to remark that stochastic methods (metaheuristics) need an infinite number of objective function evaluations to guarantee convergence to the global optimum. This number is determined by the parameters employed for controlling the search process (exploitation and exploration) and by the termination criterion (Fernández-Vargas et al. 2016). Many stopping criteria are used in stochastic optimization methods: they are based on the measurement of the relative error with respect to the known value of the global optimum, on the improvement of the objective function value over a certain number of iterations or function evaluations, or on a maximum allowable numerical effort defined in terms of the number of algorithm iterations or objective function evaluations.
In summary, the optimal solution cannot be guaranteed when using a metaheuristic, but a reasonably good solution is obtained without having to explore the whole solution space, and consequently in a much shorter time than with exact methods. Different metaheuristics can be applied; they vary depending on the heuristic chosen to guide the search, so each metaheuristic can present different results. In real-world applications, the main interest is in obtaining a good solution in a reasonable amount of time; therefore, metaheuristic methods are highly appreciated as efficient means for dealing with real-world applications (Yusta 2009).
Mathematical formulation
The formulation presented in (1) is used to represent the optimization problem, as proposed in (Pinto et al. 2014). In (1), d represents the weekday and Nday the number of days; p represents the negotiation period and Nper the number of negotiation periods; binS_{M,d,p} and binB_{M,d,p} are boolean variables indicating whether the player can enter in negotiation in each market type; M identifies the market and NM is the number of markets. The variables PS_{M,d,p} and PB_{M,d,p} represent the expected (forecasted) prices of selling and buying electricity in each session of each market type, in each period of each day. The outputs are Spow_M, the amount of power to sell in market M, and Bpow_M, the amount of power to buy in market M.
$$ \begin{array}{c}\operatorname{Max}(Profit)=\left(\sum \limits_{M=1}^{NM}{Spow}_{M,d,p}\times {PS}_{M,d,p}\times {binS}_{M,d,p}\right)-\left(\sum \limits_{M=1}^{NM}{Bpow}_{M,d,p}\times {PB}_{M,d,p}\times {binB}_{M,d,p}\right)\\ \forall d\in Nday,\ \forall p\in Nper,\quad {binS}_{M,d,p}\in \left\{0,1\right\},\ {binB}_{M,d,p}\in \left\{0,1\right\}\\ {PS}_{M,d,p}= Value{\left({Spow}_M\right)}_{M,d,p}\\ {PB}_{M,d,p}= Value{\left({Bpow}_M\right)}_{M,d,p}\end{array} $$
The formulation considers the expected production of a market player for each period of each day. The price value of electricity in some markets depends on the power amount to trade. With the application of a clustering mechanism it is possible to apply a fuzzy approach to estimate the expected prices depending on the negotiated amount (Faia et al. 2016). Eq. (2) defines this condition.
$$ Value{\left({Spow}_M\ or\ {Bpow}_M\right)}_{d,p,M}=\mathrm{Data}{\left(\mathrm{fuzzy}\left(\mathrm{pow}\right)\right)}_{d,p,M} $$
Equation (3) represents the main constraint of this type of problem: the total power sold in the set of all markets can never be higher than the total expected production (TEP) of the player plus the total purchased power (Pinto et al. 2014). Restrictions (4), (5) and (6) refer to the type of generation of the supported player.
$$ \sum \limits_{M=1}^{NM}{Spow}_M\le TEP+\sum \limits_{M=1}^{NM}{Bpow}_M $$
$$ TEP=\sum {Energy}_{prod},{Energy}_{prod}\in \left\{{Renew}_{prod},{Therm}_{prod}\right\} $$
$$ 0\le {Renew}_{prod}\le {\mathit{\operatorname{Max}}}_{prod} $$
$$ {\mathit{\operatorname{Min}}}_{prod}\le {Therm}_{prod}\le {\mathit{\operatorname{Max}}}_{prod}, if\ {Therm}_{prod}>0 $$
From the presented restrictions and considerations, one can see that the energy produced comes from renewable sources and from non-renewable (thermoelectric) sources. If the player owns thermoelectric generation, its production has to either be null or be set at least at a minimum value, since it is not feasible for the production plant to operate below its technical limit. If the player owns renewable generation, the only restriction is the maximum production capacity.
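As a minimal illustration of Eq. (1) and constraint (3), the following sketch evaluates the profit and feasibility of a candidate schedule for a single period. The market order, prices and binary flags are illustrative assumptions, not the paper's data; the schedule values loosely follow the description of Table 1.

```python
import numpy as np

# Candidate solution: power sold / bought in each of the 5 markets (MW).
# Assumed market order: spot, bilateral, balancing session 1, balancing session 2, smart grid.
spow = np.array([14.0, 12.0, 0.0, 0.0, 9.0])
bpow = np.array([0.0, 5.0, 10.0, 10.0, 0.0])

ps = np.array([60.0, 55.0, 40.0, 42.0, 50.0])   # expected selling prices (assumed)
pb = np.array([70.0, 45.0, 30.0, 32.0, 48.0])   # expected buying prices (assumed)
bin_s = np.array([1, 1, 0, 0, 1])               # markets where selling is allowed (assumed)
bin_b = np.array([0, 1, 1, 1, 1])               # markets where buying is allowed (assumed)

TEP = 10.0                                      # total expected production (MW)

def profit(spow, bpow):
    """Eq. (1): revenue from sales minus cost of purchases."""
    return float(np.sum(spow * ps * bin_s) - np.sum(bpow * pb * bin_b))

def feasible(spow, bpow):
    """Constraint (3): total sales cannot exceed production plus purchases."""
    return spow.sum() <= TEP + bpow.sum()

print(profit(spow, bpow), feasible(spow, bpow))
```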
Proposed hybrid approach
The methodology proposed in this work is created to solve the portfolio optimization problem. Two different methods are combined, namely an exact resolution method and a stochastic resolution method (PSO). As shown in Fig. 1, a simplified version of the optimization problem is first solved with the exact method, using the CPLEX solver to solve a Mixed Integer Linear Programming (MILP) problem. The solution achieved for the simplified version of the problem is then used as the initial solution for the approximate method (PSO).
Hybrid methodology pseudo code
All metaheuristic optimization methods require an initial solution to start the optimization process (which is often randomly generated). The role of CPLEX is to provide the initial solution from which the PSO initializes its search. The resolution of MILP problems, as indicated in Fig. 1, can take a long execution time, depending on the problem at hand. To circumvent this, the solution variables are restricted to integers, and the resulting resolution time is quite acceptable (the comparison can be consulted in the case study presented in section 5).
As can be followed in Fig. 1, after the optimization by CPLEX, the iterative search process of the PSO is started. In the first step, the initial solution is created so that the PSO method can start its search. Different variants for the creation of the initial solution, all based on the solution from CPLEX, are experimented with and compared in the case study of section 5.
The PSO algorithm does not guarantee the global optimal solution. Generally, the search is stopped when the stopping criteria are reached. At each iteration, PSO applies Eq. (7) and Eq. (8). During the iterations of the algorithm, each particle moves in the space with a velocity that is dynamically adjusted (different in each iteration). The velocity determines the particles' positions according to their own and their neighboring particles' experience, moving each particle toward two points in each iteration: (i) the best position found so far by the particle itself, called Pbest; and (ii) the best position among all neighbor particles, called Gbest (Denton et al. 2003).
$$ {v}_{id}^{k+1}=w.{v}_{id}^k+{c}_1.{r}_1^k.\left({Pbest}_{id}^k-{x}_{id}^k\right)+{c}_2.{r}_2^k.\left({Gbest}_{id}^k-{x}_{id}^k\right) $$
$$ {x}_{id}^{k+1}={x}_{id}^k+{v}_{id}^{k+1} $$
\( {\boldsymbol{v}}_{\boldsymbol{id}}^{\boldsymbol{k}} \)- velocity of particle i, parameter d and iteration k,
\( {\boldsymbol{x}}_{\boldsymbol{id}}^{\boldsymbol{k}} \)- position of particle i, parameter d and iteration k,
k- iteration,
Pbest- personal best,
Gbest- global best,
w - inertia term,
c1 – local attraction term.
c2 - global attraction term.
r1,r2- random numbers between [0,1].
Afterwards, Eqs. (7) and (8) are applied to find new positions for each particle, and the fitness is calculated using the objective function, Eq. (1). The next step is to update the best individual position (Pbest_i) if the current position of a particle is the best found so far by that particle. Pbest_i is also compared to the best global position (Gbest); if Pbest_i is better than Gbest, Gbest is updated to the new position.
Since the PSO performs its search based on a set of parameters, an adaptive inertia weight, Eq. (9), has been adopted for the inertia term.
$$ {w}_k^v=1.1-\frac{Gbest}{average\left({Pbest}_v\right)} $$
The adaptive inertia weight tries to adapt the value of the inertia based on parameters that provide information about where the particles are in the search space at each moment. The inertia weight value in Eq. (9) is different for each variable, taking into account the global best position and the value of each specific variable in the personal best position of each particle (Arumugam and Rao 2008; Faia et al. 2018).
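A compact sketch of the search loop described by Eqs. (7)-(9) is given below. It maximizes a generic objective and is only an illustration of the update rules, not the authors' implementation; the population size, bounds, default parameter values and the toy objective in the example call are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_maximize(objective, dim, n_particles=30, n_iter=500, c1=2.0, c2=0.0,
                 lower=0.0, upper=10.0):
    x = rng.uniform(lower, upper, (n_particles, dim))    # initial positions
    v = np.zeros((n_particles, dim))                     # initial velocities
    pbest = x.copy()
    pbest_fit = np.array([objective(p) for p in x])
    gbest = pbest[np.argmax(pbest_fit)].copy()

    for _ in range(n_iter):
        # Eq. (9): adaptive inertia, one weight per variable (guarded against division by zero)
        w = 1.1 - gbest / np.maximum(pbest.mean(axis=0), 1e-9)
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Eq. (7): velocity update; Eq. (8): position update
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lower, upper)

        fit = np.array([objective(p) for p in x])
        improved = fit > pbest_fit
        pbest[improved] = x[improved]
        pbest_fit[improved] = fit[improved]
        gbest = pbest[np.argmax(pbest_fit)].copy()

    return gbest, float(pbest_fit.max())

# Example call on a toy concave objective (illustrative only):
best_x, best_f = pso_maximize(lambda z: -np.sum((z - 3.0) ** 2), dim=10)
```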
This section presents the case study that illustrates the application of the proposed methodology. All simulations were executed on a computer with one Intel® W3565 3.2 GHz processor (4 cores), 8 GB of RAM and a 64-bit Windows 10 operating system. As previously stated, the PSO algorithm starts with an initial solution based on the CPLEX resolution. In this scenario, five different market types have been considered: the day-ahead spot market, negotiations by means of bilateral contracts, the balancing or intra-day market, and a local market at the Smart Grid (SG) level.
The balancing market is divided into different sessions. In the day-ahead spot market, the player (acting as a seller) is only allowed to sell electricity, while in the other market types the player can either buy or sell depending on the expected prices. Limits have also been imposed on the possible amount of negotiation in each market. In this case, it is only possible to buy up to 10 MW in each market in each period of negotiation, which makes a total of 40 MW purchased. It is possible to sell power on any market, and it can be transacted as a whole or in installments. The player has 10 MW of own production (TEP) for sale.
Table 1 shows the initial solution achieved by the CPLEX method. For this resolution, the Tomlab toolbox of MATLAB® has been used. CPLEX2 represents the version of the CPLEX resolution used to generate the initial solution. In this case, the variables that constitute the solution are only positive integer values; this particularity greatly simplifies the method, enabling it to solve the problem in a short execution time. CPLEX1 in Table 2 represents the complete version of the CPLEX resolution (in which the solution variables can be positive rational numbers), which leads to a high execution time.
Table 1 Cplex result for initial solution
Table 2 Objective function results (€)
Table 1 presents the optimized variables, which assume only integer values; this configuration of the solution yields the objective function value presented in Table 2. In the Spot market 14 MW are sold, 12 MW are sold by means of bilateral contracts and 9 MW in the SG market; regarding the variables that represent purchases, the maximum quantity (10 MW) is bought in the balancing markets, and 5 MW are also bought through bilateral contracts.
In the bilateral contracts, as can be observed in the variables, the two actions are performed, buying and selling. This is possible because in this market the quantity of electricity traded influences its price. In the SG market this possibility also exists, but the resolution by this method does not exploit it.
In order to obtain better results with the PSO, an initial study was carried out to find the best parameterization, especially the values of the local attraction and global attraction terms. From the results obtained in this parameter training, it can be concluded that the best combinations of parameters are those with a lighter color tone, as indicated by the scale of values on the right. The graph of Fig. 2 represents the mean of the objective function over 100 simulations with 500 iterations each, which results in 50,000 evaluations of the objective function.
Local and Global attraction heatmap
This process was repeated for each combination of parameters: the parameter c1 is represented on the y-axis and the parameter c2 on the x-axis, and both can take values from 0 to 2 with an increment of 0.1, giving a total of 441 combinations. Analysing the graph, one can see that the best combinations are obtained when the value of c2 is less than 0.5, and that for c1 the higher the value, the better the performance (Lezama et al. 2017b). Given these results, the combination used in the remainder of this work is 2 for c1 and 0 for c2; in this way the global component of the search process is cancelled.
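The parameter sweep behind the heatmap of Fig. 2 can be reproduced schematically as follows. The inner objective and the number of repetitions per cell are placeholders (the real study averaged 100 runs of 500 iterations per combination), and the snippet reuses the pso_maximize sketch shown earlier.

```python
import numpy as np

c_values = np.round(np.arange(0.0, 2.0 + 1e-9, 0.1), 1)   # 21 values each for c1 and c2 -> 441 cells
heatmap = np.zeros((len(c_values), len(c_values)))

for i, c1 in enumerate(c_values):
    for j, c2 in enumerate(c_values):
        # Placeholder: a few short runs of the pso_maximize sketch per (c1, c2) cell.
        runs = [pso_maximize(lambda z: -np.sum((z - 3.0) ** 2), dim=10,
                             n_iter=50, c1=c1, c2=c2)[1] for _ in range(3)]
        heatmap[i, j] = np.mean(runs)

best_i, best_j = np.unravel_index(np.argmax(heatmap), heatmap.shape)
print("best combination:", c_values[best_i], c_values[best_j])
```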
After obtaining the initial solution, it is tested as input for different versions of the PSO, which differ in the construction of the initial solution. Initially, it is considered that only one particle receives the solution of CPLEX2 while the other particles receive a random solution; this version is called "Hybrid PSO".
In order to understand the influence of the initial solution on the PSO search, four more versions were created. In these versions, all particles receive solutions built from the CPLEX2 solution. The normal distribution is used in the solution construction phase since, as explained in section 4, the problem has ten variables and each variable must have a value for PSO to start the search. Figure 3 presents three representations of a normal distribution, varying the Standard Deviation (STD), for the first variable: the amount sold in the spot market.
Normal distribution for mean = 14
The normal distribution is characterized by its mean and its STD. In PSO, the initial solution requires one value for each variable, so each particle has ten different variables. The representation in Fig. 3 demonstrates the influence of the STD on the creation of the initial solution: the mean of the distribution is the value of the spot-market sale variable optimized by CPLEX2 (Table 1), and this process is repeated for all variables and for each particle.
From the analysis of Fig. 3, one can verify that the bigger the STD, the greater the dispersion of the distribution and the wider the range of possible values for each variable. E.g., if STD = 0.5 there is a higher probability that the created value is close to the mean than if STD = 2.5. In this way, the STD value allows the creation of different initial solutions to the problem using the variables that resulted from the CPLEX2 resolution.
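The construction of the initial swarm from the CPLEX2 solution can be sketched as follows. The ten values are reconstructed from the description of Table 1, and their order is an assumption; the non-negativity handling is one simple way of realizing the "STD = 2.5 (>= 0)" variant, not necessarily the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten decision variables: amount sold and bought in each of the five markets.
# Values reconstructed from the description of Table 1 (order is an assumption).
cplex2_solution = np.array([14, 12, 0, 0, 9,    # sales: spot, bilateral, bal.1, bal.2, SG
                            0, 5, 10, 10, 0],   # buys:  spot, bilateral, bal.1, bal.2, SG
                           dtype=float)

def initial_swarm(n_particles, std, nonnegative=False):
    """Each particle is drawn variable-wise from N(cplex2_solution, std**2)."""
    swarm = rng.normal(loc=cplex2_solution, scale=std,
                       size=(n_particles, cplex2_solution.size))
    if nonnegative:
        swarm = np.maximum(swarm, 0.0)   # enforce positive values (assumed correction)
    return swarm

swarm_05 = initial_swarm(30, std=0.5)
swarm_25 = initial_swarm(30, std=2.5, nonnegative=True)
```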
Table 2 presents the objective function results for all methods. The STD = 0.5 version represents the hybrid PSO that starts its search with a solution created from a normal distribution whose mean is the deterministic resolution value (CPLEX2, Table 1) and whose STD is 0.5. The STD = 1.5 and STD = 2.5 versions are exactly the same as the STD = 0.5 version, but with standard deviations of 1.5 and 2.5, respectively.
The STD = 2.5 (>= 0) version represents the hybrid method in which all PSO particles start with a solution generated by a normal distribution with STD = 2.5 and all variables in all solutions take only positive values. CPLEX1 and CPLEX2 only have a maximum value for the objective function because they are deterministic methods and not population-based; the other resolutions have additional measurements because 1000 simulations are executed for each method. As expected, the deterministic resolution (CPLEX1) reached the best objective function value, followed by the Hybrid PSO, where only one of the particles starts with the initial solution of CPLEX2. Next come the hybrid versions in which the initial solution of all particles was created using the normal distribution, then the PSO whose initial solution was created with random values, and finally CPLEX2.
As one can see, the difference between CPLEX1 and PSO in the maximum objective function value is 0.000023, which is a residual value. For the average there is already a larger difference, but the hybrid PSO has a value very close to the CPLEX1 reference result, with a difference of 0.0155. The largest difference between results is observed in the STD of the methods that include the PSO.
Table 3 shows the comparison of execution times between all the considered methods. The "Total mean values" column represents the mean execution time. As noted, CPLEX1 and CPLEX2 only have one value each (the exact solution), which means they were executed only once. In the other cases, the values refer to the 1000 simulations.
Table 3 Time results for all methods in seconds
From Table 3, it is possible to verify that the PSO presents the smallest execution time. As expected, CPLEX1 presents a high value but, on the other hand, guarantees the maximum value of the objective function. The hybrid PSO methods and all the other versions based on the normal distribution have very similar values. It is worth noting that STD = 2.5 (>= 0) takes twice as long as STD = 2.5, because the solutions are corrected to positive values.
The "Total for all runs" column shows the total value over the 1000 simulations, which is proportional to the average value. The last column displays the total average value including all steps, from the data loading to the creation of the initial solution (CPLEX2 time). The average value of the whole process is about 50 s for the hybrid PSO method, which is much smaller than the value of CPLEX1, while the objective function value is very close. Figure 4 shows the results for the number of iterations; the bar graph shows the average value for each method.
Mean iteration results
As can be observed in Fig. 4, the average number of iterations of the methods whose initial solution contains information from the CPLEX2 resolution is between 80 and 90, while for the PSO with a random solution it is about 63. With the inclusion of the initial solution, the average number of iterations increases. This can be explained by the fact that, in the standard PSO, the search has to start from a random solution that is often far from the optimum; the search then tends to fall into local points from which the method is unable to escape, often converging to a bad solution.
Figure 5 shows a representation of the convergence process of the different versions of the PSO, presenting the convergence of all 1000 executions. This enables assessing the role the STD plays in the solution search when random methods are used.
Algorithms performance, a PSO, b Hybrid-PSO, c STD = 2.5, d STD = 1.5, e STD = 0.5 and f STD = 0.5, (> = 0)
In Fig. 5, six different representations are presented, which refer to the results of the PSO algorithms. Since 1000 runs were performed for each, each algorithm obtains 1000 different results, and each line in the figure represents the evolution of the solution throughout the iterations.
As one can see, the images of Fig. 5 do not use the same scale, which may make the comparison harder; however, with identical scales it was not possible to perceive what really happens in the convergence process. Figure 5(a) represents the standard PSO, which has an STD of 150, as can be seen in Table 2; this resolution presents simulations with very different final results, hence the large STD. Figure 5(b) represents the Hybrid PSO method, which, according to Table 2, results in the lowest STD value. The methods with an initial solution based on the normal distribution follow in terms of STD; within these, the STD of the 1000 simulations decreases with the STD of the normal distribution applied in the creation of the initial solution. It is important to mention that the method where only one of the particles starts with the initial solution obtained through CPLEX2 presents a better performance in terms of STD than the versions where all the particles receive a solution containing information from the CPLEX2 solution. Figure 6 shows a graph of the maximum convergence that the algorithms obtained during the iterations.
The number of iterations is represented on the x-axis, and at each iteration the maximum over the 1000 executions is calculated. As can be observed, the PSO starts its search with an objective function value much lower than the other methods, but by the fortieth iteration the values are practically the same. The algorithms that start the search based on the CPLEX2 solution quickly converge to a solution. Another curious observation is that the algorithms whose initial solutions are generated from the normal distribution start their search with a lower objective function value than the PSO version that directly contains the CPLEX2 solution.
Figure 7 shows the boxplot performance of the PSO and Hybrid PSO algorithms, since these variants obtained the lowest and the highest objective function values, respectively.
a Box plot for PSO and (b) box plot for Hybrid PSO
The boxplots give an indication of the concentration of the final solutions of the 1000 executions: between the minimum value and the first quartile lie 25% of the observations, between the first and third quartiles 50% of the observations are represented, and between the third quartile and the maximum lie the remaining 25%; the value in green corresponds to the median. Analyzing the boxplots, it is clear that the performance of the Hybrid PSO is better than that of the PSO, as the Hybrid PSO values are more concentrated. As can be observed, the range of values in Fig. 7(b) is smaller than the range of values in Fig. 7(a), which indicates that the solutions obtained by the Hybrid PSO are more concentrated.
Figure 8 shows the confidence intervals for (a) the PSO and (b) the Hybrid PSO. The graphs show the 95% confidence interval for the experiments performed with each method. Each of these intervals is associated with a certain error: in the case of the PSO it is 16.20, and in the case of the Hybrid PSO it is 0.00274. In this way, when applying the hybrid PSO resolution, there is a 95% probability of finding a value between the upper and lower bounds; we can therefore consider the Hybrid PSO to be more reliable.
Confidence intervals of 95% for (a) PSO and (b) Hybrid PSO
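The 95% confidence intervals of Fig. 8 can be computed from the 1000 final objective values of each method, for example as below; the result array is a synthetic placeholder, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)
results = rng.normal(10.0, 0.1, size=1000)   # placeholder for the 1000 final objective values

mean = results.mean()
half_width = 1.96 * results.std(ddof=1) / np.sqrt(len(results))   # normal approximation
print(f"95% CI: [{mean - half_width:.4f}, {mean + half_width:.4f}]")
```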
Figure 9 represents the value of the different variables for the different resolution methods. On the y-axis, negative values represent electricity purchases and positive values represent the amount of electricity sold.
Sale and purchase in the different markets
In this case, as can be seen from the caption of Fig. 9, the five different markets are considered. Each bar of each method corresponds to the value of one variable; since there are two possible actions for each market, this gives a total of ten variables and ten bars for each method. Table 4 shows the values corresponding to the scheduling obtained by the method that reached the maximum objective function value, CPLEX1, and the results obtained by CPLEX2.
Table 4 Scheduling of sale and purchase in the different markets
Table 4 shows the difference between the deterministic resolutions of the full and simplified versions of the exact method. As one can see, with CPLEX2 the variables only contain positive integer values, whereas with CPLEX1 the variables are numbers with several decimal places. From the objective function results in Table 2 one can see that there is a difference between the solutions, and from the analysis of Table 3 the execution time is also different: with about 0.06% of the execution time of CPLEX1, CPLEX2 obtains a solution only 0.09% inferior to that of CPLEX1.
Analyzing the CPLEX1 scheduling results, it can be concluded that the method respected the imposed rules. In the Spot market, purchasing is not permitted and, as can be seen from Table 4, the corresponding variable is 0. Another condition was that in the Balancing markets only one of the actions is allowed, and indeed only one action (buying electricity) is realized, keeping the sales variables at 0. In the other two markets both actions are realized because, since the price varies with the negotiated quantity, multiple opportunities can occur that result in positive profit.
This paper presented a novel hybrid optimization model, based on the combination of a PSO approach with a simplified resolution using an exact method, to solve the portfolio optimization problem for multiple electricity markets participation. The results enable concluding that the proposed hybrid resolution has advantages in solving the problem, as can be observed by comparing the proposed approach with the standard PSO, in which the algorithm starts the search from a random solution. Using the standard PSO in this problem, a very high STD is obtained, while using the proposed approach the STD decreases. This represents a great advantage, since this measurement gives an indication of the dispersion of the population's solutions around the mean. There was also a large increase in the mean objective function value that is achieved, which lies near the maximum reference value (CPLEX1).
Another advantageous conclusion refers to the execution time: with this method, the execution time decreases considerably compared to the reference result that uses the exact method to solve the complete version of the problem. Since decisions in electricity market negotiations must be taken in short times, this methodology can bring high benefits for real-world application.
As future work, it is intended to expand this methodology by combining different methods, such as genetic algorithms and simulated annealing, as well as by using simple metaheuristics to select the initial solution (e.g., the Vortex Search Algorithm (VSA) (Yusta 2009) and Evolutionary Computation (Lezama et al. 2017a)). In a later phase, it is also envisaged to include a risk component in the model, thus obtaining a multiobjective problem to be solved with a variation of the proposed methodology.
Arumugam MS, Rao MVC (2008) On the improved performances of the particle swarm optimization algorithms with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems. Appl Soft Comput 8:324–336
Boussaïd I, Lepagnot J, Siarry P (2013) A survey on optimization metaheuristics. Inf Sci 237:82–117. https://doi.org/10.1016/j.ins.2013.02.041
Burger S, Chaves-Ávila JP, Batlle C, Pérez-Arriaga IJ (2017) A review of the value of aggregators in electricity systems. Renew Sust Energ Rev 77:395–405. https://doi.org/10.1016/j.rser.2017.04.014
Ciarreta A, Espinosa MP, Pizarro-Irizar C (2017) Has renewable energy induced competitive behavior in the Spanish electricity market? Energy Policy 104:171–182. https://doi.org/10.1016/j.enpol.2017.01.044
Denton M, Palmer A, Masiello R, Skantze P (2003) Managing market risk in energy. IEEE Trans Power Syst 18:494–502. https://doi.org/10.1109/TPWRS.2003.810681
European Commission (2013) Horizon 2020: The EU Framework Programme for Research and Innovation. https://ec.europa.eu/programmes/horizon2020/sites/horizon2020/files/H2020_inBrief_EN_FinalBAT.pdf. Accessed 17 Jul 2017
Faia R, Pinto T, Vale Z (2016) Dynamic fuzzy clustering method for decision support in electricity markets negotiation. ADCAIJ Adv Distrib Comput Artif Intell J 5(23). https://doi.org/10.14201/ADCAIJ2016512336
Faia R, Pinto T, Vale Z, Corchado JM (2017) Hybrid particle swarm optimization of electricity market participation portfolio. Special session on Agent-based modeling for smart grids and M2M applications, 2017 IEEE Symposium on Computational Intelligence Applications in Smart Grid (IEEE CIASG'17), 2017 IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2017), Honolulu, Hawaii, 27 November-1 December, 2017
Faia R, Pinto T, Vale Z, Corchado JM (2018) Strategic particle swarm inertia selection for the electricity markets participation portfolio optimization problem. Appl Artif Intell. https://doi.org/10.1080/08839514.2018.1506971
Fernández-Vargas JA, Bonilla-Petriciolet A, Rangaiah GP, Fateen SEK (2016) Performance analysis of stopping criteria of population-based metaheuristics for global optimization in phase equilibrium calculations and modeling. Fluid Phase Equilib 427:104–125. https://doi.org/10.1016/j.fluid.2016.06.037
Foley AM, Ó Gallachóir BP, Hur J et al (2010) A strategic review of electricity systems models. Energy 35:4522–4530. https://doi.org/10.1016/j.energy.2010.03.057
Kennedy J, Eberhart R (1995) Particle swarm optimization. Neural Networks, 1995. Proc IEEE Int Conf 4:1942–1948
Lezama F, De Cote EM, Sucar LE et al (2017a, 2017) Evolutionary framework for multi-dimensional signaling method applied to energy dispatch problems in smart grids. In: 2017 19th international conference on intelligent system application to power systems. ISAP. IEEE, pp 1–6
Lezama F, Soares J, Munoz de Cote E et al (2017b) Differential evolution strategies for large-scale energy resource Management in Smart Grids. In: Proceeding GECCO '17 Proceedings of the Genetic and Evolutionary Computation Conference Companion. Berlin, pp 1279–1286. https://doi.org/10.1145/3067695.3082478
Markowitz H (1952) Portfolio selection. J Finance 7:77. https://doi.org/10.2307/2975974
Meeus L, Belmans R (2008) Electricity market integration in Europe. Revue-E 124:5–10
Mehdi M (1981) Parallel Hybrid Optimization Methods for Permutation Based Problems, pp 179–180
MIBEL (2007) Mercado Iberico de Eletrecidade. http://www.mibel.com/index.php?lang=pt. Accessed 27 Feb 2017
Olivella-Rosell P, Bullich-Massagué E, Aragüés-Peñalba M et al (2018) Optimization problem for meeting distribution system operator requests in local flexibility markets with distributed energy resources. Appl Energy 210:881–895. https://doi.org/10.1016/j.apenergy.2017.08.136
Palensky P, Dietrich D (2011) Demand Side Management: Demand Response, Intelligent Energy Systems, and Smart Loads. IEEE Trans Ind Informatics 7:381–388
Pinheiro Neto D, Domingues EG, Coimbra AP et al (2017) Portfolio optimization of renewable energy assets: hydro, wind, and photovoltaic energy in the regulated market in Brazil. Energy Econ 64:238–250. https://doi.org/10.1016/j.eneco.2017.03.020
Pinto T, Morais H, Sousa TM, Sousa T, Vale Z, Praça I, Faia R, Solteiro Pires EJS (2015) Adaptive Portfolio Optimization for Multiple Electricity Markets Participation. IEEE Trans Neural Netw Learn Syst 27(8):1720–1733. https://doi.org/10.1109/TNNLS.2015.2461491
Pinto T, Sousa TM, Vale Z (2012) Dynamic artificial neural network for electricity market prices forecast. IEEE 16th International Conference on Intelligent Engineering Systems (INES 2012), Costa de Caparica, Portugal, 13-15 June, 2012. pp 311–316
Pinto T, Vale Z, Sousa TM et al (2014) Particle swarm optimization of electricity market negotiating players portfolio. Highlights Pract Appl Heterog Multi-Agent Syst 430:273–284
PJM (2018) PJM Energy Market. https://www.pjm.com
Vale Z, Pinto T, Morais H et al (2011a) VPP's multi-level negotiation in smart grids and competitive electricity markets. IEEE Power Energy Soc Gen Meet:1–8
Vale Z, Pinto T, Praça I, Morais H (2011b) MASCEM: electricity markets simulation with strategic agents. IEEE Intell Syst 26:9–17
Yoo TH, Ko W, Rhee CH, Park JK (2017) The incentive announcement effect of demand response on market power mitigation in the electricity market. Renew Sust Energ Rev 76:545–554. https://doi.org/10.1016/j.rser.2017.03.035
Yusta SC (2009) Different metaheuristic strategies to solve the feature selection problem. Pattern Recogn Lett 30:525–534. https://doi.org/10.1016/j.patrec.2008.11.012
This work has received funding from the European Union's Horizon 2020 research and innovation programme under project DOMINOES (grant agreement No 771066) and from FEDER Funds through the COMPETE program and from National Funds through FCT under the project UID/EEA/00760/2019. Ricardo Faia is supported by FCT Funds through the SFRH/BD/133086/2017 PhD scholarship.
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request. A part of the results presented here are available in the following reference (Faia et al. 2017).
GECAD – Research Group on Intelligent Engineering and Computing for Advanced Innovation and Development, Polytechnic of Porto (ISEP/IPP), al. Rua Dr. António Bernardino de Almeida, 431, 4200-072, Porto, Portugal
Ricardo Faia
BISITE Research Centre, University of Salamanca (USAL), al. Calle Espejo, 12, 37007, Salamanca, Spain
Tiago Pinto
& Juan Manuel Corchado
Polytechnic of Porto (ISEP/IPP), al. Rua Dr. António Bernardino de Almeida, 431, 4200-072, Porto, Portugal
RF conceived and developed the proposed methodology, implemented them in the MATLAB® software, conducted the experiments and wrote the paper. TP contributed in the conception of the methodology, design of the experiments, analysis of results and writing of the manuscript. ZV helped conceiving of the study, participated in its design, coordination and helped to draft the manuscript. JM Corchado contributed in the overall design of the system, design of the experiments and review of manuscript. All authors read and approved the final manuscript.
Correspondence to Tiago Pinto.
Faia, R., Pinto, T., Vale, Z. et al. Hybrid approach based on particle swarm optimization for electricity markets participation. Energy Inform 2, 1 (2019) doi:10.1186/s42162-018-0066-7
Received: 27 August 2018
Hybrid model
Metaheuristic search
Particle swarm optimization and portfolio optimization
|
CommonCrawl
|
Bipartite graph example
A bipartite graph is a graph whose vertices can be divided into two disjoint and independent sets X and Y such that every edge connects a vertex of X to a vertex of Y; no edge joins two vertices within the same set. Real-life examples include person-crime, recipe-ingredient and company-customer relationships, movies connected to the actors who appear in them, and job applicants connected to the jobs they are interested in. For the AllElectronics customer purchase data, for instance, one set of vertices represents customers (one customer per vertex) and the other the items they purchase. Bipartite graphs are equivalent to two-colorable graphs: every bipartite graph with at least one edge has chromatic number 2 (an edgeless graph is 1-colorable), since coloring the two partition sets with different colors ensures that the end vertices of every edge receive different colors. The idea generalizes to k-partite graphs (tripartite, quadripartite, pentapartite, and so on).
Some useful properties:
- Every subgraph of a bipartite graph is itself bipartite.
- The sum of the degrees of the vertices in X equals the sum of the degrees of the vertices in Y.
- A graph is bipartite if and only if it contains no odd-length cycles; consequently every closed walk in a bipartite graph has even length, and all acyclic graphs are bipartite. In particular every tree is bipartite (root the tree at any vertex R and colour vertices by the parity of their depth). If a graph contains no odd cycle, its spectrum is symmetrical.
- A bipartite graph on n vertices has at most n^2/4 edges, so the maximum number of edges in a bipartite graph on 12 vertices is 36.
Whether a graph is bipartite can be decided with graph colouring and breadth-first search: colour a start vertex, give each newly reached neighbour the opposite colour, and report failure if an edge ever joins two vertices of the same colour. With an adjacency list this takes O(V+E) time; with an adjacency matrix it takes O(V^2).
A complete bipartite graph (sometimes also called a complete bicolored graph, Erdős et al. 1965, or complete bigraph) is a bipartite graph in which every vertex of X is joined to every vertex of Y. The complete bipartite graph K_{m,n} has m + n vertices and m·n edges; K_{3,4}, K_{1,5} and K_{5,3} are typical examples.
A matching of a graph is a set of edges in which no two edges share a vertex, so each vertex is incident to at most one matched edge. A maximum matching is a matching with the maximum possible number of edges, and a graph can have more than one maximum matching. A perfect matching of a bipartite graph with bipartition X and Y cannot exist if |X| ≠ |Y|, and a perfect matching exists if and only if every subset of X has at least as many neighbours in Y as it has elements. A useful exercise is to draw fundamentally different examples of bipartite graphs that do not have matchings, in order to identify all possible obstructions to a perfect matching. König's theorem (1931) states that in any bipartite graph the maximum size of a matching equals the minimum size of a vertex cover; the proof can be given algorithmically, by describing an efficient algorithm that simultaneously produces a maximum matching and a minimum vertex cover. Matchings and bipartite graphs appear throughout computer science and programming, scheduling, flow-network design, chemistry (modelling bonds), finance and business applications. The branch of recommendation systems built on the bipartite user-item structure is called collaborative filtering: it uses the interactions between users and items, largely ignoring their attributes, to decide which item to recommend. Bipartite graph embedding has also attracted much attention recently; most previous embedding methods adopt random-walk-based or reconstruction-based objectives and are typically effective at learning local graph structure, though less so at capturing global properties. Crime data available from konect can likewise be used to study bipartite graphs; that dataset consists of three files, the first of which records the person-id to crime-id relation.
As a worked example, suppose two groups of people sign up for a dating service. After seeing images and descriptions of the members of the other group, each person selects the people they would be happy to be matched with. The selections are entered into a computer, which organizes them as a bipartite graph: the people are the vertices, and an edge joins two people (one from each group) if both said they would be happy to be matched with the other. Finding a scenario in which everyone is matched with someone they selected amounts to finding a matching in this graph, since each individual can only be matched with one person. In one such graph with groups {A, B, C, D, E} and {F, G, H, I, J}, vertices G and J have only one edge, to B and A respectively, which forces the edges BG and AJ. Putting C with F then forces E to go with I (F having been taken) and D to go with H, giving the maximum matching AJ, BG, CF, DH, EI; putting C with I instead gives the alternative maximum matching AJ, BG, CI, DH, EF. Small instances like this can be worked out by logic alone, but for involved graphs finding a matching by hand would be tedious, and computers run matching algorithms instead. Using the python library NetworkX, for example, a maximum matching in the complete bipartite graph with two vertices on the left and three on the right is found as follows:
>>> import networkx as nx
>>> G = nx.complete_bipartite_graph(2, 3)
>>> left, right = nx.bipartite.sets(G)
>>> list(left)
[0, 1]
>>> list(right)
[2, 3, 4]
>>> nx.bipartite.maximum_matching(G)
{0: 2, 1: 3, 2: 0, 3: 1}
(igraph, by contrast, does not have direct support for bipartite networks, at least not at the C language level.)
Socioeconomic and demographic factors determining the underweight prevalence among children under-five in Punjab
Rizwan Farooq1,
Hina Khan2,
Masood Amjad Khan2 &
Muhammad Aslam3
Underweight prevalence continues to be a major public health challenge worldwide, particularly in developing countries like Pakistan. This study focuses on socio-economic and demographic aspects of underweight prevalence among children under-five in Punjab.
In this study, several socioeconomic and demographic factors are considered using the MICS-4 data-set. Only those variables which are usually described in nutritional studies of children were selected. Covariates include: the age of the children, sex of the children, age of the mother, total number of children born to the woman, family wealth index quintile, source of drinking water, type of sanitation, place of residence, and parents' education and occupation. All categorical variables are effect coded. The child's age and the mother's age are modeled as nonlinear effects, the geographical region is a spatial effect, while the other variables enter parametrically.
Since the response is binary and the covariates comprise linear terms, nonlinear effects of continuous covariates and geographic effects, we use geo-additive models (based on a fully Bayesian approach) with a binomial family under the logit link. Statistical analysis is performed in the statistical package R using the BayesX and R2BayesX libraries.
Underweight status of children was found to be positively associated with the number of under-five children in the household, the total number of children ever born to the woman and the age of the mother when the child was born, whereas it was negatively associated with place of residence, parents' education and family wealth index quintile. Regarding the regional effect, Southern Punjab has a higher prevalence of underweight compared to Central and Northern Punjab.
The similarity of our results with those of several other studies demonstrates that geo-additive models are a suitable alternative to other statistical models for analyzing underweight prevalence among children. Moreover, our findings suggest that the Punjab Government should introduce target-oriented programs such as poverty reduction and the enhancement of education and health facilities for the poor population and disadvantaged regions, especially Southern Punjab.
Underweight prevalence is a major cause of child mortality worldwide. Underweight children have lower immunity against infections and a higher chance of dying from common diseases, and those who survive are exposed to recurring illnesses and slower growth. Such children are more likely to have a lower IQ, which not only affects their educational performance but also reduces their working abilities [1, 2].
According to MICS-2014 [1], the prevalence of underweight among under-five children in Punjab is 33.7%, which is higher than the overall underweight prevalence in the country (Fig. 1). This is a point of serious concern, because Punjab is comparatively the most developed province of Pakistan regarding health and educational facilities. Additionally, it is also the most densely populated province of the country, so a minor change in the percentage can be of great significance. The question therefore arises: what are the factors behind such a high rate of underweight prevalence among children in the province?
Fig. 1. Comparison of underweight children under-five in Punjab with Pakistan and the world, 2014. A self-explanatory bar chart created in Microsoft Excel. The figures pertaining to "Punjab", "Pakistan" and "World" are taken from MICS-14, PDHS (2012-13) and the website of the World Bank, respectively [1, 3, 4].
To the best of our knowledge, not even a single research study has been conducted in Punjab so far to explore and analyze the impact of the factors behind such a high proportion of underweight.
Alasfoor et al. (2007), Mengistu et al. (2013) and Rayhan et al. (2006) used bivariate and multivariate analyses to identify the causes of under-five malnutrition [5,6,7]. The key factors for under-five malnourishment were the education level of the parents, the income of the household and the total number of under-five children in the household. Sapkota & Gurung (2009) and Wolde et al. (2014) used logistic regression techniques to model the underweight prevalence among children under-five [8,9,10]. They discovered that the socioeconomic status of the family, maternal education, and the gender, ethnicity and age of the children were closely linked with underweight prevalence. Some other authors [11,12,13,14,15,16] found that the age and gender of children, the child's malaria status, exclusive breastfeeding, the child's vaccination status, maternal education, parents' access to media, ARI in children, poverty and the type of toilet used in the household were strongly correlated with underweight.
The majority of previous studies on children's underweight have focused on different socio-economic, demographic or health-related factors but have mostly ignored the spatial and nonlinear effects of covariates. Considering these aspects, Kandala et al. (2008), Lasisi et al. (2014) and Lasisi et al. (2014) used Bayesian geo-additive models to observe the pattern of underweight among children under five years of age [14, 17, 18]. Their studies showed that underweight among children depends on residence, age of the child, maternal education, wealth status of the household and geographical zone.
We aim to study the impact of several socio-economic and demographic factors on the underweight status of under-five children in Punjab, considering the spatial and nonlinear effects of covariates. This study is based on the data-set of MICS-2014, conducted by the Punjab Bureau of Statistics in collaboration with UNICEF.
A two-stage cluster sampling technique was used for sample selection. At the first stage, 2050 clusters (primary sampling units) were randomly selected from all 36 districts of Punjab through proportional allocation. At the second stage, a systematic sample of 20 households (secondary sampling units) was drawn from each selected cluster [1]. Information on one or more of the explanatory variables used in this study was missing for 6789 of the 31,083 children interviewed, so these records were discarded. The final sample size for this research is therefore 24,294 children under-five in Punjab.
The underweight status of children (UWa), i.e. "weight-for-age", is taken as the dependent variable. For this purpose, the weight and height of each child in the sample are measured separately, and the weight-for-age measure is then expressed as a Z-score from the median of the reference population.
$$ Z_i = \frac{\mathrm{WAP}_i - \mu_{\mathrm{WAP}}}{\sigma_{\mathrm{WAP}}} $$
where WAPi is the weight-for-age percentile of the ith child, while μWAP and σWAP denote the median and standard deviation of WAP in the reference population, respectively. A child i is declared underweight if Zi < −2, and normal weight otherwise. Our response variable (UWa) is therefore binary, with 0 and 1 referring to normal-weight and underweight children, respectively.
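The classification follows directly from the Z-scores. The following is a minimal R sketch with toy numbers (the variable names and reference constants are illustrative; the real reference median and SD come from the reference population, and the real weight-for-age measure comes from the MICS-4 data):
wap <- c(48, 12, 30, 3)           # illustrative weight-for-age percentiles
mu_wap <- 50; sigma_wap <- 15     # illustrative reference median and SD
z <- (wap - mu_wap) / sigma_wap   # Z-score, as in the formula above
UWa <- as.integer(z < -2)         # 1 = underweight (Z < -2), 0 = normal weight
UWa                               # 0 1 0 1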
We consider several socio-economic and demographic covariates as predictors, which are generally described in nutritional studies of children. Covariates include: the age of the children, sex of the children, age of the mother, total number of children ever born to the woman, family wealth index quintile, source of drinking water, type of sanitation, place of residence (locality), and parents' education and occupation. All categorical variables are effect coded; the children's age and the mother's age (when the child was born) are assumed to have nonlinear effects; "Region" is a geographic effect, while the other covariates enter parametrically (Additional file 1).
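Effect coding corresponds to sum-to-zero contrasts in R. A minimal sketch (the factor name and its levels are illustrative, not the exact MICS-4 variable names):
locality <- factor(c("rural", "urban", "rural"), levels = c("urban", "rural"))
contrasts(locality) <- contr.sum(2)  # sum-to-zero (effect) coding: the last level is coded -1
contrasts(locality)                  # urban -> +1, rural -> -1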
Since our response is binary and the covariates consist of the usual linear terms, nonlinear effects of continuous covariates and geographic effects, the resulting models are called geo-additive regression models, a special case of structured additive regression models. We use geo-additive regression under the logit link because we are interested in the relative risk of underweight at different levels of the given covariates. For inference we use a fully Bayesian approach, because we want to describe the unknowns probabilistically: all unknown regression coefficients and smoothing functions are treated as random variables and assigned appropriate prior distributions (Additional file 2). This methodology is based on Markov random field priors and uses Markov chain Monte Carlo techniques to draw samples from the posterior, and for model checking the Deviance Information Criterion is normally used. For details, see [19,20,21,22].
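For concreteness, the predictor of such a geo-additive logit model can be written as follows (the notation is a sketch following the structured additive regression literature cited above, not quoted from it):
$$ \eta_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + f_1(\mathrm{CAGE}_i) + f_2(\mathrm{MACB}_i) + f_{\mathrm{spat}}(\mathrm{region}_i), \qquad P(\mathrm{UWa}_i = 1 \mid \eta_i) = \frac{\exp(\eta_i)}{1+\exp(\eta_i)}, $$
where x_i collects the parametric (effect-coded) covariates, f1 and f2 are smooth functions of the child's age and the mother's age at the child's birth, and f_spat is the structured spatial effect of the region.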
Statistical analysis is performed in the R language using the BayesX and R2BayesX libraries (Additional file 2).
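A minimal R2BayesX sketch of such a fit is given below. It is an illustration only: the data frame `mics`, the variable names and the boundary object `punjab_bnd` (needed for the Markov-random-field regional effect) are assumptions, not the exact objects used for the reported results.
library(R2BayesX)
# Geo-additive logit model: parametric (effect-coded) covariates, P-spline smooths
# for child age (CAGE) and mother's age at the child's birth (MACB), and an MRF spatial effect.
fit <- bayesx(UWa ~ locality + mother_edu + father_edu + mother_occ + father_occ +
                wealth_quintile + ceb + hh_u5 + water_source + sanitation + media +
                sx(CAGE) + sx(MACB) +
                sx(region, bs = "mrf", map = punjab_bnd),
              family = "binomial", method = "MCMC",
              iterations = 20000, burnin = 0, step = 10,
              data = mics)
summary(fit)  # parametric coefficients (cf. Table 2) and smooth-term variances (cf. Table 3)
plot(fit)     # estimated nonlinear and spatial effects (cf. Figs. 2 and 3)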
Before fitting the final model, a bivariate analysis was performed in order to decide which variables were to be included in the final model. The association between the underweight status of children and the different socio-economic and demographic variables (based on Pearson chi-square tests of independence at the 5% level of significance) is given in Table 1. According to Table 1, all variables except the gender of the children were found to be significantly associated with the underweight status of the children. The gender of the children was therefore discarded from the final model (Additional file 2).
Table 1 Association between underweight and socioeconomic and demographic variables
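Each of these bivariate screenings is an ordinary chi-square test on a two-way table. A minimal R sketch (again with illustrative variable names):
tab <- table(mics$locality, mics$UWa)  # cross-tabulate a covariate against underweight status
chisq.test(tab)                        # Pearson chi-square test of independence (5% level)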
The model summary for our final model (with 20,000 iterations, burnin = 0, step = 10, family = binomial, link = logit) is given in Table 2 and Table 3 (Additional file 2; Figs. B1 and B2).
Table 2 Parametric coefficients
Table 3 Smooth terms variances
According to the summary of results (Table 2), going from rural to urban areas decreases the log odds of underweight by 0.13.
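Log-odds changes of this kind convert directly to odds ratios; for instance (a back-of-the-envelope conversion, not a figure reported in the tables):
exp(-0.13)  # ~ 0.88, i.e. roughly 12% lower odds of underweight in urban areas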
The underweight status of children under-five in Punjab is negatively associated with the education level of the father and the mother. The log odds of underweight decreased by 0.09, 0.17, 0.28 and 0.41 for children whose fathers attained primary, middle, secondary and higher levels of education, respectively, relative to those whose fathers had no education at all. The log odds of underweight decreased by 0.08, 0.19, 0.25 and 0.59 for children whose mothers attained primary, middle, secondary and higher education, respectively, compared to those whose mothers had no education (Table 2).
The risk of underweight is significantly affected by the father's occupation. Here, the unemployed category is taken as the benchmark. The log odds of underweight increased by 0.08 and 0.16 for children whose fathers were officials and laborers, respectively; in contrast, they decreased by 0.06 for children whose fathers were farmers and remained almost unchanged for children whose fathers owned a business entity (small or large) (Table 2).
The underweight status of children under-five in Punjab is negatively correlated with the household wealth index quintile. Going from the lowest wealth index quintile to the second, middle, fourth and highest quintiles decreases the log odds of underweight by 0.18, 0.35, 0.45 and 0.83, respectively (Table 2).
The total number of children ever born to a woman is positively associated with the risk of underweight: each additional child ever born to the woman increases the log odds of underweight by 0.04 (Table 2).
The household's access to an improved sanitation facility decreases the log odds of underweight by 0.09 compared to unimproved sanitation facilities (Table 2).
The underweight status of children is inversely related to the age of their mothers. The risk of underweight was found to be highest for children whose mother's age was less than twenty years at the time of their birth, and it declines gradually as the age of the mother increases. The risk is almost constant for mothers aged 20-42 years when the child was born, relatively higher beyond this age group, and rises again when the mother's age exceeds 45 years (Fig. 2).
Fig. 4. Comparison of the effect of the education level of the mother and the father on underweight. A line chart developed in Microsoft Excel to show the effect of the parents' education level on the underweight status of under-five children in the Punjab province of Pakistan. The estimated effect is the corresponding regression coefficient for each level of education.
The underweight status of children is directly but non-linearly related to the age of the children (in months), as shown in Fig. 3 and Table 3. The underweight status is close to normal from birth up to about 12 months of age, then gradually worsens and becomes severe after 18 months of age. This pattern continues until the age of 20 months, remains almost static up to about 42 months, and falls sharply afterwards. The prevalence of underweight is most pronounced in the 18-32 months age group (Fig. 3).
Fig. 2. Effect of the mother's age when the child was born (in years) on underweight. Produced with a built-in plotting function of the BayesX package in R (based on the smooth term "sx(MACB)" reported in Table 3), in which the estimated effect of the covariate "mother's age when the child was born", i.e. f(MACB), is plotted against the corresponding covariate values.
Exposure of the child's mother to mass media reduces the log odds of underweight by 0.02 in comparison with no access to media at all (Table 2).
Referring to Table 2, the prevalence of underweight is most noticeable in Southern Punjab. Going from Southern Punjab to Central and Northern Punjab decreases the log odds of underweight by 0.09 and 0.39, respectively.
The underweight status of children is slightly, but not significantly, affected by the mother's occupation, the source of drinking water and the total number of under-five children in the household (Table 2).
This study focuses on various socioeconomic and demographic aspects of underweight prevalence among under-five children in the Punjab province of Pakistan and may facilitate the concerned administration and policy makers in the formulation of appropriate policies to address it.
However, it excludes other factors related to the health of children and mothers, such as the birth interval, the health status of the mother during pregnancy, the place of delivery, the weight of the child at birth, exclusive breastfeeding, the vaccination status of the child, child morbidity status (fever, measles, diarrhea and ARI), the type of food received, the mother's BMI, maternal and antenatal care, and the use of extra food during pregnancy. In addition, there are three commonly used measures of child nutritional status: stunting (height-for-age), wasting (weight-for-height) and underweight (weight-for-age). For the sake of conciseness, only the last-mentioned has been used in this study.
Since the quality, availability and accessibility of hygienic food, health care and other facilities (such as water and sanitation, and exposure to media) are considerably lower in rural Punjab, children from rural areas of Punjab were found more likely to be underweight than their urban counterparts. This finding is consistent with results obtained by other authors [12, 14, 17].
Our findings disclose a strong inverse association between underweight and the education level of the father and the mother. This is because educated parents usually have more knowledge about the diet and health of children; in addition, education may change their traditional beliefs about diseases and about taking care of their children. This is also consistent with several other similar studies [5, 6, 10, 12, 14,15,16,17,18]. Furthermore, the risk of underweight is 18% lower for children whose mothers possess higher education than for children whose fathers have the same educational qualification (Fig. 4). This might be because mothers usually spend more time with the children than their fathers do.
Effect of Children Age (In Months)" on Underweight.A Built-in function of BayesX package in R language (based on smooth term "sx(CAGE)", provided in Table 3), wherein Estimated Effect of Covariate "Age of children in months" i.e. "f (CAGE)"is plotted against corresponding values of covariates
The risk of underweight was found to be most severe in children belonging to families in the lowest wealth index quintile, because it is impossible for parents with a very small income to meet the increasing food needs as well as the health expenditures of their children, which results in undernourishment. On the other hand, wealthier parents are more likely to afford hygienic food, healthier living conditions and better health care, which improves child nutrition and overall health. Many other studies on child nutrition report similar results [5, 10, 12,13,14,15,16].
Our study found a positive relationship between the number of children ever born to a woman and underweight prevalence. The same finding is reported in a study carried out by Islam et al. (2013) [12]. Generally, the number of children ever born to a woman (fertility) has an inverse effect on child nutrition and health, which leads to the child being underweight: families with more children experience more economic strain on food consumption, and hence the children from these families are more likely to suffer from malnutrition and underweight. In addition, up to 40% of people in Punjab fall in the 1st and 2nd wealth index quintiles, i.e. the low-income quintiles (Table 4), and this percentage rises to 45% for families with one or more children under five years of age in the household. It is therefore impossible for such people to maintain their livelihood and fulfill their basic needs, and the addition of every newborn baby to the family may further reduce the share of income devoted to the health and food of the children; consequently, the risk of underweight increases.
Table 4 Wealth index quintile
The underweight status of children was found to be negatively associated with improvement in sanitation facilities, because improved sanitation reduces the risk of disease and consequently the risk of underweight. The source of drinking water revealed a similar but comparatively smaller impact on the underweight status of children. This might be because 94% of the inhabitants of Punjab use an improved source of drinking water, while a comparatively smaller proportion of the population (66%) uses an improved sanitation facility [1]. In addition, the "improved source" indicator is based on the categorization of water supplies by type of facility and does not consider direct measurements of water quality, i.e. water from an improved source is not necessarily free from contaminants [23].
The underweight status of children is negatively associated with the mother's access to media (Table 2). This could be because exposure to media may educate mothers about the health and food of their children, as well as lead to a more effective allocation of income to their health and nutrition. These results are consistent with findings obtained by other authors [12, 17].
Referring to Fig. 2, the underweight status of children is inversely related to the age of the mother when the child was born. The surprising result for children whose mother's age was greater than forty years is probably due to the small number of observations and the very high dispersion among them: the mothers of only 2.8% of the children were aged 40 years or more at the time of birth, and the variation in the risk of underweight is relatively high for this age group.
The reasons for the nonlinear association between the underweight status of children and their age are that children are usually born with an approximately normal anthropometric status, but afterwards the health status of most children worsens due to different socio-economic and biological factors until it stabilizes at a low level. The highest risk of underweight in the 18-32 months age group might occur because, during this period, children are usually weaned from the breast and their mothers lose the ability to produce enough milk to fulfill the nutritional requirements of the growing child. These results are consistent with findings obtained by other authors [10, 11, 14].
The causes of such a high rate of underweight prevalence in Southern Punjab, also called the Seraiki belt (consisting of 11 districts), could be the following:
Firstly, Southern Punjab is the most deprived region of the province: 41% of the people residing in this region fall in the lowest wealth index quintile (Table 5).
Table 5 Wealth index quintile, by Region
Secondly, 72% of the people living in Southern Punjab belong to rural areas with poor health, living and educational facilities, which is the highest proportion in the entire province (Table 6).
Table 6 Urban-Rural distribution of sampled population, by Region
Thirdly, the statistics on the education of parents of under-five children are worst in Southern Punjab: in this region, 64% of mothers and 41% of fathers of under-five children are illiterate, the highest percentages in the entire province (Table 7).
Table 7 Education Level of Mother and Father, by Region
Fourthly, 45% of the inhabitants of Southern Punjab have no access to improved sanitation, which is also the highest proportion in the entire province (Table 8).
Fifthly, about one-half of the residents of this region have no access to mass media at all, which is not only the highest proportion compared with the rest of Punjab but also alarming to see in this era (Table 8).
Table 8 Access to Mass Media and Sanitation, by Region
In addition, Southern Punjab is considered the most deprived and neglected part of Punjab. Since 1970, South Punjab has not been receiving its due share of provincial resources, and development schemes for the region have been diverted to the Central and Northern parts of the province. According to recent data, poverty and unemployment are relatively higher in this region [1, 3, 24,25,26,27]. Furthermore, unlike the rest of the Punjab province, people in the Southern part of the province, who are under the sway of a powerful landed gentry and tribal sardars (tribal chiefs), suffer from a lack of opportunity to improve their livelihood.
Therefore, nowadays, for a fair distribution of resources and for administrative reasons, the demand for the creation of a new autonomous province in Southern Punjab, i.e. a Seraiki province (Seraiki is the native language of this region), is under national debate and focus.
The results of this study reveal that the family's wealth index quintile, the mother's education level, the father's education level, the geographical region, the locality and the total number of children ever born to a woman (fertility) are significantly associated with the underweight status of children under-five in Punjab. The results of this study are consistent with other studies [6, 8, 11, 13].
In order to achieve the Sustainable Development Goals, Punjab needs to scale up target-oriented programs such as poverty-reduction and income-generating interventions and the improvement of education and health facilities for the poor population and deprived parts of the province, especially Southern Punjab.
Recommendations for researchers:
There are three commonly used measures of child nutritional status: stunting, wasting and underweight. Only the last-mentioned measure has been used in this study; hence, stunting and wasting may also be used for this purpose. In addition, this study is specific to the Punjab province; the same may be done for other provinces and parts of Pakistan.
ARI:
Acute Respiratory Illness
HH:
Household
MDGS:
Millennium Development Goals
MICS:
Multiple Indicator Cluster Survey
NNS:
National Nutrition Survey
UN:
United Nations
UNICEF:
United Nations Children's Fund
MICS. Multiple Indicator Cluster Survey. Lahore: Punjab Bureau of Statistics; 2014.
Unicef. The state of the World's children. New York: Oxford University Press; 1998.
PDHS. (2012–13). Pakistan demographic and health survey.
The World Bank. (2015). http://data.worldbank.org/indicator/SH.STA.MALN.ZS. Retrieved November 24, 2015, from The World Bank: http://www.worldbank.org/.
Alasfoor, D., Traissac, P., Gartner, A., & Delpeuch, F. (2007, September). Determinants of persistent underweight among children, aged 6-35 months, after huge economic development and improvements in health services in Oman. J Health Popul Nutr, 25(3), 359-369.
Khan R, M. I., & Hayat, M. S. Factors causing malnutrition among under-five children in Bangladesh. Pak J Nutr. 2006;5(6):558–62.
Mengistu, K., Alemu, K., & Destaw, B. (2013). Prevalence of malnutrition and associated factors among children aged 6-59 months at Hidabu Abote District, north Shewa, Oromia regional state. Gondar: Institute of Public Health, College of Medicine and Health Sciences, University of Gondar, Gondar, Ethiopia doi:https://doi.org/10.4172/2161-0509.T1-001.
Sapkota V, Gurung C. Prevalence and predictors of underweight, stunting and wasting in under-five children. JNHR. 2009;7(15):120–6.
Wolde, T., Belache, T., & Birhanu, T. (2014). Prevalence of Undernutrition and determinant factors among preschool children in Hawassa, Southern Ethiopia Food Science and Quality Management, 29.
Anderson A, Bignell W, Winful S, Soyiri I, Steiner-Asiedu M. Risk factors for malnutrition among children 5-years and younger in the Akuapim-North District in theEastern region of Ghana. Curr Res J. 2010;2(3):183–8.
Islam MM, Alam M, Tariquzaman M, Kabir MA, Pervin R. Predictors of the number of under-five malnourished children in Bangladesh: application of the generalized Poisson regression model. BMC Public Health. 2013;13(11) Retrieved from http://www.biomedcentral.com/1471-2458/13/11.
Kisiangani I, Mbakaya C, Mekokha A, Magu D. Prevalence of malnutrition among preschool children (6–59 months) in Western Province, Kenya. J Public Health Epidemiol. 2014;6(11):398–406. https://doi.org/10.5897/JPHE2014.0660.
Lasisi KE, Adebayo S, Abdulhamid B. Analysis of Nigerian children underweight nutritional status using Geoadditive cox models with Gaussian and binomial links. Afr Dev Resourc Res Institut J. 2014;10(2) Retrieved from http://www.journals.adrri.org/.
Nzala S, Siziya S, Babaniyi O, Songolo P, Muula A, Rudatsikira E. Demographic, cultural and environment factors associated with frequency and severity of malnutrition among Zambian children less than five years of age. J Public Health Epidemiol. 2011;3(8):362–70 Retrieved from http://www.academicjournals.org.
Zewdu S. Magnitude and factors associated with malnutrition of children under-five years of age in rural Kebeles of Haramaya, Ethiopia. Harar Bull Health Sci. 2012:221–32.
Kandala, N.-B., Fahrmeir, L., Klasen, S., & Priebe, J. (2008). Geo-additive models of childhood Undernutrition in three sub-Saharan African countries. University of Warwick. Retrieved from https://doi.org/10.1002/psp.524.
Lasisi KE, Nwaosu S, Abdulazeez KA. Geoadditive cox models with Gaussian and binomial links for the analysis of wasting status of Nigerian children. J Nat Sci Res. 2014:4.
Belitz, C. (2007). Model selection in generalised structured additive regression models. MÄunchen.
Fahrmeir L, Kneib T, Lang S. Penalized structured additive regression for space-time data: a Bayesian perspective. Stat Sin. 2004;14:731–61.
Kneib, & Fahrmeir. (2005). Structured Additive Regression for Categorical Space-time Data: A mixed model approach. Biometrics, to appear.
Kneib, T. (2005). Mixed model based inference in structured additive regression. Munchen.
Unicef. (2015). Water and sanitation. Retrieved from Unicef: http://data.unicef.org/water-sanitation/water.html.
MICS. Multiple Indicator cluster survey. Lahore: Punjab Bureau Of Statistics; 2007.
MICS. (2011). Multiple Indicator Cluster Survey Punjab. Lahore: Punjab Bureau of Statistics.
NNS. (2011). National Nutrition Survey. Aga Khan University, Pakistan; Planning Commission, Planning and Development Division, Government of Pakistan.
Unicef. Retrieved from Unicef: http://mics.unicef.org/surveys.
Filmer D, Pritchett LH. Estimating wealth effects without expenditure data – or tears: an application to educational enrolments in states of India. Demography. 2001;38(1):115–32.
Rutstein, S. (2008). The DHS wealth index: approaches for rural and urban areas. Demographic and Health Research Division, Macro International Inc..
Rutstein, S., & Johnson, K. (2004). The DHS wealth index. DHS comparative reports no. 6. Calverton, Maryland: ORC Macro, MEASURE DHS.
Cheema A, Khalid L, Patnam M. The geography of poverty: evidence from the Punjab. Lahore J Econ. 2008:163–88.
Gazdar, H. (1999). Review of Pakistan Poverty Data, Monograph Series # 9. Sustainable Development Policy Institute Islamabad.
Jamal H. The Punjab indices of multiple deprivation 2003–04 and 2007–08. Lahore: Planning & Development Department, Government of Punjab; 2011.
Malik, S. J. ( 2005). Agricultural growth and rural poverty: review of the evidence. Asian Development Bank.
Brezger K, Lang. BayesX: analyzing Bayesian structured additive regression models. J Stat Softw. 2005;14(11).
Sahlin K. Estimating convergence of Markov chain Monte Carlo simulations. Sweden: Mathematical Statistics, Stockholm University; 2011.
Brooks, S. P., & Roberts, G. O. (1998). Convergence assessment techniques for Markov chain Monte Carlo. Kluwer academic publishers, Dordrecht NL-3300 AA Netherlands.
Cowles MK, Carlin BP. Markov chain Monte Carlo convergence diagnostics: a comparative review. J Am Stat Assoc. 1996;91:883–904.
Plummer, M., Best, N., Cowles, K., & Vines, K. (2010). Coda: convergence diagnosis and output analysis for MCMC.
The authors are thankful to Mr. Muhammad Amin and Syed Wasim Abbas for their support in managing data files and developing the map. The authors are also thankful to the reviewers and editor for their efforts and expertise in refining this paper.
The MICS 2014 dataset is publicly available online [26]; public access to the database is open and no ethical approval is required to access and use these data.
Bureau of Statistics, Government of the Punjab Lahore, Lahore, Pakistan
Rizwan Farooq
Department of Statistics, GC University Lahore, Lahore, Pakistan
Hina Khan & Masood Amjad Khan
Department of Statistics, Faculty of Science, King Abdulaziz University, Jeddah, 21551, Saudi Arabia
Muhammad Aslam
Masood Amjad Khan
RF and MAK conceptualized the research topic and drafted its manuscript. RF mainly performed the data analysis. MK and HK significantly contributed to the writing process and interpretation. HK and MA revised the article critically and provided further inputs. RF and MAK finally structured the manuscript, collected the references and edited extensively before finalization. All authors read and approved the final manuscript.
Correspondence to Hina Khan.
Supplementary information 1 [27,28,29,30,31,32,33]
Supplementary information 2 [34,35,36,37,38]
Additional file 3
Figure B1: ACF for step = 1. Autocorrelation function plot at step = 1, created with R.
Figure B2: ACF for step = 10. Autocorrelation function plot at step = 10, created with R.
Farooq, R., Khan, H., Khan, M.A. et al. Socioeconomic and demographic factors determining the underweight prevalence among children under-five in Punjab. BMC Public Health 20, 1817 (2020). https://doi.org/10.1186/s12889-020-09675-5
BayesX
Fully Bayesian approach
Geo-additive models
Markov chain Monte Carlo
Wealth index quintile
Wasting
Health behavior, health promotion and society
Research Methods Knowledge Base
by Prof William M.K. Trochim
hosted by Conjoint.ly
Analysis Requirements
The design notation for the Non-Equivalent Groups Design (NEGD) shows that we have two groups, a program and comparison group, and that each is measured pre and post. The statistical model that we would intuitively expect could be used in this situation would have a pretest variable, posttest variable, and a dummy variable that describes which group the person is in. These three variables would be the input for the statistical analysis. We would be interested in estimating the difference between the groups on the posttest after adjusting for differences on the pretest. This is essentially the Analysis of Covariance (ANCOVA) model as described in connection with randomized experiments (see the discussion of Analysis of Covariance and how we adjust for pretest differences). There's only one major problem with this model when used with the NEGD – it doesn't work! Here, I'll tell you the story of why the ANCOVA model fails and what we can do to adjust it so it works correctly.
A Simulated Example
To see what happens when we use the ANCOVA analysis on data from a NEGD, I created a computer simulation to generate hypothetical data. I created 500 hypothetical persons, with 250 in the program and 250 in the comparison condition. Because this is a nonequivalent design, I made the groups nonequivalent on the pretest by adding five points to each program group person's pretest score. Then, I added 15 points to each program person's posttest score. When we take the initial 5-point advantage into account, we should find a 10 point program effect. The bivariate plot shows the data from this simulation.
I then analyzed the data with the ANCOVA model. Remember that the way I set this up I should observe approximately a 10-point program effect if the ANCOVA analysis works correctly. The results are presented in the table.
In this analysis, I put in three scores for each person: a pretest score (X), a posttest score (Y) and either a 0 or 1 to indicate whether the person was in the program (Z=1) or comparison (Z=0) group. The table shows the equation that the ANCOVA model estimates. The equation has the three values I put in, (X, Y and Z) and the three coefficients that the program estimates. The key coefficient is the one next to the program variable Z. This coefficient estimates the average difference between the program and comparison groups (because it's the coefficient paired with the dummy variable indicating what group the person is in). The value should be 10 because I put in a 10 point difference. In this analysis, the actual value I got was 11.3 (or 11.2818, to be more precise). Well, that's not too bad, you might say. It's fairly close to the 10-point effect I put in. But we need to determine if the obtained value of 11.2818 is statistically different from the true value of 10. To see whether it is, we have to construct a confidence interval around our estimate and examine the difference between 11.2818 and 10 relative to the variability in the data. Fortunately the program does this automatically for us. If you look in the table, you'll see that the third line shows the coefficient associated with the difference between the groups, the standard error for that coefficient (an indicator of variability), the t-value, and the probability value. All the t-value shows is that the coefficient of 11.2818 is statistically different from zero. But we want to know whether it is different from the true treatment effect value of 10. To determine this, we can construct a confidence interval around the t-value, using the standard error. We know that the 95% confidence interval is the coefficient plus or minus two times the standard error value. The calculation shows that the 95% confidence interval for our 11.2818 coefficient is 10.1454 to 12.4182. Any value falling within this range can't be considered different beyond a 95% level from our obtained value of 11.2818. But the true value of 10 points falls outside the range. In other words, our estimate of 11.2818 is significantly different from the true value. In still other words, the results of this analysis are biased – we got the wrong answer. In this example, our estimate of the program effect is significantly larger than the true program effect (even though the difference between 10 and 11.2818 doesn't seem that much larger, it exceeds chance levels). So, we have a problem when we apply the analysis model that our intuition tells us makes the most sense for the NEGD. To understand why this bias occurs, we have to look a little more deeply at how the statistical analysis works in relation to the NEGD.
$$y_{i}=18.7+.626 X_{i}+11.3 Z_{i}$$
$$\begin{array} {lrrrr} \text{Predictor}&\text{Coef}&\text {StErr}&\text{t}&\text{p}\\ \hline \text { Constant } & 18.714 & 1.969 & 9.50 & 0.000 \\ \text { pretest } & 0.62600 & 0.03864 & 16.20 & 0.000 \\ \text { Group } & 11.2818 & 0.5682 & 19.85 & 0.000 \end{array}$$
$$\begin{aligned} \mathrm{Cl}_{.95\left(\beta_{2}=10\right)} &=\beta_{2} \pm 2 \mathrm{SE}\left(\beta_{2}\right) \\ =& 11.2818 \pm 2(.5682) \\ =& 11.2818 \pm 1.1364 \end{aligned}$$
$$C I=10.1454 \text { to } 12.4182$$
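To make this concrete, here's a small simulation sketch in Python (not the original simulation program; the group sizes, error variances, and seed are illustrative assumptions). It sets up nonequivalent groups with pretest measurement error and fits the naive ANCOVA, which typically overestimates the 10-point effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 250                                     # persons per group, as in the text
group = np.repeat([0, 1], n)                # 0 = comparison, 1 = program
true_pre = rng.normal(50, 10, 2 * n)        # true pretest ability
true_pre[group == 1] += 5                   # 5-point pretest nonequivalence
post = true_pre + 10 * group + rng.normal(0, 5, 2 * n)   # true 10-point effect
pre_obs = true_pre + rng.normal(0, 10, 2 * n)            # pretest measurement error

# Naive ANCOVA: post = b0 + b1 * pretest + b2 * group
X = np.column_stack([np.ones(2 * n), pre_obs, group])
b0, b1, b2 = np.linalg.lstsq(X, post, rcond=None)[0]
print(f"pretest slope = {b1:.2f}, group effect = {b2:.2f}  (true effect = 10)")
```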
Why is the ANCOVA analysis biased when used with the NEGD? And, why isn't it biased when used with a pretest-posttest randomized experiment? Actually, there are several things happening to produce the bias, which is why it's somewhat difficult to understand (and counterintuitive). Here are the two reasons we get a bias:
pretest measurement error which leads to the attenuation or "flattening" of the slopes in the regression lines
group nonequivalence
The first problem actually also occurs in randomized studies, but it doesn't lead to biased treatment effects because the groups are equivalent (at least probabilistically). It is the combination of both these conditions that causes the problem. And, understanding the problem is what leads us to a solution in this case.
Regression and Measurement Error
We begin our attempt to understand the source of the bias by considering how error in measurement affects regression analysis. We'll consider three different measurement error scenarios to see what error does. In all three scenarios, we assume that there is no true treatment effect, that the null hypothesis is true. The first scenario is the case of no measurement error at all. In this hypothetical case, all of the points fall right on the regression lines themselves. The second scenario introduces measurement error on the posttest, but not on the pretest. The figure shows that when we have posttest error, we are dispersing the points vertically – up and down – from the regression lines. Imagine a specific case, one person in our study. With no measurement error the person would be expected to score on the regression line itself. With posttest measurement error, they would do better or worse on the posttest than they should. And, this would lead their score to be displaced vertically. In the third scenario we have measurement error only on the pretest. It stands to reason that in this case we would be displacing cases horizontally – left and right – off of the regression lines. For these three hypothetical cases, none of which would occur in reality, we can see how data points would be dispersed.
How Regression Fits Lines
Regression analysis is a least squares analytic procedure. The actual criterion for fitting the line is to fit it so that you minimize the sum of the squares of the residuals from the regression line. Let's deconstruct this sentence a bit. The key term is "residual." The residual is the vertical distance from the regression line to each point.
The graph shows four residuals, two for each group. Two of the residuals fall above their regression line and two fall below. What is the criterion for fitting a line through the cloud of data points? Take all of the residuals within a group (we'll fit separate lines for the program and comparison group). If they are above the line they will be positive and if they're below they'll be negative values. Square all the residuals in the group. Compute the sum of the squares of the residuals – just add them. That's it. Regression analysis fits a line through the data that yields the smallest sum of the squared residuals. How it does this is another matter. But you should now understand what it's doing. The key thing to notice is that the regression line is fit in terms of the residuals and the residuals are vertical displacements from the regression line.
How Measurement Error Affects Slope
Now we're ready to put the ideas of the previous two sections together. Again, we'll consider our three measurement error scenarios described above. When there is no measurement error, the slopes of the regression lines are unaffected. The figure shown earlier shows the regression lines in this no error condition. Notice that there is no treatment effect in any of the three graphs shown in the figure (there would be a treatment effect only if there was a vertical displacement between the two lines). Now, consider the case where there is measurement error on the posttest. Will the slopes be affected? The answer is no. Why? Because in regression analysis we fit the line relative to the vertical displacements of the points. Posttest measurement error affects the vertical dimension, and, if the errors are random, we would get as many residuals pushing up as down and the slope of the line would, on average, remain the same as in the null case. There would, in this posttest measurement error case, be more variability of data around the regression line, but the line would be located in the same place as in the no error case.
Now, let's consider the case of measurement error on the pretest. In this scenario, errors are added along the horizontal dimension. But regression analysis fits the lines relative to vertical displacements. So how will this affect the slope? The figure illustrates what happens. If there was no error, the lines would overlap as indicated for the null case in the figure. When we add in pretest measurement error, we are in effect elongating the horizontal dimension without changing the vertical. Since regression analysis fits to the vertical, this would force the regression line to stretch to fit the horizontally elongated distribution. The only way it can do this is by rotating around its center point. The result is that the line has been "flattened" or "attenuated" – the slope of the line will be lower when there is pretest measurement error than it should actually be. You should be able to see that if we flatten the line in each group by rotating it around its own center that this introduces a displacement between the two lines that was not there originally. Although there was no treatment effect in the original case, we have introduced a false or "pseudo" effect. The biased estimate of the slope that results from pretest measurement error introduces a phony treatment effect. In this example, it introduced an effect where there was none. In the simulated example shown earlier, it exaggerated the actual effect that we had constructed for the simulation.
Why Doesn't the Problem Occur in Randomized Designs?
So, why doesn't this pseudo-effect occur in the randomized Analysis of Covariance design? The next figure shows that even in the randomized design, pretest measurement error does cause the slopes of the lines to be flattened. But, we don't get a pseudo-effect in the randomized case even though the attenuation occurs. Why? Because in the randomized case the two groups are equivalent on the pretest – there is no horizontal difference between the lines. The lines for the two groups overlap perfectly in the null case. So, when the attenuation occurs, it occurs the same way in both lines and there is no vertical displacement introduced between the lines. Compare this figure to the one above. You should now see that the difference is that in the NEGD case above we have the attenuation of slopes and the initial nonequivalence between the groups. Under these circumstances the flattening of the lines introduces a displacement. In the randomized case we also get the flattening, but there is no displacement because there is no nonequivalence between the groups initially.
Summary of the Problem
So where does this leave us? The ANCOVA statistical model seemed at first glance to have all of the right components to correctly model data from the NEGD. But we found that it didn't work correctly – the estimate of the treatment effect was biased. When we examined why, we saw that the bias was due to two major factors: the attenuation of slope that results from pretest measurement error coupled with the initial nonequivalence between the groups. The problem is not caused by posttest measurement error because of the criterion that is used in regression analysis to fit the line. It does not occur in randomized experiments because there is no pretest nonequivalence. We might also guess from these arguments that the bias will be greater with greater nonequivalence between groups – the less similar the groups the bigger the problem. In real-life research, as opposed to simulations, you can count on measurement error on all measurements – we never measure perfectly. So, in nonequivalent groups designs we now see that the ANCOVA analysis that seemed intuitively sensible can be expected to yield incorrect results!
Now that we understand the problem in the analysis of the NEGD, we can go about trying to fix it. Since the problem is caused in part by measurement error on the pretest, one way to deal with it would be to address the measurement error issue. If we could remove the pretest measurement error and approximate the no pretest error case, there would be no attenuation or flattening of the regression lines and no pseudo-effect introduced. To see how we might adjust for pretest measurement error, we need to recall what we know about measurement error and its relation to reliability of measurement.
Recall from reliability theory and the idea of true score theory that reliability can be defined as the ratio:
$$\frac{\operatorname{var}(T)}{\operatorname{var}(T)+\operatorname{var}(e)}$$
where T is the true ability or level on the measure and e is measurement error. It follows that the reliability of the pretest is directly related to the amount of measurement error. If there is no measurement error on the pretest, the var(e) term in the denominator is zero and reliability = 1. If the pretest is nothing but measurement error, the var(T) term is zero and the reliability is 0. That is, if the measure is nothing but measurement error, it is totally unreliable. If half of the measure is true score and half is measurement error, the reliability is .5. This shows that there is a direct relationship between measurement error and reliability – reliability reflects the proportion of measurement error in your measure. Since measurement error on the pretest is a necessary condition for bias in the NEGD (if there is no pretest measurement error there is no bias even in the NEGD), if we correct for the measurement error we correct for the bias. But, we can't see measurement error directly in our data (remember, only God can see how much of a score is True Score and how much is error). However, we can estimate the reliability. Since reliability is directly related to measurement error, we can use the reliability estimate as a proxy for how much measurement error is present. And, we can adjust pretest scores using the reliability estimate to correct for the attenuation of slopes and remove the bias in the NEGD.
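Continuing the illustrative simulation sketch above, the pretest reliability implied by the assumed variances can be checked directly (with a true-score variance of about 100 and an error variance of 100, it comes out near .5):

```python
# Continuing the sketch above: pretest reliability = var(T) / (var(T) + var(e)).
var_T = true_pre.var()
var_e = (pre_obs - true_pre).var()
print(f"pretest reliability ≈ {var_T / (var_T + var_e):.2f}")   # roughly 0.5 here
```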
The Reliability-Corrected ANCOVA
We're going to solve the bias in ANCOVA treatment effect estimates for the NEGD using a "reliability" correction that will adjust the pretest for measurement error. The figure shows what a reliability correction looks like. The top graph shows the pretest distribution as we observe it, with measurement error included in it. Remember that I said above that adding measurement error widens or elongates the horizontal dimension in the bivariate distribution. In the frequency distribution shown in the top graph, we know that the distribution is wider than it would be if there was no error in measurement. The second graph shows that what we really want to do in adjusting the pretest scores is to squeeze the pretest distribution inwards by an amount proportionate to the amount that measurement error elongated or widened it. We will do this adjustment separately for the program and comparison groups. The third graph shows what effect "squeezing" the pretest would have on the regression lines – it would increase their slopes, rotating them back to where they truly belong and removing the bias that was introduced by the measurement error. In effect, we are doing the opposite of what measurement error did so that we can correct for the measurement error.
All we need to know is how much to squeeze the pretest distribution in to correctly adjust for measurement error. The answer is in the reliability coefficient. Since reliability is an estimate of the proportion of your measure that is true score relative to error, it should tell us how much we have to "squeeze." In fact, the formula for the adjustment is very simple:
$$X_{\mathrm{adj}}=\bar{X}+r(X-\bar{X})$$
Xadj = adjusted pretest value,
X̄ = pretest mean for the group,
X = original pretest value,
r = reliability
The idea in this formula is that we are going to construct new pretest scores for each person. These new scores will be "adjusted" for pretest unreliability by an amount proportional to the reliability. Each person's score will be closer to the pretest mean for that group. The formula tells us how much closer. Let's look at a few examples. First, let's look at the case where there is no pretest measurement error. Here, reliability would be 1. In this case, we actually don't want to adjust the data at all. Imagine that we have a person with a pretest score of 40, where the mean of the pretest for the group is 50. We would get an adjusted score of:
$$\begin{aligned} \mathrm{X}_{adj} &=50 + 1(40-50) \\ \mathrm{X}_{adj} &=50 + 1(-10) \\ \mathrm{X}_{adj} &=50 - 10 \\ \mathrm{X}_{adj} &=40 \end{aligned}$$
Or, in other words, we wouldn't make any adjustment at all. That's what we want in the no measurement error case.
Now, let's assume that reliability was relatively low, say .5. For a person with a pretest score of 40 where the group mean is 50, we would get:
$$\begin{aligned} \mathrm{X}_{adj} &=50 + .5(40-50) \\ \mathrm{X}_{adj} &=50 + .5(-10) \\ \mathrm{X}_{adj} &=50 - 5 \\ \mathrm{X}_{adj} &=45 \end{aligned}$$
Or, when reliability is .5, we would move the pretest score halfway in towards the mean (halfway from its original value of 40 towards the mean of 50, or to 45).
Finally, let's assume that for the same case the reliability was stronger at .8. The reliability adjustment would be:
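$$\begin{aligned} \mathrm{X}_{adj} &=50 + .8(40-50) \\ \mathrm{X}_{adj} &=50 + .8(-10) \\ \mathrm{X}_{adj} &=50 - 8 \\ \mathrm{X}_{adj} &=42 \end{aligned}$$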
That is, with reliability of .8 we would want to move the score in 20% of the way towards its mean (because if reliability is .8, the amount of the score due to error is 1 - .8 = .2).
You should be able to see that if we make this adjustment to all of the pretest scores in a group, we would be "squeezing" the pretest distribution in by an amount proportionate to the measurement error (1 - reliability). It's important to note that we need to make this correction separately for our program and comparison groups.
We're now ready to take this adjusted pretest score and substitute it for the original pretest score in our ANCOVA model:
$$y_{i}=\beta_{0}+\beta_{1} X_{adj}+\beta_{2} Z_{i}+e_{i}$$
yi = outcome score for the ith unit,
β0 = coefficient for the intercept,
β1 = pretest coefficient,
β2 = mean difference for treatment,
Xadj = covariate adjusted for unreliability,
Zi = dummy variable for treatment (0 = control, 1 = treatment),
ei = residual for the ith unit
Notice that the only difference is that we've changed the X in the original ANCOVA to the term Xadj.
The Simulation Revisited
So, let's go see how well our adjustment works. We'll use the same simulated data that we used earlier. The results are:
$$y_{i}=-3.14+1.06 X_{a d j}+9.30 Z_{i}$$
$$\begin{array}{lrrcl} \text { Predictor } & \text { Coef } & \text { StErr } & \text { t } & \text { p } \\ \hline \text { Constant } & -3.141 & 3.300 & -0.95 & 0.342 \\ \text { adjpre } & 1.06316 & 0.06557 & 16.21 & 0.000 \\ \text { Group } & 9.3048 & 0.6166 & 15.09 & 0.000 \end{array}$$
$$\begin{aligned} \mathrm{Cl}_{.95\left(\beta_{2}=10\right)} &=\beta_{2} \pm 2 \mathrm{SE}\left(\beta_{2}\right) \\ =& 9.3048 \pm 2(.6166) \\ =& 9.3048 \pm 1.2332 \end{aligned}$$
$$C I=8.0716 \text { to } 10.5380$$
This time we get an estimate of the treatment effect of 9.3048 (instead of 11.2818). This estimate is closer to the true value of 10 points that we put into the simulated data. And, when we construct a 95% confidence interval for our adjusted estimate, we see that the true value of 10 falls within the interval. That is, the analysis estimated a treatment effect that is not statistically different from the true effect – it is an unbiased estimate.
You should also compare the slope of the lines in this adjusted model with the original slope. Now, the slope is nearly 1 at 1.06316, whereas before it was .626 – considerably lower or "flatter." The slope in our adjusted model approximates the expected true slope of the line (which is 1). The original slope showed the attenuation that the pretest measurement error caused.
So, the reliability-corrected ANCOVA model is used in the statistical analysis of the NEGD to correct for the bias that would occur as a result of measurement error on the pretest.
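Continuing the illustrative Python sketch from above (using the reliability of .5 assumed there), the correction and re-estimation might look like this; the group effect estimate moves back toward the true value of 10:

```python
# Continuing the sketch above: reliability-corrected ANCOVA.
r = 0.5                                     # assumed pretest reliability (see above)
pre_adj = pre_obs.copy()
for g in (0, 1):                            # squeeze separately within each group
    m = pre_obs[group == g].mean()
    pre_adj[group == g] = m + r * (pre_obs[group == g] - m)

X_adj = np.column_stack([np.ones(2 * n), pre_adj, group])
b0, b1, b2 = np.linalg.lstsq(X_adj, post, rcond=None)[0]
print(f"adjusted slope = {b1:.2f}, group effect = {b2:.2f}  (true effect = 10)")
```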
Which Reliability To Use?
There's really only one more major issue to settle in order to finish the story. We know from reliability theory that we can't calculate the true reliability, we can only estimate it. There are a variety of reliability estimates and they're likely to give you different values. Cronbach's Alpha tends to be a high estimate of reliability. The test-retest reliability tends to be a lower-bound estimate of reliability. So which do we use in our correction formula? The answer is: both! When analyzing data from the NEGD it's safest to do two analyses, one with an upper-bound estimate of reliability and one with a lower-bound one. If we find a significant treatment effect estimate with both, we can be fairly confident that we would have found a significant effect in data that had no pretest measurement error.
This certainly doesn't feel like a very satisfying conclusion to our rather convoluted story about the analysis of the NEGD, and it's not. In some ways, I look at this as the price we pay when we give up random assignment and use intact groups in a NEGD – our analysis becomes more complicated as we deal with adjustments that are needed, in part, because of the nonequivalence between the groups. Nevertheless, there are also benefits in using nonequivalent groups instead of randomly assigning. You have to decide whether the tradeoff is worth it.
Knowledge Base written by Prof William M.K. Trochim. Changes and additions by Conjoint.ly. This page was last modified on 18 Aug 2020.
The ASKAP/EMU Source Finding Data Challenge
Data Analysis Pipelines and Software
A. M. Hopkins, M. T. Whiting, N. Seymour, K. E. Chow, R. P. Norris, L. Bonavera, R. Breton, D. Carbone, C. Ferrari, T. M. O. Franzen, H. Garsden, J. González-Nuevo, C. A. Hales, P. J. Hancock, G. Heald, D. Herranz, M. Huynh, R. J. Jurek, M. López-Caniego, M. Massardi, N. Mohan, S. Molinari, E. Orrù, R. Paladino, M. Pestalozzi, R. Pizzo, D. Rafferty, H. J. A. Röttgering, L. Rudnick, E. Schisano, A. Shulevski, J. Swinbank, R. Taylor, A. J. van der Horst
Journal: Publications of the Astronomical Society of Australia / Volume 32 / 2015
Published online by Cambridge University Press: 19 October 2015, e037
The Evolutionary Map of the Universe (EMU) is a proposed radio continuum survey of the Southern Hemisphere up to declination + 30°, with the Australian Square Kilometre Array Pathfinder (ASKAP). EMU will use an automated source identification and measurement approach that is demonstrably optimal, to maximise the reliability and robustness of the resulting radio source catalogues. As a step toward this goal we conducted a "Data Challenge" to test a variety of source finders on simulated images. The aim is to quantify the accuracy and limitations of existing automated source finding and measurement approaches. The Challenge initiators also tested the current ASKAPsoft source-finding tool to establish how it could benefit from incorporating successful features of the other tools. As expected, most finders show completeness around 100% at ≈ 10σ dropping to about 10% by ≈ 5σ. Reliability is typically close to 100% at ≈ 10σ, with performance to lower sensitivities varying between finders. All finders show the expected trade-off, where a high completeness at low signal-to-noise gives a corresponding reduction in reliability, and vice versa. We conclude with a series of recommendations for improving the performance of the ASKAPsoft source-finding tool.
EMU: Evolutionary Map of the Universe
Ray P. Norris, A. M. Hopkins, J. Afonso, S. Brown, J. J. Condon, L. Dunne, I. Feain, R. Hollow, M. Jarvis, M. Johnston-Hollitt, E. Lenc, E. Middelberg, P. Padovani, I. Prandoni, L. Rudnick, N. Seymour, G. Umana, H. Andernach, D. M. Alexander, P. N. Appleton, D. Bacon, J. Banfield, W. Becker, M. J. I. Brown, P. Ciliegi, C. Jackson, S. Eales, A. C. Edge, B. M. Gaensler, G. Giovannini, C. A. Hales, P. Hancock, M. T. Huynh, E. Ibar, R. J. Ivison, R. Kennicutt, Amy E. Kimball, A. M. Koekemoer, B. S. Koribalski, Á. R. López-Sánchez, M. Y. Mao, T. Murphy, H. Messias, K. A. Pimbblet, A. Raccanelli, K. E. Randall, T. H. Reiprich, I. G. Roseboom, H. Röttgering, D. J. Saikia, R. G. Sharp, O. B. Slee, Ian Smail, M. A. Thompson, J. S. Urquhart, J. V. Wall, G.-B. Zhao
Journal: Publications of the Astronomical Society of Australia / Volume 28 / Issue 3 / 2011
EMU is a wide-field radio continuum survey planned for the new Australian Square Kilometre Array Pathfinder (ASKAP) telescope. The primary goal of EMU is to make a deep (rms ∼ 10 μJy/beam) radio continuum survey of the entire Southern sky at 1.3 GHz, extending as far North as +30° declination, with a resolution of 10 arcsec. EMU is expected to detect and catalogue about 70 million galaxies, including typical star-forming galaxies up to z ∼ 1, powerful starbursts to even greater redshifts, and active galactic nuclei to the edge of the visible Universe. It will undoubtedly discover new classes of object. This paper defines the science goals and parameters of the survey, and describes the development of techniques necessary to maximise the science return from EMU.
The stellar populations in the central parsecs of galactic bulges
Marc Sarzi, H.-W. Rix, J. C. Shields, L. C. Ho, A. J. Barth, G. Rudnick, A. V. Filippenko, W. L. W. Sargent
Journal: Proceedings of the International Astronomical Union / Volume 2004 / Issue IAUS222 / March 2004
Published online by Cambridge University Press: 24 November 2004, pp. 145-148
We present the analysis of Hubble Space Telescope blue spectra at intermediate spectral resolution for the nuclei of 23 nearby disk galaxies. These objects were selected to have nebular emission in their nuclei, and span a range of emission-line classifications as well as Hubble types. Here we focus on the stellar population as revealed by the continuum spectral energy distribution measured within the central 0.″13 (∼8 pc) of these galaxies. The data were modeled with linear combinations of single-age stellar population synthesis models. The large majority (∼80%) of the surveyed nuclei have spectra whose features are consistent with a predominantly old ($\gtrsim 5 \times 10^9$ yr) stellar population. Approximately 25% of these nuclei show evidence of a component with age younger than 1 Gyr, with the incidence of these stars related to the nebular classification. Successful model fits imply an average reddening corresponding to AV∼0.4 mag and stellar metallicity of (1–2.5)$Z_\odot$. Our findings reinforce the picture wherein Seyfert nuclei and the majority of low-ionization nuclear emission-line regions (LINERs) are predominantly accretion-powered, and suggest that much of the central star formation in HII nuclei is actually circumnuclear.
ESO distant cluster survey: spectroscopy
C. Halliday, B. Milvang-Jensen, S. Poirier, B. M. Poggianti, P. Jablonka, A. Aragón-Salamanca, R. Pelló, R. P. Saglia, G. De Lucia, L. Simard, D. I. Clowe, G. Rudnick, S. D. M. White
Journal: Proceedings of the International Astronomical Union / Volume 2004 / Issue IAUC195 / March 2004
Published online by Cambridge University Press: 06 October 2004, pp. 236-238
We present first results for spectroscopic observations of galaxies in 4 clusters at $z=0.7-0.8$ and one cluster at $z=0.5$ observed by the ESO Distant Cluster Survey (EDisCS). Our spectroscopic catalogues contain 236 cluster members of our 5 clusters, and the number of members per cluster ranges from 30 to 67. Our cluster velocity dispersions are between $\sim$400 and over 1000 $\rm{km~s}^{-1}$. Galaxy redshift distributions are found to be non-Gaussian and we find evidence for significant substructure in two clusters, one at $z \sim 0.79$ and another at $z \sim 0.54$; both clusters have velocity dispersions exceeding 1000 $\rm{km~s}^{-1}$. These systems have clearly not yet virialised at these epochs in qualitative agreement with CDM scenarios and their cluster velocity dispersions should not be used in the measurement of cluster mass. Our clusters have a wide range of different cluster velocity dispersions, richnesses and substructuring, and our spectroscopic data set is allowing a comprehensive insight into cluster galaxy evolution as a function of redshift and environment.
Observing the build–up of the colour-magnitude relation at redshift $\sim0.8$
G. De Lucia, B. M. Poggianti, A. Aragón-Salamanca, D. Clowe, C. Halliday, P. Jablonka, B. Milvang-Jensen, R. Pelló, S. Poirier, G. Rudnick, R. Saglia, L. Simard, S. D. M. White, and the EDisCS collaboration
We analyse the rest–frame (U$-$V) colour–magnitude relation for 2 clusters at redshift 0.7 and 0.8, drawn from the ESO Distant Cluster Survey. By comparing them with the population of red galaxies in the Coma cluster, we show that the high redshift clusters exhibit a deficit of passive faint red galaxies. Our results show that the red–sequence population cannot be explained in terms of a monolithic and synchronous formation scenario. A large fraction of faint passive galaxies in clusters today has moved onto the red sequence relatively recently as a consequence of the fact that their star formation activity has come to an end at $z<0.8$.
Morphologies of Solid Surfaces Produced far from Equilibrium
R. Stanley Williams, Robijn Bruinsma, Joseph Rudnick
We present the first quantitative experimental study of the morphology of amorphous solid surfaces formed by non-equilibrium processes and compare the results with theories developed to explain the formation of such surfaces.
Lianhui Li* , Guanying Xu* and Hongguang Wang**
Supplier Evaluation in Green Supply Chain: An Adaptive Weight D-S Theory Model Based on Fuzzy-Rough-Sets-AHP Method
Abstract: Supplier evaluation is of great significance in green supply chain management. Influenced by factors such as economic globalization and sustainable development, a holistic index framework is difficult to establish in the green supply chain. Furthermore, the initial index values of candidate suppliers are often characterized by uncertainty and incompleteness, and the index weights are variable. To solve these problems, an index framework is established after comprehensive consideration of the major factors. Then an adaptive weight D-S theory model is put forward, and a fuzzy-rough-sets-AHP method is proposed to solve the adaptive weights in the index framework. A case study and a comparison with TOPSIS show that the adaptive weight D-S theory model in this paper is feasible and effective.
Keywords: Adaptive Weight , D-S Theory , Fuzzy-Rough-Sets-AHP , Green Supply Chain , Supplier Evaluation
With the fast growth of economic globalization, resources and the environment are now facing enormous pressure. Against this background, green supply chain management (GSCM) has become very important [1]. The green supply chain (GSC) was put forward in 1996 by the Manufacturing Research Center (MRC) of Michigan State University in research on environmentally responsible manufacturing [2,3]. GSCM covers many topics, such as green supplier evaluation (GSE), green product design (GPD), green production (GP), and green marketing and waste recycling (GMWR). As the upstream of the whole supply chain, the supplier's role in protecting the environment and saving costs can be transmitted to every downstream part through the supply chain, so as to improve the compatibility of the supply chain with the environment [4]. Manufacturing enterprises have begun to measure the green degree of their suppliers, and one of the key steps in measuring the green degree of an enterprise is choosing the best supplier as a long-term partner [5]. By choosing a suitable green supplier, enterprises can largely improve the resource recycling rate, reduce pollutant emissions, and provide green control and processing for raw materials supplied by suppliers. Thus, the whole supply chain becomes green, and a green strategic partnership is established with the supplier to achieve sustainable development. In general, one of the keys to building a green supply chain is to choose a suitable supplier.
The rest of this paper is organized as follows. In Section 2, an overview of the existing research on supplier evaluation in GSC is provided. In Section 3, an adaptive weight D-S evidence theory model based on the fuzzy-rough-sets-AHP method is put forward for supplier evaluation in GSC. In Section 4, a bearing cage supplier evaluation case is given. Finally, the paper is concluded in Section 5.
2. Literature Review
Many significant studies on supplier evaluation in GSC can be found in the existing literature. The representative research mainly focuses on the following two aspects.
One is the application of a single method to the supplier evaluation problem. Buyukozkan and Cifci [6] proposed a fuzzy analytic network process (ANP) method based on a multi-person decision-making schema under incomplete preference relationships for vendor selection. Based on the application of rough set theory to study the relations among organizational properties, supplier development program involvement properties, and performance outcome properties, Bai and Sarkis [7] put forward a formal model for green supplier selection. Tseng and Chiu [8] determined the weights of criteria and alternatives according to both qualitative and quantitative information and sorted alternative suppliers based on a grey relational analysis (GRA). To obtain the best green supplier for a plastic manufacturing company in Singapore, Kannan et al. [9] put forward a fuzzy axiomatic design (FAD) method. To evaluate the environmental performance of suppliers, Awasthi et al. [10] proposed a fuzzy multi-criteria method (FmCM). By adding green criteria into the criteria framework of supplier selection, Yeh and Chuang [11] proposed an optimum mathematical planning model (OMPM) for green partner selection. Wu et al. [12] proposed a fuzzy linguistic decision-making method to solve the problem of selecting a green supplier.
The other is the integrated application of two or more methods for supplier evaluation. Li and Zhao [13] built an assessment model using a threshold method and gray correlation analysis (GCA) for vehicle component supplier selection. Yan [14] used a genetic algorithm (GA) and AHP to realize the dynamic adjustment of index weights in green supplier selection. Kuo et al. [15] proposed a hybrid approach based on an artificial neural network (ANN), data envelopment analysis (DEA), and ANP for green supplier selection. Kuo and Lin [16] put forward a supplier selection approach based on ANP and DEA with the consideration of green indicators due to environmental protection issues. Based on the fuzzy decision-making trial and evaluation laboratory (DEMATEL) model, ANP, and the technique for order performance by similarity to ideal solution (TOPSIS), Buyukozkan and Cifci [17] proposed a hybrid fuzzy multi-criteria decision-making (MCDM) approach for green supplier evaluation. By combining AHP and TOPSIS, Luo and Peng [18] proposed an integrated model for both the evaluation and selection of green suppliers.
The above two kinds of methods have a theoretical basis and practical value, but they also have some limitations. Fuzzy AHP is highly subjective in determining the index weights. The calculation process of neural networks is complex and redundant, which can compromise the accuracy of the results. The TOPSIS method is convenient to compute and widely applicable, but information may be lost in the evaluation process and the results are not objective enough. Additionally, each evaluation expert is required to give personal subjective evaluation information when considering the same evaluation index set, and when different experts compare multiple indicators on the same level, contradictory or inconsistent judgments easily arise. Because of the limitations of the experts' understanding of supplier capabilities, the evaluation index values are often characterized by uncertainty and incompleteness. Moreover, the evaluation index weights are obviously variable when the demand changes or the preferences of the evaluation experts differ.
Therefore, we propose an adaptive weight D-S theory model in this paper to solve the uncertainty and incompleteness problems of index values in supplier evaluation in GSC. The adaptive weights of the evaluation indexes are determined by our fuzzy-rough-sets-AHP method.
3. Adaptive Weight D-S Theory Model for Supplier Evaluation
3.1 Establishment of Index Framework
For supplier evaluation in the green supply chain, building a comprehensive index framework is of great significance. On the one hand, product attributes are the main embodiment of a supplier's ability; on the other hand, comprehensive ability gives strong support to the product attributes of a supplier. Here, comprehensive ability mainly comprises internal competitiveness, external competitiveness, and cooperation ability.
Internal competitiveness can mainly be divided into innovation ability, manufacturing capacity, and agility. Because a supplier is not isolated in the supply chain, it is unavoidably limited by its external competitiveness. External competitiveness can mainly be divided into economic environment, geographical environment, social environment, and legal environment. Cooperation ability can mainly be divided into technical compatibility degree, cultural compatibility degree, information platform compatibility degree, and reputation.
Therefore, the index framework of supplier evaluation in GSC is built as shown in Fig. 1. It can be represented as a criterion set {C1, C2, C3, C4}. Here, C1 stands for product attribute, C2 stands for internal competitiveness, C3 stands for external competitiveness, and C4 stands for cooperation ability. Among them, C1 = {C11, C12, C13, C14}. In other words, C1 is divided into four indexes: C11, C12, C13, and C14. Here, C11 stands for cost, C12 stands for quality, C13 stands for service, and C14 stands for flexibility.
Four criterions are divided into two types as follows. (1) Comprehensive qualitative type: C1. (2) Quantitative type: C2, C3, and C4. For comprehensive qualitative type criterion, its value is determined by its subordinate indexes. For quantitative type criterion, its value is obtained by expert score method. Similarly, four indexes of C1 are divided into two types as follows. (1) Quantitative type: C11 and C12. (2) Direct qualitative type: C13 and C14.
Fig. 1. The index framework.
3.2 Determination of the Adaptive Weight
The AHP method [19], which was put forward by Thomas L. Saaty, can not only make clear the hierarchical structure of the components of a complex problem, but also verify the consistency of the results. Therefore, it has been widely applied in the weighting of multi-attribute decision-making problems [20-22]. The traditional AHP uses exact numbers to represent the relative importance between indexes. The evaluation of relative importance between indexes in supplier evaluation by experts depends on personal judgment and subjective experience, so using exact numbers to represent the relative importance between indexes is unjustified to some extent.
Fuzzy numbers can express the inherent uncertainty of an expert's preference. Additionally, the evaluations of relative importance between indexes given by multiple experts are often indistinguishable when integrating the opinions of all experts. Instead of a membership function, a rough boundary interval [21,23] can represent this indistinguishability as a set boundary area and can better integrate the opinions of all evaluation experts. Accordingly, a fuzzy-rough-sets-AHP method is designed to solve the adaptive weights of the evaluation indexes.
We use U to represent a domain, which is actually a nonempty finite set of objects, and Y is any object in U. In U, all objects are divided into n partitions: S1, S2, …, Sn. If these n partitions have the order [TeX:] $$S_{1}<S_{2}<\ldots<S_{n}$$, the upper and lower approximation sets of any partition Si (1≤i≤n) can be defined as follows:
[TeX:] $$\begin{array}{l}{\overline{A S}\left(S_{i}\right)=\left\{Y \in K | K \subseteq U / R(Y) \wedge K \geq S_{i}\right\}} \\ {\underline{A S}\left(S_{i}\right)=\left\{Y \in K | K \subseteq U / R(Y) \wedge K \leq S_{i}\right\}}\end{array}$$
where U/R(Y) represents the partition of the indistinct relationship R(Y) in U.
According to the above definition, any ambiguous partition Si in U can be represented by its rough boundary interval RN(Si). RN(Si) consists of its rough lower limit [TeX:] $$\underline{L}\left(S_{i}\right)$$ and rough upper limit [TeX:] $$\overline{L}\left(S_{i}\right)$$ which are defined as follows:
[TeX:] $$\underline{L}\left(S_{i}\right)=\frac{\sum_{Y \in \underline{AS}\left(S_{i}\right)} R(Y)}{\underline{N}\left(S_{i}\right)}$$
[TeX:] $$\overline{L}\left(S_{i}\right)=\frac{\sum_{Y \in \overline{AS}\left(S_{i}\right)} R(Y)}{\overline{N}\left(S_{i}\right)}$$
where [TeX:] $$\underline{N}\left(S_{i}\right)$$ is the number of objects in the lower approximation set of Si and [TeX:] $$\overline{N}\left(S_{i}\right)$$ is the number of objects in the upper approximation set of Si.
As can be seen, an ambiguous partition in the domain can be represented by a rough boundary interval containing a rough lower limit and a rough upper limit as follows:
[TeX:] $$R N\left(S_{i}\right)=\left[\underline{L}\left(S_{i}\right), \overline{L}\left(S_{i}\right)\right]$$
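For illustration, a short Python sketch (with hypothetical expert ratings, not data from the case study) computes the rough boundary interval of each distinct rating according to the above definitions:

```python
def rough_interval(ratings, x):
    """Rough boundary interval of value x within a set of ordered expert
    ratings: the rough lower limit is the mean of all ratings <= x and the
    rough upper limit is the mean of all ratings >= x."""
    lower = [y for y in ratings if y <= x]
    upper = [y for y in ratings if y >= x]
    return sum(lower) / len(lower), sum(upper) / len(upper)

ratings = [2, 3, 3, 5]                       # hypothetical judgments of q = 4 experts
for x in sorted(set(ratings)):
    print(x, rough_interval(ratings, x))     # e.g. 3 -> (~2.67, ~3.67)
```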
We start from the bottom layer of the index framework shown in Fig. 1. There are q experts. The index set is [TeX:] $$\left\{C_{11}, C_{12}, \ldots, C_{1 l}\right\}$$; here, l = 4.
Step 1: According to the evaluation of expert [TeX:] $$k(k=1,2, \ldots, q) \text { on }\left\{C_{11}, C_{12}, \ldots, C_{1 l}\right\}$$, the fuzzy reciprocal judgment matrix Ek is constructed as follows:
[TeX:] $$E^{k}=\left[\begin{array}{cccc}{(1,1,1,1)} & {e_{1,2}^{k}} & {\cdots} & {e_{1, l}^{k}} \\ {e_{2,1}^{k}} & {(1,1,1,1)} & {\cdots} & {e_{2, l}^{k}} \\ {\vdots} & {\vdots} & {} & {\vdots} \\ {e_{l, 1}^{k}} & {e_{l, 2}^{k}} & {\cdots} & {(1,1,1,1)}\end{array}\right]$$
where [TeX:] $$e_{i, j}^{k}$$ represents the score of index C1j compared with index C1i evaluated by expert k, here i,j=1,2,...,l and [TeX:] $$i \neq j . \quad e_{i, j}^{k}=\left(a_{i, j}^{k}, b_{i, j}^{k}, c_{i, j}^{k}, d_{i, j}^{k}\right)$$ is a trapezoidal fuzzy number and [TeX:] $$a_{i, j}^{k}, b_{i, j}^{k}, c_{i, j}^{k} \quad \text { and } \quad d_{i, j}^{k}$$ [TeX:] $$\left(a_{i, j}^{k} \leq b_{i, j}^{k} \leq c_{i, j}^{k} \leq d_{i, j}^{k}\right)$$ are all positive real numbers. Then we verify the consistency of Ek. If it is qualified, do the next step; otherwise, redo this step.
Step 2: Ek is split into ak, bk, ck, dk. The expression of ak is as follows:
[TeX:] $$a^{k}=\left[\begin{array}{cccc}{1} & {a_{1,2}^{k}} & {\cdots} & {a_{1, l}^{k}} \\ {a_{2,1}^{k}} & {1} & {\cdots} & {a_{2, l}^{k}} \\ {\vdots} & {\vdots} & {} & {\vdots} \\ {a_{l, 1}^{k}} & {a_{l, 2}^{k}} & {\cdots} & {1}\end{array}\right]$$
Step 3: Based on a1,a2,…, aq, the rough group decision matrix is constructed as follows:
[TeX:] $$a=\left[\begin{array}{cccc}{1} & {a_{1,2}} & {\cdots} & {a_{1, l}} \\ {a_{2,1}} & {1} & {\cdots} & {a_{2, l}} \\ {\vdots} & {\vdots} & {} & {\vdots} \\ {a_{l, 1}} & {a_{l, 2}} & {\cdots} & {1}\end{array}\right]$$
where [TeX:] $$a_{i, j}=\left\{a_{i, j}^{1}, a_{i, j}^{2}, \ldots, a_{i, j}^{q}\right\}$$, here [TeX:] $$i, j=1,2, \ldots, l \text { and } i \neq j$$.
The rough boundary interval of [TeX:] $$a_{i, j}^{k} \in a_{i, j}(k=1,2, \ldots, q)$$ is obtained as follows:
[TeX:] $$R N\left(a_{i, j}^{k}\right)=\left[a_{i, j}^{k,-}, a_{i, j}^{k,+}\right]$$
where [TeX:] $$a_{i, j}^{k,-}$$ is the rough lower limit of [TeX:] $$a_{i, j}^{k}$$ in set [TeX:] $$a_{i, j}$$ and [TeX:] $$a_{i, j}^{k,+}$$ is the rough upper limit of [TeX:] $$a_{i, j}^{k}$$ in set [TeX:] $$a_{i, j}$$.
Therefore, the rough boundary interval of [TeX:] $$a_{i, j}$$ can be represented as follows:
[TeX:] $$R N\left(a_{i, j}\right)=\left\{\left[a_{i, j}^{1,-}, a_{i, j}^{1,+}\right],\left[a_{i, j}^{2,-}, a_{i, j}^{2,+}\right], \cdots,\left[a_{i, j}^{q,-}, a_{i, j}^{q,+}\right]\right\}$$
Based on the operational rule of rough boundary interval, the average form of [TeX:] $$R N\left(a_{i, j}\right)$$ is obtained as follows:
[TeX:] $$\operatorname{Avg}_{-} R N\left(a_{i, j}\right)=\left[a_{i, j}^{-}, a_{i, j}^{+}\right]=\left[\frac{\sum_{k=1}^{q} a_{i, j}^{k,-}}{q}, \frac{\sum_{k=1}^{q} a_{i, j}^{k,+}}{q}\right]$$
where [TeX:] $$a_{i, j}^{-}$$ is the rough lower limit of set [TeX:] $$a_{i, j} \text { and } a_{i, j}^{+}$$ is the rough upper limit of set [TeX:] $$a_{i, j}$$.
Step 4: The rough judgement matrix is constructed as follows:
[TeX:] $$E A=\begin{bmatrix} 1 & A v g_{-} R N\left(a_{1,2}\right) & \cdots & A v g_{-} R N\left(a_{1, l}\right) \\ A v g_{-} R N\left(a_{2,1}\right) & 1 & \cdots & A v g_{-} R N\left(a_{2, l}\right) \\ \vdots & \vdots & & \vdots \\ A v g_{-} R N\left(a_{l, 1}\right) & A v g_{-} R N\left(a_{l, 2}\right) & \cdots & 1 \end{bmatrix}$$
EA is divided into EA- and EA+. Here, EA- is the rough lower limit matrix and EA+ is the rough upper limit matrix. EA- and EA+ are expressed as follows:
[TeX:] $$E A^{-}=\left[\begin{array}{cccc}{1} & {a_{1,2}^{-}} & {\cdots} & {a_{1, l}^{-}} \\ {a_{2,1}^{-}} & {1} & {\cdots} & {a_{2, l}^{-}} \\ {\vdots} & {\vdots} & {} & {\vdots} \\ {a_{l, 1}^{-}} & {a_{l, 2}^{-}} & {\cdots} & {1}\end{array}\right], E A^{+}=\left[\begin{array}{cccc}{1} & {a_{1,2}^{+}} & {\cdots} & {a_{1, l}^{+}} \\ {a_{2,1}^{+}} & {1} & {\cdots} & {a_{2, l}^{+}} \\ {\vdots} & {\vdots} & {} & {\vdots} \\ {a_{l, 1}^{+}} & {a_{l, 2}^{+}} & {\cdots} & {1}\end{array}\right]$$
The eigenvectors corresponding to the maximum eigenvalues of EA- and EA+ are obtained respectively as follows:
[TeX:] $$V A^{-}=\left[v a_{1}^{-}, v a_{2}^{-}, \cdots, v a_{l}^{-}\right]^{\mathrm{T}}, V A^{+}=\left[v a_{1}^{+}, v a_{2}^{+}, \cdots, v a_{l}^{+}\right]^{\mathrm{T}}$$
where [TeX:] $$v a_{i}^{-}$$ is the value of VA- in the i-th (i=1,2,...,l) dimension and [TeX:] $$v a_{i}^{+}$$ is the value of VA+ in the i-th dimension.
Then, we can get that [TeX:] $$g a_{i}=\left(\left|v a_{i}^{-}\right|+\left|v a_{i}^{+}\right|\right) / 2$$, and a set [TeX:] $$G A=\left\{g a_{1}, g a_{2}, \ldots, g a_{l}\right\}$$ is obtained.
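As an illustration of this step, a short Python sketch (with hypothetical 3×3 lower- and upper-limit matrices, not values from the case study) computes the principal eigenvectors and the resulting set GA:

```python
import numpy as np

# Hypothetical 3x3 rough lower- and upper-limit matrices EA- and EA+
EA_lo = np.array([[1.0, 2.0, 0.5],
                  [0.4, 1.0, 0.3],
                  [1.5, 2.5, 1.0]])
EA_hi = np.array([[1.0, 3.0, 0.8],
                  [0.6, 1.0, 0.5],
                  [2.0, 3.5, 1.0]])

def principal_eigvec(M):
    vals, vecs = np.linalg.eig(M)
    v = vecs[:, np.argmax(vals.real)].real
    return np.abs(v)                         # |va_i|, as in the text

ga = (principal_eigvec(EA_lo) + principal_eigvec(EA_hi)) / 2
print(ga)                                    # the set GA = {ga_1, ga_2, ga_3}
```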
Step 5: We repeat steps 3 and 4, so [TeX:] $$G B_{t}=\left\{g b_{1}, g b_{2}, \ldots, g b_{l}\right\}, G C=\left\{g c_{1}, g c_{2}, \ldots, g c_{l}\right\} \text { and } G D=\left\{g d_{1}, g d_{2}, \ldots, g d_{l}\right\}$$ can be obtained. Then the adaptive weight of evaluation indexes [TeX:] $$C_{11}, C_{12}, \dots, C_{1 l}$$ with the trapezoidal fuzzy number form are [TeX:] $$z_{1}=\left(g a_{1}, g b_{1}, g c_{1}, g d_{1}\right), z_{2}=\left(g a_{2}, g b_{2}, g c_{2}, g d_{2}\right), \ldots, z_{l}=\left(g a_{l}, g b_{l}, g c_{l}, g d_{l}\right)$$. Here we use gravity model appoach to convert [TeX:] $$z_{i}=\left(g a_{i}, g b_{i}, g c_{i}, g d_{i}\right)(i=1,2, \ldots, l)$$ into real number ri as follows:
[TeX:] $$r_{i}=\frac{\left[\left(g d_{i}\right)^{2}+g d_{i} \cdot g c_{i}+\left(g c_{i}\right)^{2}\right]-\left[\left(g a_{i}\right)^{2}+g a_{i} \cdot g b_{i}+\left(g b_{i}\right)^{2}\right]}{3\left(g d_{i}+g c_{i}-g a_{i}-g b_{i}\right)}$$
We normalize [TeX:] $$r_{1}, r_{2}, \ldots, r_{l}$$ to obtain the adaptive weight of evaluation index [TeX:] $$C_{1 i}$$ as follows:
[TeX:] $$\omega\left(C_{1 i}\right)=\frac{r_{i}}{\sum_{j=1}^{l} r_{j}}$$
Step 6: For indexes [TeX:] $$C_{1}, C_{2}, C_{3}, \text { and } C_{4}$$, we repeat steps 1-5 and obtain their adaptive weights [TeX:] $$\omega\left(C_{1}\right), \omega\left(C_{2}\right), \omega\left(C_{3}\right), \omega\left(C_{4}\right)$$.
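A minimal sketch of the centroid (gravity model) conversion and normalization in Steps 5 and 6 (not the authors' code; the example values are the trapezoidal fuzzy weights reported for C11-C14 in the case study of Section 4):

def centroid(ga, gb, gc, gd):
    """Centroid of the trapezoidal fuzzy number (ga, gb, gc, gd)."""
    num = (gd ** 2 + gd * gc + gc ** 2) - (ga ** 2 + ga * gb + gb ** 2)
    den = 3.0 * (gd + gc - ga - gb)
    return num / den

def adaptive_weights(fuzzy_weights):
    """Crisp, normalized weights from a list of trapezoidal fuzzy weights."""
    r = [centroid(*z) for z in fuzzy_weights]
    total = sum(r)
    return [ri / total for ri in r]

z = [(0.68, 0.73, 0.82, 0.95), (0.47, 0.51, 0.67, 0.77),
     (0.46, 0.66, 0.73, 0.83), (0.32, 0.58, 0.69, 0.75)]
print(adaptive_weights(z))   # close to the reported 0.30, 0.23, 0.25, 0.22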
3.3 D-S Theory Decision Regulations
By D-S theory, we can deal with multi-criteria decision problems with uncertainty and incompleteness [24,25]. Existing research has shown that reliable results can be obtained and the uncertainty of a decision can be decreased based on D-S theory [26,29]. According to D-S theory, we define the suppliers to be evaluated [TeX:] $$x_{1}, x_{2}, \dots, x_{i}, \dots, x_{N}$$ as the D-S identification framework [TeX:] $$\Theta=\left\{x_{1}, x_{2}, \ldots, x_{i}, \ldots, x_{N}\right\}$$. All possible subsets of [TeX:] $$\Theta$$ can be expressed by the power set [TeX:] $$2^{\Theta}$$. When all elements in [TeX:] $$\Theta$$ are incompatible and independent of each other, the number of elements in [TeX:] $$2^{\Theta}$$ is [TeX:] $$2^{N}$$. Then, a set function [TeX:] $$m : 2^{\Theta} \rightarrow[0,1]$$, which satisfies [TeX:] $$m(\phi)=0 \text { and } \sum_{A \subset \Theta} m(A)=1$$, is defined. Here, m is known as the basic probability allocation (BPA) function on [TeX:] $$\Theta$$ and A is a supplier to be evaluated. m(A), which is the BPA value of A, represents the trust degree in A. Any supplier to be evaluated satisfying the condition m(A)>0 is called a focal element.
For [TeX:] $$A \subseteq \Theta$$, the fusion rule of finite BPA functions on [TeX:] $$\Theta$$ is as follows:
[TeX:] $$\left(m_{1} \oplus m_{2} \oplus \ldots \oplus m_{n}\right)(A)=\frac{1}{K} \sum_{A_{1} \cap A_{2} \cap \cdots A_{n}=A}\ m_{1}\left(A_{1}\right) \cdot m_{2}\left(A_{2}\right) \cdot \ldots \cdot m_{n}\left(A_{n}\right)$$
K is the normalization constant and is expressed as follows:
[TeX:] $$K=\sum_{A_{1} \cap A_{2} \cap \ldots \cap A_{n} \neq \phi} m_{1}\left(A_{1}\right) \cdot m_{2}\left(A_{2}\right) \cdot \ldots \cdot m_{n}\left(A_{n}\right)=1-\sum_{A_{1} \cap A_{2} \cap \ldots \cap A_{n}=\phi} m_{1}\left(A_{1}\right) \cdot m_{2}\left(A_{2}\right) \cdot \ldots \cdot m_{n}\left(A_{n}\right)$$
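A minimal sketch of this fusion rule (an assumption of this sketch: BPA functions are represented as Python dictionaries keyed by frozensets of suppliers; n functions are combined by folding the pairwise rule):

from itertools import product

def dempster_combine(m1, m2):
    """Combine two BPA dictionaries {frozenset: mass} with the rule above."""
    combined = {}
    conflict = 0.0
    for (A, mA), (B, mB) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mA * mB
        else:
            conflict += mA * mB              # mass that falls on the empty set
    K = 1.0 - conflict                       # normalization constant
    return {A: mass / K for A, mass in combined.items()}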
The overall trust degree of A on [TeX:] $$\Theta$$ can be represented by the belief function [TeX:] $$\operatorname{Bel}(A)=\sum_{B \subseteq A} m(B)$$, and the uncertainty degree of A on [TeX:] $$\Theta$$ can be represented by the plausibility function [TeX:] $$P l(A)=\sum_{B \cap A \neq \phi} m(B)$$, where [TeX:] $$B \subseteq \Theta$$.
For a supplier A on [TeX:] $$\Theta$$, Bel(A) is the sum of the BPA values of all subsets contained in A, and Pl(A) is the sum of the BPA values of all subsets that intersect A. For A, the degree of confirmation can be expressed by the trust interval [Bel(A), Pl(A)].
According to the above analysis, Bel(A) is the total credibility with which the evidence supports A, and Pl(A) is the total credibility with which the evidence does not negate A. Thus, the trust interval is formed as [Bel(A), Pl(A)], which comprehensively reflects the support of the belief and plausibility functions for a supplier.
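Using the same dictionary representation as above, the belief function, plausibility function and trust interval can be sketched as follows:

def bel(m, A):
    """Belief: total mass of the focal elements contained in A."""
    return sum(mass for B, mass in m.items() if B <= A)

def pl(m, A):
    """Plausibility: total mass of the focal elements that intersect A."""
    return sum(mass for B, mass in m.items() if B & A)

def trust_interval(m, A):
    return bel(m, A), pl(m, A)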
According to references [30,31], evaluating suppliers by the trust-interval approach is more reasonable than the maximum-belief-function or maximum-plausibility-function decision-making approaches. The D-S theory decision regulations based on trust intervals for supplier evaluation in GSC are as follows.
(i) It is assumed that supplier Ai is better than supplier Aj with a degree of [TeX:] $$P\left(A_{i}>A_{j}\right)$$. The trust interval of Ai is [Bel(Ai), Pl(Ai)], and the trust interval of Aj is [Bel(Aj), Pl(Aj)]. [TeX:] $$P\left(A_{i}>A_{j}\right)$$ is obtained as follows:
[TeX:] $$P\left(A_{i}>A_{j}\right)=\frac{\max \left[0, P l\left(A_{i}\right)-\operatorname{Bel}\left(A_{j}\right)\right]-\max \left[0, \operatorname{Bel}\left(A_{i}\right)-P l\left(A_{j}\right)\right]}{\left[P l\left(A_{i}\right)-\operatorname{Bel}\left(A_{i}\right)\right]+\left[P l\left(A_{j}\right)-\operatorname{Bel}\left(A_{j}\right)\right]}$$
where [TeX:] $$P\left(A_{i}>A_{j}\right) \in[0,1]$$.
(ii) The partial order relationship: (a) When [TeX:] $$P\left(A_{i}>A_{j}\right)>0.5$$, Ai is better than Aj, which is expressed as [TeX:] $$A_{i} \succ A_{j}$$; (b) When [TeX:] $$P\left(A_{i}>A_{j}\right)>0.5 \text { and } P\left(A_{j}>A_{k}\right)>0.5$$, Ai is better than Ak, which is expressed as [TeX:] $$A_{i} \succ A_{j} \succ A_{k}$$.
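The preference degree and the resulting partial order can be read directly off the two trust intervals, for example with this small sketch:

def preference(bel_i, pl_i, bel_j, pl_j):
    """P(A_i > A_j) computed from the two trust intervals."""
    num = max(0.0, pl_i - bel_j) - max(0.0, bel_i - pl_j)
    den = (pl_i - bel_i) + (pl_j - bel_j)
    return num / den

# P(A_i > A_j) > 0.5 means supplier A_i is preferred over supplier A_j.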
3.4 Supplier Evaluation Based on D-S Theory
The weighted BPA value of focal element [TeX:] $$A_{i}\left(i<2^{N}\right)$$ under index [TeX:] $$t(t \in I F)$$, which is [TeX:] $$\tilde{m}_{t}\left(A_{i}\right)$$, is introduced into the D-S theory model as evidence input. The calculating and processing approaches for weighted BPA value of each focal element are as follows.
Based on an investigation of the actual status of each supplier, the expert gives the initial values of the indexes belonging to the quantitative type or the direct qualitative type. Here, indexes of the definite quantitative type or direct qualitative type are assigned exact values, indexes of the relatively fuzzy quantitative type are assigned value intervals, and indexes that are completely unknown are assigned a null value. By the membership approach, we can calculate the tendency degree of the initial index value.
For an index, five levels of expert remark are given as {G1,G2,G3,G4,G5} = {very bad, bad, middle, good, very good}. Here, G1 is the remark level corresponding to the minimum initial index value D1, and G5 is the remark level corresponding to the maximal initial index value D5. However, for the cost-based index C11, G1 is the remark level corresponding to the maximal initial index value D1, and G5 is the remark level corresponding to the minimum initial index value D5.
We assume that the corresponding exact numbers of the five remark levels are: [TeX:] $$E\left(G_{1}\right)=0, E\left(G_{2}\right)=0.25, E\left(G_{3}\right)=0.5, E\left(G_{4}\right)=0.75, \text { and } E\left(G_{5}\right)=1$$. The membership degree of the initial index value to Gi is defined as [TeX:] $$\beta_{i}$$. On index t, the tendency degree of Ai is expressed as [TeX:] $$P_{t}\left(A_{i}\right)$$, and the calculation of [TeX:] $$P_{t}\left(A_{i}\right)$$ is divided into two circumstances as follows:
(1) Index t belongs to quantitative type.
In this circumstance, the initial index value of Ai on t is a point value a or a value interval [a, b].
If [TeX:] $$D_{i} \leq a \leq D_{i+1} \text { or } D_{i} \leq a \leq b \leq D_{i+1}, P_{t}\left(A_{i}\right)=\beta_{i} E\left(G_{i}\right)+\beta_{i+1} E\left(G_{i+1}\right)$$.
If [TeX:] $$D_{i} \leq a \leq D_{i+1} \text { and } D_{i+1} \leq b \leq D_{i+2}, P_{t}\left(A_{i}\right)=\beta_{i} E\left(G_{i}\right)+\beta_{i+1} E\left(G_{i+1}\right)+\beta_{i+2} E\left(G_{i+2}\right)$$.
If [TeX:] $$D_{i} \leq a \leq D_{i+1} \text { and } D_{j} \leq b \leq D_{j+1}, P_{t}\left(A_{i}\right)=\beta_{i} E\left(G_{i}\right)+\beta_{i+1} E\left(G_{i+1}\right)+\ldots+\beta_{j} E\left(G_{j}\right)+\beta_{j+1} E\left(G_{j+1}\right)$$.
(2) Index t belongs to direct qualitative type.
In this circumstance, the tendency degree of Ai on t is equal to the number corresponding to the remark level.
By the above approach, the tendency degree of each focal element except [TeX:] $$\Theta$$ under any index can be obtained. Here, the expert's uncertainty is indicated by [TeX:] $$\Theta$$. Without considering the influence of [TeX:] $$\Theta$$, the supplier evaluation problem is a simple probability allocation problem, but the advantages of D-S theory in solving multi-index decision problems are then not reflected. Simultaneously, the expert's trust degree in each index is different, and the uncertainty of an index is expressed by the probability allocated to [TeX:] $$\Theta$$.
Thus, the probability allocation value of [TeX:] $$\Theta$$ on different indexes should also be considered differently. For example, in the supplier evaluation problem, the weight of an evaluation index obviously varies with the requirements. If cost reduction is needed, C11 will be more important and its trust degree should be larger, so the BPA value of [TeX:] $$\Theta$$ on C11 should be smaller. Accordingly, we introduce the adaptive weight (determined in Section 3.2) to regulate the preference of each index and to solve the probability allocation problem of [TeX:] $$\Theta$$ on all indexes; the weighted BPA value of every focal element on any index is then calculated as [TeX:] $$\tilde{m}_{t}\left(A_{i}\right)$$.
We assume that the adaptive weight of t is [TeX:] $$\omega_{t}\left(\omega_{t} \in(0,1)\right)$$. The bigger [TeX:] $$\omega_{t}$$ is, the higher the expert's trust degree in t and the lower the uncertainty of t, and vice versa. Therefore, a weighted normalization of the BPA values of all focal elements is performed as follows:
[TeX:] $$\left\{\begin{array}{cl}{\tilde{m}_{t}\left(A_{i}\right)=\frac{\omega_{t} P_{t}\left(A_{i}\right)}{\sum_{A_{j} \neq \Theta} P_{t}\left(A_{j}\right)}} & {A_{i} \neq \Theta} \\ {\tilde{m}_{t}\left(A_{i}\right)=1-\omega_{t}} & {A_{i}=\Theta}\end{array}\right.$$
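A minimal sketch of this weighted allocation (illustrative only; the example uses the tendency degrees and adaptive weight that the case study in Section 4 reports for index C11):

def weighted_bpa(tendency, omega, theta):
    """tendency: {supplier: P_t(supplier)}, omega: adaptive weight of index t,
    theta: frozenset of all suppliers (the identification framework)."""
    total = sum(tendency.values())
    m = {frozenset([s]): omega * p / total for s, p in tendency.items()}
    m[theta] = 1.0 - omega                   # remaining mass goes to Theta
    return m

theta = frozenset(["x1", "x2", "x3"])
print(weighted_bpa({"x1": 0.85, "x2": 0.475, "x3": 0.2}, 0.30, theta))
# masses close to the reported 0.1672, 0.0934, 0.0393 and 0.7000 for Theta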
According to Fig. 1, the evidences of C11,C12,C13 and C14 of C1 are fused and processed, and then the weighted BPA value of C1 is calculated as [TeX:] $$\tilde{m}_{1}\left(A_{i}\right)$$. After that, we execute a secondary fusion which includes the weighted BPA value of C1,C2,C3 and C4, and the evaluation result of suppliers can be obtained.
As an important part of modern mechanical equipment, the main functions of a bearing are to support the mechanical revolving body, reduce friction during movement, and ensure rotary precision. A bearing manufacturing enterprise has three candidate bearing-cage suppliers. To select the optimal bearing-cage supplier, supplier evaluation should be executed.
Firstly, the adaptive weights of the evaluation indexes are determined by the designed fuzzy-rough-sets-AHP method. Four experts (expert 1, expert 2, expert 3, and expert 4) participate in the judgment on C11, C12, C13 and C14. Using the proportional scale method of trapezoidal fuzzy numbers [21], the trapezoidal-fuzzy-number reciprocal judgment matrices E1, E2, E3, and E4 are shown as follows.
[TeX:] $$E^{1}=\left[\begin{array}{cccc}{\tilde 5/\tilde5} & {\tilde6 / \tilde4} & {\tilde6 / \tilde4} & {\tilde7 /\tilde 3} \\ {\tilde4 / \tilde6} & {\tilde5 / \tilde5} & {\tilde6 / \tilde4} & {\tilde6 / \tilde4} \\ {\tilde4 / \tilde6} & {\tilde4 / \tilde6} & {\tilde5 /\tilde 5} & {\tilde5 /\tilde 5} \\ {\tilde3 /\tilde 7} & {\tilde4 / \tilde6} & {\tilde4 / \tilde6} & {\tilde5 / \tilde5}\end{array}\right], \ E^{2}=\left[\begin{array}{cccc}{\tilde5 / \tilde5} & {\tilde5 / \tilde5} & {\tilde6 / \tilde4} & {\tilde6 / \tilde4} \\ {\tilde5 / \tilde5} & {\tilde5 / \tilde5} & {\tilde6 / \tilde4} & {\tilde6 / \tilde4} \\ {\tilde4 / \tilde6} & {\tilde4 / \tilde6} & {\tilde5 / \tilde5} & {\tilde5 / \tilde5} \\ {\tilde4 / \tilde6} & {\tilde4 / \tilde6} & {\tilde5 / \tilde5} & {\tilde5 / \tilde5}\end{array}\right] \\ E^{3}=\left[\begin{array}{cccc}{\tilde5 / \tilde5} & {\tilde7 / \tilde3} & {\tilde5 / \tilde5} & {\tilde6 / \tilde4} \\ {\tilde3 / \tilde7} & {\tilde5 /\tilde 5} & {\tilde3 /\tilde7} & {\tilde4 / \tilde6} \\ {\tilde5 / \tilde5} & {\tilde7 / \tilde3} & {\tilde5 /\tilde 5} & {\tilde6 /\tilde 4} \\ {\tilde4 / \tilde6} & {\tilde6 /\tilde 4} & {\tilde4 /\tilde 6} & {\tilde5 / \tilde5}\end{array}\right], \ E^{4}=\left[\begin{array}{cccc}{\tilde5 / \tilde5} & {\tilde6 / \tilde4} & {\tilde7 / \tilde3} & {\tilde8 / \tilde2} \\ {\tilde4 / \tilde6} & {\tilde5 / \tilde5} & {\tilde6 / \tilde4} & {\tilde7 / \tilde3} \\ {\tilde3 / \tilde7} & {\tilde4 / \tilde6} & {\tilde5 / \tilde5} & {\tilde6 / \tilde4} \\ {\tilde2 /\tilde 8} & {\tilde3 / \tilde7} & {\tilde4 /\tilde 6} & {\tilde5 / \tilde5}\end{array}\right]$$
Taking E1 for example, it can be converted to the following form:
[TeX:] $$E^{1}=\left[\begin{array}{cccc}{(1,1,1,1)} & {(1,11 / 9,13 / 7,7 / 3)} & {(1,11 / 9,13 / 7,7 / 3)} & {(3 / 2,13 / 7,3,4)} \\ {(3 / 7,7 / 13,9 / 11,1)} & {(1,1,1,1)} & {(1,11 / 9,13 / 7,7 / 3)} & {(1,11 / 9,13 / 7,7 / 3)} \\ {(3 / 7,7 / 13,9 / 11,1)} & {(3 / 7,7 / 13,9 / 11,1)} & {(1,1,1,1)} & {(1,11 / 9,13 / 7,7 / 3)} \\ {(1 / 4,1 / 3,7 / 13,2 / 3)} & {(3 / 7,7 / 13,9 / 11,1)} & {(3 / 7,7 / 13,9 / 11,1)} & {(1,1,1,1)}\end{array}\right]$$
After consistency check, E1, E2, E3 and E4 are all qualified. Then they are split, and a1 is as follows:
[TeX:] $$a^{1}=\left[\begin{array}{cccc}{1} & {1} & {1} & {3 / 2} \\ {3 / 7} & {1} & {1} & {1} \\ {3 / 7} & {3 / 7} & {1} & {1} \\ {1 / 4} & {3 / 7} & {3 / 7} & {1}\end{array}\right]$$
Based on a1, a2, a3 and a4, the rough group-decision matrix is obtained as follows:
[TeX:] $$a=\left[\begin{array}{cccc}{\{1,1,1,1\}} & {\{1,1,3 / 2,1\}} & {\{1,1,1,3 / 2\}} & {\{3 / 2,1,1,7 / 3\}} \\ {\{3 / 7,1,1 / 4,3 / 7\}} & {\{1,1,1,1\}} & {\{1,1,1 / 4,1\}} & {\{1,1,3 / 7,3 / 2\}} \\ {\{3 / 7,3 / 7,1,1 / 4\}} & {\{3 / 7,3 / 7,3 / 2,3 / 7\}} & {\{1,1,1,1\}} & {\{1,1,1,1\}} \\ {\{1 / 4,3 / 7,3 / 7,1 / 9\}} & {\{3 / 7,3 / 7,1,1 / 4\}} & {\{3 / 7,1,3 / 7,3 / 7\}} & {\{1,1,1,1 \}}\end{array}\right] $$
In element [TeX:] $$a_{1,4}=\{3 / 2,1,1,7 / 3\}$$, the upper approximation set of partition '3/2' is {3/2, 7/3} and the lower approximation set of partition '3/2' is {3/2, 1, 1}, so [TeX:] $$\underline{L}\left(' 3 / 2^{\prime}\right)=(3 / 2+1+1) / 3=1.17$$, [TeX:] $$\overline{L}\left(' 3 / 2^{\prime}\right)=(3 / 2+7 / 3) / 2=1.92, \text { and } R N\left(' 3 / 2^{\prime}\right)=[1.17,1.92]$$. Similarly, RN('1')=[1,1.46], RN('7/3')=[1.46,2.33] and [TeX:] $$R N\left(a_{1,4}\right)=\{[1.17,1.92],[1,1.46],[1,1.46],[1.46,2.33]\}$$.
Thus, [TeX:] $$A v g_{-} R N\left(a_{1,4}\right)=[1.16,1.79]$$. The rough boundary intervals in average form of other elements of a can be also calculated. The rough judgment matrix is constructed as follows:
[TeX:] $$E A=\left[\begin{array}{cccc}{[1,1]} & {[1.03,1.22]} & {[1.03,1.22]} & {[1.16,1.79]} \\ {[0.38,0.74]} & {[1,1]} & {[0.67,0.95]} & {[0.72,1.17]} \\ {[0.38,0.74]} & {[0.50,0.90]} & {[1,1]} & {[1,1]} \\ {[0.23,0.38]} & {[0.38,0.74]} & {[0.46,0.68]} & {[1,1]}\end{array}\right]$$
Then EA is split into EA- and EA+. The eigenvector corresponding to the maximum eigenvalue of EA- is [TeX:] $$V A^{-}=[0.71,0.44,0.45,0.30]^{\mathrm{T}}$$ and the eigenvector corresponding to the maximum eigenvalue of EA+ is [TeX:] $$V A^{+}=[0.65,0.49,0.47,0.34]^{\mathrm{T}}$$. So GA={0.68,0.47,0.46,0.32}. Similarly, GB={0.73,0.51,0.66,0.58}, GC={0.82,0.67,0.73,0.69} and GD={0.95,0.77,0.83,0.75}. Then the adaptive weights of indexes C11, C12, C13 and C14 in trapezoidal fuzzy number form are [TeX:] $$z_{1}=(0.68,0.73,0.82,0.95), z_{2}=(0.47,0.51,0.67,0.77), z_{3}=(0.46,0.66,0.73,0.83) \text { and } z_{4}=(0.32,0.58,0.69,0.75)$$. After gravity model approach processing and normalization processing, we obtain the adaptive weights of evaluation indexes C11, C12, C13 and C14: [TeX:] $$\omega\left(C_{11}\right)=0.30, \omega\left(C_{12}\right)=0.23, \omega\left(C_{13}\right)=0.25, \text { and } \omega\left(C_{14}\right)=0.22$$. Similarly, the adaptive weights of criterions C1, C2, C3 and C4 are [TeX:] $$\omega\left(C_{1}\right)=0.57, \omega\left(C_{2}\right)=0.18, \omega\left(C_{3}\right)=0.26, \text { and } \omega\left(C_{4}\right)=0.09$$.
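As a quick check, feeding EA- and EA+ above into the eigenvector sketch from Section 3.2 (restated here so the snippet runs on its own) should reproduce GA up to rounding:

import numpy as np

def pev(M):
    values, vectors = np.linalg.eig(np.asarray(M, dtype=float))
    return np.abs(vectors[:, np.argmax(values.real)].real)

EA_lower = [[1, 1.03, 1.03, 1.16], [0.38, 1, 0.67, 0.72],
            [0.38, 0.50, 1, 1], [0.23, 0.38, 0.46, 1]]
EA_upper = [[1, 1.22, 1.22, 1.79], [0.74, 1, 0.95, 1.17],
            [0.74, 0.90, 1, 1], [0.38, 0.74, 0.68, 1]]

print((pev(EA_lower) + pev(EA_upper)) / 2)   # expected near [0.68, 0.47, 0.46, 0.32]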
Secondly, we use the proposed adaptive weight D-S theory model to deal with the decision of supplier evaluation problem. For the three candidate suppliers, their initial index values are shown in Table 1.
Table 1. Initial index value

Supplier | C11 / ¥ | C12 / error value (mm) | C13 | C14 | C2 (full score is 17) | C3 (full score is 9) | C4
x1 | 6.4×10^1 | 0.01 | Good | Very bad | 15 | [8.5, 9] | 0.9848
x2 | 1.9×10^3 | 0.01 | Very good | Good | [15, 16] | [8.5, 9] | 1
x3 | 2.8×10^4 | 0.03 | Very bad | Good | [5, 9] | / | 1
Corresponding to the remark level [TeX:] $$\left\{G_{1}, G_{2}, G_{3}, G_{4}, G_{5}\right\}$$, the reference values of the index belonging to quantitative type are as follows: [TeX:] $$G\left(C_{11}\right)=\left\{10^{5}, 10^{4}, 10^{3}, 10^{2}, 10^{1}\right\}, G\left(C_{12}\right)=\{0.05,0.04,0.03,0.02,0.01\}$$, [TeX:] $$G\left(C_{2}\right)=\{17,13,9,5,1\}, G\left(C_{3}\right)=\{1,3,5,7,9\} \text { and } G\left(C_{4}\right)=\{0,0.25,0.5,0.75,1\}$$.
Then, the membership degree of initial index value to every remark level is obtained. The data in Table 1 is translated into the membership degree form corresponding to remark grade. As shown in Table 2, the tendency degree form of initial index value is obtained.
Table 2. The tendency degree form of initial index value

Supplier | C11 | C12 | C13 | C14 | C2 | C3 | C4
x1 | 0.8500 | 1.0000 | 0.7500 | 0 | 0.8750 | 0.9688 | 0.9848
x2 | 0.4750 | 1.0000 | 1.0000 | 0.7500 | 0.8813 | 0.9688 | 1.0000
x3 | 0.2000 | 0.3000 | 0 | 0.7500 | 0.3750 | / | 1.0000
We define the set of candidate suppliers as the D-S theory identification framework: [TeX:] $$\Theta=\left\{x_{1}, x_{2}, x_{3}\right\}$$, Here, x1,x2 and x3 represent bearing-cage suppliers 1, 2, and 3, respectively.
For the four indexes C11, C12, C13 and C14 and the three criterions C2, C3, and C4, the weighted BPA values of all focal elements are obtained according to the tendency degrees shown in Table 2 and the weight vectors [TeX:] $$\left(\omega\left(C_{11}\right), \omega\left(C_{12}\right), \omega\left(C_{13}\right), \omega\left(C_{14}\right)\right)=(0.30,0.23,0.25,0.22) \text { and }\left(\omega\left(C_{2}\right), \omega\left(C_{3}\right), \omega\left(C_{4}\right)\right)=(0.18,0.26,0.09)$$. The calculation result is as follows:
(1) C11: [TeX:] $$\tilde{m}_{11}\left(x_{1}\right)=0.1672, \quad \tilde{m}_{11}\left(x_{2}\right)=0.0934, \quad \tilde{m}_{11}\left(x_{3}\right)=0.0393, \quad \tilde{m}_{11}(\Theta)=0.7000$$
(2) C12: [TeX:] $$\tilde{m}_{12}\left(x_{1}\right)=0.1000, \quad \tilde{m}_{12}\left(x_{2}\right)=0.1000, \quad \tilde{m}_{12}\left(x_{3}\right)=0.0300, \quad \tilde{m}_{12}(\Theta)=0.7700$$
(3) C13: [TeX:] $$\tilde{m}_{13}\left(x_{1}\right)=0.1071, \tilde{m}_{13}\left(x_{2}\right)=0.1429, \quad \tilde{m}_{13}\left(x_{3}\right)=0, \quad \tilde{m}_{13}(\Theta)=0.7500$$
(4) C14: [TeX:] $$\tilde{m}_{14}\left(x_{1}\right)=0, \quad \tilde{m}_{14}\left(x_{2}\right)=0.1100, \tilde{m}_{14}\left(x_{3}\right)=0.1100, \tilde{m}_{14}(\Theta)=0.7800$$
(5) C2: [TeX:] $$\tilde{m}_{2}\left(x_{1}\right)=0.0739, \quad \tilde{m}_{2}\left(x_{2}\right)=0.0744, \quad \tilde{m}_{2}\left(x_{3}\right)=0.0317, \quad \tilde{m}_{2}(\Theta)=0.8200$$
(6) C3: [TeX:] $$\tilde{m}_{3}\left(x_{1}\right)=0.1300, \quad \tilde{m}_{3}\left(x_{2}\right)=0.1300, \quad \tilde{m}_{3}(\Theta)=0.7400$$
(7) C4: [TeX:] $$\tilde{m}_{4}\left(x_{1}\right)=0.0297, \tilde{m}_{4}\left(x_{2}\right)=0.0302, \tilde{m}_{4}\left(x_{3}\right)=0.0302, \tilde{m}_{4}(\Theta)=0.9100$$
After that, we take [TeX:] $$\tilde{m}_{11}\left(x_{i}\right), \tilde{m}_{12}\left(x_{i}\right), \tilde{m}_{13}\left(x_{i}\right), \text { and } \tilde{m}_{14}\left(x_{i}\right)$$ as the evidence input and implement the first evidence fusion. The BPA values of all focal elements are obtained as follows: [TeX:] $$m_{1}\left(x_{1}\right)=0.1001,m_{1}\left(x_{2}\right)=0.7815, m_{1}\left(x_{3}\right)=0.0772, m_{1}\left(x_{1}, x_{2}\right)=0.0102, m_{1}\left(x_{2}, x_{3}\right)=0.0201, m_{1}\left(x_{1}, x_{3}\right)=0.0098, \text { and } m_{1}(\Theta)=0.0011.$$
We normalize the BPA values [TeX:] $$m_{1}\left(A_{i}\right)$$ of the suppliers to be evaluated and [TeX:] $$\Theta$$ on index C1. With consideration of [TeX:] $$\omega\left(C_{1}\right)$$, the weighted BPA values are obtained as follows: [TeX:] $$\tilde{m}_{1}\left(x_{1}\right)=0.0571, \tilde{m}_{1}\left(x_{2}\right)=0.4455, \tilde{m}_{1}\left(x_{3}\right)=0.0440, \tilde{m}_{1}\left(x_{1}, x_{2}\right)=0.0058, \tilde{m}_{1}\left(x_{2}, x_{3}\right)=0.0115, \tilde{m}_{1}\left(x_{1}, x_{3}\right)=0.0056, \text { and } \tilde{m}_{1}(\Theta)=0.0006$$.
Then, we take [TeX:] $$\tilde{m}_{1}\left(A_{i}\right), \tilde{m}_{2}\left(A_{i}\right), \tilde{m}_{3}\left(A_{i}\right) \text { and } \tilde{m}_{4}\left(A_{i}\right)$$ as the evidence input and implement the second evidence fusion. The comprehensive BPA values of all focal elements are obtained as follows: [TeX:] $$m\left(x_{1}\right)=0.1255, m\left(x_{2}\right)=0.7088, m\left(x_{3}\right)=0.0102, m\left(x_{1}, x_{2}\right)=0.0999, m\left(x_{2}, x_{3}\right)=0.0032, m\left(x_{1}, x_{3}\right)=0.0506, \text { and } m(\Theta)=0.0018$$.
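Plugging the comprehensive BPA values above into the belief and plausibility helpers sketched in Section 3.3 (restated here so the snippet is self-contained) yields the trust intervals used in the decision step below:

def bel(m, A):
    return sum(v for B, v in m.items() if B <= A)

def pl(m, A):
    return sum(v for B, v in m.items() if B & A)

x1, x2, x3 = frozenset(["x1"]), frozenset(["x2"]), frozenset(["x3"])
m = {x1: 0.1255, x2: 0.7088, x3: 0.0102,
     x1 | x2: 0.0999, x2 | x3: 0.0032, x1 | x3: 0.0506,
     x1 | x2 | x3: 0.0018}

for name, A in (("x1", x1), ("x2", x2), ("x3", x3)):
    print(name, [round(bel(m, A), 4), round(pl(m, A), 4)])
# for x1 this gives the reported interval [0.1255, 0.2778]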
[TeX:] $$\operatorname{Bel}\left(A_{i}\right) \text { and } \operatorname{Pl}\left(A_{i}\right)$$ of all suppliers are calculated. Then the trust intervals of all suppliers are obtained as follows:
(1) [TeX:] $$x_{1} :[0.1255,0.2778]$$
(2) [TeX:] $$x_{2} :[0.7088,0.8137]$$
(3) [TeX:] $$x_{3} :[0.0102,0.0658]$$
On the basis of the D-S theory decision regulations, the result is as follows:
(1) [TeX:] $$P\left(x_{1}>x_{2}\right)=0,$$ so [TeX:] $$x_{1} \prec x_{2}$$.
(2) [TeX:] $$P\left(x_{1}>x_{3}\right)=1,$$ so [TeX:] $$x_{3} \prec x_{1}$$.
Therefore, the evaluation result of the three suppliers is [TeX:] $$x_{3} \prec x_{1} \prec x_{2}$$ and supplier 2 is the optimal bearing-cage supplier. Thus, the proposed adaptive weight D-S theory model can solve the supplier evaluation problem in GSC even when the initial index values are uncertain and incomplete (see Table 1: the initial values of x2 and x3 on index C2 are interval values and the initial value of x3 on index C3 is missing).
To verify the effectiveness of the proposed adaptive weight D-S theory model, we use the traditional TOPSIS method [18,21] for comparison. Because the traditional TOPSIS method can only solve evaluation problems with certain and complete index values, we replace each interval with its mid-value and ignore the index with a missing value (the initial values of x2 and x3 on index C2 become 15.5 and 7, respectively, and index C3 is ignored). The tendency degree method is still used to process the initial index values. Then, the adaptive weights are processed on the basis of the hierarchical structure shown in Fig. 1, and the final weight vector of C11, C12, C13, C14, C2 and C4 is [TeX:] $$\omega=(0.15,0.11,0.12,0.09,0.18,0.09)$$, in which index C3 is ignored. The weighted index value matrix is shown in Table 3.
Table 3. The weighted index value matrix

Supplier | C11 | C12 | C13 | C14 | C2 | C4
x1 | 0.1275 | 0.1100 | 0.0900 | 0 | 0.1575 | 0.0886
x2 | 0.0712 | 0.1100 | 0.1200 | 0.0675 | 0.1586 | 0.0900
x3 | 0.0300 | 0.0330 | 0 | 0.0675 | 0.0675 | 0.0900
From Table 3, the positive and negative ideal points are (0.1275, 0.1100, 0.1200, 0.0675, 0.1586, 0.0900) and (0.0300, 0.0330, 0, 0, 0.0675, 0.0886), respectively. The closeness degree of each supplier to the positive ideal point is obtained as follows:
(1) x1: 0.7065.
(2) x2: 0.7684.
(3) x3: 0.2569.
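A small sketch of this TOPSIS comparison (an assumption of this sketch: plain Euclidean distances to the ideal points, which reproduces the reported closeness degree of x1):

import math

weighted = {
    "x1": [0.1275, 0.1100, 0.0900, 0.0000, 0.1575, 0.0886],
    "x2": [0.0712, 0.1100, 0.1200, 0.0675, 0.1586, 0.0900],
    "x3": [0.0300, 0.0330, 0.0000, 0.0675, 0.0675, 0.0900],
}
pos = [max(col) for col in zip(*weighted.values())]   # positive ideal point
neg = [min(col) for col in zip(*weighted.values())]   # negative ideal point

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

for s, row in weighted.items():
    closeness = dist(row, neg) / (dist(row, neg) + dist(row, pos))
    print(s, round(closeness, 4))   # x1 is reported as 0.7065; x2 ranks first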
Therefore, the evaluation result of the three candidate suppliers by the traditional TOPSIS method is also [TeX:] $$x_{3} \prec x_{1} \prec x_{2}$$, and the bearing manufacturing enterprise should choose supplier 2 as the optimal bearing-cage supplier. The evaluation results of the proposed adaptive weight D-S theory model and the traditional TOPSIS method are consistent, which shows that the proposed adaptive weight D-S theory model is feasible and effective.
In this paper, an adaptive weight D-S theory model is proposed for the evaluation problem in GSC characterized by uncertainty, incompleteness, and variable index weights. In addition, a fuzzy-rough-sets-AHP approach is designed to obtain the adaptive index weights. The index framework is established by considering the main factors affecting supplier evaluation in GSC, which improves its scientific soundness and rationality. The case study and the comparison with TOPSIS show that the optimal supplier of a manufacturing enterprise can be correctly selected by the proposed adaptive weight D-S theory model.
This paper is supported by Key Scientific Research Projects in 2017 at North Minzu University (No. 2017KJ22), the Third Batch of Ningxia Youth Talents Supporting Program (No. TJGC2018048), Natural Science Foundation of Ningxia Province (No. NZ17113), and Ningxia First-class Discipline and Scientific Research Project: Electronic Science and Technology (No. NXYLXK2017A07).
Lianhui Li
He received his M.E. degree in Vehicle Engineering from Henan University of Science and Technology, China, in 2010 and his Ph.D. degree in Aeronautics and Astronautics Manufacturing Engineering from Northwestern Polytechnical University, China, in 2016. He is currently a lecturer at North Minzu University, China. His current research interests include CAD/CAM and logistics engineering.
Guanying Xu
He received his B.S. degree in Computer Science and Engineering from Anhui University of Technology, China, in 2017. He is currently a master's degree candidate in Computer Science and Engineering at North Minzu University. His current research interests include uncertainty theory, fuzzy decision making, and data stream classification.
Hongguang Wang
He received his M.E. degree in Aeronautical Engineering from Northwestern Polytechnical University, China, in 2013. He is currently an engineer at the 713th Research Institute of China Shipbuilding Industry Corporation, China. His current research interests include CAD/CAM.
[1] K. Govindan, S. Rajendran, J. Sarkis, P. Murugesan, "Multi criteria decision making approaches for green supplier evaluation and selection: a literature review," Journal of Cleaner Production, vol. 98, pp. 66-83, 2015. doi: 10.1016/j.jclepro.2013.06.046
[2] J. Sarkis, "A strategic decision framework for green supply chain management," Journal of Cleaner Production, vol. 11, no. 4, pp. 397-409, 2003. doi: 10.1016/s0959-6526(02)00062-8
[3] S. Curkovic, "Environmentally responsible manufacturing: the development and validation of a measurement model," European Journal of Operational Research, vol. 146, no. 1, pp. 130-155, 2003. doi: 10.1016/S0377-2217(02)00182-0
[4] S. Mishra, C. Samantra, S. Datta, S. S. Mahapatra, "Multi-attribute group decision-making (MAGDM) for supplier selection using fuzzy linguistic modelling integrated with VIKOR method," International Journal of Services and Operations Management, vol. 12, no. 1, pp. 67-89, 2012. doi: 10.1504/ijsom.2012.046674
[5] Y. Wang, K. Jiang, "Research on supplier evaluation and selection based on combination of weighting method and grey relational TOPSIS method in green supply chain," Journal of Industrial Technological & Economics, vol. 2016, no. 12, pp. 84-91, 2016.
[6] G. Buyukozkan, G. Cifci, "A novel fuzzy multi-criteria decision framework for sustainable supplier selection with incomplete information," Computers in Industry, vol. 62, no. 2, pp. 164-174, 2011. doi: 10.1016/j.compind.2010.10.009
[7] C. Bai, J. Sarkis, "Green supplier development: analytical evaluation using rough set theory," Journal of Cleaner Production, vol. 18, no. 12, pp. 1200-1210, 2010. doi: 10.1016/j.jclepro.2010.01.016
[8] M. L. Tseng, A. S. Chiu, "Evaluating firm's green supply chain management in linguistic preferences," Journal of Cleaner Production, vol. 40, pp. 22-31, 2013. doi: 10.1016/j.jclepro.2010.08.007
[9] D. Kannan, K. Govindan, S. Rajendran, "Fuzzy axiomatic design approach based green supplier selection: a case study from Singapore," Journal of Cleaner Production, vol. 96, pp. 194-208, 2015. doi: 10.1016/j.jclepro.2013.12.076
[10] A. Awasthi, S. S. Chauhan, S. K. Goyal, "A fuzzy multicriteria approach for evaluating environmental performance of suppliers," International Journal of Production Economics, vol. 126, no. 2, pp. 370-378, 2010. doi: 10.1016/j.ijpe.2010.04.029
[11] W. C. Yeh, M. C. Chuang, "Using multi-objective genetic algorithm for partner selection in green supply chain problems," Expert Systems with Applications, vol. 38, no. 4, pp. 4244-4253, 2011. doi: 10.1016/j.eswa.2010.09.091
[12] J. Wu, Q. W. Cao, H. Li, "A method for choosing green supplier based on COWA operator under fuzzy linguistic decision-making," Journal of Industrial Engineering & Engineering Management, vol. 24, pp. 61-65, 2010.
[13] X. Li, C. Zhao, "Selection of suppliers of vehicle components based on green supply chain," in Proceedings of the 2009 16th International Conference on Industrial Engineering and Engineering Management, Beijing, China, 2009, pp. 1588-1591.
[14] G. E. Yan, "Research on green suppliers' evaluation based on AHP & genetic algorithm," in Proceedings of the 2009 International Conference on Signal Processing Systems, Singapore, 2009, pp. 615-619.
[15] R. J. Kuo, Y. C. Wang, F. C. Tien, "Integration of artificial neural network and MADA methods for green supplier selection," Journal of Cleaner Production, vol. 18, no. 12, pp. 1161-1170, 2010. doi: 10.1016/j.jclepro.2010.03.020
[16] R. J. Kuo, Y. J. Lin, "Supplier selection using analytic network process and data envelopment analysis," International Journal of Production Research, vol. 50, no. 11, pp. 2852-2863, 2012. doi: 10.1080/00207543.2011.559487
[17] G. Buyukozkan, G. Cifci, "A novel hybrid MCDM approach based on fuzzy DEMATEL, fuzzy ANP and fuzzy TOPSIS to evaluate green suppliers," Expert Systems with Applications, vol. 39, no. 3, pp. 3000-3011, 2012. doi: 10.1016/j.eswa.2011.08.162
[18] X. X. Luo, S. H. Peng, "Research on the vendor evaluation and selection based on AHP and TOPSIS in green supply chain," Soft Science, vol. 25, no. 2, pp. 53-56, 2011.
[19] T. L. Saaty, The Analytic Hierarchy Process, New York, NY: McGraw-Hill, 1980.
[20] L. Lidinska, J. Jablonsky, "AHP model for performance evaluation of employees in a Czech management consulting company," Central European Journal of Operations Research, vol. 26, no. 1, pp. 239-258, 2018. doi: 10.1007/s10100-017-0486-7
[21] L. Li, R. Mo, Z. Chang, H. Zhang, "Priority evaluation method for aero-engine assembly task based on balanced weight and improved TOPSIS," Computer Integrated Manufacturing Systems, vol. 21, no. 5, pp. 1193-1201, 2015. doi: 10.13196/j.cims.2015.05.005
[22] V. Sangiorgio, G. Uva, F. Fatiguso, "Optimized AHP to overcome limits in weight calculation: building performance application," Journal of Construction Engineering and Management, vol. 144, no. 2, 2017. doi: 10.1061/(asce)co.1943-7862.0001418
[23] X. T. Wang, W. Xiong, "Rough AHP approach for determining the importance ratings of customer requirements in QFD," Computer Integrated Manufacturing Systems, vol. 16, no. 4, pp. 763-771, 2010.
[24] G. Shafer, A Mathematical Theory of Evidence, Princeton, NJ: Princeton University Press, 1976.
[25] R. R. Yager, "On the Dempster-Shafer framework and new combination rules," Information Sciences, vol. 41, no. 2, pp. 93-137, 1987. doi: 10.1016/0020-0255(87)90007-7
[26] L. Liu, T. Bao, J. Yuan, C. Li, "Risk assessment of information security based on Grey incidence and D-S theory of evidence," Journal of Applied Sciences, vol. 13, no. 10, pp. 1740-1745, 2013. doi: 10.3923/jas.2013.1740.1745
[27] M. S. Khan, I. Koo, "The effect of multiple energy detector on evidence theory based cooperative spectrum sensing scheme for cognitive radio networks," Journal of Information Processing Systems, vol. 12, no. 2, pp. 295-309, 2016. doi: 10.3745/JIPS.03.0040
[28] X. Zhang, S. Mahadevan, X. Deng, "Reliability analysis with linguistic data: an evidential network approach," Reliability Engineering & System Safety, vol. 162, pp. 111-121, 2017. doi: 10.1016/j.ress.2017.01.009
[29] X. Zhang, Y. Deng, F. T. Chan, A. Adamatzky, S. Mahadevan, "Supplier selection based on evidence theory and analytic network process," Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, vol. 230, no. 3, pp. 562-573, 2016. doi: 10.1177/0954405414551105
[30] R. Mo, D. Zhang, C. Li, "Selection and evaluation of fitting ring in assembly dimensions chain for aircraft based on D-S evidence theory," Computer Integrated Manufacturing Systems, vol. 21, no. 9, pp. 2361-2368, 2015. doi: 10.13196/j.cims.2015.09.011
[31] Y. M. Wang, J. B. Yang, D. L. Xu, "A preference aggregation method through the estimation of utility intervals," Computers & Operations Research, vol. 32, no. 8, pp. 2027-2049, 2005. doi: 10.1016/j.cor.2004.01.005
Revision received: March 13 2018
Accepted: May 17 2018
Corresponding Author: Lianhui Li* ([email protected])
Lianhui Li*, Ningxia Key Laboratory of Intelligent Information and Big Data Processing, North Minzu University, Yinchuan, China, [email protected]
Guanying Xu*, Ningxia Key Laboratory of Intelligent Information and Big Data Processing, North Minzu University, Yinchuan, China, [email protected]
Hongguang Wang**, The 713th Research Institute of China Shipbuilding Industry Corporation, Zhengzhou, China, [email protected]
A Whole Genome Association Study on Meat Palatability in Hanwoo
Hyeong, K.E.;Lee, Y.M.;Kim, Y.S.;Nam, K.C.;Jo, C.;Lee, K.H.;Lee, J.E.;Kim, J.J. 1219
https://doi.org/10.5713/ajas.2014.14258
A whole genome association (WGA) study was carried out to find quantitative trait loci (QTL) for sensory evaluation traits in Hanwoo. Carcass samples of 250 Hanwoo steers were collected from the National Agricultural Cooperative Livestock Research Institute, Ansung, Gyeonggi province, Korea, between 2011 and 2012 and genotyped with the Affymetrix Bovine Axiom Array 640K single nucleotide polymorphism (SNP) chip. Among the SNPs in the chip, a total of 322,160 SNPs were chosen after quality control tests. After adjusting for the effects of age, slaughter-year-season, and polygenic effects using a genome relationship matrix, the corrected phenotypes for the sensory evaluation measurements were regressed on each SNP using a simple linear regression additive based model. A total of 1,631 SNPs were detected for color, aroma, tenderness, juiciness and palatability at the 0.1% comparison-wise level. Among the significant SNPs, the best set of 52 SNP markers was chosen using a forward regression procedure at the 0.05 level, among which sets of 8, 14, 11, 10, and 9 SNPs were determined for the respective sensory evaluation traits. The sets of significant SNPs explained 18% to 31% of phenotypic variance. Three SNPs were pleiotropic, i.e. AX-26703353 and AX-26742891, which were located at 101 and 110 Mb of BTA6, respectively, influencing tenderness, juiciness and palatability, while AX-18624743 at 3 Mb of BTA10 affected tenderness and palatability. Our results suggest that some QTL for sensory measures are segregating in a Hanwoo steer population. Additional WGA studies on fatty acid and nutritional components as well as the sensory panels are in progress to characterize the genetic architecture of meat quality and palatability in Hanwoo.
Association between Genetic Polymorphism in the Swine Leukocyte Antigen-DRA Gene and Piglet Diarrhea in Three Chinese Pig Breeds
Yang, Q.L.;Zhao, S.G.;Wang, D.W.;Feng, Y.;Jiang, T.T.;Huang, X.Y.;Gun, S.B. 1228
The swine leukocyte antigen (SLA)-DRA locus is noteworthy among other SLA class II loci for its limited variation and has not been investigated in depth. This study investigated polymorphisms of four exons of the SLA-DRA gene and their association with piglet diarrhea in Landrace, Large White and Duroc pigs. No polymorphisms were detected in exon 3, while 2 SNPs (c.178G>A and c.211T>C), 2 SNPs (c.3093A>C and c.3104C>T) and 5 SNPs (c.4167A>G, c.4184A>G, c.4194A>G, c.4246A>G and c.4293G>A) were detected in exon 1, exon 2 and exon 4, respectively, and 1 SNP (c.4081T>C) in intron 3. Statistical results showed that genotype had a significant effect on piglet diarrhea: individuals with genotype BC had a higher diarrhea score when compared with the genotypes AA, AB, AC and CC. Furthermore, genotype AC had a higher diarrhea score than genotype CC in exon 1 (p<0.05); diarrhea scores of genotypes AA and BB were higher than those of genotypes AC and CC in exon 2 (p<0.05); individuals with genotype AA had a higher diarrhea score than individuals with genotypes AB and BB in exon 4 (p<0.05). Fourteen common haplotypes were found by constructing haplotypes from all SNPs in the three exons; their association with piglet diarrhea suggested that Hap2, 5, 8, 10, and 14 may be susceptible haplotypes and Hap9 may be a resistant haplotype for piglet diarrhea. The genetic variations identified in the SLA-DRA gene may potentially be functional mutations related to piglet diarrhea.
Thoroughbred Horse Single Nucleotide Polymorphism and Expression Database: HSDB
Lee, Joon-Ho;Lee, Taeheon;Lee, Hak-Kyo;Cho, Byung-Wook;Shin, Dong-Hyun;Do, Kyoung-Tag;Sung, Samsun;Kwak, Woori;Kim, Hyeon Jeong;Kim, Heebal;Cho, Seoae;Park, Kyung-Do 1236
Genetics is important for breeding and selection of horses but there is a lack of well-established horse-related browsers or databases. In order to better understand horses, more variants and other integrated information are needed. Thus, we construct a horse genomic variants database including expression and other information. Horse Single Nucleotide Polymorphism and Expression Database (HSDB) (http://snugenome2.snu.ac.kr/HSDB) provides the number of unexplored genomic variants still remaining to be identified in the horse genome including rare variants by using population genome sequences of eighteen horses and RNA-seq of four horses. The identified single nucleotide polymorphisms (SNPs) were confirmed by comparing them with SNP chip data and variants of RNA-seq, which showed a concordance level of 99.02% and 96.6%, respectively. Moreover, the database provides the genomic variants with their corresponding transcriptional profiles from the same individuals to help understand the functional aspects of these variants. The database will contribute to genetic improvement and breeding strategies of Thoroughbreds.
Association between Single Nucleotide Polymorphisms of Fatty Acid Synthase and Fat Deposition in the Liver of the Overfed Goose
Wu, Wei;Guo, Xuan;Zhang, Lei;Hu, Dan 1244
Goose fatty liver is one of the most delicious and popular foods in the world, but there is no reliable genetic marker for the early selection and breeding of geese with good liver-producing potential. In our study, one hundred and twenty-four 78-day-old Landes geese bred in Shunda Landes goose breeding farm, Jiutai, Jilin, China were selected randomly. The fatty livers were sampled each week after overfeeding during a three week period. Polymerase chain reaction-single strand conformation polymorphism and DNA sequencing were used to identify single nucleotide polymorphisms (SNPs) of fatty acid synthase (FAS), which is an important enzyme involved in the synthesis of fat under both physiological and pathological conditions. Least-squares correlation was established between these SNPs and fatty liver weight, abdominal fat weight, and intestinal fat weight of the overfed Landes geese, respectively. The results showed that fatty liver weight of geese with EF and FF genotypes (amplified by primer P1) was significantly higher than that of the EE genotype (p<0.05), and liver weight of CD and DD genotypes (amplified by primer P2) was significantly higher than that of the CC genotype (p<0.05). Different genotype combinations showed different liver weights, and from highest to lowest were ABDD, DDEF, DDFF, DDEE, ABEF, ABFF, AADD, and CDEF. Further analysis of DNA sequencing showed that there were two SNPs within the 5' promoter region the FAS gene. The geese of EF and FF genotypes carried a change of T to C, and the geese of CD and DD genotypes carried a change of A to G. The changes of the bases could potentially influence the binding of some transcription factors to this region as to regulate FAS gene. To our knowledge, this is the first report of SNPs found within the 5' promoter region of the Landes goose FAS gene, and our data will provide an insight for early selection of geese for liver production.
Study on Growth Curves of Longissimus dorsi Muscle Area, Backfat Thickness and Body Conformation for Hanwoo (Korean Native) Cows
Lee, J.H.;Oh, S.H.;Lee, Y.M.;Kim, Y.S.;Son, H.J.;Jeong, D.J.;Whitley, N.C.;Kim, J.J. 1250
The objective of this study was to estimate the parameters of Gompertz growth curves with the measurements of body conformation, real-time ultrasound longissimus dorsi muscle area (LMA) and backfat thickness (BFT) in Hanwoo cows. The Hanwoo cows (n = 3,373) were born in 97 Hanwoo commercial farms in the 17 cities or counties of Gyeongbuk province, Korea, between 2000 and 2007. A total of 5,504 ultrasound measurements were collected for the cows at the age of 13 to 165 months in 2007 and 2008. Wither height (HW), rump height (HR), the horizontal distance between the top of the hips (WH), and girth of chest (GC) were also measured. Analysis of variance was conducted to investigate variables affecting LMA and BFT. The effect of farm nested in location was included in the statistical model, as well as the effects of HW, HR, WH, and GC as covariates. All of the effects were significant in the analysis of variance for LMA and BFT (p<0.01), except for the HR effect for LMA. The two ultrasound measures and the four body conformation traits were fitted to a Gompertz growth curve function to estimate parameters. Upper asymptotic weights were estimated as $54.0cm^2$, 7.67 mm, 125.6 cm, 126.4 cm, 29.3 cm, and 184.1 cm, for LMA, BFT, HW, HR, WH, and GC, respectively. Results of ultrasound measurements showed that Hanwoo cows had smaller LMA and greater BFT than other western cattle breeds, suggesting that care must be taken to select for thick BFT rather than an increase of only beef yield. More ultrasound records per cow are needed to get accurate estimates of growth curve, which, thus, helps producers select animals with high accuracy.
Association of Novel Polymorphisms in Lymphoid Enhancer Binding Factor 1 (LEF-1) Gene with Number of Teats in Different Breeds of Pig
Xu, Ru-Xiang;Wei, Ning;Wang, Yu;Wang, Guo-Qiang;Yang, Gong-She;Pang, Wei-Jun 1254
Lymphoid enhancer binding factor 1 (LEF-1) is a member of the T-cell specific factor (TCF) family, which plays a key role in the development of breast endothelial cells. Moreover, LEF-1 gene has been identified as a candidate gene for teat number trait. In the present study, we detected two novel mutations (NC_010450.3:g. 99514A>G, 119846C>T) by DNA sequencing and polymerase chain reaction-restriction fragment length polymorphism in exon 4 and intron 9 of LEF-1 in Guanzhong Black, Hanjiang Black, Bamei and Large White pigs. Furthermore, we analyzed the association between the genetic variations with teat number trait in these breeds. The 99514A>G mutation showed an extremely significant statistical relevance between different genotypes and teat number trait in Guanzhong (p<0.001) and Large White (p = 0.002), and significant relevance in Hanjiang (p = 0.017); the 119846C>T mutation suggested significant association in Guanzhong Black pigs (p = 0.042) and Large White pigs (p = 0.003). The individuals with "AG" or "GG" genotype displayed more teat numbers than those with "AA"; the individuals with "TC" or "CC" genotype showed more teat numbers than those with "TT". Our findings suggested that the 99514A>G and 119846C>T mutations of LEF-1 affected porcine teat number trait and could be used in breeding strategies to accelerate porcine teat number trait improvement of indigenous pigs breeds through molecular marker assisted selection.
Genetic Structure of and Evidence for Admixture between Western and Korean Native Pig Breeds Revealed by Single Nucleotide Polymorphisms
Edea, Zewdu;Kim, Sang-Wook;Lee, Kyung-Tai;Kim, Tae Hun;Kim, Kwan-Suk 1263
Comprehensive information on genetic diversity and introgression is desirable for the design of rational breed improvement and conservation programs. Despite the concerns regarding the genetic introgression of Western pig breeds into the gene pool of the Korean native pig (KNP), the level of this admixture has not yet been quantified. In the present study, we genotyped 93 animals, representing four Western pig breeds and KNP, using the porcine SNP 60K BeadChip to assess their genetic diversity and to estimate the level of admixture among the breeds. Expected heterozygosity was the lowest in Berkshire (0.31) and highest in Landrace (0.42). Population differentiation ($F_{ST}$) estimates were significantly different (p<0.000), accounting for 27% of the variability among the breeds. The evidence of inbreeding observed in KNP (0.029) and Yorkshire (0.031) may result in deficient heterozygosity. Principal components one (PC1) and two (PC2) explained approximately 35.06% and 25.20% of the variation, respectively, and placed KNP somewhat proximal to the Western pig breeds (Berkshire and Landrace). When K = 2, KNP shared a substantial proportion of ancestry with Western breeds. Similarly, when K = 3, over 86% of the KNP individuals were in the same cluster with Berkshire and Landrace. The linkage disquilbrium (LD) values at $r^2_{0.3}$, the physical distance at which LD decays below a threshold of 0.3, ranged from 72.40 kb in Landrace to 85.86 kb in Yorkshire. Based on our structure analysis, a substantial level of admixture between Western and Korean native pig breeds was observed.
Follicle Stimulating Hormone (FSH) Dosage Based on Body Weight Enhances Ovulatory Responses and Subsequent Embryo Production in Goats
Rahman, M.R.;Rahman, M.M.;Khadijah, W.E. Wan;Abdullah, R.B. 1270
An experiment was conducted to evaluate the efficacy of porcine follicle stimulating hormone (pFSH) dosage based on body weight (BW) on ovarian responses of crossbred does. Thirty donor does were divided into 3 groups getting pFSH dosages of 3, 5, and 8 mg pFSH per kg BW, respectively, and were named as pFSH-3, pFSH-5 and pFSH-8, respectively. Estrus was synchronized by inserting a controlled internal drug release (CIDR) device and a single injection of prostaglandin $F2{\alpha}$ ($PGF2{\alpha}$). The pFSH treatments were administered twice a day through 6 decreasing dosages (25, 25, 15, 15, 10, and 10% of total pFSH amount; decreasing daily). Ovarian responses were evaluated on Day 7 after CIDR removal. After CIDR removal, estrus was observed 3 times in a day and pFSH treatments were initiated at 2 days before the CIDR removal. All does in pFSH-5 and pFSH-8 showed estrus signs while half of the does in pFSH-3 showed estrus signs. No differences (p>0.05) were observed on the corpus luteum and total ovarian stimulation among the treatment groups, while total and transferable embryos were higher (p<0.05) in pFSH-5 (7.00 and 6.71) than pFSH-3 (3.00 and 2.80) and pFSH-8 (2.00 and 1.50), respectively. In conclusion, 5 mg pFSH per kg BW dosage gave a higher number of embryos than 3 and 8 mg pFSH per kg BW dosages. The results indicated that the dosage of pFSH based on BW is an important consideration for superovulation in goats.
Bale Location Effects on Nutritive Value and Fermentation Characteristics of Annual Ryegrass Bale Stored in In-line Wrapping Silage
Han, K.J.;McCormick, M.E.;Derouen, S.M.;Blouin, D.C. 1276
In southeastern regions of the US, herbage systems are primarily based on grazing or hay feeding with low nutritive value warm-season perennial grasses. Nutritious herbage such as annual ryegrass (Lolium multiflorum Lam.) may be more suitable for preserving as baleage for winter feeding even with more intensive production inputs. Emerging in-line wrapped baleage storage systems featuring rapid wrapping and low polyethylene film requirements need to be tested for consistency of storing nutritive value of a range of annual ryegrass herbage. A ryegrass storage trial was conducted with 24-h wilted 'Marshall' annual ryegrass harvested at booting, heading and anthesis stages using three replicated in-line wrapped tubes containing ten round bales per tube. After a six-month storage period, nutritive value changes and fermentation end products differed significantly by harvest stage but not by bale location. Although wilted annual ryegrass exhibited a restricted fermentation across harvest stages characterized by high pH and low fermentation end product concentrations, butyric acid concentrations were less than 1 g/kg dry matter, and lactic acid was the major organic acid in the bales. Mold coverage and bale aroma did not differ substantially with harvest stage or bale location. Booting and heading stage-harvested ryegrass baleage were superior in nutritive value to anthesis stage-harvested herbage. Based on the investigated nutritive value and fermentation characteristics, individual bale location within in-line tubes did not significantly affect preservation quality of ryegrass round bale silages.
Effects of Aspergillus Oryzae Culture and 2-Hydroxy-4-(Methylthio)-Butanoic Acid on In vitro Rumen Fermentation and Microbial Populations between Different Roughage Sources
Sun, H.;Wu, Y.M.;Wang, Y.M.;Liu, J.X.;Myung, K.H. 1285
An in vitro experiment was conducted to evaluate the effects of Aspergillus oryzae culture (AOC) and 2-hydroxy-4-(methylthio)-butanoic acid (HMB) on rumen fermentation and microbial populations between different roughage sources. Two roughage sources (Chinese wild rye [CWR] vs corn silage [CS]) were assigned in a $2{\times}3$ factorial arrangement with HMB (0 or 15 mg) and AOC (0, 3, or 6 mg). Gas production (GP), microbial protein (MCP) and total volatile fatty acid (VFA) were increased in response to addition of HMB and AOC (p<0.01) for the two roughages. The HMB and AOC showed inconsistent effects on ammonia-N with different substrates. For CWR, neither HMB nor AOC had significant effect on molar proportion of individual VFA. For CS, acetate was increased (p = 0.02) and butyrate was decreased (p<0.01) by adding HMB and AOC. Increase of propionate was only occurred with AOC (p<0.01). Populations of protozoa ($p{\leq}0.03$) and fungi ($p{\leq}0.02$) of CWR were differently influenced by HMB and AOC. Percentages of F. succinogenes, R. albus, and R. flavefaciens (p<0.01) increased when AOC was added to CWR. For CS, HMB decreased the protozoa population (p = 0.01) and increased the populations of F. succinogenes and R. albus ($p{\leq}0.03$). Populations of fungi, F. succinogenes (p = 0.02) and R. flavefacien (p = 0.03) were increased by adding AOC. The HMB${\times}$AOC interactions were noted in MCP, fungi and R. flavefacien for CWR and GP, ammonia-N, MCP, total VFA, propionate, acetate/propionate (A/P) and R. albus for CS. It is inferred that addition of HMB and AOC could influence rumen fermentation of forages by increasing the number of rumen microbes.
Energy Requirements for Maintenance and Growth of Male Saanen Goat Kids
Medeiros, A.N.;Resende, K.T.;Teixeira, I.A.M.A.;Araujo, M.J.;Yanez, E.A.;Ferreira, A.C.D. 1293
The aim of study was to determine the energy requirements for maintenance and growth of forty-one Saanen, intact male kids with initial body weight (BW) of $5.12{\pm}0.19$ kg. The baseline (BL) group consisted of eight kids averaging $5.46{\pm}0.18$ kg BW. An intermediate group consisted of six kids, fed for ad libitum intake, that were slaughtered when they reached an average BW of $12.9{\pm}0.29$ kg. The remaining kids (n = 27) were randomly allocated into nine slaughter groups (blocks) of three animals distributed among three amounts of dry matter intake (DMI; ad libitum and restricted to 70% or 40% of ad libitum intake). Animals in a group were slaughtered when the ad libitum-treatment kid in the group reached 20 kg BW. In a digestibility trial, 21 kids (same animals of the comparative slaughter) were housed in metabolic cages and used in a completely randomized design to evaluate the energetic value of the diet at different feed intake levels. The net energy for maintenance ($NE_m$) was $417kJ/kg^{0.75}$ of empty BW (EBW)/d, while the metabolizable energy for maintenance ($ME_m$) was $657kJ/kg^{0.75}$ of EBW/d. The efficiency of ME use for NE maintenance ($k_m$) was 0.64. Body fat content varied from 59.91 to 92.02 g/kg of EBW while body energy content varied from 6.37 to 7.76 MJ/kg of EBW, respectively, for 5 and 20 kg of EBW. The net energy for growth ($NE_g$) ranged from 7.4 to 9.0 MJ/kg of empty weight gain by day at 5 and 20 kg BW, respectively. This study indicated that the energy requirements in goats were lower than previously published requirements for growing dairy goats.
Effects of Dietary Garlic Powder on Growth, Feed Utilization and Whole Body Composition Changes in Fingerling Sterlet Sturgeon, Acipenser ruthenus
Lee, Dong-Hoon;Lim, Seong-Ryul;Han, Jung-Jo;Lee, Sang-Woo;Ra, Chang-Six;Kim, Jeong-Dae 1303
A 12 week growth study was carried out to investigate the supplemental effects of dietary garlic powder (GP) on growth, feed utilization and whole body composition changes of fingerling sterlet sturgeon Acipenser ruthenus (averaging weight, 5.5 g). Following a 24-h fasting, 540 fish were randomly distributed to each of 18 tanks (30 fish/tank) under a semi-recirculation freshwater system. The GP of 0.5% (GP0.5), 1% (GP1), 1.5% (GP1.5), 2% (GP2) and 3% (GP3) was added to the control diet (GP0) containing 43% protein and 16% lipid. After the feeding trial, weight gain (WG) of fish fed GP1.5, GP2 and GP3 were significantly higher (p<0.05) than those of fish fed GP0, GP0.5 and GP1. Feed efficiency and specific growth rate (SGR) showed a similar trend to WG. Protein efficiency ratio of fish fed GP1.5, GP2, and GP3 were significantly higher (p<0.05) than those of fish groups fed the other diets. A significant difference (p<0.05) was found in whole body composition (moisture, crude protein, crude lipid, ash, and fiber) of fish at the end of the experiment. Significantly higher (p<0.05) protein and lipid retention efficiencies (PRE and LRE) were also found in GP1.5, GP2, and GP3 groups. Broken-line regression model analysis and second order polynomial regression model analysis relation on the basis of SGR and WG indicated that the dietary optimal GP level could be greater than 1.77% and 1.79%, but less than 2.95% and 3.18% in fingerling sterlet sturgeon. The present study suggested that dietary GP for fingerling sterlet sturgeon could positively affect growth performance and protein retention.
Effects of Dietary Supplementation with the Combination of Zeolite and Attapulgite on Growth Performance, Nutrient Digestibility, Secretion of Digestive Enzymes and Intestinal Health in Broiler Chickens
Zhou, P.;Tan, Y.Q.;Zhang, L.;Zhou, Y.M.;Gao, F.;Zhou, G.H. 1311
This study was designed to investigate the effects of basal diets supplemented with a clay product consisting of zeolite and attapulgite (ZA) at 1:1 ratio on growth performance, digestibility of feed nutrients, activities of digestive enzymes in small intestine and intestinal health in broiler chickens. In experiment 1, 112 one-day-old male chickens were randomly divided into 2 groups with 8 replicates of 7 chickens each. In experiment 2, 84 one-day-old male chickens were randomly allocated into 2 groups consisting 6 replicates of 7 chickens each. The experimental diets both consisted of a maize-soybean basal control diet supplemented with 0% or 2% ZA. The diets were fed from 1 to 42 days of age. The results showed that ZA supplementation could increase body weight gain (BWG) and feed intake (FI), but had no significant effect on feed conversion ratio. The apparent digestibility values of crude protein and gross energy were significantly increased (p<0.05) by ZA from 14 to 16 d and 35 to 37 d. Dietary ZA treatment significantly increased (p<0.05) the activities of amylase, lipase and trypsin in jejunal digesta and the activities of maltase and sucrase in jejunal mucosa on days 21 and 42. The ZA supplementation also significantly increased (p<0.05) the catalase activity, reduced (p<0.05) the malondialdehyde concentration in the jejunal mucosa. In addition, a decrease of serum diamine oxidase activity and an increase (p<0.05) in concentration of secretory immunoglobulin A in jejunal mucosa were observed in birds treated with ZA on 21 and 42 days. It is concluded that ZA supplementation (2%) could partially improve the growth performance by increasing BWG and FI. This improvement was achieved through increasing the secretion of digestive enzymes, enhancing the digestibilites of nutrients, promoting intestinal health of broiler chickens.
The Optimum Feeding Frequency in Growing Korean Rockfish (Sebastes schlegeli) Rearing at the Temperature of 15℃ and 19℃
Mizanur, Rahman Md.;Bai, Sungchul C. 1319
Two feeding trials were conducted to determine the optimum feeding frequency in growing Korean rockfish, (Sebastes schlegeli) reared at the temperatures of $15^{\circ}C$ and $19^{\circ}C$. Fish averaging $92.2{\pm}0.7$ g (mean${\pm}$standard deviation [SD]) at $15.0{\pm}0.5^{\circ}C$ and $100.2{\pm}0.4g$ ($mean{\pm}SD$) at $19.0{\pm}0.5^{\circ}C$ water temperature were randomly distributed into each of 15 indoor tanks containing 250-L sea water from a semi-recirculation system. A total of five feeding frequency groups were set up in three replicates as follows: one meal in a day at 08:00 hour, two meals a day at 08:00 and 17:00 hours, three meals a day at 08:00, 14:00, and 20:00 hours, four meals a day at 08:00, 12:00, 16:00, and 20:00 hours, and one meal every 2 days at 08:00 hour. Fish were fed at the rate of 1.2% body weight (BW)/d at $15^{\circ}C$ and 1.5% BW/d at $19^{\circ}C$. At the end of 8 wks of feeding trial weight gain and specific growth rate were significantly higher at the fish fed groups of one meal a day and two meals a day at $15^{\circ}C$ and fish fed groups of 1 meal every 2 days at $19^{\circ}C$ were significantly lower than those of all other fish fed groups. Glutamic oxaloacetic transaminase and glutamic pyruvic transaminase of fish fed group at 1 meal every 2 days was significantly higher than those of all other fish fed groups in both experiments. Weight gain, specific growth rate and condition factor were gradually decreased as the feeding frequency increased. The results indicate that growing Korean rockfish 92 and 100 g perform better at $15^{\circ}C$ than $19^{\circ}C$ water temperature. As we expected, current results have indicated that a feeding frequency of 1 meal a day is optimal for the improvement of weight gain in growing Korean rockfish grown from 92 g to 133 g at $15^{\circ}C$ and 100 g to 132 g at $19^{\circ}C$ water temperature.
Genome-wide Association Study for Warner-Bratzler Shear Force and Sensory Traits in Hanwoo (Korean Cattle)
Dang, C.G.;Cho, S.H.;Sharma, A.;Kim, H.C.;Jeon, G.J.;Yeon, S.H.;Hong, S.K.;Park, B.Y.;Kang, H.S.;Lee, S.H. 1328
Significant SNPs associated with Warner-Bratzler (WB) shear force and sensory traits were confirmed for Hanwoo beef (Korean cattle). A Bonferroni-corrected genome-wide significant association (p < 1.3×10⁻⁶) was detected with only one single nucleotide polymorphism (SNP) on chromosome 5 for WB shear force. A slightly higher number of SNPs was significantly (p<0.001) associated with WB shear force than with other sensory traits. Further, 50, 25, 29, and 34 SNPs were significantly associated with WB shear force, tenderness, juiciness, and flavor likeness, respectively. The SNPs between p = 0.001 and p = 0.0001 thresholds explained 3% to 9% of the phenotypic variance, while the most significant SNPs accounted for 7% to 12% of the phenotypic variance. In conclusion, because WB shear force and sensory evaluation were moderately affected by a few loci and minimally affected by other loci, further studies are required by using a large sample size and high marker density.
Evaluation of Various Packaging Systems on the Activity of Antioxidant Enzyme, and Oxidation and Color Stabilities in Sliced Hanwoo (Korean Cattle) Beef Loin during Chill Storage
Kang, Sun Moon;Kang, Geunho;Seong, Pil-Nam;Park, Beomyoung;Cho, Soohyun 1336
The effects of various packaging systems, vacuum packaging (VACP), medium oxygen-modified atmosphere packaging (50% O2/20% CO2/30% N2, MOMAP), MOMAP combined with vacuum skin packaging (VSP-MOMAP), high oxygen-MAP (80% O2/20% CO2/30% N2, HOMAP), and HOMAP combined with VSP (VSP-HOMAP), on the activity of antioxidant enzyme, and oxidation and color stabilities in sliced Hanwoo (Korean cattle) beef loin were investigated at 4°C for 14 d. Higher (p<0.05) superoxide dismutase activity and total reducing ability were maintained in VSP-MOMAP beef than in HOMAP beef. Lipid oxidation (2-thiobarbituric acid reactive substances, TBARS) was significantly (p<0.05) retarded in MOMAP, VSP-MOMAP, and VSP-HOMAP beef compared with HOMAP beef. Production of nonheme iron content was lower (p<0.05) in VSP-MOMAP beef than in HOMAP beef. Red color (a*) was kept higher (p<0.05) in VSP-MOMAP beef compared with MOMAP, HOMAP, and VSP-HOMAP beef. However, VACP beef was found to have the most positive effects on the antioxidant activity, oxidation and red color stabilities among the various packaged beef. These findings suggested that VSP-MOMAP was second to VACP in improving oxidation and color stabilities in sliced beef loin during chill storage.
Copy Number Deletion Has Little Impact on Gene Expression Levels in Racehorses
Park, Kyung-Do;Kim, Hyeongmin;Hwang, Jae Yeon;Lee, Chang-Kyu;Do, Kyoung-Tag;Kim, Heui-Soo;Yang, Young-Mok;Kwon, Young-Jun;Kim, Jaemin;Kim, Hyeon Jeong;Song, Ki-Duk;Oh, Jae-Don;Kim, Heebal;Cho, Byung-Wook;Cho, Seoae;Lee, Hak-Kyo 1345
Copy number variations (CNVs), important genetic factors for study of human diseases, may have as large of an effect on phenotype as do single nucleotide polymorphisms. Indeed, it is widely accepted that CNVs are associated with differential disease susceptibility. However, the relationships between CNVs and gene expression have not been characterized in the horse. In this study, we investigated the effects of copy number deletion in the blood and muscle transcriptomes of Thoroughbred racing horses. We identified a total of 1,246 CNVs of deletion polymorphisms using DNA re-sequencing data from 18 Thoroughbred racing horses. To discover the tendencies between CNV status and gene expression levels, we extracted CNVs of four Thoroughbred racing horses of which RNA sequencing was available. We found that 252 pairs of CNVs and genes were associated in the four horse samples. We did not observe a clear and consistent relationship between the deletion status of CNVs and gene expression levels before and after exercise in blood and muscle. However, we found some pairs of CNVs and associated genes that indicated relationships with gene expression levels: a positive relationship with genes responsible for membrane structure or cytoskeleton and a negative relationship with genes involved in disease. This study will lead to conceptual advances in understanding the relationship between CNVs and global gene expression in the horse.
Recombinant Goat VEGF164 Increases Hair Growth by Painting Process on the Skin of Shaved Mouse
Bao, Wenlei;Yin, Jianxin;Liang, Yan;Guo, Zhixin;Wang, Yanfeng;Liu, Dongjun;Wang, Xiao;Wang, Zhigang 1355
To detect goat vascular endothelial growth factor (VEGF)-mediated regrowth of hair, full-length VEGF164 cDNA was cloned from Inner Mongolia cashmere goat (Capra hircus) into the pET-his prokaryotic expression vector, and the recombinant plasmid was transferred into E. coli BL21 cells. The expression of recombinant 6×his-gVEGF164 protein was induced by 0.5 mM isopropyl thio-β-D-galactoside at 32°C. Recombinant goat VEGF164 (rgVEGF164) was purified and identified by western blot using monoclonal anti-his and anti-VEGF antibodies. The rgVEGF164 was smeared onto the dorsal area of a shaved mouse, and we noted that hair regrowth in this area was faster than in the control group. Thus, rgVEGF164 increases hair growth in mice.
Changes in Hematological, Biochemical and Non-specific Immune Parameters of Olive Flounder, Paralichthys olivaceus, Following Starvation
Kim, Jong-Hyun;Jeong, Min Hwan;Jun, Je-Cheon;Kim, Tae-Ik 1360
Triplicate groups of fed and starved olive flounder, Paralichthys olivaceus (body weight: 119.8 ± 17.46 g), were examined over 42 days for physiological changes using hematological, biochemical, and non-specific immune parameters. No significant differences in concentrations of blood hemoglobin and hematocrit and plasma levels of total cholesterol, aspartate aminotransferase, alanine aminotransferase, glucose, and cortisol were detected between fed and starved groups at any sampling time throughout the experiment. In contrast, plasma total protein concentrations were significantly lower in starved fish than in fed fish from day 7 onwards. Moreover, plasma lysozyme concentrations were significantly higher in starved flounder from day 21 onwards. This result confirms that the response of olive flounder to short-term (less than about 1.5 months) starvation consists of a readjustment of metabolism rather than the activation of an alarm-stress response. The present results indicate that starvation does not significantly compromise the health status of fish despite food limitation.
Semi-domesticated and Irreplaceable Genetic Resource Gayal (Bos frontalis) Needs Effective Genetic Conservation in Bangladesh: A Review
Uzzaman, Md. Rasel;Bhuiyan, Md. Shamsul Alam;Edea, Zewdu;Kim, Kwan-Suk 1368
Several studies arduously reported that gayal (Bos frontalis) is an independent bovine species. The population size is shrinking across its distribution. In Bangladesh, it is the only wild relative of domestic cattle and also a less cared-for animal. Their body size is much bigger than that of Bangladeshi native cattle, and they have prominent beef-type characters along with the ability to adjust to any adverse environmental conditions. Human interactions and manipulation of biodiversity have been affecting the habitats of gayals in recent decades. Besides, the only artificial reproduction center for gayals, Bangladesh Livestock Research Institute (BLRI), has few animals and could not carry out its long-term conservation scheme due to a lack of an objective-based scientific mission as well as financial support. This indicates that the current population is much more susceptible to stochastic events, which might be natural catastrophes, environmental changes or mutations. Further reduction of the population size will sharply reduce genetic diversity. In our recent investigation with an 80K indicine single nucleotide polymorphism chip, the F_IS (within-population inbreeding) value was reported as 0.061 ± 0.229, and the observed (0.153 ± 0.139) and expected (0.148 ± 0.143) heterozygosities indicated a highly inbred and less diverse gayal population in Bangladesh. Prompt action is needed to tap the genetic information of this semi-domesticated bovine species with a considerable sample size and to investigate its potential together with native zebu cattle for understanding the large phenotypic variations, improvement and conservation of this valuable creature.
NOTE: This content is obsolete — read this article instead.
Figure 1: Climbing rig with a Distel Hitch
Figure 2: Climbing rig with a Petzl I'D S descender
First, before we get started, read this disclaimer. I'm more than willing to discuss my climbing experiences and make recommendations, but if you fall, you're on your own.
When I was two or three years old, a certain low-life relative of mine held me over a precipice and threatened to drop me. My screams would have set off car alarms had they existed back then. Over the years I've come to see that a lifelong paralyzing fear of heights came directly from that experience, but I didn't see a way to overcome it.
Well, guess what, boys and girls? Taking up single rope ascents has had a beneficial, not to say transformative, effect on my attitude toward heights. For decades I would approach cliffs and high places, irrationally thinking my father would drop me for sure this time. But as this article shows, I've recently trained myself to trust a stout rope, my equipment and my skills, so I can look straight down from a great height, perfectly confident that the rope will hold me. To put it simply, I've replaced a lifelong irrational fear with a justified confidence in my equipment and skills based on reason and experience.
Before training myself in single rope ascents, I would watch YouTube videos of experienced climbers ascending ropes to great heights, thinking to myself, "How do people do that? If I climbed that high I would freeze up and rescuers would have to save a quivering mass of Jello." But the experience of ascending a rope that you know can support the mass of a Volkswagen, assisted by devices whose operation you fully understand, over and over in perfect safety, gradually confers a rational confidence based on experience.
It's not a perfect world and this experience is a mixed blessing — now I trust ropes, but I still don't trust people.
This page is mostly about single rope technique ascents and descents, both on steep surfaces and suspended without support.
About a year ago, I lost a quadcopter in a tree (a story told here). The tree was about 24 meters (80 feet) tall but had no limbs below about 18 meters (60 feet). I wanted to recover my nice drone, but it wasn't obvious how I would ascend a bare tree trunk.
In spite of my fear of heights I've done a lot of climbing over the years, including some technical climbing, but I had never tried to apply those skills to a tall tree. I did some research and realized that, with the right equipment and some nerve, I could perform a single rope ascent, snatch the drone and descend, in relative safety.
I acquired a simple but adequate climbing rig and returned to the site with a huge slingshot meant to launch a light line as the first phase of an ascent (explained more fully below). As it turned out, the light slingshot line snagged the drone, which obligingly fell from the tree, thus depriving me of a climb I wasn't that enthusiastic about.
But since then, for a number of reasons, I've gotten much more enthusiastic about single rope ascents/descents of trees, suitable rappelling sites and elsewhere, for sport, exercise and to acquire new skills. On these pages I'll describe my experiences, show suitable equipment and offer some advice. I hope you enjoy this narrative.
Discussing descents before ascents might seem out of order, but beginners are more likely to try a descent, a rappel, before learning how to ascend a rope. Also, descent equipment is simpler than a configuration able to support a safe ascent.
Climbing ropes come in two primary types — static and dynamic:
A static rope is designed to stretch as little as possible and is the preferred type to minimize wasted energy and for work environments in which position control is important.
A dynamic rope is designed to stretch a certain amount. This feature is meant to protect a climber from the consequences of a fall that's arrested by the rope, which acts like a shock absorber.
Beginners who want to minimize their risk, or who plan to climb for sport in natural areas, should acquire a dynamic rope.
Beginners who want to work on a rope, who need to control their position accurately, or who want to minimize the effort of a climb, should acquire a static rope. But always remember that a static rope is inherently riskier in a fall.
Rope Size
Figure 3: Typical rope size labeling (Petzl I'D S)
As one gets farther into technical climbing and rope work, it quickly becomes apparent that nearly all climbing hardware works best with a certain rope size. So, given the recreational focus of this article, the Petzl descenders to be described, and assuming the reader has no special reason for choosing a particular rope size, I recommend an 11mm rope, which sits in the middle of the size range expected by common climbing hardware (see Figure 3).
As one gains field experience with modern descenders, the importance of rope size, and of whether the rope is static or dynamic, becomes more apparent. For the Petzl descenders described in this article, an 11mm rope seems optimal, but if there's any way to test a rope size (or several rope sizes) on a particular descender in advance of purchase, by all means take advantage of the opportunity. Connecting a descender to a rope and leaning back in a harness will tell you much more than any specification list or advertisement. If the line is too small it will pass through the descender without resistance, and unless you apply some tension to the free end you might fall. If the line is too large, it might not pass through the descender at all under certain circumstances.
In one case in the field, my 1/2" (12.5mm) rope fell into some mud, which caused it to absorb some water and dirt and increase slightly in size. This made it completely unusable with my Petzl descender — the rope jammed in the descender mechanism and refused to budge.
Descent Methods
In the climbing physics section below I explain why a safe descent requires that the energy acquired in the ascent be dissipated, mostly as heat. This heat must be managed. If not dealt with properly, the heat can burn the climber's hands, his rope and/or his descent devices.
The basic idea of a safe descent is to moderate the rate of descent by controlling the release of energy, usually through a friction device. Here are some examples of friction descent devices, from primitive to sophisticated:
Friction hitch.
Figure 4: Distel Hitch
Figure 5: Distel Hitch under load
A Distel Hitch is one example of a friction hitch that, if rigged correctly, can be used to support a climber's weight and control a descent. It has the advantage that (if rigged correctly) if released it will halt the climber's descent. The climber controls a Distel Hitch descent by pulling down on the top of the hitch, which reduces its grip on the climbing rope and allows it to slide.
One drawback to a Distel Hitch is that it can bind up and refuse to release its grip on the climbing rope. Another drawback is that, in a fast or long descent, the hitch can overheat and melt, with potentially disastrous consequences.
Friction hitches often require fine tuning to work as intended, and should always be tested under load in advance of actually being used at height. And in the final analysis, this class of hitch is not a very good or safe way to descend.
Figure-8 Descender
Figure 6: Figure-8 descender under load, controlled by hand
Figure 7: Figure-8 descender under load, controlled by a Machard/French Prusik (lower left)
The figure-8 descender is a traditional descent device with disadvantages that for some may outweigh its simplicity and low cost. The device is easy to rig, but it's also easy to drop while rigging. And in the classic configuration (Figure 6), the amount of force required to control the descent is high, and if the climber releases his grip on the rope, he falls. This means the climber has only one free hand to deal with issues during descent, and can't easily stop or pause while descending.
Always wear gloves when using a figure-8 descender in the traditional configuration (Figure 6) — the amount of force on the climber's hands is high.
A remedy for the figure-8's drawbacks is to pair it with a Machard/French Prusik (Figure 7, lower left). The French Prusik controls the descent rate while handling only a small percentage of the descent energy, so the hitch isn't likely to melt under load like the Distel Hitch method described above. Also, very importantly, if the climber releases his grip on the French Prusik, the descent stops. This means the climber can have both hands free if needed.
The climber controls descent by pulling down on the top of the French Prusik hitch, thereby reducing its grip on the rope. This combination of figure-8 descent device and French Prusik to control descent can, if rigged correctly, produce a good, smooth descent with complete control, and the advantage that the climber can stop the descent by simply letting go.
Advanced Descenders
Figure 8: Petzl GriGri descender under load
Figure 9: Petzl I'D S descender under load
Advanced descenders have many benefits — they can dissipate a lot of heat without failing, most have failsafe features that stop a descent if the climber releases his grip, and they can usually be integrated into a unified ascent/descent rig that allows a climber to change direction with little or no reconfiguration.
The Petzl GriGri (Figure 8) is a smaller, less expensive version of the Petzl I'D S (Figure 9). The GriGri has a descent handle (not visible in Figure 8) the climber can use to control his rate of descent. Compared to the I'D S, the GriGri's "sweet spot" (the control handle's available range for smooth descent) is smaller — some might say it's nonexistent by comparison with the more expensive descender. In most cases descending with the GriGri requires one hand on the descent control handle and one hand on the rope, so it's a bit inconvenient to use.
The Petzl I'D S (Figure 9) is a rather expensive, but very good, descent device. Its descent handle has a number of features that add to usability and/or safety. The control handle's "sweet spot" is wider and more meaningful than for the less expensive GriGri model. And the control handle has a panic-off feature that halts the descent if the climber pulls the handle too hard.
Unlike the I'D L (a similar Petzl model not shown here), the I'D S has an openable gate, which means the climber can attach and detach the climbing rope while the descender remains safely tethered to a carabiner.
Both the GriGri and the I'D S allow the climber to change direction very easily, in some cases without any configuration change at all. That means calling these devices "descenders," as their manufacturer does, is a bit misleading — they work just fine as the central element in an ascent/descent configuration that has every advantage over having to re-rig in order to change direction.
Now for the somewhat more complex task of ascending a single rope. To address ascent methods, we should assume the most challenging circumstance — the climber is suspended only by the rope and has no nearby surfaces to help in the ascent. I suggest this because in a mixed-terrain climb that includes unsupported ascent as one possibility, if the equipment can't function in that mode, the climb stops. Also, the reader should remember that my original reason for taking up single line ascents was to get up a tree that had no limbs to assist a conventional climb, so from the outset, I knew an unsupported ascent was the problem to be solved.
Unsupported Ascent
The most efficient equipment for unsupported ascent takes advantage of all the climber's limbs — both arms, both legs. But for reasons of practicality or economics, many climbers use a configuration that exploits fewer limbs in the effort. Here are some devices designed to assist an unsupported ascent:
Hand Ascender
Figure 10: Petzl rope clamp hand ascender
A rope clamp hand ascender relies on a gripping cam that the user engages and disengages from the rope. This kind of ascender is available in right-hand and left-hand models.
Chest Ascender or "Croll"
Figure 11: Petzl chest ascender/Croll
This device is normally attached to the midpoint of a climbing harness. In a single rope ascent it's configured to cooperate with a hand or foot ascender as the climb progresses. It can also be used as a fall arrester in a technical-climbing setup, in which the user passes a safety line through the device as he ascends using conventional climbing methods.
Foot Ascender
Figure 12: Petzl foot ascender
This device, available in right and left foot models, is meant to cooperate with hand and/or chest ascenders and takes advantage of the climber's leg strength to assist the ascent.
Descender used as ascender
Figure 13: Petzl I'D S mated to pulley and hand ascender
Figure 14: Wider view showing foot lines attached to hand ascender
This configuration exploits the fact that the Petzl I'D S descender works equally well as an ascender, which allows a single setup to work for both ascent and descent with little or no reconfiguration required.
Figure 14 shows two lines attached to the hand ascender leading to foot loops for assistance during the climb.
In practice, the climber pulls the hand ascender downward with arm and leg effort, which causes the I'D S to slide up the rope, then the climber slides the hand ascender up the rope and repeats the process.
The purpose of the pulley is to give the climber a 2:1 mechanical advantage when pulling the rope exiting the pulley. In practice, the climber ascends by pulling with both arms and legs, so this setup maximizes efficiency and distribution of effort.
In this setup, to change directions the climber need only disengage the hand ascender and let it slide freely along the rope, then operate the I'D S descent clutch — very simple.
While learning single rope ascent I've tried all the schemes and devices shown above. While testing a combination of hand ascender and chest croll for ascent and figure-8 device for descent, I saw that changing directions was a major undertaking that included the risk of dropping something essential to the descent. This got me thinking about the advantages of a clutch-based descender like the Petzl I'D S.
For a while I thought using a foot ascender along with a hand ascender would give optimal results, but after acquiring the Petzl I'D S I realized that foot loops attached to the hand ascender was a more practical configuration that allowed a very easy change of direction (i.e. just disengage the hand ascender).
Rope Placement
This section relates mostly to preparations for ascending a tree, and the problem of lofting a rope to a desired height for ascent. For short climbs it may be possible to simply throw the climbing rope over a tree limb, but for any higher ascent the process has multiple phases:
Use some kind of launcher to propel a light, weighted line over the target limb.
Use the light line to lift the climbing line into position.
Climb.
I've used hand slingshots with limited success. The idea is that if one uses a light enough line, such as a monofilament nylon fishing line with a small weight tied to the end, the line can be lofted to a substantial height. The problem is that the weight often tends to be deflected by leaves and not get to the intended target.
Figure 15: Loading a weighted line into a huge slingshot
After some experimentation I settled on the setup shown in Figure 15. At first glance it sounds pretty crazy — a 250 centimeter (eight foot) pole with a beefy slingshot fork at one end rigged with a very large rubber band. I modified my slingshot with a mechanical-advantage pulley and release setup (visible in Figure 15, lower right) to allow the big rubber band to be stretched much tighter than simple arm force would allow, and a trigger release scheme to facilitate accurate aiming and release at full stretch. This setup allows me to accurately place lines in trees up to 30 meters (nearly 100 feet) tall, more than enough height for my purposes.
Climbing Physics
This may surprise some of my readers, but there's some physics involved in climbing and descending, and since I do mathematical physics, I want to include some derivations that, in spite of their seemingly esoteric origins, are worth reading, for practical application as well as knowledge of physics.
(Readers who don't like math can skip ahead to the conclusion below.)
Gravitational Potential Energy
Remember these key points for what follows:
To ascend, the climber must use energy from his muscles to overcome gravity.
At the top of the climb, the acquired energy has the form of gravitational potential energy, energy implicit in the climber's position in Earth's gravitational field, which we calculate below.
Then, to descend, the acquired energy must be dissipated, mostly as heat. This is why climbing ropes and descender devices heat up during descents.
Energy is conserved — all the energy acquired while climbing is dissipated while descending.
Here is the classic gravitational potential energy ($PE_g$) equation that can be applied to any two masses:
\begin{equation} PE_g = \frac{-G m_1 m_2}{r} \end{equation}
$PE_g$ = Gravitational potential energy, Joules.
G = Universal gravitational constant.
$m_1$ = Mass of one body (in this case Earth), kilograms.
$m_2$ = Mass of the other body (in this case the climber), kilograms.
$r$ = distance from the center of one mass to the center of the other, meters.
Equation (1) isn't particularly useful in everyday climbing, because it gives an absolute energy value (the total gravitational energy of a mass located at Earth's surface), and we're more likely to want an energy differential, for example the energy difference between ground level and a chosen climbing height, so we can compute the effort required to ascend, as well as the heat that will be dissipated while descending. For this purpose we can derive a simpler equation.
Differential Energy
First, let's write out the full form of the energy differential equation:
\begin{equation} E_{h} = -\frac{G m_{c} m_{e}}{h + r_{e}} + \frac{G m_{c} m_{e}}{r_{e}} \end{equation}
Terms not already explained:
$h$ = Climber's height above ground, meters.
$E_h$ = energy required to ascend to height $h$, Joules.
$m_c$ = Climber's mass in kilograms.
$m_e$ = Earth's mass in kilograms.
$r_e$ = Earth's radius in meters.
Simple Work Equation
To simplify climbing calculations, let's create a new term representing earth's gravitational acceleration:
\begin{equation} g = \frac{G m_{e}}{r_{e}^{2}} = 9.822 \end{equation}
$g$ = Gravitational acceleration at Earth's surface in units of meters / second2 (physicists call this quantity "little $g$").
With little $g$ at our disposal, and given that we want an energy differential for a height $h$, we can write this greatly simplified equation:
\begin{equation} w = m g h \end{equation}
$w$ = work, units Joules.
$m$ = climber's mass in kilograms.
$g$ = little $g$, derived in equation (3).
$h$ = climb height in meters.
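As a sanity check, not part of the original derivation, the short Python sketch below compares the exact energy difference of equation (2) with the mgh shortcut of equation (4); for climbing-scale heights the two agree to within a few thousandths of a percent, which is why the shortcut is used in what follows.

```python
# Compare equation (2) (exact) with equation (4) (w = m*g*h) for a 100 m climb.
G   = 6.674e-11   # universal gravitational constant, m^3 kg^-1 s^-2
m_e = 5.972e24    # Earth's mass, kg
r_e = 6.371e6     # Earth's mean radius, m

def energy_exact(m_c, h):
    """Equation (2): energy needed to raise mass m_c from r_e to r_e + h."""
    return -G * m_c * m_e / (h + r_e) + G * m_c * m_e / r_e

def energy_mgh(m_c, h):
    """Equation (4): the constant-gravity shortcut."""
    g = G * m_e / r_e**2          # equation (3), "little g"
    return m_c * g * h

m_c, h = 82.0, 100.0              # 82 kg climber, 100 m ascent
print(energy_exact(m_c, h))       # roughly 8.05e4 J
print(energy_mgh(m_c, h))         # agrees with the exact value to about 0.002%
```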
Here's an example question using equation (4) — a climber with a mass of 82 kilos (180 pounds) ascends 100 meters (328 feet). How much energy does he expend while climbing?
$E$ = 82 * 100 * 9.822 = 80,540.4 Joules
Next question. A given climber can sustain a continuous power output of 2/10 horsepower. How much time will the climber need to ascend the full 100 meter height?
NOTE: Power is the time derivative of energy and is expressed in watts. One watt corresponds to a continuous energy expenditure of one joule per second (watts = joules / seconds). One horsepower = 746 watts, so 2/10 horsepower = 149.2 watts.
t = 80,540.4 / 149.2 = 539.8 seconds or 9 minutes.
NOTE: As we transition from ascending to descending, pay close attention to the fact that all the energy acquired during the ascent must be dissipated during the descent, mostly as heat. This results from the law of energy conservation — energy cannot be created or destroyed, only changed in form.
Next question — let's say the climber wants to descend quickly, say, in ten seconds. How much power will be dissipated during the descent?
P = 80,540.4 / 10 = 8,054 watts.
That's very likely to be too high a power dissipation rate for most descent equipment, and unless the climber is wearing gloves he's likely to burn his hands and possibly his rope. So the next question — how long will the descent take if the climber limits his power dissipation to 250 watts?
t = 80,540.4 / 250 = 322 seconds or 5.4 minutes.
250 watts is still a lot of heat, but because most of it is dissipated along the length of the rope, it's more manageable than the earlier example. The climber would still be well advised to wear gloves (imagine having a small but powerful, 250-watt light bulb gripped tightly in your hand for five minutes).
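To make the arithmetic above easy to reproduce, and to rerun with your own body mass or climb height, here is a small Python sketch of the same calculations; it simply restates the worked examples, not any additional physics.

```python
# Reproduce the worked examples: ascent energy, ascent time at 2/10 hp,
# power for a 10-second descent, and descent time at a 250 W limit.
g = 9.822                      # m/s^2, from equation (3)

def work(mass_kg, height_m):
    """Equation (4): w = m * g * h, in joules."""
    return mass_kg * g * height_m

E = work(82, 100)              # about 80,540 J
t_up   = E / (0.2 * 746)       # about 540 s (9 minutes) at 2/10 horsepower
P_fast = E / 10                # about 8,054 W for a 10-second descent
t_slow = E / 250               # about 322 s (5.4 minutes) at a 250 W limit
print(E, t_up, P_fast, t_slow)
```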
This leads us to the first bit of common-sense advice — if you want to preserve your rope and your hands, limit your descent rate. The idea is that low power levels can be safely dissipated by radiation and convection, but high power levels can and will burn your rope and/or your hands. This factor is less important with modern mechanical descent devices able to dissipate heat efficiently, but will always be a limiting factor when descending using traditional methods like rope friction descenders as shown in Figure 1 above.
Calories Burned
What we call dietary calories are actually equal to what in physics are called kilocalories, and one Joule equals 2.39×10−4 kilocalories. Also, a typical person is only 20-25% efficient in turning calories into climbing energy. Let's use a typical efficiency of 22% for our next computation.
In the above problem, as the climber ascended 100 meters, he expended 80,540.4 Joules. With an efficiency of 22%, how many dietary calories ($dc$) did he expend during the climb?
\begin{equation} dc = \frac{j \times 2.39 \times 10^{-4}} {0.22} \end{equation}
So in our example problem, $dc$ = 80,540.4 × 2.39×10−4 / 0.22 = 87.5 dietary calories.
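The same conversion in code form (equation (5)), assuming the 22% efficiency figure used above:

```python
# Equation (5): dietary calories burned for a given mechanical work output.
KCAL_PER_JOULE = 2.39e-4           # one joule is 2.39e-4 kilocalories

def dietary_calories(work_joules, efficiency=0.22):
    return work_joules * KCAL_PER_JOULE / efficiency

print(dietary_calories(80540.4))   # about 87.5 dietary calories
```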
Single rope ascent is a very useful skill, a source of vigorous exercise, and a confidence-builder as well. In my view it's well worth the investment in time and equipment. I hope this article has provided a useful survey of the kinds of equipment that are available as well as some useful climbing configurations. Thanks for reading!
In no event shall the author be liable for any loss or damage including without limitation, indirect or consequential loss or damage, or any loss or damage whatsoever arising from injury or loss of life arising out of, or in connection with, the use of the information presented in these pages.
Journal of Innovation and Entrepreneurship
A Systems View Across Time and Space
Methodology for assessing the effectiveness of regional infrastructure facilities to support scientific, technical and innovation activities in the context of the synergy effect: analysis, formation and study
Vladimir Byvshev ORCID: orcid.org/0000-0001-5903-13791,2,
Kristina Parfenteva ORCID: orcid.org/0000-0003-2991-21293,
Irina Panteleeva ORCID: orcid.org/0000-0003-3292-07284,5,
Danil Uskov ORCID: orcid.org/0000-0003-2628-48251,2 &
Vadim Demin ORCID: orcid.org/0000-0002-4777-91762,6
Journal of Innovation and Entrepreneurship volume 11, Article number: 65 (2022) Cite this article
The objective of the study is to develop a method for evaluating the efficiency of regional infrastructure facilities that support scientific, research, technical and innovation activities. This paper presents an analysis of the methods currently used in Russia and abroad, identifying their advantages and disadvantages. Based on the analysis, the authors suggest a list of parameters characterizing the given domain and develop a system for calculating an integrated parameter; a list of regions is provided with the potential for the most objective efficiency evaluation and testing of the developed method; conclusions are drawn based on the demonstrated calculations. As a result, the developed method is considered effective and promising. Even if the composite index currently lies in the stability zone, some of its components may lie in the catastrophic risk zone, posing potential threats to the further innovative development of the subject. At the same time, it is found that an important role in the efficient functioning of the infrastructure supporting scientific, research, technical and innovation activities belongs to the legislative environment and the closed innovation cycle (synergy effect).
Active implementation of innovations capable of ensuring the development of knowledge-based economy is a priority of the state policy of the Russian Federation, but the innovative development at the national level appears impossible without well-balanced regional development. One of the factors impeding the innovative development is uneven spatial development of the Russian Federation and increasing regional differentiation. For this reason, an important role is assigned to the formation and development of a regional infrastructure for the support of scientific, research, technical and innovation activity, expected to eliminate the present imbalances through the creation of favorable conditions for the development and further implementation of innovations (Colombelli et al., 2020; Filipishyna et al., 2018; Firsova et al., 2020; Rezk et al., 2015, 2016; Veselovsky et al., 2019; Zollo et al., 2011).
Once the required infrastructure is formed, one cannot expect immediate and effective output or quick solution of the existing problems. The studies show that if the support infrastructure functions efficiently, the result can be seen only in 5 years, provided that the terms of implementation of the related innovation activity plans are met and proper communication with the innovation market members is present (Ascani et al., 2020; Bezpalov et al., 2019; Kiškis et al., 2016; Laužikas et al., 2016; Parrilli et al., 2020).
For the infrastructure to function efficiently and fulfill the assigned functions, its activity needs to be evaluated in order to correct the underperforming processes. Such evaluation may be carried out using specialized methods. Let us review some methodological approaches to such evaluation, currently used across the globe.
Current practices analysis
In the USA, a composite innovation index is calculated for American counties (Based Economy. U. S. Economic Development Administration). The index consists of four blocks with different weight factors: human capital (30%), economic dynamics (30%), productivity and employment (30%), as well as welfare (10%). The index covers both the resources for innovation activity and its outcomes (Statsamerica, 2009).
The Adam Smith International method (ASI) consists of five stages (1. Evaluation of infrastructure creation costs, 2. Process evaluation, 3. Output evaluation, 4. Results evaluation, 5. Impact evaluation). At the costs stage, the amount of investment required, for instance, for the infrastructure hardware compliant with a list of applicable standards, is evaluated. The process stage assumes the achievement of target indices by the infrastructure's support for the scientific, research, technical and innovation activity. At the output stage, the innovation companies' satisfaction with the infrastructure is analyzed. The main results of such infrastructure's functioning may include diffusion of technologies, R&D quality improvement, etc. The final stage is impact, a vivid parameter of which is the degree of integration in the international markets (Assets publishing service, 2012).
Another method for the innovative development evaluation used in the EU is European Innovation Scoreboard based on a system of 29 indices; later, it served as a basis for the creation of the Regional Innovation Scoreboard of 16 indices. Both systems comprise three index blocks: innovative development factors, companies' activity and innovation activity results. According to the evaluation, the European Union regions can be divided into five types: innovation leaders, strong innovators, moderate innovators, medium innovators and modest innovators (Kudriavtseva, 2012).
The methods for the evaluation of the regional support infrastructure for the scientific, research, technical and innovation activity of Russian researchers are exclusively limited to the analysis of innovative development in the region. Such a tendency may be related to some problems which may occur in choosing the required indices due to the large number of infrastructure facilities in the regions and impossibility of applying identical parameters capable of evaluating the activity of such objects appropriately. At the same time, one should not ignore the fact that the innovative process in the region, as a rule, is carried out in a certain innovation climate, which is greatly determined by the functioning of the regional infrastructure. Based on this statement, we may conclude that its evaluation must be inseparable from the evaluation of the innovative process it underlies.
From the point of view of consistency of its classification, systematization and evaluation of the components of the support infrastructure for scientific, research, technical and innovation activity in the region, as well as the selection of indices that characterize the state and efficiency of its functioning, the method of Panshin and Kashitsyna appears to be the most complete one (Pan'shin & Kashitsyna, 2009). According to the authors, besides providing for a comprehensive study of the development level of the support infrastructure for scientific, research, technical and innovation activity, it is universally applicable to the majority of the Russian regions. The present study forms an integrated index differentiated by the types of elements of the support infrastructure for the scientific, research, technical and innovation activity.
Summarizing the review of the existing methods, we may remark that their authors evaluated both the innovation activity of a region as a whole and the indices that indicate the efficiency of the support infrastructure performance. However, there is a risk of problems in finding sources of data due to the absence of open access to such data. Apart from that, one may notice that there is a lack of indices characterizing the structure of the support infrastructure for scientific, research, technical and innovation activity, as well as regional regulatory legal documents for the domain of innovations.
Methods and approaches
Having reviewed the foreign and Russian approaches to evaluating the regional support infrastructure for scientific, research, technical and innovation activity, it is hereby suggested that an integrated methodology should be developed that would encompass the advantages of the methods described above, at the same time compensating for their drawbacks. The developed methodology will feature the following advantages:
integrity—comprehensive demonstration of the efficiency of functioning of the support infrastructure for scientific, research, technical and innovation activity, with regard to the synergy effect caused by the operation of the entire infrastructure, which the reviewed methods do not consider;
sufficiency—the evaluation system is limited to the required number of parameters capable of fully characterizing the condition and efficiency of the support infrastructure for scientific, research, technical and innovation activity, including its regulatory and legal component, which the reviewed methods do not consider;
information support—the evaluation is based on open and accessible statistical data;
practical applicability—the evaluation system can not only be applied within the given study, but also be used by regional authorities in their continuous work on correcting the strategic, regulatory and legal documentation and improving the regional innovation policy mechanisms (Ruiga et al., 2019).
At the first stage, the system of parameters for evaluating the efficiency of the support infrastructure for scientific, research, technical and innovation activity is formed, and the parameters are grouped by aspects. The parameter groups and their threshold values presented in Table 1 are suggested to be used as applicable aspects.
Table 1 Parameters for evaluating the support infrastructure for scientific, research, technical and innovation activity in the region by differentiated aspects
The regulatory documents foreseen by the first group of parameters lay the foundation for regional innovation development and the operation efficiency of the support infrastructure, as they improve the environment for further integration of scientific and production processes. The regulatory legal support in the science and innovation domain shall be focused not only on the execution of special regulatory legal acts but also on their actualization, which is caused by the complex nature of the science and innovation domain (Bondarev & Turina, 2011). The reference value for this parameter group depends on the presence or absence of an up-to-date document: it is equal to one if such a document is available in the region and to zero if it is not.
The group of infrastructure availability parameters is based on the comprehensiveness of the facilities of the support infrastructure for scientific, research, technical and innovation activity in the region. This is explained by the fact that an infrastructure must be a single whole, which is achieved by the integration of all the elements required for the implementation of a complete innovation process. Therefore, the absence of one necessary element in the region will indicate a "gap" in the service range at a given stage of the innovation process, and the absence of full support for the implementation of innovative projects by its objects (Dalekin, 2018; Koroleva & Ermoshina, 2014).
The infrastructure availability of the regional support infrastructure for scientific, research, technical and innovation activity is evaluated with a focus on the logic of the innovation process, as stable regional development requires the innovation initiatives to be supported at all of its stages. For this reason, it is reasonable to classify the infrastructure elements by their belonging to the five stages of the innovation process (Fig. 1) (Ivashchenko & Denisova, 2022).
Fig. 1 Innovation process stages
The specificity of evaluating the availability of the support infrastructure for scientific, research, technical and innovation activity in the region lies in the determination of reference values for each of its elements. Thus, if the objects belonging to a given element are present in the region, that element of the support infrastructure is assigned a reference value equal to one. If any element of the infrastructure is unavailable in the region, the reference value equals zero (Fig. 2).
Fig. 2 Evaluation of availability of the support infrastructure for scientific, research, technical and innovation activity in the region
If all the elements are available in the region, it is concluded that the support infrastructure for scientific, research, technical and innovation activity really is a whole system that offers support to the innovation activity subjects at all levels of the innovation process. In this way, the synergy effect takes place, and such a region is assigned an extra point in the computation of the individual parameters of this subgroup. The effect manifests itself in the growing operation efficiency of the regional support infrastructure for scientific, research, technical and innovation activity through the interaction and integration of each of its elements into a well-balanced system for the achievement of a common goal. It may also signify a dramatic growth and enhancement of the innovation development level in the region (Ivanova, 2019).
Due to the differentiation of the RF constituent entities by many of the factors listed above, the sufficiency and optimality of the available infrastructure facilities in the region shall be evaluated with a relative parameter (Fig. 2), equal to the ratio of the number of infrastructure facilities to the total number of organizations engaged in the research and development domain.
In the creation of innovations, the region's potential parameter group is based on the statistic characteristics of the staffing and human capital of the region, as well as the financial investment into education made by the state. The staffing and human capital is the key factor that promotes the development of the science and innovation potential, as it is people, not machines or investments, who generate the ideas for innovations and scientific discoveries. Of special importance is the establishment of long-term relationship between the government authorities and researchers. The studies show that the territories where the greatest number of innovations are implemented feature a higher innovation staffing potential (Bell et al., 2019; Khuchbarov, 2015; Kremer, 2020; Semenov, 2007).
The group of commercialization and effectiveness parameters of the scientific and innovation activity of the region reveals the parameters of financing the science and innovations, activeness in the innovation and top technology development, as well as in further innovative product manufacture. The parameters are selected due to their usability to evaluate the functioning of the science and innovation spheres in the region.
For the fourth group of parameters, the threshold values are computed by the calculation of mean values for the total number of the regions of the Russian Federation (Loginov, 2015). However, in the present research, the regions with a stably low and continuously deteriorating innovation development level were excluded from the calculation, so as not to understate the threshold values.
Therefore, the threshold values were determined based on the statistical data of the regions with higher rating according to the regional innovation development-focused rating agencies, such as RIA Rating (Official Website of The Rating Agency "RIA Rating"), Association of Innovative Regions of Russia (Official Website of Association of Innovative Regions of Russia), Expert RA (Official Website of The Rating Agency "Expert RA"), as well as statistic studies of knowledge-based economy carried out by Science and Research University Higher School of Economics (Gokhberg et al., 2020). To collect objective results, the data collected in the 5-year period from 2014 to 2018 in 15 innovation-development regions were reviewed.
After the selection of the required parameters and reference values for them, at the second stage of the study, the individual indicators for the evaluation of the support infrastructure for scientific, research, technical and innovation activities shall be computed using formulas (1, 2).
During the individual parameter computation at the second stage of work, the parameters were standardized (Mityakov & Mityakov, 2014; Mityakov, 2018). The standardization function may expand the dynamic range of result visualization. As there are two threshold values applied to the selected parameters, which are "not more than" and "not less than", one of the available options of the function for the "not less than" ratio is the function in formula (1):
Therefore, for the "not more than" type of ratio, the function from formula (2) is applied:
where \({I}^{r}\) is a particular indicator for a group of parameters; n is a number of parameters in a given group; \({x}_{i}^{r}\) is a value of the ith parameter in region r; \({a}_{i}^{r}\) is a threshold value of the ith parameter in region r.
At the third stage, based on the computed particular indicators, the generalized index of the activity of the support infrastructure for scientific, research, technical and innovation activity, is computed using formula (3) as a weighed sum of the standardized particular indicators.
$${\mathrm{IR}}^{r}={\sum }_{i=1}^{n}\frac{{k}^{r}}{p}\times {I}^{r},$$
where \({\mathrm{IR}}^{r}\) is a generalized index for the support infrastructure for scientific, research, technical and innovation activity in region r; \(\frac{{k}^{r}}{p}\) is the weight factor of particular indicators; \({k}^{r}\) is the number of parameters included into each group; p is the total number of parameters for all groups; \({I}^{r}\) is the same as in formula (1).
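For illustration, the second and third computation stages can be sketched in code. Since the bodies of formulas (1) and (2) are not reproduced above, the standardization step below assumes a simple ratio-to-threshold form capped at one, and the particular indicator is taken as the mean of the standardized parameters in a group; these are illustrative assumptions rather than the exact functions used in the study, all input values are hypothetical, and the synergy bonus point for a complete set of infrastructure elements is omitted for brevity. The aggregation step follows formula (3).

```python
# Illustrative sketch only: formulas (1)-(2) are approximated by a capped
# ratio-to-threshold form; formula (3) is the weighted sum over parameter groups.

def standardize(value, threshold, kind):
    """Assumed stand-in for formulas (1) and (2), not the exact functions."""
    ratio = value / threshold if kind == "not_less_than" else threshold / value
    return min(ratio, 1.0)

def group_indicator(params):
    """Particular indicator I^r, taken here as the mean of standardized parameters."""
    return sum(standardize(v, a, k) for v, a, k in params) / len(params)

def generalized_index(groups):
    """Formula (3): IR^r as the sum over groups of (k_r / p) * I^r."""
    p = sum(len(g) for g in groups.values())
    return sum(len(g) / p * group_indicator(g) for g in groups.values())

# Hypothetical region data: (observed value, threshold, threshold type).
groups = {
    "regulatory_legal":            [(1.00, 1.0, "not_less_than")],
    "infrastructure_availability": [(1.00, 1.0, "not_less_than"), (0.91, 1.0, "not_less_than")],
    "regional_potential":          [(0.80, 1.0, "not_less_than")],
    "commercialization":           [(1.30, 1.0, "not_more_than")],
}
print(round(generalized_index(groups), 3))
```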
At the fourth stage, the values of the particular indicators and indices evaluating the performance of the support infrastructure for scientific, research, technical and innovation activity, are compared to the standardized values presented in Table 2.
Table 2 Interpretation of indicators' values by risk grade
Distinctive features of the developed method are the availability and accessibility of all parameters included in the computation system, the consideration of both quantitative and qualitative indicators, such as the regulatory legal support of innovation activity, and the consideration of the synergy effect caused by the comprehensive operation of the support infrastructure.
For the verification of the method for the evaluation of activity of the facilities of the support infrastructure for scientific, research, technical and innovation activity, the regions were selected based on ranking the regions by their overall indices. The first group holds the leader region and the regions whose total index value differs from that of the leading region by not more than 20%. The second group includes the regions inferior to the leader by more than 20% but less than 40%. The interval in the third group is 41–60%, and in the fourth, it exceeds 60%. Then, regions from each group were selected to compute the score of the support infrastructure for scientific, research, technical and innovation activity of the region by the differentiated aspects (Table 3).
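The grouping rule described above can be expressed as a short sketch; the index values shown are hypothetical and serve only to illustrate how regions fall into the four groups relative to the leader.

```python
# Group regions by how far their generalized index lies below the leader's value.
def group_regions(index_by_region):
    leader = max(index_by_region.values())
    groups = {1: [], 2: [], 3: [], 4: []}
    for region, value in index_by_region.items():
        lag = (leader - value) / leader * 100      # percent below the leader
        if lag <= 20:
            groups[1].append(region)
        elif lag <= 40:
            groups[2].append(region)
        elif lag <= 60:
            groups[3].append(region)
        else:
            groups[4].append(region)
    return groups

# Hypothetical index values for illustration only.
print(group_regions({"Moscow": 0.95, "Saint Petersburg": 0.90, "Tatarstan": 0.82,
                     "Krasnoyarsk Territory": 0.55, "Perm Territory": 0.35}))
```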
Table 3 Computation of the parameters for evaluating the support infrastructure for scientific, research, technical and innovation activity in the region by differentiated aspects
The regions included into the first group and selected for the method testing are the city of Moscow, the city of Saint Petersburg and the Republic of Tatarstan. Let us analyze the results of the efficiency evaluation of the support infrastructure for scientific, research, technical and innovation activity in the given regions.
From Table 3, we may conclude, that the generalized index for the support infrastructure for scientific, research, technical and innovation activity of the city of Moscow was in the stable zone, which means that its functions in quite an efficient manner. However, in some groups of parameters, significant differences were discovered. The lowered index of the regulatory and legal parameters group can be explained by the suspension of the Law No. 22 of June 06, 2012 "On scientific, technical and innovation activity in the city of Moscow", and the Decree of the Government of Moscow No. 513-PP of June 26, 2007 "On the development strategy of the city of Moscow for the period until 2025", according to which, one of the tasks set for the achievement of the strategic goal of increasing competitiveness and innovation development in the region was the efficient operation of the support infrastructure for scientific, research, technical and innovation activity. According to the city mayor, the project was developed in collaboration with the research community of the Higher School of Economics in 2011, but the city development was ahead of the document development pace.
At the same time, despite the absence of some regulatory legal acts, representatives of each element of the support infrastructure for scientific, research, technical and innovation activity are present and developing in the city, thereby providing the synergy effect. Apart from that, it is also possible to consider the quantitative distribution of the infrastructure facilities by element type (Table 4).
Table 4 Distribution of the facilities of the support infrastructure for scientific, research, technical and innovation activity by elements
Despite good support of all the innovation cycle stages, the city of Moscow lacks optimality of infrastructure availability; the optimal value is 1. (Official Website of NIAC MIIRIS). Reviewing the computed data, one can notice that the major number of facilities were evaluated by their human resources and production-technological infrastructure. The city of Moscow is a real leader in the number of top educational institutions, including higher education institutions and research organizations.
Let us study the parameters of Saint Petersburg from Table 3. Similar to Moscow, the operation efficiency of the support infrastructure for scientific, research, technical and innovation activity has been stable throughout the decade.
Since 2014, the regulatory legal support of science and innovation domains in the city has included the entire range of documents underlying the innovation activity as a whole and maintaining the support infrastructure for scientific, research, technical and innovation activity. Therefore, it is possible to note that the enactment of the related decrees caused the rise in the generalized index in 2014 compared with the previous year's value.
Reviewing the infrastructure availability (Table 4), it is worth noticing that there is a full range of infrastructural elements in the territory of the city, which proves the availability of the full innovation process cycle.
However, considering the ratio of the number of facilities of the support infrastructure for scientific, research, technical and innovation activity to the number of organizations involved in research and development activities, one may conclude that it is not optimal, being, on the average, 9% below the established reference value (Table 4). For this reason, there is a need to increase the number of facilities of the support infrastructure for scientific, research, technical and innovation activity in order to correct the ratio.
The parameters of the fourth group decreased dramatically in 2015. This was caused by technological innovation expenditures that were excessive relative to the actual production volume, which, according to specialists, may indicate low technology transfer efficiency (Bondarenko et al., 2018; Mityakov, 2018).
Furthermore, let us consider the Republic of Tatarstan; according to Table 3, the efficiency of its facilities of the support infrastructure for scientific, research, technical and innovation activity appears relatively stable, though it is necessary to focus on some of the parameter groups.
The low value of the particular regulatory legal support availability indicator in 2009 was caused, first of all, by a "gap" in the special legislative act of the Republic, which at that time existed only as a draft. At the same time, the Republic runs the State Program "Economic Development and Innovation Economy of the Republic of Tatarstan for 2014–2021", which, among other goals, aims at creating the right environment for the efficient functioning of the innovation economy, including the functioning of the support infrastructure for scientific, research, technical and innovation activity.
The Republic has the entire set of infrastructure elements, which indicates proper infrastructure availability for the complete innovation process cycle (Table 4).
The greatest number of facilities was found in the information element. The Republic of Tatarstan is the leader of the Russian Federation in the number of Technology and Innovation Support Centers, which inventors and research fellows may use for free to search information in the closed Rospatent databases. Moreover, Innopolis, a special economic zone of technical and implementation type, has been founded in Tatarstan. This project is unique in creating a good environment for the comfortable accommodation and work for the young IT specialists in the same area with the necessary social infrastructure, and offering a range of benefits and preferences.
It should be noted that the human resources element of the infrastructure has been in the moderate risk zone throughout the studied period. This tendency is caused, first of all, by the low number of specialists engaged in scientific research and development; however, despite falling short of the reference parameter value, the Republic of Tatarstan is one of the leaders in innovation development, with stable growth in invention activity and a large number of advanced production technologies developed.
The next analyzed region is the Krasnoyarsk Territory. Based on the data in Table 3, the generalized index of the support infrastructure for scientific, research, technical and innovation activity of the Krasnoyarsk Territory underwent noticeable changes, the reasons for which need to be examined.
The regulatory legal support, being the foundation for the development and operation of the support infrastructure for scientific, research, technical and innovation activity, has been developing in a stepwise manner. For instance, in 2009–2010, there were no strategic planning documents at the regional level; neither was there a State Program for this domain in 2009–2013. However, in 2011 and 2014, the situation changed through the enactment of the Law of the Krasnoyarsk Territory No. 13-6629 of December 01, 2011 "On Scientific, Research, Technical and Innovation Activity in the Krasnoyarsk Territory", and the establishment of the State Program of the Krasnoyarsk Territory "Development of Investment Activity, Small and Medium-Sized Businesses" for 2014–2030.
A study of the availability of the proper number of facilities of the support infrastructure for scientific, research, technical and innovation activity in the given element (Table 4) showed that, at present, this branch of support is not developing effectively enough. Since the beginning of the State Program execution, no significant increase in the number of infrastructure facilities has occurred, and the optimality of the infrastructure availability has also been unstable.
Among the problems of the region is also a low share of personnel engaged in scientific research and development. One of the reasons for the negative tendencies described above is insufficient action and a lack of governmental support for scientific, research, technical and innovation activity in the Krasnoyarsk Territory, which is also acknowledged in the program document of the region. For this reason, there is a need to create new support infrastructure facilities and develop the existing ones, as well as to increase the level of integration and interaction between science and business.
Throughout the studied period, the generalized index of the support infrastructure for scientific, research, technical and innovation activity of the Perm Territory has been in the moderate risk zone (Table 3).
Similar to that of the Krasnoyarsk Territory, the regulatory and legal support here has been developing in a stepwise manner. For instance, in 2009 and 2010, there was no necessary strategic planning document. Besides, in 2009–2012, there were no governmental programs in the studied domain either. However, they were enacted in 2011 and 2013. Thus, the subprogram of the State Program "Economic Development and Innovation Economy" defines, as one of the tasks for achieving its goal, the development of a proper support infrastructure for scientific, research, technical and innovation activity to promote the accelerated creation and development of innovative enterprises.
Since 2013, there has been a rise in the number of production and technological element facilities of the support infrastructure for scientific, research, technical and innovation activity in the Perm Territory (Table 4).
Throughout the decade, the fourth-group performance has been in the significant and moderate risk zones. The Perm Territory experiences an obvious deficit of research personnel, including research fellows with academic degrees; for efficient operation of the innovation system, the region needs twice as many as it currently has. This directly indicates the need to develop the human resources of the support infrastructure for scientific, research, technical and innovation activity of the Territory and to increase the number of highly qualified staff. Apart from that, another problem is the lack of budget required for the major measures aimed at promoting the innovation development of the region.
Studying the Volgograd Oblast, one may notice that its generalized index is within the significant and moderate risk zones. From 2009 to 2013, the region had no State Program; one was enacted in 2014 and has been in force ever since. The document highlights the lack of budget and infrastructure for the implementation of innovative projects by the innovation activity subjects. Let us consider the elements of the support infrastructure for scientific, research, technical and innovation activity of the region comprehensively (Table 4).
Among all the regions studied above, the optimality index of the infrastructure availability in the Volgograd Oblast is the lowest, far behind the threshold value. For this reason, it appears necessary to create additional infrastructure facilities to make the right environment for the enhancement of the innovation development in the region; this shortfall also causes parameter groups 3 and 4 in the region to fall into the significant and moderate risk zones.
In terms of the infrastructure development level, the Republic of Sakha (Yakutia) is similar to the region discussed above; however, unlike that of the Volgograd Oblast, the generalized index of the support infrastructure for scientific, research, technical and innovation activity of the Republic stands in the stability zone. Let us analyze the factors that could cause such an outcome.
Analyzing the regulatory legal support, it is worth mentioning the absence of any State Program for the research, technical and innovation development of the Republic of Sakha from 2009 to 2011; such a program was developed in 2012 and, as in other regions, is aimed at promoting innovation and technological development and forming a competitive economy through the development of the regional support infrastructure for scientific, research, technical and innovation activity. The main elements of the support infrastructure for scientific, research, technical and innovation activity of the Republic of Sakha (Yakutia) are presented in Table 4.
Starting from 2014, there has been a rise in the number of facilities in the production and technological element of the support infrastructure, which proves the availability of a set of actions to enhance the development of the region.
Studying the indicators of the innovation creation potential groups, one may notice that all of them were in the moderate risk zone, close to stability. This is explained by the fact that the state education expenditures to GRP ratio in the Republic is sufficient, and there is a high share of researchers with an academic degree. However, one can also notice a low share of personnel engaged in research and development. This may be related to the actual profile of the Republic focused on the mining industry, which constitutes over 50% in the GRP structure. For this reason, there is a low invention activity factor as well as a low share of innovative products, works and services.
The last region to be analyzed is the Bryansk Oblast. Reviewing the generalized scores of the support infrastructure for scientific, research, technical and innovation activity, we find that all of them are in the stability zone. Let us outline the major factors.
The Oblast has the complete necessary regulatory and legal base, which creates the foundation for innovation development and for the operation of the support infrastructure for scientific, research, technical and innovation activity. Therefore, in this regard, the region stands in the stability zone.
It has all types of support infrastructure elements and, therefore, a complete innovation cycle. Based on Table 4, we may conclude that over several years there have been no changes in the overwhelming majority of elements, except for the production and technological one. At the same time, there is a staffing problem caused by the low number of people engaged in research and development; however, the share of innovative products, as well as of advanced production technologies among all technologies currently used by industries, is high. Therefore, it may be assumed that the elements of the support infrastructure for scientific, research, technical and innovation activity, which are represented at a higher ratio per innovative enterprise than in other regions, generate a positive result in the development of the innovative economy.
A high value of the total integral index does not always signify absence of problems in some parameter groups; a region being a leader in the integral index may be inferior to an average region in some parameters. For this reason, comprehensive analysis and effective innovation development practice selection requires considering every parameter group separately.
The test carried out demonstrates the effectiveness of the method for determining bottlenecks in the composition and efficiency of the support infrastructure for scientific, research, technical and innovation activity. The applied method may be used to form a general integral index and to analyse the efficiency of infrastructure functioning with respect to particular parameters. Separating the regulatory and legal support parameters into a distinct group allowed us to identify bottlenecks and gaps in the regional legislation (in almost all analyzed regions) that hold back innovation progress. The synergy effect of the support infrastructure makes it possible to take into consideration such parameters as the closed innovation cycle and the way innovation is supported at different stages of technology transfer in various regions. Remarkably, a large number of support infrastructure facilities does not always mean efficient operation.
All data generated or analysed during this study are included in this published article.
NIAC MIIRIS:
National Information and Analytical Center for Monitoring the Innovative Infrastructure of Scientific and Technical Activities and Regional Innovation Systems
GRP:
Gross Regional Product
RA:
Rating Agency
Adam Smith International [Electronic resource]: Measuring and Maximising Value for Money in Infrastructure Programmes. (2012). Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/194319/measure-maximize-VfM-infrastructure.pdf
Ascani, A., Bettarelli, L., Resmini, L., Balland, P. (2020). Global networks, local specialisation and regional patterns of innovation, Research Policy, 49 (8).
Bell, A., Chetty, R., Jaravel, X., Petkova, N., & Reenen, J. V. (2019). Who Becomes an Inventor in America? The Importance of Exposure to Innovation, the Quarterly Journal of Economics, 134(2), 647–713. https://doi.org/10.1093/qje/qjy028
Bezpalov, V. V., Fedyunin, D. V., Solopova, N. A., Avtonomova, S. A., & Lochan, S. A. (2019). A model for managing the innovation-driven development of a regional industrial complex. Entrepreneurship and Sustainability Issues, 6(4), 1884–1896. https://doi.org/10.9770/jesi.2019.6.4(24)
Bondarenko, V. V., Chakaev, R. R., Leskina, O. N., Tanina, M. A., Yudina, V. A., & Kharitonova, T. V. (2018). The role of regional development institutions in enhancing the innovation potential of the constituent entities of the Russian Federation. Regional Economics: Theory and Practice, 16(1), 83–100. https://doi.org/10.24891/re.16.1.83
Bondarev, S. A., & Turina, VYu. (2011). Regulatory support innovation in the Russian Federation. Vestnik Saratov State Technical University, 2(60), 345–349.
Colombelli, A., Grilli, L., Minola, T., & Mrkajic, B. (2020). To what extent do young innovative companies take advantage of policy support to enact innovation appropriation mechanisms? Research Policy, 49, 1–17.
Crossing the next regional frontier [Electronic resource]. Information and Analytics Linking Regional Competitiveness to Investment in a Knowledge – Based Economy. U. S. Economic Development Administration, 2009. Available at: http://www.statsamerica.org/innovation.
Dalekin, P.I. (2018). Improvement of Standard Legal Support of Innovative Activity in The Russian Federation. Proceedings of Voronezh State University. Series: Law, 3(34), 51–61. Available at: http://www.vestnik.vsu.ru/pdf/pravo/2018/03/2018-03-03.pdf
Filipishyna, L., Bessonova, S., & Venckeviciute, G. (2018). Integral assessment of developmental stability: Cases of Lithuania and Ukraine. Entrepreneurship and Sustainability Issues, 6(1), 87–99. https://doi.org/10.9770/jesi.2018.6.1(7)
Firsova, A. A., Makarova, E. L., & Tugusheva, R. R. (2020). Institutional management elaboration through cognitive modeling of the balanced sustainable development of regional innovation systems. Journal of Open Innovation: Technology, Market, and Complexity, 6(2), 32. https://doi.org/10.3390/joitmc6020032
Gokhberg, L. M., et al. (2020). The Rating of Innovative Development of The Constituent Territories of The Russian Federation: Analytical Report. National Research University Higher School of Economics, 6, 264 p. Available at: https://www.hse.ru/primarydata/rir2019
Ivanova, I., Strand, O., & Leydesdorff, L. (2019). The synergy and cycle values in regional innovation systems: The case of Norway. Foresight and STI Governance, 13(1), 48–61. https://doi.org/10.17323/2500-2597.2019.1.48.61
Ivashchenko, N. P., Denisova, S. A. Innovation Process and Forms of Innovation Commercialisation. Academic Material. [Electronic resource]. Available at: http://www.msu.ru/projects/amv/doc/h6_1_6_1_nom3_2.pdf
Khuchbarov, A. U. (2015). Human Capital as A Factor of Regional Development. Humanities, Social-economic and Social Sciences, 10, 72–75. Available at: https://www.online-science.ru/m/products/economi_sciense/gid3166/pg0/
Kiškis, M., Limba, T., & Gulevičiūtė, G. (2016). Business value of intellectual property in biotech SMEs: Case studies of Lithuanian and Arizona's (US) firms. Entrepreneurship and Sustainability Issues, 4(2), 221–234. https://doi.org/10.9770/jesi.2016.4.2(11)
Koroleva, L. P., & Ermoshina, T. V. (2014). Innovation infrastructure: Composition and place in the innovation system of the economy. Innovations, 12(194), 59–61.
Kremer, M. (2020). Experimentation, Innovation, and Economics. American Economic Review, 110(7), 1974–1994. https://doi.org/10.1257/aer.110.7.1974
Kudriavtseva, S. S. (2012). Comparative Analysis of Innovative Development of The European Union Countries and Russia (Based on the European Innovation Scoreboard Methodology)—Problems of Raw Economies. Bulletin of Kazan Technological University, 15(19), 204–208.
Kumar, I., Nolan, Ch., Cordes, S., Conover, J., Rogers, C., Strange, R., Galloway, H., Morrison, Ed, Drabenstott, M., Waldorf, B. (2009). Crossing the Next Regional Frontier: Information and Analytics Linking Regional Competitiveness to Investment in a Knowledge-Based Economy. Available at: https://www.researchgate.net/publication/277142742_Crossing_the_Next_Regional_Frontier_Information_and_Analytics_Linking_Regional_Competitiveness_to_Investment_in_a_Knowledge-Based_Economy
Laužikas, M., Miliūtė, A., Tranavičius, L., & Kičiatovas, E. (2016). Service innovation commercialization factors in the fast food industry. Entrepreneurship and Sustainability Issues, 4(2), 108–128. https://doi.org/10.9770/jesi.2016.4.2(1)
Loginov K. K. (2015). Analysis of Indicators of Regional Economic Safety. The Russian Automobile and Highway Industry Journal, 2(42), 132–139. Available at: https://vestnik.sibadi.org/jour/article/view/136/134
Mityakov, E. S. (2018). Development of Methodology and Tools for Monitoring the Economic Security of Russian Regions (Thesis for the Degree of the Doctor of Economics), Nizhny Novgorod, 360 p. Available at: https://search.rsl.ru/ru/record/01009944030
Mityakov, E.S., Mityakov, S.N. (2014). Adaptive approach to calculation of the generalized index of economic security. Modern Problems of Science and Education, 2, 415–421. Available at: https://www.elibrary.ru/item.asp?id=21471412
Official Website of Association of Innovative Regions of Russia [Electronic resource]. Available at: https://i-regions.org/
Official Website of The Rating Agency "Expert RA" [Electronic resource]. Available at: https://www.raexpert.ru
Official Website of The Rating Agency "RIA Rating" [Electronic resource]. Available at: https://riarating.ru
Pan'shin, I.V., Kashitsyna T.N. (2009). Improving the methodology for the component assessment of the level of the regional innovation infrastructure development. Regional Economics: Theory and Practice, 30(123), 43–53. Available at: https://www.elibrary.ru/item.asp?id=12854823
Parrilli, M. D., Balavac, M., & Radicic, D. (2020). Business innovation modes and their impact on innovation outputs: Regional variations and the nature of innovation across EU regions. Research Policy. https://doi.org/10.1016/j.respol.2020.104047
Rezk, M. A., Ibrahim, H. H., Radwan, A., et al. (2016). Innovation magnitude of manufacturing industry in Egypt with particular focus on SMEs. Entrepreneurship and Sustainability Issues, 3(4), 307–318. https://doi.org/10.9770/jesi.2016.3.4(1)
Rezk, M. A., Ibrahim, H. H., Tvaronavičienė, M., et al. (2015). Measuring Innovations in Egypt: Case of Industry. Entrepreneurship and Sustainability Issues, 3(1), 47–55. https://doi.org/10.9770/jesi.2015.3.1(4)
Ruiga, I. R., Byvshev, V. I., Panteleeva, I. A. (2019). Evaluation of Efficiency of Regional Innovation Infrastructure: Formation of Methodological Principles and Performance Indicators. Innovative Development of Economy, 2(50), 62–71. Available at: https://ineconomic.ru/en/node/1864
Semenov, E. V. (2007). Human Capital in Science. International Organisations Research Journal, 2(4), 24–39. Available at: https://www.hse.ru/en/mag/nohead/vmo/2007-2-4/26975518.html
Veselovsky, M. Y., Izmailova, M. A., Lobacheva, E. N., Pilipenko, P. P., & Rybina, G. A. (2019). Strategic management of innovation development: Insights into a role of economic policy. Entrepreneurship and Sustainability Issues, 7(2), 1296–1307. https://doi.org/10.9770/jesi.2019.7.2(34)
Website of the National Information and Analytical Center for Monitoring the Innovative Infrastructure of Scientific and Technical Activities and Regional Innovation Systems [Electronic resource]. Available at: http://www.miiris.ru
Zollo, G., Autorino, G., De Crescenzo, E., et al. (2011). A Gap Analysis of Regional Innovation Systems (ris) With Medium-Low Innovative Capabilities: The Case of Campania Region (Italy), 8th ESU Conference on Entrepreneurship, 1–21. Available at: https://idus.us.es/handle/11441/57776
The study was supported by the Russian Science Foundation grant No. 22-78-00011, https://rscf.ru/project/22-78-00011/.
Department of Economic and Financial Security, Siberian Federal University, Krasnoyarsk, Russian Federation
Vladimir Byvshev & Danil Uskov
Department of Organization and Support of Competitions, Krasnoyarsk Regional Fund for Support of Scientific and Technical Activities, Krasnoyarsk, Russian Federation
Vladimir Byvshev, Danil Uskov & Vadim Demin
Organizational and Analytical Department, Krasnoyarsk Regional Fund for Support of Scientific and Technical Activities, Krasnoyarsk, Russian Federation
Kristina Parfenteva
Department of Advertising and Socio-Cultural Activities, Siberian Federal University, Krasnoyarsk, Russian Federation
Irina Panteleeva
Krasnoyarsk Regional Fund for Support of Scientific and Technical Activities, Krasnoyarsk, Russian Federation
Department of Artificial Intelligence Systems, Siberian Federal University, Krasnoyarsk, Russian Federation
Vadim Demin
Vladimir Byvshev
Danil Uskov
Correspondence to Kristina Parfenteva.
The authors declare that they have no competing interests.
Byvshev, V., Parfenteva, K., Panteleeva, I. et al. Methodology for assessing the effectiveness of regional infrastructure facilities to support scientific, technical and innovation activities in the context of the synergy effect: analysis, formation and study. J Innov Entrep 11, 65 (2022). https://doi.org/10.1186/s13731-022-00257-w
Infrastructure for support of scientific, technical and innovation activities
Regional economy
Innovation activity
Full paper | Open | Published: 11 April 2019
A large scale of apparent sudden movements in Japan detected by high-rate GPS after the 2011 Tohoku Mw9.0 earthquake: Physical signals or unidentified artifacts?
Peiliang Xu1,
Yuanming Shu2,
Jingnan Liu3,
Takuya Nishimura1,
Yun Shi4 &
Jeffrey T. Freymueller5
A moment magnitude Mw9.0 earthquake hit northeastern Japan at 14:46:18 (Japan Standard Time), March 11, 2011. We have obtained 1 s precise point positioning solutions for 1198 GEONET stations. Although GPS position time series have been routinely investigated and used as waveforms for dynamic inversion of earthquakes, we focus on exploring the spatial displacement features of GEONET stations for this earthquake. A movie inspection of high-rate GPS waveforms leads us to find that 76.21% of the GEONET stations in the Japanese islands subsided suddenly within 1 s between 14:59:45 and 14:59:46, Japan local time, with an average displacement of \(-\,2.43\), 2.83 and \(-\,4.75\) mm in the east, north and vertical components, respectively, about 15 min after the 2011 Tohoku earthquake. We have performed different types of independent tests, namely measurement error analysis, processing the GEONET data with a different software system, a statistical hypothesis testing under a simple assumption of sign distributions, the test computation of the displacement field outside of the Japanese islands and an independent test with the Japanese strong motion borehole network KiK-net, to see whether these sudden movements actually occurred. The first four independent tests are passed almost without any doubt, and the direction of the average sudden displacements is roughly consistent tectonically with the direction of subduction of the Pacific plate. Because there are only 78 KiK-net borehole stations available for an independent seismic test, the KiK-net results are marginally consistent with those of GEONET. In the daily seismological and geophysical practice, one may then conclude that the sudden movement within the second is real after passing these five independent tests. However, a further epoch-by-epoch check pinpoints a few more seconds with even a higher probability of sudden displacement from the 20-min three-component high-rate GPS waveforms after the main shock, or more precisely, the seconds between 14:59:04 and 14:59:05, 15:01:04 and 15:01:05, and 15:03:39 and 15:03:40 with 80.80, 84.14 and 85.89% of the GEONET stations simultaneously moving upward, southward and westward, respectively. Although these probabilities are very high, it may hardly be imagined that a large scale of sudden movements could occur repeatedly between 14:59:04 and 15:03:40. The high-rate GPS results imply that some detected sudden movements after the earthquake could be unidentified artifacts of GPS data processing, though we cannot rule out the possibility that the detected sudden movements in Japan after the 2011 Tohoku Mw9.0 earthquake are real physical signals.
Geodetic deformation measurements have played an important role in earth sciences and disaster prevention/reduction for years. They are uniquely fundamental to understanding the interseismic, coseismic and post-seismic deformation process of the earthquake cycle and earthquake mechanics (see e.g., Reid 1910; Whitten and Claire 1961; Prescott et al. 1989; Blewitt et al. 1993; Segall 1997; Thatcher et al. 1999). From two geodetic campaigns along the San Andreas fault made in the years 1851–1865 and 1874–1892 before the 1906 California earthquake and one in the years 1906–1907 after the earthquake, Reid (1910) proposed the elastic rebound theory to explain the mechanics of earthquakes, which has since become a landmark theory in seismology and earthquake engineering (Segall 1997; Dieterich 1974). When combining Reid's elastic rebound theory with geodetic measurements, particularly the measurements close to the hypocenter area, one can invert for static rupture models on the fault plane to further understand the physics of earthquakes (Thatcher et al. 1997; Simons et al. 2011; Iinuma et al. 2012). Precise seafloor geodetic displacement measurements have been shown to significantly improve the estimation of the static slip distributions for the 2011 Tohoku Mw9.0 earthquake (Sato et al. 2011; Iinuma et al. 2012). Geodetic deformation measurements are used to compute coseismic strain changes due to earthquakes (Whitten and Claire 1961; Frank 1966; Clarke et al. 1998) and to evaluate strain accumulation over time to interpret the process of strain energy accumulation, which may provide clues for earthquake prediction (Fialko 2006). Space geodesy is also an indispensable component of earthquake early warning systems and real-time tsunami monitoring systems.
Japan is a tectonically very active area surrounded by plate margins, subduction zones, trenches and troughs and island arcs, since the oceanic Pacific and Philippine plates, the continental Eurasian plate and the North American plate all meet within the Japanese islands. As a result, Japan faces high, and sometimes even catastrophic, risks from earthquakes and volcanoes. With strong support from the government and thanks to great effort by Japanese scientists, the Japanese islands are excellently equipped with continuous monitoring infrastructure, with the strong motion monitoring networks "KNET/KiK-net" and the continuous GPS monitoring network "GEONET" as two of the outstanding examples. Thus, Japan may be said to be one of the best natural fields for earthquake experiments in the world.
A moment magnitude Mw9.0 mega-earthquake, called the 2011 Tohoku earthquake, suddenly hit northeastern Japan at 14:46:18 (Japan Standard Time, according to the Japan Meteorological Agency), March 11, 2011 (http://www.jma.go.jp/jma/en/2011/Earthquake/Informationon2011Earthquake.html; Lay and Kanamori 2011). This subduction zone mega-earthquake ruptured an area of about 500 km by 200–300 km (Lay and Kanamori 2011; Simons et al. 2011; Iinuma et al. 2012), with a maximum slip of about 50–60 m, and generated a tragic tsunami along the northeastern coast of Japan, with a maximum run-up height of 40.5 m. The earthquake was well recorded by almost all the earthquake monitoring networks in Japan. Nevertheless, it clipped many seismometers in the northeastern part of Japan. Since seismometers are actually a simplified type of inertial measurement unit that does not measure attitude, seismic data collected at different instants close to the epicenter during a large earthquake are recorded in an instantly different, constantly changing body frame due to rotations and tilts, in addition to saturation, instrumental drifts and clipping, and some data can become unusable (Graizer 2010).
High-rate GNSS has recently been a very active research topic and has found many applications in earth sciences and civil engineering (see e.g., Kato et al. 2000; Larson et al. 2003; Larson 2009; Smalley 2009; Grapenthin and Freymueller 2011; Branzanti et al. 2013; Moschas and Stiros 2015; Xu et al. 2013, 2019). A number of methods have been proposed to ensure high-precision high-rate GNSS waveform and displacement measurements. Important methodological advances include the invention and application of sidereal and spatiotemporal filters (see e.g., Wdowinski et al. 1997; Choi 2004; Dong et al. 2006; Ragheb et al. 2006; Larson et al. 2007), which enable removal of the effects of multipath and common mode GNSS errors and allow precise reconstruction of high-rate GNSS waveforms and displacements. For more information on GNSS seismology, the reader is referred to the review paper by Larson (2009).
Almost all works on high-rate GNSS are focused on precise measurement of temporal waveforms and displacements. In this paper, we will primarily study the spatial variations of coordinates of GNSS stations within very short periods of time, 1 s in our case. We have processed 1 Hz GPS data from 1198 GEONET stations and obtained 1 s precise point positioning (PPP) time series of positions for these stations. A movie of these high-rate PPP waveforms leads us to find that 76.21% of the GEONET stations in the Japanese islands consistently subside suddenly within 1 s between 14:59:45 and 14:59:46.
The major purposes of this paper are twofold: (i) to describe how such a large scale of apparent sudden movements in Japan is identified/detected; and (ii) to make a thorough effort to test whether these apparent phenomena are physical signals or unidentified artifacts by examining any possible effect of GPS errors, checking the IGS stations outside Japan, processing the GEONET data with a different software system, performing statistical hypothesis tests, further carrying out independent tests with GEONET and KiK-net data, and finally checking epoch-wise PPP waveforms. In what follows, we will describe how the apparent phenomenon of sudden movements is identified/detected and how all the independent tests are carried out to test whether the detected sudden movements are physical signals or artifacts.
Sketch of independent tests
To examine whether the detected large-scale sudden movement after the earthquake is a real physical signal or an artifact, we have come up with six independent tests. The first test is to examine GPS measurement error sources and to confirm that they can either have no or negligible effect on the detected sudden movement within 1 s. The second independent test is to check whether the GEONET displacement pattern over the Japanese islands is consistent with similar sudden movements in the countries surrounding Japan by using the same software system. Both the sudden displacement patterns obtained by processing the high-rate GEONET and IGS GPS data in and outside the Japanese islands are basically fully consistent with each other. As the third independent test, we process the GEONET data with a different (well-known) software system. The displacement results of the GEONET stations from both software systems are consistent as well. Thus, the first three tests are passed successfully. As the fourth independent test, we assume that any GEONET station after the earthquake moves up or down with the same probability at any moment, which is equivalent to making the assumption that such a movement is purely attributed to white noise. This independent test is also passed with large statistical confidence, implying that Japan was subject to a large scale of sudden movement within 1 s about 13.5 min after the 2011 Tohoku earthquake with a large probability.
In addition to GEONET GPS data, the fifth thorough independent test has been carried out to examine the results between 14:59:45 and 14:59:46 with the borehole KiK-net strong motion data. Since the KiK-net stations are only triggered to start recordings of data, we have only a total of 78 borehole KiK-net stations available for this independent test. The KiK-net displacement results are marginally consistent with the detected sudden movement by GEONET. But at least, this fifth independent test with the borehole KiK-net strong motion data can be said to pass marginally. Nevertheless, since strong motion seismographs are well known to have serious problems in integrated displacements due to tilting and baseline shifts and since the number of available borehole KiK-net stations is extremely small, we cannot firmly conclude that the seismic KiK-net is definitely in favor of the detected movement by GEONET.
All reasonable independent tests that geoscientists may think of with both GPS and KiK-net data and in terms of statistical hypothesis tests seem to fully support the phenomenon of sudden movement within 1 s after the earthquake as a real physical signal. As a final independent test, we have checked whether the same phenomenon occurs at other epochs. We have written a computer code to screen the epoch-wise high-rate GPS PPP waveforms of all the GEONET stations. As a result, the computer code pinpoints a few more seconds with an even higher probability from the 20-min three-component high-rate GPS waveforms after the main shock, notably, the seconds between 14:59:04 and 14:59:05, 15:01:04 and 15:01:05, and 15:03:39 and 15:03:40 with 80.80, 84.14 and 85.89% of the 1198 GEONET stations simultaneously moving upward, southward and westward, respectively. From the physical point of view, it seems hard to believe that a mega-earthquake could trigger several sudden movements of large scale within 20 min, though we cannot completely exclude such a possibility either. On the other hand, as GPS experts, both theoretically and/or in practical GPS data processing, we suspect that the detected movements are likely artifacts rather than real physical signals, even though the high-rate PPP waveforms show that as many as 85.89% of 1198 GEONET stations move along one direction within 1 s.
GPS data, data processing methods and results
Displacements from the 2011 Tohoku earthquake were measured on almost all GEONET stations. The GEONET GPS data have been used to determine the displacement waveforms and coseismic fields for inverting rupture models of the 2011 Tohoku earthquake (Simons et al. 2011; Iinuma et al. 2012). For this research, we received, in total, the GPS data of 1221 GEONET stations at a sampling rate of 1 Hz covering this mega-earthquake from the Geospatial Information Authority (GSI) of Japan via the Japan Association of Surveyors. Larson et al. (2003) applied a kinematic relative positioning method (Larson 2011, private email communication on April 30, 2011) to process the 1 Hz GPS data for the 2002 Denali fault Mw7.9 earthquake and successfully showed that seismic surface waves can be correctly reconstructed from high-rate GPS measurements. For mega-earthquakes like the 2011 Tohoku earthquake, we cannot expect a fixed reference station in Japan, which can be assumed not to move during the earthquake. As a result, we have processed the 1 s sampling GPS data for 1221 GEONET stations by using the GNSS PPP function of the Wuhan University software package PANDA (Position And Navigation system Data Analysis) (Liu and Ge 2003), since GNSS PPP does not require any reference station; individual stations are positioned relative to the global network used to determine the orbits and clocks. The position time series of seventeen stations are too short to be useful in this work, and six more stations are too noisy with a very large variation, with two stations located in the Kanto area and four in the Tohoku area, respectively. The final number of the GEONET stations used for this report is 1198.
GNSS positioning has almost always been based on two types of GNSS observables, namely pseudoranges and carrier phases. The linearized observational equations of a pseudorange and a carrier phase observable between a satellite and a receiver can be written as follows:
$$\begin{aligned} P_r^s= \rho ({\mathbf{x}}_r, {\mathbf{x}}^s) + I_r^s + T_r^s + c(\delta t_r-\delta t^s) + \delta b_r^p - \delta b_s^p + \epsilon _{mp} + \epsilon _p, \end{aligned}$$
$$\begin{aligned} L_r^s= \rho ({\mathbf{x}}_r, {\mathbf{x}}^s) - I_r^s + T_r^s + c(\delta t_r-\delta t^s) + \lambda N + \delta b_r^l - \delta b_s^l + \epsilon _{ml} + \epsilon _l, \end{aligned}$$
where \(P_r^s\) and \(L_r^s\) are, respectively, the code and phase observables between satellite s and receiver r, \(\rho (\cdot ,\cdot )\) is the geometric distance between this pair of satellite and receiver. \({\mathbf{x}}_r\) and \({\mathbf{x}}^s\) stand for the positions of the receiver and the satellite, respectively. \(I_r^s\) and \(T_r^s\) are the ionospheric and tropospheric delays. \(\delta t_r\), \(\delta t^s\), \(\delta b_r^p\), \(\delta b_s^p\), \(\delta b_r^l\) and \(\delta b_s^l\) are the clock errors and the hardware code and phase biases of the receiver and the satellite, respectively. \(\lambda\) is the wavelength of the carrier wave, and N is the unknown integer of full cycle. \(\epsilon _{mp}\) and \(\epsilon _{ml}\) are the multipath errors of the code and carrier phase observables, respectively, which are frequency-dependent. \(\epsilon _p\) and \(\epsilon _l\) are the measurement noises of the code and carrier phase observables \(P_r^s\) and \(L_r^s\), respectively. The observational Eq. (1) symbolically applies to all frequencies and GNSS systems. We may note that the code biases \(\delta b_r^p\) and \(\delta b_s^p\) of (1a) can be corrected with the IGS products, and as a result, can be merged into the pseudorange \(P_r^s\). The carrier phase observable \(L_r^s\) of (1b) is supposed to include the satellite/receiver antenna PCO/PCV and wind-up corrections.
To attain the highest possible GNSS positioning accuracy, we have to either eliminate or correct all the systematic error terms in the observational Eq. (1). The former has been achieved in geodesy since the inception of GPS by applying the double difference operator to both code and phase measurements, if a baseline is sufficiently short. The latter has led to the innovative development of PPP by Zumberge et al. (1997). Since there is no fixed/reference station during a mega-earthquake like this 2011 Tohoku Mw9.0 earthquake, GNSS PPP processing is preferred over precise GNSS relative positioning for this research.
For conciseness of notation, in what follows, we will ignore superscripts/subscripts s and r for satellites and receivers. As in the case of other most widely used software systems such as GAMIT and GIPSY-OASIS, PANDA PPP uses ionospheric-free combinations of dual frequency P-code and carrier phase observables:
$$\begin{aligned} P_3= \frac{f_1^2}{f_1^2-f_2^2}P_1-\frac{f_2^2}{f_1^2-f_2^2}P_2, \end{aligned}$$
$$\begin{aligned} L_3= \frac{f_1^2}{f_1^2-f_2^2}L_1-\frac{f_2^2}{f_1^2-f_2^2}L_2, \end{aligned}$$
(see e.g., King et al. 1985; Brunner and Gu 1991; Bassiri and Hajj 1993), where \(f_i (i=1,2)\) are the L1 and L2 frequencies, and \(L_i (i=1,2)\) and \(P_i (i=1,2)\) are the raw carrier phase and code observations in the L1 and L2 frequencies, respectively. Since the GEONET stations are all equipped with choke rings to eliminate or mitigate multipath errors, both \(\epsilon _{mp}\) and \(\epsilon _{ml}\) can be omitted from (1). On the other hand, receiver and satellite phase biases cannot be separated from the ambiguity unknowns. As a result, GNSS PPP data processing is based on linearizing the ionospheric-free P-code and carrier phase observables (2):
$$\begin{aligned} \delta P_3^{kt}= \varvec{a}_{kt}\Delta \varvec{x}_t+\Delta e_t+\Delta T_k + \epsilon _{p3}^{kt}, \end{aligned}$$
$$\begin{aligned} \delta L_3^{kt}= \varvec{a}_{kt}\Delta \varvec{x}_t+\Delta e_t+\Delta T_k + \lambda _{if} N_{if}^k + \epsilon _{l3}^{kt}, \end{aligned}$$
to estimate four types of unknown parameters from ionospheric-free pseudoranges and carrier phase observables, namely the unknown coordinates of a station, the receiver clock error, the residual tropospheric delays and the non-integer ambiguities, where \(\delta P_3^{kt}\) and \(\delta L_3^{kt}\) are, respectively, the ionosphere-free code and phase observables after subtracting the corresponding approximate range between the kth satellite and the receiver. All model corrections have been fully taken into account in \(\delta P_3^{kt}\) and \(\delta L_3^{kt}\), except for their residual errors. The superscript and subscript k stands for the kth satellite, \(\varvec{a}_{kt}\) is the given row vector of the linearization coefficients, \(\Delta \varvec{x}_t\) is the vector of the coordinate corrections of the receiver at the epoch t in the local reference frame with the axes pointing to east, north and upward, \(\Delta e_t\) is the receiver clock range correction to be estimated at each epoch, \(\Delta T_k\) is the correction to tropospheric path delay to be estimated, \(\lambda _{if}\) is the wavelength of the ionosphere-free frequency, \(N_{if}^k\) is a non-integer ionosphere-free ambiguity unknown, and \(\epsilon _{p3}^{kt}\) and \(\epsilon _{l3}^{kt}\) represent the (systematic and random) residual errors of the ionosphere-free observables. In this study, a priori values of standard deviations for ionosphere-free phase and code observables are set to 3 mm and 30 cm, respectively. For more methodological details about PPP, the reader may refer to Kouba (2009) (see also Kouba and Héroux 2001).
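To make the ionosphere-free combination in Eq. (2) concrete, the following minimal sketch (in Python, with hypothetical dual-frequency observation values; only the GPS L1/L2 frequencies are fixed constants) forms the combination from a pair of raw observables expressed in metres.

```python
import numpy as np

# GPS carrier frequencies in Hz (fixed constants; everything else below is
# hypothetical placeholder data)
F1 = 1575.42e6   # L1
F2 = 1227.60e6   # L2

def ionosphere_free(obs1, obs2, f1=F1, f2=F2):
    """Ionosphere-free combination of two observables on f1 and f2 (Eq. 2).
    Works for code (P1, P2) and, with phases expressed in metres, for
    carrier phase (L1, L2) alike; the first-order ionospheric delay cancels."""
    a = f1**2 / (f1**2 - f2**2)
    b = f2**2 / (f1**2 - f2**2)
    return a * obs1 - b * obs2

# Hypothetical dual-frequency pseudoranges (metres) to two satellites
P1 = np.array([21612345.678, 23765432.101])
P2 = np.array([21612345.912, 23765432.415])
print(ionosphere_free(P1, P2))
```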
When PANDA is used to process GPS data in PPP mode, it generally takes about 20–30 min for the floating ambiguities to converge. To be careful and to ensure the correctness of our processing results, we have continuously processed almost 4 h of static GPS data before starting to compute the converged PPP solutions or waveforms for the 2011 Tohoku Mw9.0 earthquake. Although the code observables are important to help speed up the convergence of ambiguity resolution, after the convergence of the floating ambiguities, the precision of PPP coordinate solutions is basically solely determined by carrier phase observables. PANDA further implements the second-order ionosphere corrections. Since the tropospheric parameters are generally estimated for a session of 2 h, and since the converged floating ambiguities can be treated as part of phase observables, we can then use the simplified observation Eq. (3b), namely
$$\begin{aligned} \delta y_3^{kt}=\varvec{a}_{kt}\Delta \varvec{x}_t+\Delta e_t+\epsilon _{l3}^{kt}, \end{aligned}$$
to compute the precise GPS PPP solutions of coordinates within each session, where \(\delta y_3^{kt}= \delta L_3^{kt}-\Delta T_k - \lambda _{if} N_{if}^k\). We should like to note that in order to avoid any potential effect of discontinuity of the estimated tropospheric parameters at the end points of the 2 h session, we have carefully set the 2 h session of tropospheric corrections to roughly cover the earthquake at its central part of the session. Thus, tropospheric corrections are smoothly continuous without any dislocation/discontinuity. Although \(\epsilon _{l3}^{kt}\) can affect precise PPP positioning up to a few centimeters over a time period of a few tens of minutes to years, experiments have clearly shown that precise PPP positioning can reach mm level of accuracy for changes in displacements over a short period of time (Xu et al. 2013; Shu et al. 2017).
The weighting function used in PANDA PPP is elevation dependent and given by
$$\begin{aligned} w(A_{\mathrm{e}}) = \left\{ \begin{array}{ll} 1, & \hbox{if}\, A_{\mathrm{e}} > 30^{\circ} \\ 2\sin (A_{\mathrm{e}}), & \hbox{if}\, A_{\mathrm{e}} \le 30^{\circ} \end{array} \right. \end{aligned}$$
where \(A_{\mathrm{e}}\) is the elevation angle of a GPS satellite. In post-processing mode, we use the International GNSS Service precise orbits and 5-s precise clock products from the Center for Orbit Determination in Europe (CODE) to compute the position waveforms of the GEONET stations. PANDA follows the IERS conventions 2003 to correct tidal effects and antenna phase center variations and uses the Saastamoinen model with the global mapping function of Boehm et al. (2006) to correct tropospheric delays.
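A direct transcription of the elevation-dependent weighting function in Eq. (5) could look as follows; this is only an illustrative sketch, not PANDA code, and everything apart from the 30° threshold and the \(2\sin (A_{\mathrm{e}})\) branch is an assumption.

```python
import numpy as np

def elevation_weight(elev_deg):
    """Elevation-dependent observation weight following Eq. (5):
    w = 1 above 30 degrees and w = 2*sin(elevation) at or below 30 degrees."""
    elev_deg = np.asarray(elev_deg, dtype=float)
    low = 2.0 * np.sin(np.radians(elev_deg))
    return np.where(elev_deg > 30.0, 1.0, low)

print(elevation_weight([10, 25, 45, 80]))  # approx. [0.35, 0.85, 1.0, 1.0]
```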
Based on the 1 Hz PPP time series of positions obtained for these 1198 GEONET stations, we first inspected the PPP waveforms and identified a sudden displacement pattern over the whole GEONET network at a specific epoch soon after the 2011 Tohoku earthquake, namely the second between 14:59:45 and 14:59:46. We wrote a computer code to check whether this phenomenon occurs at other epochs. The computer code found a few more seconds at 14:59:04, 15:01:04 and 15:03:39 with even a higher probability from the 20-min three-component high-rate GPS waveforms after the main shock, showing that 80.80, 84.14 and 85.89% of the GEONET stations simultaneously move along the same directions, respectively. In this paper, we will carry out a detailed analysis of the first inspected epoch to demonstrate how an artifact may be reported as a real physical signal, even though we cannot be certain that the apparent sudden movements detected by high-rate GPS PPP are indeed not physical.
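The epoch-by-epoch screening described above can, in essence, be reproduced with a few lines of code. The sketch below is an assumption on our part (it is not the code actually used); it takes a hypothetical array of PPP position time series and reports, for every 1-s interval and component, the fraction of stations sharing the dominant displacement sign.

```python
import numpy as np

def screen_common_motion(positions):
    """For every pair of consecutive 1-s epochs, return the fraction of
    stations whose displacement shares the dominant sign, per component.

    positions: array of shape (n_epochs, n_stations, 3) holding east, north
    and up PPP coordinate time series in mm.
    Returns an array of shape (n_epochs - 1, 3) with values in [0, 1]."""
    disp = np.diff(positions, axis=0)            # epoch-to-epoch displacements
    n_stations = positions.shape[1]
    frac_pos = (disp > 0).sum(axis=1) / n_stations
    frac_neg = (disp < 0).sum(axis=1) / n_stations
    return np.maximum(frac_pos, frac_neg)

# Hypothetical example: 1200 epochs, 1198 stations, 3 components of noise
rng = np.random.default_rng(0)
pos = np.cumsum(rng.normal(scale=2.0, size=(1200, 1198, 3)), axis=0)
fractions = screen_common_motion(pos)
# Flag epochs where more than 76% of stations share one sign; for pure
# noise, as here, this list stays empty.
print(np.argwhere(fractions > 0.76))
```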
Figure 1 shows the displacements between these two epochs for the GEONET network in the east, north and vertical directions. For a better visual effect, we only show the stations that subside in Fig. 1a. The corresponding displacement vectors are also shown in Fig. 2. Among the 1198 GEONET stations, 913 stations, i.e., 76.21% of the stations, have negative vertical displacements. Among these stations, the maximum and mean values of sudden vertical movements are \(-39.08\) mm and \(-4.75\) mm, respectively. Listed in the appendix of electronic materials are the three-component displacements between the epochs of 14:59:45 and 14:59:46 for all the GEONET stations (Additional file 1: Table S1).
Three components of sudden displacements (mm) within 1 s between 14:59:45 and 14:59:46 over the GEONET network, with the red star showing the epicenter of the 2011 Tohoku earthquake: panel a—the vertical displacements of the GEONET stations that move downward; panels b and c—the displacements of the GEONET stations that move in the east and north directions
Sudden displacement vectors (unit length: 1 cm) within 1 s between 14:59:45 and 14:59:46 from all the GEONET stations. The red star stands for the epicenter of the 2011 Tohoku earthquake. Panel a shows the sudden displacement vectors of east (horizontal axis) and up (vertical axis) components. The horizontal and vertical axes indicate that the stations basically move westward and downward, respectively. Similarly, panels b and c show the sudden displacement vectors of north (horizontal axis) and up (vertical axis) components, and east (horizontal axis) and north (vertical axis) components of the GEONET stations, respectively. The vertical axis of the north components in panel c indicates that the stations basically move northward
We have also used 30 min of the position time series before the earthquake to evaluate the standard deviations of displacements between two consecutive epochs. To avoid the effect of any spikes in coordinates in the time series, we use the sign-constrained robust least squares method (Xu 2005) to estimate the standard deviations of the position series. The robust trimmed mean estimates of the standard deviations for the coordinate differences between two consecutive epochs from the 1198 stations are 2.02 mm, 2.17 mm and 5.00 mm for the east, north and vertical components, respectively, which are consistent with the error evaluation of high-rate GPS PPP solutions (Xu et al. 2013) and further confirmed by a recent theoretical high-rate PPP error analysis (Shu et al. 2017).
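The sign-constrained robust least squares estimator of Xu (2005) is not reproduced here. As a much simpler, spike-resistant stand-in (our assumption, not the authors' method), a robust scale of the epoch-to-epoch differences can be obtained from the median absolute deviation:

```python
import numpy as np

def robust_epoch_sigma(coord_series_mm):
    """Robust standard deviation of differences between consecutive epochs,
    using the median absolute deviation scaled by 1.4826 so that it is
    consistent with the standard deviation for normally distributed noise."""
    d = np.diff(coord_series_mm)
    mad = np.median(np.abs(d - np.median(d)))
    return 1.4826 * mad

# Hypothetical 30-min, 1 Hz east-component series (mm) with two large spikes
rng = np.random.default_rng(1)
east = np.cumsum(rng.normal(scale=2.0, size=1800))
east[[300, 900]] += 50.0                 # artificial outliers
print(robust_epoch_sigma(east))          # stays close to 2 mm despite spikes
```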
Table 1 shows that among the GEONET stations, 66.44% move toward the west direction between 14:59:45 and 14:59:46 (compare panel b of Fig. 1). If we focus on the northeastern area, the percentage basically remains unchanged, with the mean displacement of \(-2.43\) mm. Since the robust mean standard deviation between two consecutive epochs in the east component of the positioning solutions is about 2.02 mm, the mean displacement is only slightly beyond one standard deviation. However, by treating the signs of displacements for these 1198 stations as Bernoulli statistical experiments, the pattern for 796 out of 1198 stations to move westward within 1 s is clearly significant statistically, as will be confirmed in the next section. In the case of the north component of the displacements, about 66.28% of the stations suddenly move northward (compare panel c of Fig. 1), with the mean displacement of 2.83 mm, which is also only slightly beyond one standard deviation (2.17 mm). Nevertheless, as in the cases of the vertical and eastern components, the pattern of sudden movement toward the north direction is statistically significant. This movement occurred before any of the major M7.0 aftershocks during the 2011 mega-earthquake, because the first aftershock with a magnitude larger than 7.0 occurred at 15:08, about 20 min after the main shock (Huang and Zhao 2013). Although an aftershock of magnitude M6.4 (Huang and Zhao 2013) and likely one more aftershock with a magnitude larger than M6 according to the NIED Web site occurred up to 15:06, an earthquake of magnitude M6 can, at most, produce some local effect, but should not be expected to affect the sudden movement over all the Japanese islands.
Table 1 Statistics of the three components of sudden displacements from the 1198 GEONET stations between 14:59:45 and 14:59:46 after the 2011 Tohoku earthquake
First independent tests: measurement errors, IGS stations outside Japan and results from a different software system
Although sidereal and spatiotemporal filters are essential for precise measurement of high-rate GNSS waveforms and displacements, they have no role to play in the case of the variations of coordinates of GNSS stations over such a short period of time. Any common mode errors (CME) and multipath can be reasonably assumed to remain almost constant within 1 s. As a result, their effects within this second remain almost constant as well and are canceled out if we compute the difference of these almost constant effects between two consecutive epochs.
The detected apparent sudden movements cannot be explained by GPS CME over a large area, which are generally of seasonal and secular nature and can be removed by spatiotemporal filtering of GPS daily positions of months to years, with CME sources of environmental and observation technology origins such as unmodeled or mismodeled errors of satellite orbits, reference frame, large-scale atmosphere residual errors, receiver and satellite antenna phase center mismodeling (Wdowinski et al. 1997; Dong et al. 2006). In the timescale of seconds, the effect of all these CME contributions will remain constant and, as a result, will not affect our results of movement over 1 s. Wdowinski et al. (1997) attributed the CME in the vertical component to satellite orbital and reference frame errors. Since both IGS precise satellite orbits and reference frame errors are smooth over hours, at the very least, they should have no effect on the detected vertical sudden movement within 1 s.
To further show that the sudden movements within 1 s reported in Table 1 are unlikely to originate from any GNSS error sources known to us, we have estimated the effect of each major error source of \(\epsilon _{l3}^{kt}\) on the PPP-derived displacement within 1 s, under fairly general conditions and/or with model data such as PCO/PCV directly from GEONET. Since no GEONET stations during the 2011 Tohoku Mw9.0 earthquake are used to produce the 5-s CODE precise satellite clock products, we do not expect any potential bias in the precise clock products that might be related to the detected sudden movements. Thus, we only need to investigate the effect of interpolating the precise satellite clock errors. In this case, we assume an instant uncertainty of clocks equivalent to 2 cm in ranging and follow Bock et al. (2009) by assuming an accuracy degradation smaller than 2% from linearly interpolating the 5-s CODE precise clock corrections to compute its effect on PPP within 1 s. The final effects of these major error sources within 1 s are summarized in Table 2. Although the second-order ionospheric corrections (Bassiri and Hajj 1993; Brunner and Gu 1991) have been fully implemented in PANDA, we have still listed the maximum second-order effects of ionosphere on the PPP displacements in Table 2. The second-order effects of ionosphere on the sudden movement within the second of concern for all these 1198 GEONET stations are shown in Fig. 3, with a maximum effect of about 0.008 mm in the vertical direction.
The second-order effects (units: mm) of ionosphere on the sudden movements of all the GEONET stations within 1 s between 14:59:45 and 14:59:46, corresponding to the vertical, east and north directions, respectively. The red star stands for the epicenter of the 2011 Tohoku earthquake
Table 2 clearly shows that the maximum effects on PPP displacements within 1 s are from precise satellite clock errors. All the other residual systematic errors have a much smaller effect on PPP displacements from 1 s to the next. The typical clock errors amount to only about 0.1 mm, or equivalently, about one part in twenty to fifty of the standard deviations between two consecutive epochs. The maximum effect of the instant precise satellite clock uncertainty on the movement within 1 s is in the vertical direction but is only 0.014 mm. The only potential additional sources of bias that we can think of would relate to a sudden deformation of one or more global network station, such as an unmodeled coseismic displacement. However, such an effect is very unlikely in this case because the global solution did not use stations in Japan. Furthermore, we interpolate the clock solutions that are provided at 5-s intervals, so any such error would not have an abrupt nature in our solutions.
Table 2 Effects of residual systematic errors on PPP displacements within 1 s (units: \(1.0\hbox{E}{-}3\) mm)
We have computed PPP solutions for several IGS and other stations to the west of Japan using PANDA, which are shown in Fig. 4. The displacements within 1 s at these stations are listed in Table 3. Although the amplitudes of seismic waveforms at station DAEJ and those along the Chinese border are between 15 and 25 cm and those at the far-field stations like PIMO, YAKT and ULAB are around 10 cm, the movements within the second of our interest range from \(-6.57\) to 0.86 mm in the east component, \(-0.36\) to 5.35 mm in the north component, and \(-3.13\) to 6.63 mm in the up component, respectively. In general, all these stations show a trend to move westward, northward and downward, which is more or less consistent with the trend detected with the GEONET stations on the Japanese islands. Since the GEONET stations are not involved in producing the CODE clock estimates, we conclude that a bias in the GPS solutions is not a likely explanation for the abrupt displacement.
GNSS stations to the west of Japan
Table 3 PPP displacements (units: mm) within 1 s for the GNSS stations to the west of Japan
Additionally, to check and confirm our PPP positioning solutions of the GEONET stations, we have also processed the GEONET 1 s sampling GPS data in the PPP mode by using a different GNSS software system, more precisely, GIPSY-OASIS. Both software systems PANDA and GIPSY-OASIS are methodologically similar and start with the same observational model of ionosphere-free phase and code observables. The total number of stations that both GNSS software systems PANDA and GIPSY-OASIS have successfully processed is 1104. By comparing the PPP results of PANDA with those of GIPSY-OASIS, we have found that, on average, 88.38% of these 1104 GEONET stations show the same pattern of displacements. Although 11.62% of the stations show a different direction of displacement, all of these stations have a small displacement, with an absolute mean value of 0.66 mm, 0.90 mm and 1.64 mm for the east, north and vertical components, respectively. In other words, the sudden movements at these stations are well within random errors, and at all other stations, the PANDA and GIPSY solutions are consistent.
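The agreement between the two solution sets can be quantified by comparing, station by station, the signs of the 1-s displacements. The sketch below uses hypothetical placeholder arrays for the PANDA and GIPSY-OASIS displacement estimates.

```python
import numpy as np

def sign_agreement(disp_a, disp_b):
    """Fraction of stations whose 1-s displacement signs agree between two
    solutions, per component. disp_a, disp_b: arrays of shape
    (n_stations, 3) with east, north and up displacements in mm."""
    return (np.sign(disp_a) == np.sign(disp_b)).mean(axis=0)

# Hypothetical placeholders for the PANDA and GIPSY-OASIS estimates
rng = np.random.default_rng(2)
panda = rng.normal(scale=3.0, size=(1104, 3))
gipsy = panda + rng.normal(scale=1.0, size=(1104, 3))
print(sign_agreement(panda, gipsy))      # roughly 0.9 per component here
```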
The pattern of sudden subsidence is, with a high probability, not random
Although other research groups also processed the GEONET GPS data in the PPP mode, they either obtained solutions over a longer interval of time (Simons et al. 2011) or selected a few stations to test the improvement of new PPP methods (Li et al. 2013). Simons et al. (2011) used JPL's GIPSY-OASIS software to estimate kinematic PPP solutions of the GEONET stations at the interval of 5 min, determined the coseismic displacements and estimated a slip distribution model for the 2011 Tohoku earthquake. The 30 s PPP solutions were also made publicly available by the ARIA team at JPL and Caltech (ftp://sideshow.jpl.nasa.gov/pub/JPL_GPS_Timeseries/japan/30sec_sol/) and used to demonstrate the wave motion of the 2011 Tohoku earthquake (Grapenthin and Freymueller 2011). The major purpose of Li et al. (2013) was to test new PPP methods for use in earthquake/tsunami monitoring.
Although the mean value of the vertical movements is within one vertical standard deviation and might therefore be regarded as statistically insignificant for each individual station, the pattern of such movements over the whole GEONET network is clearly not random. Actually, instead of using the rule of thumb of 2 or 3 times sigma to judge statistically whether a value is significant in terms of its magnitude, it is more appropriate to use a statistical sign test to see whether the measurements exhibit a certain pattern or trend. If we assume a simple stochastic model in which the vertical displacement between 14:59:45 and 14:59:46 at each station is random with mean zero, then the signs "downward" and "upward" of a random vertical displacement can be interpreted stochastically as the events of "success" and "failure" of a random variable and thus described by a Bernoulli distribution (Mood et al. 1974), namely
$$\begin{aligned} P(S) = \left\{ \begin{array}{ll} p, & \hbox{if the station moves downward} \\ 1-p, & \hbox{if the station moves upward} \end{array} \right. \end{aligned}$$
where p stands for the probability that the station moves downward. If the vertical movement of a station is only contaminated by random errors with mean zero, then the probability of moving downward is equal to 0.5, i.e., \(p=0.5\), implying that the station moves equally likely, either upward or downward. If the same experiments are stochastically repeated independently n times, then the corresponding distribution is binomial, namely
$$\begin{aligned} P(n_s=k) = C_n^k p^k (1-p)^{n-k}, \end{aligned}$$
where \(n_s\) stands for the number of stations that move downward, and \(C_n^k\) for the binomial coefficient, that is, the number of possible ways to have k stations moving downward among n stations. If n is sufficiently large, say larger than 1000, then the binomial distribution can be approximated by a normal distribution with mean np and variance \(np(1-p)\) (Mood et al. 1974).
To explain the GPS-observed vertical movements statistically, we now assume that we repeat the same experiment 1198 times, calculate the probability that k experiments produce the same negative sign (i.e., move downward) and plot it in Fig. 5. The probability that 913 out of 1198 stations move downward simultaneously within 1 s by chance is vanishingly small. If we focus on the northeastern area of Japan, the percentage of such stations slightly increases from 76.21 to 76.88%. Thus, we may conclude that the pattern of the GPS-observed sudden vertical movements over (the GEONET of) Japan after the 2011 Tohoku Mw9.0 earthquake is, with high probability, not random.
Probability for k vertical displacements to have the negative sign over the whole GEONET network
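The probability shown in Fig. 5 can be reproduced with a short computation. The sketch below uses hypothetical variable names; only the counts n = 1198 and k = 913 and the null probability p = 0.5 are taken from the text. It evaluates both the exact binomial tail and the normal approximation with mean np and variance np(1 - p).

import numpy as np
from scipy.stats import binom, norm

n, k, p = 1198, 913, 0.5                   # stations, downward signs, null probability

# exact probability of observing at least k downward signs by chance
p_exact = binom.sf(k - 1, n, p)

# normal approximation with mean np and variance np(1-p), as in the text
mu, sigma = n * p, np.sqrt(n * p * (1 - p))
p_normal = norm.sf((k - 0.5 - mu) / sigma)  # with continuity correction

# both values are vanishingly small, far below any conventional significance level
print(p_exact, p_normal)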
An independent test of the sudden motion patterns with the strong motion data from KiK-net
As important infrastructure for observing large earthquakes, two major strong-motion networks were constructed after the 1995 Hyogoken-Nanbu (Kobe) earthquake, namely K-NET, with about 1000 stations installed almost uniformly on the surface of the Japanese islands at an average spacing of about 20 km, and KiK-net, with about 700 pairs of stations. The KiK-net stations are more sensitive and are also installed uniformly in pairs, with one seismograph on the ground surface and the other in a borehole of more than 100 m in depth. The K-NET and KiK-net stations are triggered to start recording when ground accelerations exceed a pre-defined threshold. For more technical details about these two strong-motion observation networks in Japan, the reader is referred to Kinoshita (1998), Aoi et al. (2004) and/or directly to the Web site of the National Research Institute for Earth Science and Disaster Prevention (NIED) at http://www.kyoshin.bosai.go.jp/kyoshin/.
Wang et al. (2013) performed a substantially detailed analysis and comparison of the K-NET and KiK-net strong motion and GEONET GPS coseismic displacements during the 2011 Tohoku Mw9.0 earthquake and concluded that the K-NET and KiK-net surface strong motion data are not useful for the following reasons: (i) the coseismic displacements from the K-NET and surface KiK-net data show a large variability, even for stations close to each other; (ii) these large deviations of the K-NET/KiK-net-derived coseismic displacements from the corresponding GEONET values are likely attributable to large transient tilting of the accelerometers and nonlinear baseline shifts; and (iii) there are outliers in the strong motion-derived coseismic displacements. When comparing the coseismic displacements from the borehole KiK-net seismograms with those from GEONET, they reported a root-mean-square error (rmse) of 0.47 m. After applying an outlier detection and removal strategy to the borehole KiK-net coseismic displacements, the rmse is still as large as about 0.25 m. Since GPS PPP has repeatedly been confirmed over the past 20 years or so to be of cm-level accuracy in the long term and even mm-level accuracy in the short term, the coseismic displacements from the borehole KiK-net, even after performing all the necessary corrections such as tilting and baseline corrections, are in no way comparable with those from GEONET in terms of both accuracy and reliability, even though seismometers are well known to be more sensitive than GPS.
To further test the reliability and accuracy of seismic waveforms obtained by integrating the borehole KiK-net accelerations, we have selected the five borehole KiK-net stations that are closest to GEONET stations, within horizontal distances of 5.7–24.4 m, as shown in Fig. 6. Directly integrating the raw seismograms leads to severely distorted and divergent displacement waveforms for these five stations. As a result, we apply three different threshold values of 0.005, 0.01 and 0.05 Hz to high-pass filter the borehole KiK-net accelerations and integrate them to obtain the seismic waveforms of these stations for the main shock and some immediate aftershocks, which are plotted in Fig. 7, together with the PPP waveforms of the corresponding nearby GEONET stations. We can observe from Fig. 7: (i) the seismic waveforms strongly depend on the threshold frequencies; (ii) the seismic waveforms require baseline corrections to be useful, even though the criteria for baseline corrections are more or less arbitrary; and (iii) the seismic waveforms can be significantly different from the GEONET PPP waveforms, as can be readily inferred by comparing the waveforms from both GPS and seismograms in Fig. 7, either for the same earthquake (right panel) or for different earthquakes (left panel). Actually, without first knowing the independent GPS PPP waveforms, we simply could not assess the reliability and accuracy of the seismic waveforms at these five borehole KiK-net stations. From this point of view, precise high-rate space geodesy should play an even more important role in seismology and disaster mitigation/prevention in the future.
The five almost collocated KiK-net and GEONET stations, with a horizontal distance of 5.7 m (AICH21, 0998), 7.4 m (GIFH23, 0991), 10.1 m (TKCH08, 0793), 11.4 m (ABSH04, 0504) and 24.4 m (FKOH08, 0452), respectively
The seismic waveforms of the KiK-net stations (AICH21, GIFH23, FKOH08, TKCH08 and ABSH04) from high-pass filtering and integrating the borehole KiK-net seismograms with different threshold frequencies (0.005, 0.01 and 0.05 Hz), together with the GPS PPP waveforms of the corresponding closest GEONET stations. The waveforms of the main shock at 14:46 and three immediate aftershocks at 14:51, 15:15 and 15:26 at the KiK-net station AICH21 (almost collocated with the GEONET station 0998) are shown on the left panel. The right panel shows the waveforms of the main shock at the KiK-net stations GIFH23, FKOH08, TKCH08 and ABSH04, almost collocated with the GEONET stations 0991, 0452, 0793 and 0504, respectively
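The filtering-and-integration procedure used to obtain the seismic waveforms in Fig. 7 can be sketched as follows. The filter type and order are our own assumptions (the text specifies only the corner frequencies), the function and variable names are hypothetical, and a synthetic accelerogram stands in for a real KiK-net record.

import numpy as np
from scipy.signal import butter, filtfilt
from scipy.integrate import cumulative_trapezoid

def acceleration_to_displacement(acc, fs, f_cut):
    """High-pass filter an accelerogram and integrate twice to displacement.

    acc   : acceleration samples (m/s^2)
    fs    : sampling rate (Hz)
    f_cut : high-pass corner frequency (Hz), e.g. 0.005, 0.01 or 0.05
    """
    b, a = butter(4, f_cut, btype="highpass", fs=fs)   # assumed 4th-order Butterworth
    acc_f = filtfilt(b, a, acc - np.mean(acc))          # zero-phase filtering
    vel = cumulative_trapezoid(acc_f, dx=1.0 / fs, initial=0.0)
    return cumulative_trapezoid(vel, dx=1.0 / fs, initial=0.0)

# synthetic demonstration: a 0.2 Hz sinusoidal ground acceleration sampled at 100 Hz
fs = 100.0
t = np.arange(0.0, 300.0, 1.0 / fs)
acc = 0.5 * np.sin(2.0 * np.pi * 0.2 * t)
disp = acceleration_to_displacement(acc, fs, 0.05)
print(disp[-1])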
Even with the inherent problems of seismometers in mind, we attempted an independent test of the GEONET results of sudden movements with the borehole KiK-net seismograms. According to the web site of NIED, four earthquakes triggered the recordings of KiK-net stations before 15:00, namely earthquakes M9.0 at 14:46, M6.8 at 14:51, M5.8 at 14:54 and M6.4 at 14:58. The aftershocks with a magnitude larger than M6.0 are not consistent with the relocation results of Huang and Zhao (2013), which should not be of much concern here, because an earthquake of magnitude M6 is not expected to affect the whole Japanese islands and beyond. Although the main shock triggered the recordings at 525 KiK-net stations, because the KiK-net seismograms last only 5 min, we could finally identify a total of 78 borehole KiK-net stations available for this independent test. Since we have no practically objective and operational way to judge which high-pass filter is best suited to process the seismograms of a particular earthquake, we performed the independent test with the displacement waveforms high-pass filtered at a (more or less arbitrary) pre-defined frequency of 0.05 Hz from all these 78 borehole KiK-net stations, without any (tilting and/or baseline) corrections, over the second of interest. The displacement results are marginally consistent with those of GEONET, or more precisely, 48, 50 and 45 stations (or equivalently 61.5%, 64.1% and 57.7%) move along the same east, north and vertical directions as the detected GPS movement. We note that these testing results are similar to those with the original non-filtered displacement waveforms (precisely 59.0%, 48.7% and 60.3%) and with the pre-defined frequency of 0.01 Hz (precisely 57.7%, 62.8% and 52.6%). In the case of 0.005 Hz, the marginal support almost disappears. Without knowing the true and/or GPS waveforms in advance, one would have no practical idea which seismic waveform result is correct, which further underscores the uncertainty and unreliability of seismic waveforms. Because the number of available borehole KiK-net stations is very small, and because of the well-known problems with seismographs, we could not firmly conclude from the borehole KiK-net tests whether the detected sudden movement is definitely real or an unidentified artifact of GPS data processing.
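The per-component agreement percentages quoted above amount to a sign comparison between the seismically derived and GPS PPP displacements over the second of interest. A minimal sketch is given below; the function name and array layout are hypothetical, and random placeholder data stand in for the real displacements. For unrelated signs the expected agreement is 0.5, which is why fractions around 0.6 provide only marginal support.

import numpy as np

def sign_agreement(d_seis, d_gps):
    """Fraction of stations whose seismically derived and GPS PPP displacements
    share the same sign, evaluated separately for the east, north and up components.

    d_seis, d_gps : arrays of shape (n_stations, 3) with the displacement over
                    the second of interest at each station.
    """
    same = np.sign(d_seis) == np.sign(d_gps)
    return same.mean(axis=0)                 # one agreement fraction per component

# hypothetical demonstration with random data for 78 stations
rng = np.random.default_rng(0)
d_seis = rng.normal(size=(78, 3))
d_gps = rng.normal(size=(78, 3))
print(sign_agreement(d_seis, d_gps))         # close to 0.5 for unrelated signs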
A direct epoch-by-epoch test with GPS PPP waveforms
To check whether the phenomenon of sudden movement between 14:59:45 and 14:59:46 also occurs at other epochs, we wrote a computer program that scans the 20 min of three-component high-rate GPS PPP waveforms after the main shock and pinpoints a few more 1 s epochs with an even higher probability, more precisely, the seconds between 14:59:04 and 14:59:05, 15:01:04 and 15:01:05, and 15:03:39 and 15:03:40, with 80.80, 84.14 and 85.89% of the GEONET stations simultaneously moving upward, southward and westward, respectively. We choose the two epochs with the largest probabilities to show the patterns of movement within 1 s in Fig. 8. For simplicity, we only show the directions of movement in different colors, without the magnitudes, in Fig. 8. It can be clearly seen from Fig. 8 that both panels look green, though some blue solid circles are scattered here and there, indicating that a great majority of the GEONET stations move in the same direction within 1 s (left panel—southward; right panel—westward). Although these probabilities are very high, it is hard to imagine that a large-scale sudden movement could occur repeatedly between 14:59:04 and 15:03:40. Together with the above seismic test, even with only a very small number of available KiK-net seismographs, this suggests that the detected sudden movements after the earthquake could be unidentified artifacts of GPS data processing, though it remains possible that they are physical signals.
The patterns of the movements of 1198 GEONET stations within 1 s: left panel—epoch between 15:01:04 and 15:01:05 with 1008 stations moving southward, and right panel—epoch between 15:03:39 and 15:03:40 with 1029 stations moving westward
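The epoch-by-epoch scan described above can be sketched as follows; the function and variable names are hypothetical (this is not our actual program), and a random-walk placeholder stands in for the real PPP time series. For every 1 s increment, the sketch computes, per component, the fraction of stations sharing the same sign and reports the most coherent epochs.

import numpy as np

def most_coherent_epochs(positions, top=3):
    """For each component, find the 1 s increments during which the largest
    fraction of stations moves in the same direction.

    positions : array of shape (n_epochs, n_stations, 3) with east/north/up coordinates.
    Returns the epoch indices of the `top` most coherent increments and the
    corresponding majority fractions, per component.
    """
    inc = np.diff(positions, axis=0)                  # movement within each second
    frac_pos = (inc > 0).mean(axis=1)                 # fraction moving east/north/up
    coherence = np.maximum(frac_pos, 1.0 - frac_pos)  # majority fraction per epoch
    best = np.argsort(coherence, axis=0)[-top:]
    return best, np.take_along_axis(coherence, best, axis=0)

# hypothetical demonstration: 1200 epochs, 1198 stations, white-noise random walk
rng = np.random.default_rng(1)
positions = np.cumsum(rng.normal(size=(1200, 1198, 3)), axis=0) * 0.001
print(most_coherent_epochs(positions))               # coherence stays near 0.5 for noise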
Physical signals or unidentified artifacts?
We have processed the 1 Hz GPS data from 1198 GEONET stations and obtained the 1 s precise point positioning (PPP) time series of positions for these stations. A quick inspection of the PPP waveforms has led us to find that 76.21% of the GEONET stations in the Japanese islands consistently move suddenly within 1 s between 14:59:45 and 14:59:46. We have also carried out different independent tests to check whether the detected sudden movement is real, namely (i) a detailed analysis of GPS measurement errors in the long and short terms; (ii) processing the high-rate GPS data of IGS stations around Japan to test whether the displacement pattern extends consistently to the surrounding countries; (iii) independently processing the GEONET data with another well-known GPS data processing software system, GIPSY; (iv) performing a statistical hypothesis test under the assumption that any movement at the epoch is purely due to white noise; and (v) comparing the GEONET results with those from borehole KiK-net strong motion seismographs. The first four tests fully support a physical signal, while the KiK-net results are only marginally consistent with the GEONET results. Owing to problems such as tilting, rotation and baseline corrections inherent in strong motion seismographs, the KiK-net results may not be conclusive as an independent test. We therefore cannot confirm independently from the borehole KiK-net strong motion seismographs whether the sudden movements detected by GEONET are physically real or unidentified artifacts of GPS data processing. Should this observation of sudden movements within 1 s be real, a physical explanation remains unclear.
The last independent test has been applied directly to the GPS PPP waveforms of the 1198 GEONET stations in order to examine whether the same phenomenon occurs at other epochs. A new computer program has pinpointed that the same phenomenon of sudden movement indeed occurred at the second epochs of 14:59:04, 15:01:04 and 15:03:39, with even higher percentages of 80.80, 84.14 and 85.89% of the 1198 GEONET stations, respectively. These high-rate GPS PPP results would imply that Japan seemingly moved suddenly and repeatedly within 1 s soon after the 2011 Tohoku earthquake, with the highest data-based probability of about 0.86. Although the directions of the average sudden movements are roughly consistent with the direction of subduction of the Pacific plate, we suspect that the repeated occurrence of sudden movements within a few minutes after the main shock is not physically reasonable or acceptable. From this point of view, although it is impossible to completely rule out the possibility that the detected sudden movements are physical signals, with long experience in both the theory and practice of GNSS data processing we tend to think that the sudden movements detected by the high-rate GPS PPP are probably unidentified artifacts of GPS data processing.
Finally, we would like to make some remarks. Although a maximum of 85.89% (or equivalently 1029) of the 1198 GEONET stations show a consistent trend of seemingly sudden movement at some epochs after the main shock, if the phenomenon is attributed to unidentified artifacts of GPS data processing, then the work in this paper indicates that cutting-edge GNSS results should be interpreted with the greatest possible care by the geoscience community. More work is surely necessary in the future to help clarify possible sources of artifacts in GNSS data processing, if what we found is indeed a GNSS-rooted artifact. One idea is to use GNSS Doppler measurements, together with GNSS code and phase measurements, to compute high-rate GNSS displacements and velocities, since together they contain information on both positions and velocities (see e.g., Hofmann-Wellenhof et al. 1992; Zhang 2007; Zhang and Guo 2013), but the Doppler measurements were not used in our GNSS data processing. Nevertheless, one may not be too optimistic at this moment, since current GNSS Doppler measurements are generally noisier (Zhang and Guo 2013; Crespi 2018, private communication on March 6, 2018). The other idea is to explore further whether any unknown stochastic characteristics of GNSS measurements may potentially be related to the reported phenomenon.
Aoi S, Kunugi T, Fujiwara H (2004) Strong-motion seismograph network operated by NIED: K-NET and KiK-net. J Assoc Earthq Eng 4:65–74
Bassiri S, Hajj GA (1993) Higher-order ionospheric effects on the global positioning system observables and means of modeling them. Manuscr Geod 18(5):280–289
Blewitt G, Heflin MB, Hurst KJ, Jefferson DC, Webb FH, Zumberge JF (1993) Absolute far-field displacements from the 28 June 1992 Landers earthquake sequence. Nature 361:340–342
Bock H, Dach R, Jäggi A, Beutler G (2009) High-rate GPS clock corrections from CODE: support of 1 Hz applications. J Geod 83:1083–1094
Böhm J, Niell A, Tregoning P, Schuh H (2006) Global mapping function (GMF): a new empirical mapping function based on numerical weather model data. Geophys Res Lett 33:L07304
Branzanti M, Colosimo G, Crespi M, Mazzoni A (2013) GPS near-real-time coseismic displacements for the great Tohoku-oki earthquake. IEEE Geosci Rem Sens Lett 10:372–376
Brunner FK, Gu M (1991) An improved model for the dual frequency ionospheric correction of GPS observations. Manuscr Geod 16(3):205–214
Choi K (2004) Modified sidereal filtering: implications for high-rate GPS positioning. Geophys Res Lett 31:L22608
Clarke PJ, Davies RR, England PC, Parsons B, Billiris H, Paradissis D, Veis G, Cross PA, Denys PH, Ashkenazi V, Bingley R, Kahle HG, Muller MV, Briole P (1998) Crustal strain in central Greece from repeated GPS measurements in the interval 1989–1997. Geophys J Int 135:195–214
Dieterich JH (1974) Earthquake mechanisms and modeling. Ann Rev Earth Planet Sci 2:275–301
Dong D, Fang P, Bock Y, Webb F, Prawirodirdjo L, Kedar S, Jamason P (2006) Spatiotemporal filtering using principal component analysis and Karhunen–Loeve expansion approaches for regional GPS network analysis. J Geophys Res 111:B03405
Fialko Y (2006) Interseismic strain accumulation and the earthquake potential on the southern San Andreas fault system. Nature 441:968–971
Frank FC (1966) Deduction of earth strains from survey data. Bull Seismol Soc Am 56:35–42
Graizer V (2010) Strong motion recordings and residual displacements: What are we actually recording in strong motion seismology? Seismol Res Lett 81:635–639
Grapenthin R, Freymueller JT (2011) The dynamics of a seismic wave field: animation and analysis of kinematic GPS data recorded during the 2011 Tohoku-oki earthquake, Japan. Geophys Res Lett 38:L18308
Hofmann-Wellenhof B, Lichtenegger H, Collins J (1992) GPS theory and practice. Springer, New York
Huang Z, Zhao D (2013) Relocating the 2011 Tohoku-oki earthquakes (M 6.0–9.0). Tectonophysics 586:35–45
Iinuma T, Hino R, Kido M, Inazu D, Osada Y, Ito Y, Ohzono M, Tsushima H, Suzuki S, Fujimoto H (2012) Coseismic slip distribution of the 2011 off the Pacific coast of Tohoku Earthquake (M9.0) refined by means of seafloor geodetic data. J Geophys Res 117:B07409
Kato T, Terada Y, Kinoshita M, Kakimoto H, Isshiki H, Matsuishi M, Yokoyama A, Tanno T (2000) Real-time observation of tsunami by RTK-GPS. Earth Planets Space 52:841–845
King RW, Masters EG, Rizos C, Stolz A, Collins J (1985) Surveying with GPS. In: Monograph 9, School of Surveying. The University of New South Wales, Australia
Kinoshita S (1998) Kyoshin net (K-NET). Seismol Res Lett 69:309–332
Kouba J (2009) A guide to using international GNSS service (IGS) products. IGS Central Bureau, Pasadena
Kouba J, Héroux P (2001) Precise point positioning using IGS orbit and clock products. GPS Solut 5(2):12–28
Larson KM (2009) GPS seismology. J Geod 83:227–233
Larson KM, Bodin P, Gomberg J (2003) Using 1-Hz GPS data to measure deformation caused by the Denali fault earthquake. Science 300:1421–1424
Larson KM, Bilich A, Axelrad P (2007) Improving the precision of high-rate GPS. J Geophys Res 112:B05422
Lay T, Kanamori H (2011) Insights from the great 2011 Japan earthquake. Phys Today 64:33–39
Li X, Ge MR, Zhang Y, Wang RJ, Xu PL, Wickert J, Schuh H (2013) New approach for earthquake/tsunami monitoring using dense GPS networks. Sci Rep 3:2682. https://doi.org/10.1038/srep02682
Liu J, Ge M (2003) PANDA software and its preliminary result of positioning and orbit determination. J Nat Sci Wuhan Univ 28:603–609
Moschas F, Stiros S (2015) Dynamic deflections of a stiff footbridge using 100-Hz GNSS and accelerometer data. J Surv Eng 141:04015003
Mood AM, Graybill FA, Boes DC (1974) Introduction to the theory of statistics, 3rd edn. McGraw-Hill, London
Prescott WH, Davis JL, Svarc JL (1989) Global positioning system measurements for crustal deformation. Science 244:1337–1340
Ragheb AE, Clarke PJ, Edwards SJ (2006) GPS sidereal filtering: coordinate-and carrier-phase-level strategies. J Geod 81(5):325–335
Reid HF (1910) The California earthquake of April 18, 1906. The mechanics of the earthquake. In: Report of the state earthquake investigation commission, vol 2. Carnegie Institution of Washington, Washington DC
Sato M, Ishikawa T, Ujihara N, Yoshida S, Fujita M, Mochizuki M, Asada A (2011) Displacement above the hypocenter of the 2011 Tohoku-Oki earthquake. Science 332:1395
Segall P (1997) New insights into old earthquakes. Nature 388:122–123
Simons M, Minson SE, Sladen A, Ortega F, Jiang JL, Owen SE, Meng LS, Ampuero JP, Wei SJ, Chu RS (2011) The 2011 magnitude 9.0 Tohoku-Oki earthquake: mosaicking the megathrust from seconds to centuries. Science 332:1421–1425
Smalley R Jr (2009) High-rate GPS: How high do we need to go? Seismol Res Lett 80:1054–1061
Shu Y, Shi Y, Xu PL, Niu X, Liu JN (2017) Error analysis of high-rate GNSS precise point positioning for seismic wave measurement. Adv Space Res 59:2691–2713
Thatcher W, Marshall G, Lisowski MJ (1997) Resolution of fault slip along the 470-km-long rupture of the great 1906 San Francisco earthquake and its implications. J Geophys Res 102:5353–5367
Thatcher W, Foulger GR, Julian BR, Svarc J, Quilty E, Bawden GW (1999) Present-day deformation across the Basin and Range province, western United States. Science 283:1714–1718
Wang R, Parolai S, Ge M, Jin M, Walter TR, Zschau J (2013) The 2011 Mw9.0 Tohoku earthquake: comparison of GPS and strong-motion data. Bull Seismol Soc Am 103:1336–1347
Wdowinski S, Bock Y, Zhang J, Fang P, Genrich J (1997) Southern California permanent GPS geodetic array: spatial filtering of daily positions for estimating coseismic and postseismic displacements induced by the 1992 Landers earthquake. J Geophys Res 102:18,057–18,070
Whitten CA, Claire CN (1961) Analysis of geodetic measurements along the San Andreas fault. Bull Seismol Soc Am 50:404–415
Xu PL (2005) Sign-constrained robust least squares, subjective breakdown point and the effect of weights of observations on robustness. J Geod 79:146–159
Xu PL, Shi C, Fang R, Liu JN, Niu X, Zhang Q, Yanagidani T (2013) High-rate precise point positioning (PPP) to measure seismic wave motions: an experimental comparison of GPS PPP with inertial measurement units. J Geod 87:361–372
Xu PL, Shu YM, Niu X, Liu JN, Yao WQ, Chen Q (2019) High-rate multi-GNSS attitude determination: experiments, comparisons with inertial measurement units and applications of GNSS rotational seismology to the 2011 Tohoku Mw9.0 earthquake. Meas Sci Technol 30:024003. https://doi.org/10.1088/1361-6501/aaf987
Zhang J (2007) Precise velocity and acceleration determination using a standalone GPS receiver in real time. Ph.D. Thesis, RMIT, Australia
Zhang XH, Guo BF (2013) Real-time tracking the instantaneous movement of crust during earthquake with a stand-alone GPS receiver. Chin J Geophys 56(6):1928–1936. https://doi.org/10.6038/cjg20130615 (in Chinese)
Zumberge J, Heflin M, Jefferson D, Watkins M, Webb F (1997) Precise point positioning for the efficient and robust analysis of GPS data from large networks. J Geophys Res 102(B3):5005–5017
PX conceives the ideas of this manuscript, designs the experiments and tests, advises on and checks the GPS PPP data processing with PANDA, helps the data processing of the KiK-net strong motion seismographs and writes the manuscript. YS processes the GPS data to obtain the PPP waveforms, computes the KiK-net strong motion data and participates in error analysis. JL participates in the discussion of the detected sudden movement and error analysis and provides technical support with PANDA. TN processes the GPS data with the GIPSY software system for an independent test of the detected sudden movements. YS participates in the discussion of error analysis and results. JF suggests testing the stations outside Japan, participates in the discussion of the results and revises the manuscript. All authors read and approved the final manuscript.
We are very much indebted to Prof. Masataka Ando for his thorough reading of the manuscript and advice on the writing, for hours of discussions and for his many constructive comments on the observed phenomenon. We thank Prof. Benjamin Fong Chao for the discussion of the observed results and Dr. Rongjiang Wang for the private communications on the K-NET and KiK-net coseismic displacements. We also thank two anonymous reviewers for their constructive comments, which helped restructure part of the material presented in this paper.
The authors declare that they have no competing interests.
We acknowledge the Geospatial Information Authority (GSI) of Japan for providing all the GEONET RINEX GPS raw data via the Japan Association of Surveyors, and the National Research Institute for Earth Science and Disaster Prevention for the K-NET and KiK-net strong motion data.
This work is partially supported by the National Natural Science Foundation of China (41874012, 41231174 and 41674013).
Disaster Prevention Research Institute, Kyoto University, Uji, Kyoto, 611-0011, Japan
Peiliang Xu
& Takuya Nishimura
College of Marine Geosciences, Ocean University of China, Qingdao, People's Republic of China
Yuanming Shu
GNSS Research Center, Wuhan University, Wuhan, 430071, People's Republic of China
Jingnan Liu
School of Geomatics, Xi'an University of Science and Technology, Xi'an, 710054, People's Republic of China
Yun Shi
Department of Earth and Environmental Sciences, Michigan State University, East Lansing, MI, 48824, USA
Jeffrey T. Freymueller
Correspondence to Peiliang Xu.
Additional file 1. The sudden displacements between 14:59:45 and 14:59:46 after the 2011 Tohoku Mw9.0 earthquake.
Appendix: The electronic materials
Due to the limited space, we list, as an electronic file (or Additional file 1: Table S1), all the coordinate results obtained by post-processing the 1 Hz GPS data of the GEONET stations in the PPP mode with the software system PANDA. Additional file 1: Table S1 contains the three-component displacements between 14:59:45 and 14:59:46 for all the GEONET stations, with seven columns: GEONET station code numbers, the coordinates of the stations (latitude, longitude and height (meters)) and the three-component displacements of the stations between 14:59:45 and 14:59:46 (east, north, vertical) in millimeters.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
High-rate GNSS
Precise point positioning (PPP)
Spatial pattern of deformation
Tohoku Mw9.0 earthquake
Dendrites and measures with discrete spectrum
Published online by Cambridge University Press: 15 December 2021
MAGDALENA FORYŚ-KRAWIEC,
JANA HANTÁKOVÁ,
JIŘÍ KUPKA,
PIOTR OPROCHA and
SAMUEL ROTH
MAGDALENA FORYŚ-KRAWIEC
Faculty of Applied Mathematics, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Kraków, Poland (e-mail: [email protected])
JANA HANTÁKOVÁ
Faculty of Applied Mathematics, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Kraków, Poland (e-mail: [email protected]) Mathematical Institute Silesian University in Opava, Na Rybníčku 1, 74601 Opava, Czech Republic (e-mail: [email protected], [email protected])
JIŘÍ KUPKA
Centre of Excellence IT4Innovations – Institute for Research and Applications of Fuzzy Modeling, University of Ostrava, 30. dubna 22, 701 03 Ostrava 1, Czech Republic (e-mail: [email protected])
PIOTR OPROCHA*
Faculty of Applied Mathematics, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Kraków, Poland (e-mail: [email protected]) Centre of Excellence IT4Innovations – Institute for Research and Applications of Fuzzy Modeling, University of Ostrava, 30. dubna 22, 701 03 Ostrava 1, Czech Republic (e-mail: [email protected])
SAMUEL ROTH
Mathematical Institute Silesian University in Opava, Na Rybníčku 1, 74601 Opava, Czech Republic (e-mail: [email protected], [email protected])
e-mail: [email protected]
We are interested in dendrites for which all invariant measures of zero-entropy mappings have discrete spectrum, and we prove that this holds when the closure of the endpoint set of the dendrite is countable. This solves an open question which has been around for a while, and almost completes the characterization of dendrites with this property.
dendrite, discrete spectrum, topological entropy, minimal set
MSC classification
Primary: 37B40: Topological entropy
Secondary: 37B45: Continua theory in dynamics; 54F50: Spaces of dimension $\leq 1$; curves, dendrites
Ergodic Theory and Dynamical Systems, Volume 43, Issue 2, February 2023, pp. 545–555
DOI: https://doi.org/10.1017/etds.2021.157
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
© The Author(s), 2021. Published by Cambridge University Press
A dynamical system is a pair $(X,f)$ , where X is a compact metric space and $f\colon X\to X$ is a continuous map. A continuum is a compact connected metric space. Throughout this paper, we assume X is a dendrite, that is, a locally connected continuum containing no simple closed curve.
The main motivation of the paper can be derived from the Möbius disjointness conjecture proposed by Sarnak in 2009 [Reference Kočan, Kornecká-Kurková and Málek23]. By topological arguments, the conjecture was confirmed on various one-dimensional spaces: the interval [Reference Karagulyan14], the circle [Reference Bowen8], topological graphs [Reference Li, Oprocha, Yang and Zeng16], some dendrites [Reference Downarowicz11], etc. However, using ergodic theory, it was proved that if all invariant measures have discrete spectrum, then the conjecture also holds (see e.g. [Reference Huang, Wang and Ye13, Theorem 1.2]). This leads to a natural question, what can be said about the spectrum of measures for zero-entropy maps in the above-mentioned spaces. In [Reference Li, Tu and Ye17], the authors confirmed that, indeed, maps on topological graphs with zero entropy can have only invariant measures with discrete spectrum. This motivated the following open question [Reference Li, Tu and Ye17, Question 1.1].
Question 1.1. Which one-dimensional continua X have the property that every invariant measure of $(X,f)$ has discrete spectrum, assuming f is a zero-entropy map?
Similar questions, however, were stated even before, for example in [Reference Sharkovsky, Kolyada, Sivak and Fedorenko24] from 1982, where the author asked whether every ergodic invariant measure in a mean equicontinuous system has discrete spectrum. The authors of [Reference Li, Tu and Ye17] partially answered this question by showing that the result holds for zero-entropy maps on quasi-graphs X, and it was completely answered in the affirmative in 2015 in [Reference Mai and Shi18]. Let us mention at this point that continua satisfying the condition in Question 1.1 cannot be too complex. It was shown in [Reference Li, Tu and Ye17] that if a dendrite has an uncountable set of endpoints, then it supports a plethora of maps with zero topological entropy possessing invariant measures which do not have discrete spectrum. Then in the realm of dendrites, only those with a countable set of endpoints can be examples in Question 1.1.
In this paper, we study the dynamics of zero-entropy maps on dendrites for which the endpoint set has a countable closure. In §3, we build on results from [Reference Arévalo, Charatonik, Pellicer Covarrubias and Simón3, Reference Askri4] and show that every recurrent point is in fact minimal (Theorem 3.6), which generalizes a well-known property of zero-entropy interval maps. In §4, we use this result together with a characterization of minimal $\omega $ -limit sets from [Reference Askri4] to show that every invariant measure has discrete spectrum (Theorem 4.3) in the case of these dendrites. Our results almost completely characterize dendrites for which all invariant measures of zero-entropy mappings have discrete spectrum. We leave unsolved the case of dendrites for which the endpoint set is countable but has an uncountable closure. We strongly believe that in the case of these dendrites, the analog of Theorem 3.6 also holds, because all known examples seem to confirm that. Unfortunately, we were not able to find a good argument to justify this statement. Structural properties of (other) one-dimensional continua that may serve as positive examples in Question 1.1 are yet to be understood.
2 Preliminaries
Let $(X,f)$ be a dynamical system and $x\in X$ . The orbit of x, denoted by $\mathrm {Orb}_f(x)$ , is the set $\{f^n(x)\colon n\geq 0\}$ , and the $\omega $ -limit set of x, denoted by $\omega _f(x)$ , is defined as the intersection $\bigcap _{n\ge 0} \overline {\{f^m (x)\colon m\ge n\}}$ . It is easy to check that $\omega _f(x)$ is closed and strongly f-invariant, that is, $f (\omega _f(x))= \omega _f(x)$ . The point x is periodic ( $x\in \mathrm {Per}(f)$ ) if $f^p(x) =x$ for some $p\in \mathbb N$ , where the smallest such p is called the period of x. Note that throughout this paper, $\mathbb N$ denotes the set of positive integers. The point x is recurrent ( $x\in \mathrm {Rec}(f)$ ) if $x\in \omega _f (x)$ . The orbit of a set $A\subset X$ , denoted $\mathrm {Orb}_f(A)$ , is the set $\bigcup _{n\geq 0} f^n(A)$ , and A is called invariant if $f(A)\subseteq A$ . A set M is minimal if it is non-empty, closed, invariant, and does not have a proper subset with these three properties. It can be equivalently characterized by $M=\omega _f(x)$ for every $x\in M$ . A point is minimal if it belongs to a minimal set.
Recall that a dendrite is a locally connected continuum X containing no homeomorphic copy of a circle. A continuous map from a dendrite into itself is called a dendrite map. For any point $x \in X$ , the order of x, denoted by $\mathrm {ord}(x)$ , is the number of connected components of $X\setminus \{x\}$ . Points of order $1$ are called endpoints while points of order at least $3$ are called branch points. By $E(X)$ and $B(X)$ we denote the set of endpoints and branch points, respectively. In this paper, we especially focus on dendrites in which $E(X)$ has countable closure. These dendrites are a special case of a tame graph, as introduced in [Reference Askri5]. Note that when $\overline {E(X)}$ is countable, so also is $\overline {B(X)\cup E(X)}$ , because $B(X)$ is countable in any dendrite and has accumulation points only in $\overline {E(X)}$ .
For any two distinct points $x,y \in X$ , there exists a unique arc $[x,y]\subset X$ joining those points. A free arc is an arc containing no branch points. We say that two arcs $I, J$ form an arc horseshoe for f if $f^n(I) \cap f^m(J)\supset I \cup J$ for some $n,m\in \mathbb {N}$ , where $I, J$ are disjoint except possibly at one endpoint. Denote by $h_{\mathrm {top}}(f)$ the topological entropy of a dendrite map f (for the definition, see [Reference El Abdalaoui, Askri and Marzougui1, Reference Block and Coppel7, Reference Davenport9]). We will frequently use the fact that for dendrite maps, positive topological entropy is implied by the existence of an arc horseshoe [Reference Li, Oprocha and Zhang15].
The set of all Borel probability measures over X is denoted by $M(X)$ , and $M_f(X)\subset M(X)$ denotes the set of all elements of $M(X)$ invariant with respect to the map f. The set of all ergodic measures in $M_f(X)$ is denoted by $M_f^e(X)$ . We say that a finite measure $\mu $ on X is concentrated on $A\subset X$ if $\mu (A)=\mu (X)$ . It is well known that $M(X)$ endowed with the weak-* topology is a compact metric space and that $M_f(X)$ is its closed subset. We say that $\mu \in M_f(X)$ has discrete spectrum, if the linear span of the eigenfunctions of $U_f$ in $L^2_\mu (X)$ is dense in $L^2_\mu (X)$ , where as usual $U_f$ denotes the Koopman operator: $U_f(\varphi )=\varphi \circ f$ for every $\varphi \in L^2_\mu (X)$ . We refer the reader to [Reference Dinaburg10, Reference Walters26] for standard monographs on ergodic theory and entropy.
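As a standard illustration of this notion (an example of our own, not specific to dendrites), consider the irrational rotation $R_\alpha(x)=x+\alpha \ (\mathrm{mod}\ 1)$ of the circle equipped with Lebesgue measure $\lambda$. The characters $e_n(x)=e^{2\pi i n x}$, $n\in \mathbb{Z}$, satisfy
$$ \begin{align*} U_{R_\alpha} e_n = e^{2\pi i n \alpha}\, e_n, \end{align*} $$
and their linear span is dense in $L^2_\lambda$, so $\lambda$ has discrete spectrum. More generally, the unique invariant measure of any minimal rotation of a compact abelian group, in particular of the odometers appearing in §3, has discrete spectrum, its eigenfunctions being the group characters.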
3 Recurrence and minimality in dendrites with $\overline {E(X)}$ countable
First we recall the following results by Askri on the structure of minimal $\omega $ -limit sets in a special class of dendrite maps.
Proposition 3.1. [Reference Askri4, Proposition 3.4]
Let X be a dendrite such that $E(X)$ is countable and let $f\colon X\to X$ be a continuous map with zero topological entropy. If $M= \omega _f (x)$ is an infinite minimal $\omega $ -limit set for some $x\in X$ , then for every $k\geq 1$ there is an f-periodic subdendrite $D_k$ of X and an integer $n_k\geq 2$ with the following properties:
(1) $D_k$ has period $\alpha _k:=n_1 n_2 \ldots n_k$ ;
(2) for $i\neq j\in \{0,\ldots ,\alpha _k-1\}, f^i(D_k)$ and $ f^j(D_k)$ are either disjoint or intersect at one common point;
(3) $\bigcup _{k=0}^{n_j-1}f^{k\alpha _{j-1}}(D_j)\subset D_{j-1}$ ;
(4) $M \subset \bigcap _{k\geq 0}{\mathrm {Orb}}_f(D_k)$ ;
(5) $f(M^k_i) = M^k_{i+1 \mod \alpha _k}$ , where $M^k_i = M\cap f^i(D_k)$ for all k and all $0\leq i\leq \alpha _k-1$ .
While property (5) is not directly stated in [Reference Askri4], it is an obvious consequence of the other statements.
Implicit in Proposition 3.1 is the idea that the minimal set M has an odometer as a factor. Our next lemma shows that when $E(X)$ has countable closure, the factor map is invertible except on a countable set.
Given an increasing sequence $(\alpha _k)$ with $\alpha _k | \alpha _{k+1}$ for all k, we define the group $\Omega =\Omega (\alpha _k)$ of all $\theta \in \prod _{k=0}^\infty \mathbb {Z}/\alpha _k\mathbb {Z}$ , such that $\theta _{k+1}$ is congruent to $\theta _k$ modulo $\alpha _k$ for all k, and we let $\tau $ denote the group rotation $\tau (\theta )=\theta +(1,1,1,\ldots )$ . Then $(\Omega ,\tau )$ is called the odometer associated to the sequence $(\alpha _k)$ .
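For concreteness (a simple illustrative case of our own, not taken from the cited results), let every $n_k=2$, so that $\alpha_k=2^k$ and $(\Omega,\tau)$ is the dyadic odometer. Listing the coordinates $\theta_k$ for $k\geq 1$, the map $\tau$ acts as addition of one with carry: for instance,
$$ \begin{align*} \tau(1,1,1,1,\ldots)=(0,2,2,2,\ldots), \qquad \tau(1,3,7,15,\ldots)=(0,0,0,0,\ldots), \end{align*} $$
where in the first case the carry stops after the first coordinate, while in the second it propagates through every coordinate. Both images again satisfy the compatibility condition $\theta_{k+1}\equiv \theta_k \pmod{\alpha_k}$.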
Lemma 3.2. Let $X,f,M,(D_k),(\alpha _k)$ be as in Proposition 3.1 and suppose that $\overline {E(X)}$ is countable. Then the following hold.
(1) The sets $J_\theta =\bigcap _k f^{\theta _k}(D_k)$ , $\theta \in \Omega $ , are closed, connected, and pairwise disjoint.
(2) There is a factor map $\pi :(M,f)\to (\Omega ,\tau )$ which takes the value $\theta $ on $M\cap J_\theta $ .
(3) Each fiber $\pi ^{-1}(\theta )$ , $\theta \in \Omega $ , is countable, and all but countably many of these fibers are singletons.
Proof. It is clear from Proposition 3.1 that each set $J_\theta $ is closed, connected, and has non-empty intersection with M. It is also clear that $f(J_\theta )=J_{\tau (\theta )}$ . However, because the sets $f^i(D_k)\cap f^j(D_k)$ are allowed to intersect at a point, it is not clear if the sets $J_\theta $ are pairwise disjoint. We prove this fact now. Suppose there are $\theta \neq \theta '$ with $J_\theta \cap J_{\theta '}\neq \emptyset $ . Find k minimal such that $\theta _k\neq \theta ^{\prime }_k$ . Then clearly
$$ \begin{align*} J_\theta \cap J_{\theta'} = f^{\theta_k}(D_k) \cap f^{\theta^{\prime}_k}(D_k) = \{a\} \end{align*} $$
for some single point $a\in X$ . Taking the image by $f^{\alpha _k}$ , we have
$$ \begin{align*} f^{\alpha_k}(a) \in f^{\alpha_k}(J_\theta)\cap f^{\alpha_k}(J_{\theta'}) = J_{\tau^{\alpha_k}(\theta)} \cap J_{\tau^{\alpha_k}(\theta')} = f^{\theta_k+\alpha_k}(D_k) \cap f^{\theta^{\prime}_k+\alpha_k}(D_k) = \{a\}, \end{align*} $$
because $D_k$ is periodic with period $\alpha _k$ . This shows that a is periodic with period $\alpha _k$ . In particular, it does not belong to the infinite minimal set M. Now for $n\in \mathbb {N}$ , let $J_n=J_{\tau ^{n\alpha _k}(\theta )}$ . Then $a\in J_n$ and we can choose an additional point $m_n \in M \cap J_n$ for all n. Thus the $J_n$ sets are non-degenerate subdendrites and intersect pairwise only at a. In particular, the sets $(a,m_n]$ are pairwise disjoint connected subsets of X, so their diameters must converge to zero (see e.g. [Reference Misiurewicz and Marchioro19, Lemma 2.3]). However, because M is closed, this shows that $a\in M$ , a contradiction.
Now that the sets $J_\theta $ , $\theta \in \Omega $ have been shown to be pairwise disjoint, we see immediately that $\pi $ is well defined. It is also easy to see that $\pi $ is continuous and $\tau \circ \pi = \pi \circ f$ .
Again, because the sets $J_\theta $ , $\theta \in \Omega $ are pairwise disjoint connected sets in X, only countably many of them can have positive diameter. It follows that $\pi ^{-1}(\theta )$ is a singleton except for countably many $\theta $ . It remains to show that $M\cap J_\theta $ is countable when $J_\theta $ is non-degenerate. Because M is minimal and $J_\theta $ never returns to itself, we must have $M\cap J_\theta \subset \operatorname {\mathrm {Bd}}(J_\theta )$ , where $\operatorname {\mathrm {Bd}}(J_{\theta })$ stands for the boundary of $J_{\theta }$ . However, the boundary in X of the subdendrite $J_\theta $ is a subset of $E(J_\theta ) \cup B(X) \cup \overline {E(X)}$ , which is countable by the assumption that $E(X)$ has countable closure. Here we use the well-known facts that $B(X)$ is countable in any dendrite, and the cardinality of the endpoint set of a dendrite cannot increase when we pass to a subdendrite, see e.g. [Reference Naghmouchi21, Reference Scarpellini22].
Remark 3.3. We have in fact shown that for a zero entropy map on a dendrite whose endpoint set has countable closure, every minimal subsystem is a regular extension of its maximal equicontinuous factor in the sense defined in [Reference García-Ramos, Jäger and Ye12]. For infinite minimal sets, this is implied by Lemma 3.2(3) and a finite minimal set is just a periodic orbit.
The next lemma strengthens [Reference Askri4, Lemma 3.5] by relaxing the condition that $E(X)$ be closed.
Lemma 3.4. [Reference Askri4, Lemma 3.5]
Let $X,f,M,(D_k),(\alpha _k)$ be as in Proposition 3.1 and suppose that $\overline {E(X)}$ is countable. Then there is $N\geq 1$ such that $\text { for all } k\geq N, f^{i_k}(D_k)$ is a free arc for some $0\leq i_k\leq \alpha _k-1$ .
Proof. Using Lemma 3.2, we know that there are uncountably many singleton sets $J_\theta $ . Now a dendrite whose endpoint set has countable closure is always the union of a countable sequence of free arcs and a countable set, see [Reference Askri5, Theorem 2.2]. It follows that we can find $\theta $ with the singleton $J_\theta $ in the interior of a free arc A in X. Because $J_\theta $ is the nested intersection $\bigcap _N f^{\theta _N}(D_N)$ , we can find N large enough that $f^{\theta _N}(D_N)$ is contained in A. Then $f^{\theta _k}(D_k)$ is a free arc for all $k\geq N$ .
Our next result is a good first step in showing that recurrent points are minimal. It is a modified version of [Reference Arévalo, Charatonik, Pellicer Covarrubias and Simón3, Theorem 1.1], and the proof closely follows the one from that paper.
Lemma 3.5. Let X be a dendrite with $\overline {E(X)}$ countable, $f:X\to X$ a continuous map with zero topological entropy, and $x\in X$ a point which is recurrent but not periodic. Then $\omega _f(x)$ contains no periodic points.
Proof. Throughout the proof, we will use freely the following well-known property of $\omega $ -limit sets (e.g. [Reference Bartoš, Bobok, Pyrih, Roth and Vejnar6]): if for fixed $n\geq 2$ we write $W_i=\omega _{f^n}(f^i(x))$ for $0\leq i<n$ , then $\omega _f(x)=\bigcup _{i=0}^{n-1} W_i$ and $f(W_i)=W_{i+1 \; (\text {mod }n)}$ . In particular, if $\omega _f(x)$ is uncountable, then so is each $W_i$ , and if $\omega _f(x)$ contains a given fixed point, then each $W_i$ contains it as well. We continue to use the notation $[x,y]$ for the unique arc in X with endpoints $x,y\in X$ , and if $z\in (x,y)=[x,y]\setminus \{x,y\}$ , we will say for simplicity that z lies between x and y.
Now let $L=\omega _f(x)$ , where x is recurrent but not periodic. Then L is the closure of the orbit of x, and hence it is a perfect uncountable set.
Step 1: L does not contain a periodic point with a free arc neighborhood in X. Suppose to the contrary that $a\in L$ , $f^N(a)=a$ , and some free arc C is a neighborhood of a in X. Then, by the standard properties mentioned above, $a\in \omega _{f^N}(f^i(x))$ for some $0\leq i<N$ . Replacing f with its iterate and x with its image, we may safely assume that $N=1$ and $i=0$ , that is, a is a fixed point in $L=\omega _f(x)$ .
Because periodic points are never isolated in infinite $\omega $ -limit sets, we know that L accumulates on a from at least one side in the free arc C. So, choose an endpoint b of C such that $L\cap [a,b]$ accumulates on a. For convenience, we let C carry its natural order as an arc, oriented in such a way that $a<b$ . Choose five points $y_i\in L\cap [a,b]$ with $a<y_1<y_2<y_3<y_4<y_5<b$ . Choose three small arc neighborhoods $I_2, I_3, I_4$ containing $y_2,y_3,y_4$ , respectively, and let them be pairwise disjoint and lie between $y_1$ and $y_5$ . Because the orbit of x visits each of these neighborhoods $I_i$ infinitely often, there must be points in $I_2$ and $I_4$ which visit $I_3$ , so by [Reference Misiurewicz and Marchioro19, Theorem 2.13], f has a periodic point c between $y_1$ and $y_5$ . Replacing x with a point from its orbit near $y_1$ , we may assume that $a<x<c<y_5<b$ . Let r be the period of c and put $g=f^r$ . Because a was already fixed for f, we have $a\in \omega _g(x)$ as well. Note that because x is recurrent for f, it is also recurrent for g.
Claim. There is an arc I invariant for g with $[a,x]\subseteq I \subseteq [a,c]$ .
Proof of Claim
To prove the claim, put $I=\overline {\bigcup _{n=0}^{\infty } g^n([a,x])}$ . Because a is fixed and x is recurrent, it suffices to show that $g^n([a,x]) \subseteq [a,c]$ for all n. If this is not true, then there is $z\in [a,x]$ and $n_0 \geq 1$ such that a is between $g^{n_0}(z)$ and c or c is between $g^{n_0}(z)$ and a. We treat these two cases separately.
Suppose first that a is between $g^{n_0}(z)$ and c. Then $a\in g^{n_0}([z,c])$ , so there is $a_{-1}$ between z and c with $g^{n_0}(a_{-1})=a$ . Then $f^n(a_{-1})=a$ for all $n\geq n_0\cdot r$ . Because $L\cap [a,b]$ accumulates on a, we can find a point $x'\in \mathrm {Orb}_f(x)$ between a and $a_{-1}$ . Because $y_5\in \omega _f(x)$ , we can find $n\geq n_0\cdot r$ such that $f^n(x')$ is close to $y_5$ and $a<x'<a_{-1}<c<f^n(x')$ . Put $J=[a,x']$ and $K=[x',a_{-1}]$ . Then $f^n(J)\cap f^n(K) \supseteq J\cup K$ , so f possesses an arc horseshoe and thus has positive topological entropy, a contradiction.
Suppose instead that c is between $g^{n_0}(z)$ and a. Then $c\in g^{n_0}([a,x])$ , so there must be $c_{-1}$ between a and x with $g^{n_0}(c_{-1})=c$ . Because $a\in \omega _g(x)$ , we can choose $n>n_0$ with $g^n(x)$ close to a so that $g^n(x)<c_{-1}<x<c$ . Put $J=[c_{-1},x]$ and $K=[x,c]$ . Then again $g^n(J)\cap g^n(K)\supseteq J\cup K$ , so g has positive topological entropy and so does f, a contradiction. This completes the proof of the claim.
Now we may use the claim to finish Step 1. Because x belongs to the closed invariant set I, we have $\omega _g(x)=\omega _{g|_I}(x)$ . However, $g|_I$ is an interval map, and when an infinite $\omega $ -limit set for an interval map contains a periodic point, the topological entropy must be positive (see [Reference Nadler20]), a contradiction.
Step 2: L does not contain any periodic points. Suppose to the contrary that $a\in L$ is a periodic point. As in Step 1, we may assume that a is fixed. By [Reference Askri5, Theorem 2.2], the dendrite X is the union of a countable sequence of free arcs together with a countable set. In particular, we can find a free arc C not containing a with $L\cap C$ uncountable. Write $C=[u,v]$ with v between u and a and let $<$ denote the order in C with $u<v$ . Because $L\cap C$ is infinite, we may choose four points $x_i\in \mathrm {Orb}_f(x)$ with $u<x_1<x_2<x_3<x_4<v$ . As in Step 1, we can use small arc neighborhoods of $x_2,x_3,x_4$ to find a periodic point c with $u<x_1<c<v$ , and because $x_1$ is in the orbit of x, we may redefine $x=x_1$ without changing $\omega _f(x)$ . Let r denote the period of c and put $g=f^r\kern-3pt.$ Because x is recurrent also for g, we have $\mathrm {Orb}_g(x)\cap [u,c]$ infinite, so we can find two points $x_5,x_6\in \mathrm {Orb}_g(x)$ with $u<x_5<x_6<c$ and passing forward along the orbit, we can redefine $x=x_6$ without changing $\omega _g(x)$ . In particular, $x_5\in \omega _g(x)=\overline {\mathrm {Orb}_g(x)}$ , so we can choose $p\geq 1$ with $g^p(x)$ close to $x_5$ so that $u<g^p(x)<x<c$ .
Let $l=\omega _{g^p}(x)$ . We have $x\in l$ because x is recurrent and $a\in l$ because a is a fixed point in $\omega _f(x)$ . Moreover, $c\not \in l$ as a result of Step 1. So let $X_0, X_1$ denote the connected components of $X\setminus \{c\}$ containing x and a, respectively, and put $l_i=l\cap X_i$ . Then $l=l_0\cup~l_1$ expresses l as the disjoint union of two non-empty open subsets (in the topology induced from X to l). Recall that every $\omega $ -limit set $\omega _f(x)$ is weakly incompressible, that is, $f(\overline {U})\not \subset U$ for any set $U\subsetneq \omega _f(x)$ open in $\omega _f(x)$ (see, e.g. [Reference Sarnak25]). Thus we have $g^p(l_0)\cap l_1 \neq \emptyset $ . Therefore, we may choose $y\in l_0$ with $g^p(y)\in l_1$ , and because $\mathrm {Orb}_{g^p}(x)$ is dense in $l_0$ , we may choose y from the orbit of x. We finish the proof in two cases, depending on the location of y.
Suppose first that y is between x and c. In the ordering of the arc $[g^p(x),g^p(y)]$ , we have $g^p(x) < x < y < c < g^p(y)$ . Put $I=[x,y]$ and $J=[y,c]$ . Clearly $g^p(I)\supseteq I\cup~J$ . Because $y\in \mathrm {Orb}_g(x)$ , we have $\omega _{g}(y)=\omega _{g}(x)\supset \mathrm {Orb}_{g}(x)\ni g^p(x)$ . In particular, we may choose $n>p$ to make $g^{n}(y)$ as close to $g^p(x)$ as we like, so that $x,y \in [g^{n}(y),c]$ . However, then $g^{n}(J) \supseteq I\cup J$ . We conclude that g possesses an arc horseshoe and thus g has positive topological entropy, which is a contradiction with $h_{\mathrm {top}}(f)=0$ .
Suppose instead that x is between y and c. Then $c\in [g^p(x),g^p(y)]$ , so there is $c_{-1}\in (x,y)$ with $g^p(c_{-1})=c$ . In the ordering of the arc $[y,g^p(y)]$ , we have $y<c_{-1}<x<c$ and we also have $x\in (g^p(x),c)$ . Put $I=[c_{-1},x]$ and $J=[x,c]$ . Because $y\in \mathrm {Orb}_g(x)\subset \omega _g(x)$ , we can find $n>p$ with $g^n(x)$ as close to y as we like. In particular, we can get $x,c_{-1}\in [g^n(x),c]$ . However then $g^n(I) \cap g^n(J) \supseteq I\cup J$ . Again we conclude that g has positive topological entropy, which is a contradiction. This ends the proof.
Theorem 3.6. If X is a dendrite in which $\overline {E(X)}$ is countable and if $f\colon X\to X$ has zero topological entropy, then every recurrent point for f is minimal.
Proof. Let $x\in \mathrm {Rec}(f)$ . If x is periodic, then it is minimal, so assume x is not periodic. Let $L=\omega (x)$ . Let $M\subset L$ be a minimal set. By Lemma 3.5, L contains no periodic orbits, so M is an infinite minimal set. Then Proposition 3.1 applies and we get a sequence of f-periodic subdendrites $(D_k)_{k\geq 1}$ and periods $(\alpha _k)$ satisfying properties (1)–(5) of that proposition. By Lemma 3.4 for all sufficiently large k, we have that $f^i(D_k)$ is a free arc for suitable i. Because M is infinite and $D_k$ is periodic, we have $M\cap \operatorname {\mathrm {int}} f^i(D_k)\neq \emptyset $ and as a consequence, $Orb_f(x)\cap D_k\neq \emptyset $ , for every sufficiently large k. Hence, $\bigcap _{k\geq 1}Orb_f(D_k)$ contains L, that is, property (4) still holds with L in the place of M.
We claim that property (5) also holds with L in the place of M. Fix k and denote $L_i=f^i(D_k)\cap L$ for $0\leq i<\alpha _k-1$ . Observe that L does not contain periodic points, and the set $\mathrm {Orb}(f^i(D_k)\cap f^j(D_k))$ is always finite and invariant for any $i\neq j$ (can be empty) and hence $L_i \cap f^j(D_k)=\emptyset $ for $i\neq j$ . This shows that the sets $L_i \cap L_j=\emptyset $ for $i\neq j$ . Clearly $f(L_i) \subseteq L_{i+1 (\text {mod }\alpha _k)}$ , and $f(L)=L$ because $\omega $ -limit sets are always mapped onto themselves. This shows that $f(L_i)=L_{i+1 (\text {mod } \alpha _k)}$ . In particular, we conclude that $L_i$ is uncountable for each i.
Again using Lemma 3.4, choose k large enough that $f^i(D_k)$ is a free arc for some $0\leq i<\alpha _k-1$ and let $A=f^i(D_k)$ denote that free arc. We have just shown that $L_i=A\cap L$ is uncountable, so because A is a free arc, there are points from $\omega (x)$ in its interior. Thus we can find a point $y=f^l(x)$ from the forward orbit of x in A. Then $\omega _f(x)=\omega _f(y)$ and y is also recurrent for f. Because $\mathrm {Rec}(f^{\alpha _k})=\mathrm {Rec}(f)$ , y is also recurrent for $f^{\alpha _k}$ . However, the restriction of $f^{\alpha _k}$ to A is an interval map with zero topological entropy. For such a map, all recurrent points are minimal points, see e.g. [Reference Bartoš, Bobok, Pyrih, Roth and Vejnar6, Ch. VI. Proposition 7]. Thus, y is a minimal point for $f^{\alpha _k}$ , and hence also for f. This shows that $\omega _f(y)=\omega _f(x)$ is a minimal set, and hence x itself is minimal.
4 Discrete spectrum in dendrites with $\overline {E(X)}$ countable
By [Reference Li, Tu and Ye17, Theorem 1.5], each one-sided subshift with zero entropy can be extended to a dynamical system on the Gehman dendrite with zero topological entropy. This provides a plethora of examples of dynamical systems on a dendrite with a closed set of endpoints having zero topological entropy and invariant measures which do not have discrete spectrum. However, in the Gehman dendrite, $E(X)$ is uncountable, because $E(X)$ is a Cantor set. Alternatively, each dendrite with $E(X)$ uncountable contains a copy of the Gehman dendrite (see e.g. [Reference Adler, Konheim and McAndrew2], cf. [Reference Li, Tu and Ye17]). So on all these dendrites, there exist dynamical systems with zero topological entropy and invariant measures not having discrete spectrum.
Our work below shows that the opposite holds in the case of a dendrite X, where $\overline {E(X)}$ is countable: all invariant measures of zero-entropy mappings have discrete spectrum. So in the case of dendrites, the remaining case in Question 1.1 is when $E(X)$ is countable but $\overline {E(X)}$ is uncountable. This case is left as a problem for further research.
Lemma 4.1. Let $(X,f)$ be a topological dynamical system and suppose that all measures $\mu \in M_f(X)$ which are concentrated on $A_i$ have discrete spectrum, for each member $A_i$ of some finite or countable collection of invariant Borel sets. Then any $\mu \in M_f(X)$ which is concentrated on $\bigcup _i A_i$ also has discrete spectrum. In particular, if $\mathrm {Rec}(f)\subseteq \bigcup _i A_i$ , then every $\mu \in M_f(X)$ has discrete spectrum.
Proof. Let $\mu $ be any finite invariant measure concentrated on $\bigcup _i A_i$ . Because each $A_i$ is invariant, that is, $f(A_i)\subset A_i$ , and f preserves $\mu $ , we may assume by throwing away a set in X of $\mu $ -measure zero that $f^{-1}(A_i)=A_i$ for each i.
We may take the index set for the variable i to be $\{1,\ldots ,n\}$ in the finite case or $\mathbb {N}$ in the countable case. Then putting $B_i = A_i \setminus \bigcup _{j<i} A_j$ for each i, we get a collection $\{B_i\}$ of pairwise disjoint invariant Borel sets. Now let $I=\{i~:~\mu (B_i)>0\}$ and write $\mu _i=\mu |_{B_i}$ for the (unnormalized) restriction of $\mu $ to $B_i$ . Then we get a direct sum decomposition of Hilbert spaces $L^2_{\mu }(X) = \bigoplus _{i\in I} L^2_{\mu _i}(B_i).$ (Here in a direct sum $\bigoplus H_i$ of Hilbert spaces, we include all $(v_i), v_i\in H_i$ such that $\sum ||v_i||^2<\infty $ . We do not require that all but finitely many $v_i$ vanish.) We may extend each function $\phi \in L^2_{\mu _i}(B_i)$ to an element of $L^2_{\mu _i}(X)$ by letting $\phi $ vanish outside of $B_i$ . Because $f^{-1}(B_i)=B_i$ , we see that if $\phi \circ f = \lambda \phi $ holds $\mu _i$ -almost everywhere (a.e) in $B_i$ , then by letting $\phi $ vanish outside $B_i$ , it continues to hold $\mu $ -a.e in X. Thus we have the equivalent direct sum decomposition
(4.1) $$ \begin{align} L^2_{\mu}(X) = \bigoplus_{i\in I} L^2_{\mu_i}(X), \end{align} $$
and an eigenfunction in a coordinate space is still an eigenfunction in the whole space. For each $i\in I$ , the normalized measure $\mu _i/\mu (B_i)$ is an invariant probability measure for f concentrated on $B_i \subset A_i$ , so by hypothesis, the eigenfunctions of the Koopman operator on the space $L^2_{\mu _i/\mu (B_i)}(X)$ have dense linear span. Dropping the normalizing constant, the same holds for $L^2_{\mu _i}(X)$ . Passing through the direct sum decomposition, it follows that the eigenfunctions of the Koopman operator on the space $L^2_{\mu }(X)$ have dense linear span, that is, $\mu $ has discrete spectrum.
The last statement of the lemma follows by the Poincaré recurrence theorem, whereby if $\mathrm {Rec}(f) \subseteq \bigcup A_i$ , then every measure $\mu \in M_f(X)$ is concentrated on $\bigcup A_i$ .
Lemma 4.2. Let X be a dendrite and suppose that $f\colon X\to X$ is a continuous map with zero topological entropy. If $D\subset X$ is a tree and $R\colon X\to D$ is a natural retraction, then the map $F\colon D\to D$ given by $F=R\circ f$ has zero topological entropy.
Proof. Suppose that F has positive entropy. Then by [Reference Li, Oprocha and Zhang15], there exists an arc horseshoe $I_1,I_2$ with $F^{n_1}(I_1)\cap F^{n_2}(I_2) \supset I_1 \cup I_2$ for some $n_1,n_2\in \mathbb N$. Then $F^{i}(I_j)$ is not a single point for any $i=1,\ldots,n_j$ and $j=1,2$. However, if $F(J)$ is non-degenerate for an arc J, then $f(J)\supset F(J)$, which implies that $f^{n_1}(I_1)\cap f^{n_2}(I_2)\supset I_1\cup I_2$ and hence that f has positive topological entropy, a contradiction.
Theorem 4.3. Let X be a dendrite such that $\overline {E(X)}$ is countable and let $f:X\to X$ be a continuous map with zero topological entropy. Then every measure $\mu \in M_f(X)$ has discrete spectrum.
Proof. Let $Z=\{z\in \overline {E(X)}~:~\omega _f(z)\text { is an infinite minimal set}\}.$ Following arguments in [Reference Naghmouchi21, Theorem 10.27], let $(T_n)_{n \in \mathbb {N}} \subset X$ be an increasing sequence of topological trees with endpoints in $E(X)$ defined as follows. We inductively construct the sequence $(T_n)_{n\in \mathbb {N}}$ starting with $T_1 = \{e_1\}$ for some $e_1 \in E(X)$ . Then for $n\geq 1$ , we attach to $T_n$ an arc $[e,e_{n+1}]$ whose one endpoint $e_{n+1}$ belongs to $E(X)\setminus T_n$ and $e\in T_n$ . Because $E(X)$ is countable, we can put every endpoint into one of the trees, that is, we let the sequence $(e_n)_{n \in \mathbb {N}}$ be an enumeration of $E(X)$ , and then $\bigcup _{n\geq 1} T_n$ being a connected set must coincide with the whole dendrite X.
Let $\hat {T}_n=\bigcap _{i=0}^{\infty } f^{-i}(T_n)$ be the maximal invariant set completely contained in $T_n$ . Let $\mathrm {Per}(f)$ be the set of periodic points of f. We claim that
(4.2) $$ \begin{align} \mathrm{Rec}(f) \subset \mathrm{Per}(f) \cup \bigg(\bigcup_{z\in Z}\omega_f(z)\bigg) \cup \bigg(\bigcup_{n} \hat{T}_n\bigg). \end{align} $$
To see this, let x be a non-periodic recurrent point whose orbit is not contained in any of the trees $T_n$ . This means that there are points $f^{n_i}(x)$ which belong to $T_{m_i}\setminus T_{m_i-1}$ for some strictly increasing sequences $m_i$ , $n_i \to \infty $ . Then the arcs $[f^{n_i}(x), e_{m_i}]$ in X are pairwise disjoint, so by [Reference Misiurewicz and Marchioro19, Lemma 2.3], their diameters tend to zero. This shows that $\liminf _{n\to \infty } d(f^n(x),E(X))=0$ . Therefore, $\omega _f(x)\cap \overline {E(X)}\neq \emptyset $ . By Theorem 3.6, $\omega _f(x)$ is a minimal set, so choosing $z\in \omega _f(x)\cap \overline {E(X)}$ , we have $\omega _f(x)=\omega _f(z)$ . This establishes equation (4.2).
Now observe that any finite invariant measure concentrated on $\mathrm{Per}(f)$ has discrete spectrum, see e.g. [Reference Li, Tu and Ye17, Theorem 2.3]. As for the sets $\hat{T}_n$, note that for each $n\in \mathbb{N}$, the map $F=R\circ f$, where $R\colon X\to T_n$ is a retraction, satisfies $F|_{\hat{T}_n}=f|_{\hat{T}_n}$ by definition; moreover, F has zero topological entropy by Lemma 4.2, so by [Reference Li, Tu and Ye17] all invariant measures of F have discrete spectrum, and therefore each f-invariant measure concentrated on $\hat{T}_n$ (a subset of a tree) has discrete spectrum.
Finally, we claim that any invariant measure concentrated on $\omega _f(z)$ , $z\in Z$ , has discrete spectrum. Let $(D_k)$ be the periodic subdendrites with periods $(\alpha _k)$ described in Proposition 3.1 and let $\pi :(\omega _f(z),f)\to (\Omega ,\tau )$ be the factor map onto the odometer described in Lemma 3.2. Let $\mu \in M_f(X)$ be any invariant measure concentrated on $\omega _f(z)$ . Then the pushforward measure $\pi _*(\mu )$ is invariant for the odometer, so by unique ergodicity, it is the Haar measure on $\Omega $ and it has discrete spectrum as a consequence of [Reference Walters26, Theorem 3.5]. Now because $\omega _f(z)$ contains no periodic points, we know that $\mu $ is non-atomic and therefore countable sets have measure zero. Then in the category of measure preserving transformations, the factor map $\pi :(\omega _f(z),\mu ,f)\to (\Omega ,\pi _*(\mu ),\tau )$ is in fact an isomorphism, because by Lemma 3.2, it is invertible except on a set of $\mu $ -measure zero. This implies that $\mu $ has discrete spectrum.
We have shown that an invariant measure concentrated on any of the countably many invariant sets in equation (4.2) has discrete spectrum. By Lemma 4.1, this completes the proof.
We would like to thank M. Gröger and F. García-Ramos for pointing out the fact mentioned in Remark 3.3. M. Foryś-Krawiec was supported in part by the National Science Centre, Poland (NCN), grant SONATA BIS no. 2019/34/E/ST1/00237: 'Topological and Dynamical Properties in Parameterized Families of Non-Hyperbolic Attractors: the inverse limit approach'. S. Roth was supported by Czech Republic RVO funding for IČ47813059.
This research is part of a project that has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 883748.
Adler, R. L., Konheim, A. G. and McAndrew, M. H. Topological entropy. Trans. Amer. Math. Soc. 114 (1965), 309–319.
Arévalo, D., Charatonik, W. J., Pellicer Covarrubias, P. and Simón, L. Dendrites with a closed set of end points. Topology Appl. 115(1) (2001), 1–17.
Askri, G. Li–Yorke chaos for dendrite maps with zero topological entropy and $\omega$-limit sets. Discrete Contin. Dyn. Syst. 37(6) (2017), 2957–2976.
Askri, G. Equicontinuity and Li–Yorke pairs of dendrite maps. Dyn. Syst. 35(4) (2020), 597–608.
Bartoš, A., Bobok, J., Pyrih, P., Roth, S. and Vejnar, B. Constant slope, entropy, and horseshoes for a map on a tame graph. Ergod. Th. & Dynam. Sys. 40(11) (2020), 2970–2994.
Block, L. S. and Coppel, W. A. Dynamics in One Dimension (Lecture Notes in Mathematics, 1513). Springer-Verlag, Berlin, 1992.
Bowen, R. Entropy for group endomorphisms and homogeneous spaces. Trans. Amer. Math. Soc. 153 (1971), 401–414.
Davenport, H. On some infinite series involving arithmetical functions (II). Q. J. Math. 8 (1937), 313–320.
Dinaburg, E. I. The relation between topological entropy and metric entropy. Dokl. Akad. Nauk 190 (1970), 19–22.
Downarowicz, T. Entropy in Dynamical Systems (New Mathematical Monographs, 18). Cambridge University Press, Cambridge, 2011.
El Abdalaoui, E. H., Askri, G. and Marzougui, H. Möbius disjointness conjecture for local dendrite maps. Nonlinearity 32(1) (2019), 285–300.
García-Ramos, F., Jäger, T. and Ye, X. Mean equicontinuity, almost automorphy and regularity. Israel J. Math. 243 (2021), 155–183.
Huang, W., Wang, Z. and Ye, X. Measure complexity and Möbius disjointness. Adv. Math. 347 (2019), 827–858.
Karagulyan, D. On Möbius orthogonality for interval maps of zero entropy and orientation-preserving circle homeomorphisms. Ark. Mat. 53 (2015), 317–327.
Kočan, Z., Kornecká-Kurková, V. and Málek, M. Entropy, horseshoes and homoclinic trajectories on trees, graphs and dendrites. Ergod. Th. & Dynam. Sys. 31 (2011), 165–175.
Li, J., Oprocha, P., Yang, Y. and Zeng, T. On dynamics of graph maps with zero topological entropy. Nonlinearity 30(12) (2017), 4260–4276.
Li, J., Oprocha, P. and Zhang, G. H. Quasi-graphs, zero entropy and measures with discrete spectrum. Preprint, 2021, arXiv:1809.05617v2 [math.DS].
Li, J., Tu, S. and Ye, X. Mean equicontinuity and mean sensitivity. Ergod. Th. & Dynam. Sys. 35 (2015), 2587–2612.
Mai, J. H. and Shi, E. H. $\bar{R}=\bar{P}$ for maps of dendrites with $\mathrm{Card}(\mathrm{End}(X))<c$. Internat. J. Bifur. Chaos Appl. Sci. Engrg. 19(4) (2009), 1391–1396.
Misiurewicz, M. Horseshoes for continuous mappings of an interval. Dynamical Systems (Bresanone, 1978). Ed. C. Marchioro. Liguori, Naples, 1980, pp. 125–135.
Nadler, S. B. Jr. Continuum Theory. Marcel Dekker, New York, 1992.
Naghmouchi, I. Pointwise-recurrent dendrite maps. Ergod. Th. & Dynam. Sys. 33 (2013), 1115–1123.
Sarnak, P. Three Lectures on the Möbius Function, Randomness and Dynamics (Lecture Notes). IAS, 2009.
Scarpellini, B. Stability properties of flows with pure point spectrum. J. Lond. Math. Soc. (2) 26(3) (1982), 451–464.
Sharkovsky, A. N., Kolyada, S. F., Sivak, A. G. and Fedorenko, V. V. Dynamics of One-Dimensional Maps (Mathematics and Its Applications, 407). Kluwer Academic Publishers, Dordrecht, 1997.
Walters, P. An Introduction to Ergodic Theory (Graduate Texts in Mathematics, 79). Springer, New York, 1982.
Estimation of standard errors and treatment effects in empirical economics—methods and applications
Olaf Hübler
Journal for Labour Market Research, volume 47, Supplement 1-2 (20 years of IAB Establishment Panel – Payoffs and Perspectives), pages 43–62 (2014)
This paper discusses methodological problems of standard errors and treatment effects. First, heteroskedasticity- and cluster-robust estimates are considered, as well as problems with Bernoulli distributed regressors, outliers and partially identified parameters. Second, procedures to determine treatment effects are analyzed. Four principles are the focus: difference-in-differences estimators, matching procedures, treatment effects in quantile regression analysis and regression discontinuity approaches. These methods are applied to Cobb-Douglas functions using IAB establishment panel data.
Different heteroskedasticity-consistent procedures lead to similar standard errors. Cluster-robust estimates show clear deviations. Dummies with a mean near 0.5 have a smaller variance of the coefficient estimates than others. Not all outliers have a strong influence on significance. New methods to handle the problem of partially identified parameters lead to more efficient estimates.
The four discussed treatment procedures are applied to the question whether company-level pacts affect output. In contrast to unconditional difference-in-differences and to estimates without matching, the company-level pact effect is positive but insignificant if conditional difference-in-differences, nearest-neighbor or Mahalanobis metric matching is applied. The latter result has to be refined under quantile treatment effects analysis: the higher the quantile, the larger the positive company-level pact effect, and there is a tendency from insignificant to significant effects. A sharp regression discontinuity analysis shows a structural break at a probability of 0.5 that a company-level pact exists. No specific effect of the Great Recession can be detected. Fuzzy regression discontinuity estimates reveal that the company-level pact effect is significantly lower in East than in West Germany.
This paper discusses approaches to estimating standard errors and causal effects. First, heteroskedasticity- and cluster-robust estimates of standard errors are considered, together with peculiarities and problems arising from dummy variables as regressors, outliers and only partially identified parameters. Thereafter, procedures for determining treatment effects are addressed. Four principles are presented: difference-in-differences estimators, matching procedures, causal effects in quantile regression analysis and regression discontinuity approaches. In the second part of the paper, these methods are applied to Cobb-Douglas production functions using IAB establishment panel data.
Different heteroskedasticity-consistent procedures lead to quite similar standard errors. Cluster-robust estimates, by contrast, show clear deviations. Dummy regressors with a mean near 0.5 exhibit smaller variances of the coefficient estimates than others. Not all outliers have a notable influence on significance. Newer methods for handling the problem of only partially identified parameters lead to more efficient estimates.
The four treatment procedures discussed are applied to the question whether company-level pacts have a significant influence on production output. In contrast to unconditional difference-in-differences estimators and to estimates without matching, the effects of company-level pacts are positive but insignificant under conditional difference-in-differences estimators and matching procedures. This statement has to be refined on the basis of the quantile treatment analysis: the higher the quantile, the larger the effect of company-level pacts, with a tendency from insignificant to significant effects. The sharp regression discontinuity analysis shows a structural break at a probability of 0.5 that a company-level pact exists. No specific effects during the 2009 recession can be detected. Estimates within fuzzy regression discontinuity designs reveal that the effects of company-level pacts are significantly lower in East Germany than in West Germany.
Contents, questions and methods have changed in empirical economics in the last 20 years. Many methods were developed in the past, but their application in empirical economics follows with a lag. Some methods are well known but have received only little attention. New approaches focus on characteristics of the data, on modified estimators, on correct specifications, on unobserved heterogeneity, on endogeneity and on causal effects. Real data sets are often not compatible with the assumptions of classical models. Therefore, modified methods have been suggested for estimation and inference.
The road map for the following considerations consists of four hypotheses, where the first two and the last two belong together:
Significance is an important indicator in empirical economics but the results are sometimes misleading.
Violation of assumptions, clustering of the data, outliers and only partially identified parameters are often the reason for wrong standard errors when classical methods are used.
The estimation of average effects is useful but subgroup analysis and quantile regressions are important supplements.
Causal effects are of great interest but the determination is based on disparate approaches with varying results.
In the following some econometric methods are developed, presented and applied to Cobb-Douglas production functions.
Econometric methods
Significance and standard errors in regression models
The workhorse in empirical economics is the classical linear model
$$ y_{i}= x'_{i}\beta+ u_{i}, \quad i=1,\ldots,n. $$
The coefficient vector β is estimated by ordinary least squares (OLS)
$$ \hat{\beta}= \bigl(X'X\bigr)^{-1}X'y $$
and the covariance matrix by
$$ \hat{V}(\hat{\beta}) = \hat{\sigma}^2 \bigl(X'X\bigr)^{-1}, $$
where X is the design matrix and \(\hat{\sigma}^{2}\) the estimated variance of the disturbances. The influence of a regressor, e.g. \(x_k\), on the regressand y is called significant at the 5 percent level if \(|t|=|\hat{\beta}_{k}/\sqrt {\hat{V}(\hat{\beta}_{k})}|>t_{0.975}\). In empirical papers this result is often documented by an asterisk and implicitly interpreted as a good outcome, while insignificance is seen as a negative signal. Ziliak and McCloskey (2008) and Krämer (2011) have criticized this procedure, although the analysis is extended by robustness tests in many investigations. Three types of mistakes can lead to a misleading interpretation:
There does not exist any effect but due to technical inefficiencies a significant effect is reported.
The effect is small but due to the precision of the estimates a significant effect is determined.
There exists a strong effect but due to the variability of the estimates the statistical effect cannot be detected.
The consequence cannot be to neglect the instrument of significance. But what can we do? The following proposals may help to clarify why some standard errors are high and others low, why some influences are significant and others not, and whether alternative procedures can reduce the danger of one of the three mistakes:
Compute robust standard errors.
Analyze whether variation within clusters is only small in comparison with variation between the clusters.
Check whether dummies as regressors with high or low probability are responsible for insignificance.
Test whether outliers induce large standard errors.
Consider the problem of partially identified parameters.
Detect whether collinearity is effective.
Investigate alternative specifications.
Use sub-samples and compare the results.
Execute sensitivity analyses (Leamer 1985).
Employ the sniff test (Hamermesh 2000) in order to detect whether econometric results are in accord with economic plausibility.
Heteroskedasticity-robust standard errors
OLS estimates are inefficient or biased and inconsistent if assumptions of the classical linear model are violated. We need alternatives which are robust to the violation of specific assumptions. In empirical papers we often find the note that robust standard errors are displayed. This is imprecise. In most cases it means only heteroskedasticity-robust; this should be mentioned, and also that the estimation is based on White's approach. If we know the type of heteroskedasticity, a transformation of the regression model should be preferred, namely
$$ \frac{y_i}{\sigma_i} = \frac{\beta_0}{\sigma_i}+\beta_1 \frac {x_{1i}}{\sigma_i}+ \cdots+\beta_K\frac{x_{Ki}}{\sigma_i} + \frac {u_i}{\sigma_i}, $$
where i=1,…,n. Typically, the individual variances of the error term are unknown. In the case of unknown and unspecific heteroskedasticity, White (1980) recommends the following estimation of the covariance matrix
$$\begin{aligned} \hat{V}_{white}(\hat{\beta}) = \bigl(X'X\bigr)^{-1} \Bigl(\sum\hat{u}_i^2x_ix_i' \Bigr) \bigl(X'X\bigr)^{-1}. \end{aligned}$$
Such estimates are asymptotically heteroskedasticity-robust. In many empirical investigations this robust estimator is routinely applied without testing whether heteroskedasticity exists. We should stress that these estimated standard errors are more biased than conventional estimators if the residuals are homoskedastic. As long as there is not too much heteroskedasticity, robust standard errors are also biased downward. In the literature we find some suggestions to modify this estimator, namely to weight the squared residuals \(\hat{u}_{i}^{2}\):
$$\begin{aligned} hc_1=\frac{n}{n-K}\hat{u}^2_i \end{aligned}$$
$$\begin{aligned} hc_j=\frac{1}{(1-c_{ii})^{\delta_j}}\hat{u}_i^2, \end{aligned}$$
where \(j=2,3,4\), \(c_{ii}\) is the ith main diagonal element of \(X(X'X)^{-1}X'\), \(\delta_2=1\), \(\delta_3=2\), \(\delta_4=\min[\gamma_1,(nc_{ii})/K]+\min[\gamma_2,(nc_{ii})/K]\), and \(\gamma_1\) and \(\gamma_2\) are real positive constants.
The intention is to obtain more efficient estimates. It can be shown for hc2 that under homoskedasticity the mean of \(\hat{u}_{i}^{2}\) is \(\sigma^2(1-c_{ii})\); then \(E(\hat{u}_{i}^{2}/(1-c_{ii}))\) is \(\sigma^2\). Therefore, we should expect the hc2 option to lead to better estimates in small samples under homoskedasticity than the simple hc1 option. The second correction (hc3) is presented by MacKinnon and White (1985); it is an approximation of a more complicated estimator based on the jackknife—see Sect. 2.1.2. Applications demonstrate that the standard errors increase from OLS via hc1 and hc2 to the hc3 option. Simulations, however, do not show a clear preference. As one cannot be sure which case is the correct one, a conservative choice is preferable (Angrist and Pischke 2009, p. 302): the estimator with the largest standard error should be chosen. This means the null hypothesis (\(H_0\): no influence on the regressand) is retained longer than with other options.
Cribari-Neto and da Silva (2011) suggest \(\gamma_1=1\) and \(\gamma_2=1.5\) in hc4. The intention is to weaken the effect of influential observations compared with hc2 and hc3, or in other words to enlarge the standard errors. In an earlier version (Cribari-Neto et al. 2007) a slight modification is presented: \(hc_{4}^{*}=1/(1-c_{ii})^{\delta_{4*}}\), where \(\delta_{4*}=\min(4,nc_{ii}/K)\). It is argued that the presence of high-leverage observations is more decisive for the finite-sample behavior of the consistent estimators of \(V(\hat{\beta})\) than the intensity of heteroskedasticity; hc4 and hc4∗ aim at discounting leverage points—see Sect. 2.1.5—more heavily than hc2 and hc3. The same authors formulate a further estimator
$$\begin{aligned} hc_5=\frac{1}{(1-c_{ii})^{\delta_5}}\hat{u}^2_i, \end{aligned}$$
where \(\delta_{5}=\min(\frac{nc_{ii}}{K},\max(4,\frac{nkc_{ii, \max}}{K}))\), k is a predefined constant, where k=0.7 is suggested. In this case squared residuals are affected by the maximal leverage.
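To make the sandwich construction concrete, the following minimal Python sketch computes the White-type covariance matrix with the hc1–hc3 weightings from an OLS fit. It assumes only numpy; the function name and the returned dictionary layout are illustrative choices, not part of the original paper.

```python
import numpy as np

def hc_covariances(X, y):
    """Sketch: OLS with White-type (hc1-hc3) sandwich covariance estimates.

    X is the n x K design matrix (including a constant), y the regressand.
    Returns the OLS coefficients and a dict of covariance matrices.
    """
    n, K = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta                                # OLS residuals
    c = np.einsum('ij,jk,ik->i', X, XtX_inv, X)     # leverages c_ii

    def sandwich(w):
        meat = (X * w[:, None]).T @ X               # sum_i w_i * x_i x_i'
        return XtX_inv @ meat @ XtX_inv

    cov = {
        'hc0': sandwich(u**2),
        'hc1': sandwich(n / (n - K) * u**2),
        'hc2': sandwich(u**2 / (1 - c)),
        'hc3': sandwich(u**2 / (1 - c)**2),
    }
    return beta, cov

# Standard errors are the square roots of the diagonal elements,
# e.g. np.sqrt(np.diag(cov['hc3'])).
```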
Re-sampling procedures
Other possibilities to determine the standard error are the jackknife and the bootstrap estimator. These are re-sampling procedures; in the jackknife case, sub-samples with n−1 observations are constructed by sequentially eliminating one observation. The jackknife compares the estimated coefficients from the full sample, \(\hat{\beta}\), with those obtained after eliminating one observation, \(\hat{\beta}_{-i}\). The jackknife estimator of the covariance matrix is
$$\begin{aligned} \hat{V}_{\mathrm{jack}}=\frac{n-K}{n}\sum _{i=1}^n(\hat{\beta}_{-i}-\hat{\beta}) ( \hat{\beta}_{-i}-\hat{\beta})'. \end{aligned}$$
There exist many ways to bootstrap regression estimates. The basic idea is to treat the sample with n elements as the population and to draw B samples of m elements (sampling with replacement), where usually m≤n, although m>n is also feasible. If \(\hat{\beta}_{\mathrm{boot}}'=(\hat {\beta}(1)_{m}', \ldots,\hat{\beta}(B)_{m}')\) are the bootstrap estimators of the coefficients, the asymptotic covariance matrix is
$$\begin{aligned} \hat{V}_{\mathrm{boot}}=\frac{1}{B}\sum ^B_{b=1}\bigl(\hat{\beta}(b)_m-\hat{ \beta}\bigr) \bigl(\hat{\beta}(b)_m-\hat{\beta}\bigr)', \end{aligned}$$
where \(\hat{\beta}\) is the estimator with the original sample size n. Alternatively, \(\hat{\beta}\) can be substituted by \(\bar{\beta}=1/B\sum \hat{\beta}(b)_{m}\). Bootstrap estimates of the standard error are especially helpful when it is difficult to compute standard errors by conventional methods, e.g. 2SLS estimators under heteroskedasticity or cluster-robust standard errors when many small clusters or only short panels exist. The jackknife can be viewed as a linear approximation of the bootstrap estimator. A further popular way to estimate the standard errors is the delta method. This approach is especially used for nonlinear functions of parameter estimates \(\hat{\gamma}=g(\hat{\beta})\). An asymptotic approximation of the covariance matrix of a vector of such functions is determined. It can be shown that
$$\begin{aligned} n^{1/2}(\hat{\gamma}- \gamma_0) \sim N\bigl(0, G_0V^{\infty}(\hat{\beta})G_0'\bigr), \end{aligned}$$
where \(\gamma_0\) is the vector of the true values of γ, \(G_0\) is an \(l\times K\) matrix with typical element \(\partial g_i(\beta)/\partial\beta_j\), evaluated at \(\beta_0\), and \(V^{\infty}\) is the asymptotic covariance matrix of \(n^{1/2}(\hat{\beta}- \beta_{0})\).
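As an illustration, a minimal pairs-bootstrap sketch in Python (assuming numpy; the function name is hypothetical) shows how the bootstrap covariance matrix above can be computed by re-drawing observations with replacement:

```python
import numpy as np

def bootstrap_ols_cov(X, y, B=999, rng=None):
    """Sketch of a pairs bootstrap for the OLS covariance matrix.

    B samples of size n are drawn with replacement from the rows of (X, y);
    the coefficients are re-estimated for each sample and their dispersion
    around the full-sample estimate is returned.
    """
    rng = np.random.default_rng(rng)
    n, K = X.shape
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    draws = np.empty((B, K))
    for b in range(B):
        idx = rng.integers(0, n, size=n)        # resample observations
        draws[b] = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
    diff = draws - beta_hat
    return diff.T @ diff / B                    # V_boot
```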
The Moulton problem
The estimated variance of a regressor's coefficient is too low if this variable varies strongly between groups but only little within groups (Moulton 1986, 1987, 1990). In a more general context this is called the problem of cluster sampling. Individuals or establishments are sampled in groups or clusters. One consequence may be a weighted estimation that adjusts for differences in sampling rates. However, weighting is not always necessary, and estimates may understate the true standard errors. Some empirical investigations note that cluster-robust standard errors are displayed but do not mention the cluster variable. If panel data are used, this is usually the identification variable of the individuals or firms. In many specifications more than one cluster variable, e.g. a regional and an industry variable, is incorporated. Then it is misleading if the cluster variable is not mentioned. Furthermore, a sequential determination of a cluster-robust correction is then not appropriate if there is a dependency between the cluster variables. If we can assume that there is a hierarchy of the cluster variables, a multilevel approach can be applied (Raudenbush and Bryk 2002; Goldstein 2003). Cameron and Miller (2010) suggest a two-way clustering procedure. The covariance matrix can be determined by
$$\begin{aligned} \hat{V}_{\mathrm{two}\mbox{-}\mathrm{way}}(\hat{\beta})=\hat{V}_1(\hat{\beta})+ \hat{V}_2(\hat{\beta})-\hat{V}_{1\cap2}(\hat{\beta}) \end{aligned}$$
where the components are computed by
$$\begin{aligned} &{\hat{V}(\hat{\beta})=\bigl(X'X\bigr)^{-1}\hat{B} \bigl(X'X\bigr)^{-1}} \\ &{\hat{B}=\Biggl(\sum_{g=1}^{G}X'_g \hat{u}_g\hat{u}_g'X_g \Biggr).} \end{aligned}$$
Different ways of clustering can be used. Cluster-robust inference asymptotics are based on G→∞. In many applications there are only a few clusters. In this case \(\hat{u}_{g}\) has to be modified. One way is the following transformation
$$\begin{aligned} \tilde{u}_g=\sqrt{\frac{G}{G-1}}\hat{u}_g. \end{aligned}$$
Further methods and suggestions in the literature are presented by Cameron and Miller (2010) and Wooldridge (2003).
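A minimal Python sketch of the basic one-way cluster-robust (sandwich) covariance, including a finite-cluster correction of the kind mentioned below, may help to fix ideas; it assumes numpy, and the function name is hypothetical.

```python
import numpy as np

def cluster_robust_cov(X, y, groups):
    """Sketch of a one-way cluster-robust covariance estimate.

    groups is an array of cluster labels, one per observation. A small-sample
    correction G/(G-1) * (n-1)/(n-K) is applied, similar to the default of
    common statistical packages.
    """
    n, K = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta
    meat = np.zeros((K, K))
    for g in np.unique(groups):
        sel = groups == g
        s = X[sel].T @ u[sel]                  # X_g' u_g  (K-vector)
        meat += np.outer(s, s)
    G = len(np.unique(groups))
    dfc = G / (G - 1) * (n - 1) / (n - K)      # finite-cluster correction
    return dfc * XtX_inv @ meat @ XtX_inv
```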
A simple and extreme example shall demonstrate the cluster problem.
Assume a data set with 5 observations (n=5) and 4 variables (V1–V4).
Obs.   V1    V2     V3     V4
1      24    123    −234   −8
2      875   87     54     3
3      −12   1234   −876   345
4      231   −87    −65    9808
5      43    34     9      −765
The linear model
$$\begin{aligned} V1=\beta_1+\beta_2V2+\beta_3V3+ \beta_4V4+u \end{aligned}$$
is estimated by OLS using the original data set (1M). Then the data set is doubled (2M), quadrupled (4M) and octuplicated (8M). The following OLS estimates result.
        \(\hat{\beta}\)   \(\hat{\sigma}_{\hat{\beta}}\) (1M)   (2M)      (4M)       (8M)
V2      1.7239            1.7532                                0.7158    0.4383     0.2922
const   323.2734          270.5781                              110.463   67.64452   45.0963
The coefficients for 1M to 8M are the same; however, the standard errors decrease if the same data set is multiplied. Namely, the variance is only 1/6, 1/16 and 1/36 of the original variance. The general relationship can be shown as follows. For the original data set (\(X_1\)) the covariance matrix is
$$\begin{aligned} \hat{V}_1(\hat{\beta}) = \hat{\sigma}_1^2 \bigl(X_1'X_1\bigr)^{-1}. \end{aligned}$$
Using \(X_1=\cdots=X_F\), the F-times enlarged data set with the design matrix \(X'=:(X_{1}'\cdots X_{F}')\) leads to
$$\begin{aligned} \hat{\sigma}_F^2 = \frac{1}{F\cdot n - K}\sum _{i=1}^{F\cdot n}\hat {u}^2_{i} = \frac{F(n-K)}{F\cdot n - K}\hat{\sigma}^2_1 \end{aligned}$$
$$\begin{aligned} \hat{V}_F(\hat{\beta}) =& \hat{\sigma}_F^2 \bigl(X'X\bigr)^{-1} = \hat{\sigma }_F^2 \frac{1}{F}\cdot\bigl(X_1'X_1 \bigr)^{-1} \\ =& \frac{n-K}{F\cdot n - K}\hat {V}_1(\hat{\beta}). \end{aligned}$$
K is the number of regressors including the constant term, n is the number of observations in the original data set (number of clusters), F is the number of observations within a cluster. In the numerical example with F=8, K=4, n=5 the Moulton factor MF that indicates the deflation factor of the variance is
$$\begin{aligned} MF = \frac{n-K}{F\cdot n - K} = \frac{1}{36}. \end{aligned}$$
This is exactly what was demonstrated in the numerical example. Analogously, the values 1/6 and 1/16 follow for F=2 and F=4. As multiplying the data set does not add any further information to the original data set, not only the coefficients but also the standard errors should be the same. Therefore, it is necessary to correct the covariance matrix. Statistical packages, e.g. Stata, supply cluster-robust estimates
$$\begin{aligned} \hat{V}(\hat{\beta})_C = \Biggl(\sum^C_{c=1}X_c'X_c \Biggr)^{-1}\sum^C_{c=1}X_c' \hat{u}_c\hat{u}_c'X_c\Biggl(\sum ^C_{c=1}X_c'X_c \Biggr)^{-1}, \end{aligned}$$
where C is the number of clusters. In our specific case this is the number of observations n. This approach implicitly assumes that F is small and n→∞. If this assumption does not hold a degrees-of-freedom correction
$$\begin{aligned} \mathit{df}_C=\frac{F\cdot n-1}{F\cdot n-K}\cdot\frac{n}{n-1} \end{aligned}$$
is helpful. \(\mathit{df}_{C}\cdot\hat{V}(\hat{\beta})_{C}\) is the default option in Stata and corrects for the number of clusters being finite in practice. Nevertheless, this correction only partially eliminates the underestimation of the standard errors. In other words, the corrected t-statistic of the regressor \(x_k\) is larger than that of \(\hat{\beta }_{k}/\sqrt{\hat{V}_{1k}}\).
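The duplication experiment can be replicated with a short Python sketch (numpy only; function name hypothetical): the naive OLS variances of the copied data set shrink exactly by the Moulton factor, although no information has been added.

```python
import numpy as np

def moulton_demo(X, y, F=8):
    """Sketch of the duplication experiment: naive OLS variances shrink by the
    Moulton factor (n-K)/(F*n-K) when the data set is copied F times."""
    n, K = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta
    s2 = u @ u / (n - K)
    V1 = s2 * XtX_inv                           # covariance, original data

    XF, yF = np.tile(X, (F, 1)), np.tile(y, F)  # F-fold duplicated data
    uF = yF - XF @ beta                         # coefficients are unchanged
    s2F = uF @ uF / (F * n - K)
    VF = s2F * np.linalg.inv(XF.T @ XF)         # deflated covariance

    mf = (n - K) / (F * n - K)                  # Moulton (deflation) factor
    return np.diag(VF) / np.diag(V1), mf        # both ratios equal mf
```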
Large standard errors of dichotomous regressors with small or large mean
Another problem with estimated standard errors can be induced by Bernoulli distributed regressors. Assume a simple two-variable classical regression model
$$\begin{aligned} y = a + b\cdot D + u. \end{aligned}$$
D is a dummy variable and the variance of \(\hat{b}\) is
$$\begin{aligned} V(\hat{b})=\frac{\sigma^2}{n}\cdot\frac{1}{s_D^2}, \end{aligned}$$
$$\begin{aligned} s_D^2 =&\hat{P}(D=1)\cdot\hat{P}(D=0)=: \hat{p}(1- \hat{p})\\ =&\frac {(n|D=1)}{n}\cdot\biggl(1-\frac{(n|D=1)}{n}\biggr). \end{aligned}$$
If \(s_{D}^{2}\) is determined by \(\bar{D}=(n|D=1)/n\), we find that \(s_{D}^{2}\) reaches its maximum of 0.25 at \(\bar{D}=0.5\). \(V(\hat{b})\) is minimal at given n and \(\sigma^2\) when the sample variance of D reaches its maximum, that is, if \(\bar{D}=0.5\). This result holds only for inhomogeneous models (i.e. models with a constant term).
An income variable (\(Y=Y_0/10^7\)) with 53,664 observations is regressed on a Bernoulli distributed random variable RV. The coefficient \(\beta_1\) of the linear model \(Y=\beta_0+\beta_1 RV+u\) is estimated by OLS, where alternative values of the mean of RV (\(\overline{RV}\)) are assumed (0.1,0.2,…,0.9)
                                        \(\hat{\beta}_{1}\)   std.err.
\(\overline{RV}=0.1\)                       −0.3727           0.6819
\(\overline{RV}=0.2\)                       −0.5970           0.5100
\(\overline{RV}=0.4\)                        0.3068           0.4170
\(\overline{RV}=\boldsymbol{0.5}\)           0.1338           0.4094
This example confirms the theoretical result. The standard error is smallest if \(\overline{RV}=0.5\) and increases systematically if the mean of RV decreases or increases. An extension to multiple regression models seems possible—see applications in the Appendix, Tables 11, 12, 13, 14. The more \(\bar{D}\) deviates from 0.5, i.e. the larger or smaller the mean of D, the higher is the tendency to insignificant effects. A caveat is necessary. The conclusion that the t-value of a dichotomous regressor \(D_1\) is always smaller than that of \(D_2\), when \(V(D_1)>V(D_2)\), is not compelling, because the basic effect of \(D_1\) on y may be larger than that of \(D_2\) on y. The theoretical result refers to a specific variable and not to a comparison between regressors. In practice, significance is determined by \(t=\hat{b}/\sqrt{\hat{V}(\hat {b})}\). However, we do not find a systematic influence of \(\hat{b}\) on t if \(\bar{D}\) varies. Nevertheless, the random differences in the influence of D on y can dominate the \(\bar{D}\) effect via \(s_{D}^{2}\). The comparison of Table 13 with Table 14 shows that the influence of a works council (WOCO) is stronger than that of a company-level pact (CLP): the coefficients of the former regressor are larger and the standard errors are lower than those of the latter regressor, so that the t-values are larger. In both cases the standard errors increase if the mean of the regressor is reduced. The comparison of line 1 in Table 13 with line 9 in Table 14, where the means of CLP and WOCO are nearly the same, makes clear that the stronger basic effect of WOCO on lnY dominates the mean reduction effect of WOCO. The t-value in line 9 of Table 14 is smaller than that in line 1 of Table 14 but still larger than that in line 1 of Table 13. Not all deviations of the mean of a dummy D as regressor from 0.5 induce the described standard error effects; a random variation of \(\bar{D}\) is necessary. An example where this is not the case is matching—see Sect. 2.2 and the application in Sect. 3. There, \(\bar{D}\) increases due to the systematic elimination of those observations with D=0 that are dissimilar to those with D=1 in other characteristics.
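A small simulation sketch in Python (numpy only; the synthetic outcome replaces the income variable of the example and is purely illustrative) reproduces the pattern that the standard error of the dummy coefficient is smallest at a mean of 0.5:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
y0 = rng.normal(0.0, 1.0, n)                   # outcome unrelated to the dummy

# The standard error of the dummy coefficient is smallest when the share of
# ones is 0.5 and grows as the mean moves towards 0 or 1.
for p in (0.1, 0.2, 0.4, 0.5):
    D = (rng.random(n) < p).astype(float)
    X = np.column_stack([np.ones(n), D])
    beta, *_ = np.linalg.lstsq(X, y0, rcond=None)
    u = y0 - X @ beta
    s2 = u @ u / (n - 2)
    se_b = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    print(f"mean(D)={p:.1f}  se(b)={se_b:.4f}")  # se is minimal near p=0.5
```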
Outliers and influential observations
Outliers may have strong effects on the estimates of the coefficients and of the dependent variable as well as on standard errors and therefore on significance. In the literature we find some suggestions to measure outliers that are due to large or small values of the dependent variable or of the independent variables. Belsley et al. (1980) use the main diagonal elements \(c_{ii}\) of the hat matrix \(C=X(X'X)^{-1}X'\) to determine the effects of a single observation on the coefficient estimator \(\hat{\beta}\), on the estimated endogenous variable \(\hat {y}_{i}\) and on the variance \(\hat{V}(\hat{y})\). The higher \(c_{ii}\), the larger is the difference between the estimated dependent variable with and without the ith observation. A rule of thumb is based on the relation
$$\begin{aligned} c_{ii}>\frac{2K}{n}. \end{aligned}$$
An observation i is called an influential observation with a strong leverage if this inequality is fulfilled. The effects of the ith observation on \(\hat{\beta}\), \(\hat{y}\) and \(\hat{V}(\hat{\beta})\) and the rules of thumb can be expressed by
$$\begin{aligned} \bigl|\hat{\beta}_{k}-\hat{\beta}_k(i)\bigr|>\frac{2}{\sqrt{n}} \end{aligned}$$
$$\begin{aligned} \biggl|\frac{\hat{y}_i-\hat{y}_{i(i)}}{s(i)\sqrt{c_{ii}}}\biggr|>2\sqrt{\frac{K}{n}} \end{aligned}$$
$$\begin{aligned} \biggl|\frac{\operatorname{det}(s^2(i)(X'(i)X(i))^{-1})}{\operatorname{det}(s^2(X'X)^{-1})} \biggr| > \frac{3K}{n}. \end{aligned}$$
If the inequalities are fulfilled, this indicates a strong influence of observation i where (i) means that observation i is not considered in the estimates. The determination of an outlier is based on externally studentized residuals
$$\begin{aligned} \hat{u}^*_i=\frac{\hat{u}_i}{s(i)\sqrt{1-c_{ii}}} \sim t_{n-K-1}. \end{aligned}$$
Observations which fulfill the inequality \(|\hat{u}^{*}_{i}|>t_{1-\alpha /2;n-K-1}\) are called outliers. Alternatively, a mean shift outlier model can be formulated
$$\begin{aligned} y = X\beta+ A_j\delta+ \epsilon, \end{aligned}$$
$$\begin{aligned} A_{j} = \left \{ \begin{array}{l@{\quad }l} 1 &\mbox{if}\ i=j\\ 0 &\mbox{otherwise}. \end{array} \right . \end{aligned}$$
Observation j has a statistical effect on y if δ is significantly different from zero. The estimated t-value is the same as \(\hat{u}^{*}_{j}\). This procedure does not distinguish whether the outlier j is due to unusual y- or unusual x-values.
Hadi (1992) proposes an outlier detection with respect to all regressors. The decision whether the design matrix X contains outliers is based on an elliptical distance
$$\begin{aligned} d_i(c,W) = \sqrt{(x_i-c)'W(x_i-c)}, \end{aligned}$$
where intuitively the classical choices of c and W are the arithmetic mean (\(\bar{x}\)) and the inverse of the sample covariance matrix (\(S^{-1}\)) of the regressors, respectively, so that the Mahalanobis distance follows. If
$$\begin{aligned} d_i\bigl(\bar{x} ,S^{-1}\bigr)^2 > \chi^2_K, \end{aligned}$$
observation i is identified as an outlier. As \(\bar{x}\) and S react sensitively to outliers, it is necessary to estimate an outlier-free mean and sample covariance matrix. For this purpose, only outlier-free observations are considered to determine \(\bar{x}\) and S. Another way to avoid the sensitivity problem is to use more robust estimators of the location and covariance matrix; e.g. the median, unlike the mean, is robust to outliers. Finally, an outlier vector MOD (multiple outlier dummy) instead of A is incorporated in the model in order to test whether the identified outlier observations have a significant effect on the dependent variable. A second problem is whether we should eliminate all outliers, only some of them or none. The situation is obvious if an outlier is induced by measurement errors: then we should eliminate this observation if we have no information to correct the error. Typically, however, we cannot be sure that an anomalous value is due to measurement errors. Hence, the correct estimate lies between the two extremes in which either all outliers are considered or all outliers are eliminated. A solution is presented in the next subsection.
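The leverage and studentized-residual diagnostics can be computed with a short Python sketch (numpy only; function name hypothetical, and the constant 2 is used as a rough stand-in for the t-quantile in the outlier rule):

```python
import numpy as np

def leverage_and_outliers(X, y):
    """Sketch: leverages c_ii and externally studentized residuals, with the
    rules of thumb c_ii > 2K/n (influential observation) and |u*_i| > 2
    (approximate t-quantile) for outliers."""
    n, K = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta
    c = np.einsum('ij,jk,ik->i', X, XtX_inv, X)     # hat-matrix diagonal
    s2 = u @ u / (n - K)
    # leave-one-out error variance s(i)^2
    s2_i = ((n - K) * s2 - u**2 / (1 - c)) / (n - K - 1)
    u_star = u / np.sqrt(s2_i * (1 - c))            # externally studentized
    influential = c > 2 * K / n
    outliers = np.abs(u_star) > 2.0
    return influential, outliers
```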
Partially identified parameters
Assume that some observations are unknown or not exactly measured. The consequence is that a parameter cannot be determined exactly, but only within a range. The outlier situation leads to such a partial identification problem. There exist many other similar constellations.
The share of unemployed persons is 8 %, but 5 % have not answered the question on their employment status. Therefore, the unemployment rate can only be calculated within certain limits, namely between the two extremes:
all persons who have not answered are employed
all persons who have not answered are unemployed.
In the first case the unemployment rate is 0.95 · 8 % = 7.6 %, and in the second case 7.6 % + 5 % = 12.6 %.
The main methodological focus of partially identified parameters is the search for the best statistical inference. Chernozhukov et al. (2007), Imbens and Manski (2004), Romano and Shaikh (2010), Stoye (2009) and Woutersen (2009) have discussed solutions.
If \(\Theta_0=[\theta_l,\theta_u]\) describes the lower and the upper bound based on the two extreme situations, Stoye (2009) develops the following confidence interval
$$\begin{aligned} CI_{\alpha}=\biggl[\hat{\theta}_l-\frac{{c}_{\alpha}\hat{\sigma}_l}{\sqrt{n}}, \hat{ \theta}_u+\frac{{c}_{\alpha}\hat{\sigma}_l}{\sqrt{n}}\biggr], \end{aligned}$$
where \(\hat{\sigma}_{l}\) is the standard error of the estimation function \(\hat{\theta}_{l}\). \(c_{\alpha}\) is chosen by
$$\begin{aligned} \varPhi\biggl({c}_{\alpha}+\frac{\sqrt{n}\hat{\Delta}}{ \hat{\sigma}_l}\biggr)-\varPhi(-{c}_{\alpha})=1- \alpha, \end{aligned}$$
where \(\Delta=\theta_u-\theta_l\). As Δ is unknown, it has to be estimated (\(\hat{\Delta}\)).
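A minimal Python sketch (assuming numpy and scipy; function name hypothetical) solves the equation for \(c_{\alpha}\) numerically and returns the resulting interval; it uses \(\hat{\sigma}_l\) on both sides, i.e. a simplified version of the bound-based confidence interval described above.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bound_ci(theta_l, theta_u, sigma_l, n, alpha=0.05):
    """Sketch of a confidence interval for a partially identified parameter.

    c_alpha solves Phi(c + sqrt(n)*Delta/sigma_l) - Phi(-c) = 1 - alpha with
    Delta = theta_u - theta_l; the interval extends the identified set
    [theta_l, theta_u] by c_alpha*sigma_l/sqrt(n) on each side."""
    delta = theta_u - theta_l
    f = lambda c: norm.cdf(c + np.sqrt(n) * delta / sigma_l) - norm.cdf(-c) - (1 - alpha)
    c_alpha = brentq(f, 0.0, 10.0)
    half = c_alpha * sigma_l / np.sqrt(n)
    return theta_l - half, theta_u + half

# Example with the unemployment bounds from the text (sigma_l and n are
# hypothetical): bound_ci(0.076, 0.126, sigma_l=0.3, n=2000)
```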
Treatment evaluation
The objective of treatment evaluation is the determination of causal effects of economic measures. The simplest form to measure the effect is to estimate α in the linear model
$$\begin{aligned} y=X\beta+ \alpha D + u, \end{aligned}$$
where D is the intervention variable, measured by a dummy: 1 if an individual or an establishment is assigned to treatment; 0 otherwise. Typically, this is not the causal effect. An important reason for this failure is unobserved variables that influence y and D, so that D and u correlate.
In the last 20 years a wide range of methods was developed to determine the "correct" causal effect. Which approach should be preferred depends on the data, the behavior of the economic agents and the assumptions of the model. The major difficulty is that we have to compare an observed situation with an unobserved one; depending on the available information, the latter is estimated. We have to ask what would have occurred if D=0 instead of D=1 had taken place (treatment on the treated). This counterfactual is unknown and has to be estimated. Inversely, if D=0 is observable, we can search for the potential result under D=1 (treatment on the untreated). A further problem is the fixing of the control group. What is the meaning of "otherwise" in the definition of D? Or in other words: What is the causal effect of an unobserved situation? Should we determine the average causal effect or only that of a subgroup?
Neither a before-after comparison \((\bar{y}_{1}|D=1)-(\bar{y}_{0}|D=1)\) nor a comparison of \((\bar{y}_{t}|D=1)\) and \((\bar{y}_{t}|D=0)\) in cross-section is usually appropriate. Difference-in-differences estimators (DiD), a combination of these two methods, are very popular in applications
$$\begin{aligned} \bar{\Delta}_1-\bar{\Delta}_0 =& \bigl[( \bar{y}_1|D=1)-(\bar{y}_1|D=0)\bigr]\\ &{} - \bigl[( \bar{y}_0|D=1)-(\bar{y}_0|D=0)\bigr]. \end{aligned}$$
The effect can be determined in the following unconditional model
$$\begin{aligned} y = a_1 + b_1T + b_2D + b_3TD + u, \end{aligned}$$
where T=1 denotes a period that follows the period of the measure (D=1) and T=0 a period before the measure takes place. In this approach \(\hat{b}_{3}=\bar{\Delta}_{1}-\bar{\Delta}_{0}\) is the causal effect. The equation can be extended by further regressors X; this is called a conditional DiD estimator. Nearly all DiD investigations neglect a potential bias in standard error estimates induced by serial correlation. A further problem results under endogenous intervention variables. Then an instrumental variables estimator should be employed to avoid the endogeneity bias. This procedure will be considered in the quantile regression analysis. If the dependent variable is a dummy, a nonlinear estimator has to be applied. Suggestions are presented by Ai and Norton (2003) and Puhani (2012).
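The interaction specification can be estimated directly by OLS, as in the following minimal Python sketch (numpy only; function name hypothetical), where the optional covariate matrix yields the conditional DiD variant:

```python
import numpy as np

def did_effect(y, T, D, X=None):
    """Sketch of the (un)conditional DiD estimator: the coefficient of T*D in
    y = a1 + b1*T + b2*D + b3*T*D (+ X*beta) + u is the treatment effect b3.
    T marks the post-treatment period, D the treated group."""
    cols = [np.ones_like(y), T, D, T * D]
    if X is not None:                      # conditional DiD with covariates
        cols.append(X)
    Z = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return coef[3]                         # b3 = DiD treatment effect
```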
Matching procedures were developed with the objective to find a control group that is very similar to the treatment group. Parametric and non-parametric procedures can be employed to determine the control group. Kernel, inverse probability, radius matching, local linear regression, spline smoothing or trimming estimators are possible. Mahalanobis metric matching with or without propensity scores and nearest neighbor matching with or without caliper are typical procedures—see e.g. Guo and Fraser (2010). The Mahalanobis distance is defined by
$$\begin{aligned} (u-v)'S^{-1}(u-v), \end{aligned}$$
where u (v) is a vector that incorporates the values of the matching variables of participants (non-participants) and S is the empirical covariance matrix from the full set of non-participants.
An observed or artificial statistical twin can be determined for each participant. The probability of participating in the measure is calculated for all non-participants based on probit estimates (propensity score). The statistical twin j of participant i is the non-participant whose propensity score (\(ps_j\)) is nearest to that of the participant. The absolute distance between i and j may not exceed a given value ϵ
$$\begin{aligned} |ps_i-ps_j| < \epsilon, \end{aligned}$$
where ϵ is a predetermined tolerance (caliper). A quarter of a standard deviation of the sample of estimated propensity scores is suggested as the caliper size (Rosenbaum and Rubin 1985). Once the control group is identified, the causal effect can be estimated using the reduced sample (treatment observations and matched observations). In applications, α from the model y=Xβ+αD+u or \(b_3\) from the DiD approach is determined as the causal effect. Both estimators implicitly assume that the causal effect is the same for all subgroups of individuals or firms and that no unobserved variables exist that are correlated with observed variables. In this respect matching procedures suffer from the same problem as OLS estimators.
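Nearest-neighbor matching on the propensity score with the caliper rule just described can be sketched as follows (Python/numpy; function name hypothetical, matching with replacement for simplicity):

```python
import numpy as np

def nn_match_with_caliper(ps, D, caliper_factor=0.25):
    """Sketch of nearest-neighbour propensity-score matching with a caliper of
    a quarter of the standard deviation of the estimated scores. ps are the
    estimated propensity scores, D the treatment dummy.
    Returns pairs (treated index, matched control index)."""
    caliper = caliper_factor * ps.std()
    treated = np.flatnonzero(D == 1)
    controls = np.flatnonzero(D == 0)
    pairs = []
    for i in treated:
        dist = np.abs(ps[controls] - ps[i])
        j = dist.argmin()
        if dist[j] < caliper:              # accept only matches inside the caliper
            pairs.append((i, controls[j]))
    return pairs
```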
If the interest is to detect whether and to what extent the effects of intervention variables differ between the percentiles of the distribution of the objective variable y, a quantile regression analysis is an appropriate instrument. The objective is to determine quantile treatment effects (QTE). The distribution effect of a measure can be estimated by the difference Δ of the dependent variable with (\(y^1\)) and without (\(y^0\)) treatment (D=1; D=0), separately for specific quantiles \(Q^{\tau}\), where 0<τ<1
$$\begin{aligned} \Delta^{\tau} = Q_{y^1}^{\tau}-Q_{y^0}^{\tau}. \end{aligned}$$
The empirical distribution function of the observed situation and that of the counterfactual are identified. From a modeling point of view, four major cases that differ in their assumptions are developed in the literature. The measure is assumed to be exogenous or endogenous, and the effect on y is unconditional or conditional, analogously to DiD.
             Unconditional                       Conditional
Exogenous    (1) Firpo (2007)                    (2) Koenker and Bassett (1978)
Endogenous   (3) Frölich and Melly (2012)        (4) Abadie et al. (2002)
In case (1) the quantile treatment effect \(Q_{y^{1}}^{\tau}-Q_{y^{0}}^{\tau }\) is estimated by
$$\begin{aligned} Q_{y^j}^{\tau}=\arg \min_{\alpha_0;\alpha_1}E\bigl[ \rho_{\tau}(y-q_j) (W|D=j)\bigr], \end{aligned}$$
where j=0;1, \(q_j=\alpha_0+\alpha_1(D|D=j)\), and \(\rho_{\tau}(a)=a(\tau-1(a\leq 0))\) is the check function, where a is a real number. The weights are
$$\begin{aligned} W=\frac{D}{p(X)} + \frac{1 - D}{1 - p(X)}. \end{aligned}$$
The estimation is characterized by two stages. First, the propensity score is determined by a large number of regressors X via a nonparametric method—\(\hat{p}(X)\). Second, in \(Q_{y^{j}}^{\tau}\) the probability p(X) is substituted by \(\hat{p}(X)\).
Case (2) follows Koenker and Bassett (1978).
$$\begin{aligned} &\sum _{(i|y_i\ge x_i'\beta)=1}^{n_1} \tau\cdot \bigl|y_i- \alpha(D_i|D_i=j)-x_i'\beta\bigr|\\ &\quad {}+ \sum _{(i|y_i<x_i\beta)=n_1+1}^{n} (1-\tau)\cdot \bigl|y_i- \alpha(D_i|D_i=j)-x_i'\beta\bigr| \end{aligned}$$
has to be minimized with respect to α and β, where τ is given. In other words,
$$\begin{aligned} Q_{y^j}^{\tau}=\arg \min_{\alpha;\beta}E\bigl[ \rho_{\tau}(y-q_j) (W|D=j)\bigr], \end{aligned}$$
where j=0;1 and \(q_j=\alpha(D|D=j)+x'\beta\).
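Case (2), the conditional quantile regression with a treatment dummy, can be sketched in Python using the quantile regression implementation of statsmodels (the wrapper function and its name are hypothetical):

```python
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

def conditional_qte(y, D, X, taus=(0.25, 0.5, 0.75)):
    """Sketch of case (2): conditional quantile treatment effects a la Koenker
    and Bassett. For each quantile tau, y is regressed on the treatment dummy D
    and covariates X; the coefficient of D is the conditional QTE."""
    Z = np.column_stack([np.ones_like(y), D, X])
    effects = {}
    for tau in taus:
        res = QuantReg(y, Z).fit(q=tau)
        effects[tau] = res.params[1]       # coefficient of D at quantile tau
    return effects
```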
The method of case (3) is developed by Frölich and Melly (2012). Due to the endogeneity of the intervention variable D, an instrumental variables estimator is used with only one instrument Z and this is a dummy. The quantiles follow from
$$\begin{aligned} Q_{y^j|c}^{\tau} = \arg \min_{\alpha_0;\alpha_1} E\bigl[ \rho_{\tau }(y-q_j)\cdot(W|D=j)\bigr], \end{aligned}$$
where j=0;1, \(q_j=\alpha_0+\alpha_1(D|D=j)\), and c denotes compliers. The weights are
$$\begin{aligned} W = \frac{Z-p(X)}{p(X)(1-p(X))}(2D-1). \end{aligned}$$
Abadie et al. (2002) investigate case (4) and suggest a weighted linear quantile regression. The estimator is
$$\begin{aligned} Q_{y^j}^{\tau}=\arg \min_{\alpha,\beta}E\bigl[ \rho_{\tau}\bigl(y-\alpha D -x'\beta \bigr) (W|D=j)\bigr], \end{aligned}$$
where the weights are
$$\begin{aligned} W = 1 - \frac{D(1-Z)}{1-p(Z=1|X)}-\frac{(1-D)Z}{p(Z=1|X)}. \end{aligned}$$
Regression discontinuity (RD) designs allow one to determine treatment effects in a special situation. This approach uses information on institutional and legal regulations that are responsible for changes in the effects of economic measures. Thresholds are estimated indicating a discontinuity of the effects. Two forms are distinguished: sharp and fuzzy RD. Either the change of the treatment status is exactly effective at a fixed point, or it is assumed that the probability of a treatment change or the mean of a treatment change is discontinuous.
In the case of sharp RD, individuals or establishments (i=1,…,n) are assigned to the treatment or the control group on the basis of the observed variable S. The latter is a continuous or an ordered categorical variable with many values. If variable \(S_i\) is not smaller than a fixed bound \(\bar{S}\), then i belongs to the treatment group (D=1)
$$\begin{aligned} D_i = 1[S_i\ge\bar{S}]. \end{aligned}$$
The following graph, based on artificial data with n=40, demonstrates the design. Assume we know that an institutional rule changes the conditions if \(S>\bar{S}=2.5\) and we want to determine the causal effect induced by the adoption of the new rule. This effect can be measured by the difference of the two estimated regressions at \(\bar{S}\).
In a simple regression model \(y=\beta_0+\beta_1 D+u\) the OLS estimator of \(\beta_1\) would be inconsistent when D and u correlate. If, however, the conditional mean \(E(u|S,D)=E(u|S)=f(S)\) is additionally incorporated in the outcome equation (\(y=\beta_0+\beta_1 D+f(S)+\epsilon\), where \(\epsilon=y-E(y|S,D)\)), the OLS estimator of \(\beta_1\) is consistent. Assuming \(f(S)=\beta_2 S\), the estimator of \(\beta_1\) corresponds to the difference of the two estimated intercepts of the parallel regressions
$$\begin{aligned} \hat{y}_0 =& \hat{E}(y|D=0)=\hat{\beta}_0+\hat{ \beta_2}S \\ \hat{y}_1 =& \hat{E}(y|D=1)=\hat{\beta}_0+\hat{ \beta}_1+\hat{\beta_2}S. \end{aligned}$$
The sharp RD approach identifies the causal effect by distinguishing between the nonlinear function due to the discontinuous character and the smoothed linear function. If, however, a nonlinear function of the general type f(S) is given, modifications have to be considered.
Assume the true function f(S) is a polynomial of pth order
$$\begin{aligned} y_i = \beta_0+\beta_1D_i+ \beta_{21}S_i+\beta_{22}S_i^2+ \cdots+\beta _{2p}S_i^p+u_i \end{aligned}$$
but two linear models are estimated, then the difference between the two intercepts, interpreted as the causal effect, is biased. What looks like a jump is in reality a neglected nonlinear effect.
Another strategy is to determine the treatment effect exactly at the fixed discontinuity point \(\bar{S}\) assuming a local linear regression. Two linear regressions are considered
$$\begin{aligned} y_0 - E(y_0|S=\bar{S}) =& \delta_0(S-\bar{S}) + u_0 \\ y_1 - E(y_1|S=\bar{S}) =& \delta_1(S-\bar{S}) + u_1, \end{aligned}$$
where \(y_j=E(y|D=j)\) and j=0;1. In combination with
$$\begin{aligned} y = (1-D)y_0+Dy_1 \end{aligned}$$
$$\begin{aligned} y =& (1-D) \bigl(E(y_0|S=\bar{S}) + \delta_0(S-\bar{S}) + u_0\bigr) \\ &{}+ D\bigl(E(y_1|S=\bar{S}) + \delta_1(S- \bar{S}) + u_1\bigr). \end{aligned}$$
The linear regression
$$\begin{aligned} y = \gamma_0 + \gamma_1D + \gamma_2(S-\bar{S}) + \gamma_3D(S-\bar{S}) + \tilde{u} \end{aligned}$$
can be estimated, where \(\tilde{u}=u_{0}+D(u_{1}-u_{0})\). This looks like the DiD estimator, but now \(\gamma_{1}=E(y_{1}|S=\bar{S})-E(y_{0}|S=\bar{S})\) and not \(\gamma_3\) is of interest. The estimated coefficient \(\hat{\gamma}_{1}\) is a global but not a localized average treatment effect.
The localized average follows if a small interval around \(\bar{S}\) is modeled, i.e. \(\bar{S}-\Delta S <S_{i}<\bar{S} + \Delta S\). The treatment effect corresponds to the difference of the two previously determined intercepts, restricted to \(\bar{S}<S_{i}<\bar{S}+\Delta S\) on the one hand and to \(\bar{S}-\Delta S <S_{i}<\bar{S}\) on the other hand.
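A minimal Python sketch of the local linear sharp RD estimate within a symmetric window around the cutoff (numpy only; function name and the bandwidth argument are hypothetical) reads:

```python
import numpy as np

def sharp_rd_effect(y, S, cutoff, bandwidth):
    """Sketch of a sharp RD estimate via local linear regression inside a
    window of +/- bandwidth around the cutoff: the coefficient of D in
    y = g0 + g1*D + g2*(S-cutoff) + g3*D*(S-cutoff) + u is the jump at S=cutoff."""
    sel = np.abs(S - cutoff) < bandwidth
    s, yy = S[sel] - cutoff, y[sel]
    D = (s >= 0).astype(float)                 # treatment indicator 1[S >= cutoff]
    Z = np.column_stack([np.ones_like(s), D, s, D * s])
    coef, *_ = np.linalg.lstsq(Z, yy, rcond=None)
    return coef[1]                             # estimated treatment effect gamma_1
```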
A combination of the latter linear RD model with the DiD approach leads to an extended interaction model. Again, two linear regressions are considered
$$\begin{aligned} y_0 =& \gamma_{00} + \gamma_{10}D + \gamma_{20}(S-\bar{S}) + \gamma _{30}D(S-\bar{S}) + \tilde{u}_0 \\ y_1 =& \gamma_{01} + \gamma_{11}D + \gamma_{21}(S-\bar{S}) + \gamma _{31}D(S-\bar{S}) + \tilde{u}_1, \end{aligned}$$
where the first index of \(\gamma_{jt}\) with j=0;1 refers to the treatment and the second index with t=0;1 refers to the period. In contrast to the pure RD model, where \(y_j\) with j=0;1 is considered, now the index of y is a time index, i.e. \(y_T\) with T=0;1. Using
$$\begin{aligned} y = (1-T)y_0+Ty_1 \end{aligned}$$
$$\begin{aligned} y =& \gamma_{00} + \gamma_{10}D + \gamma_{20}(S- \bar{S}) + \gamma_{30}D(S-\bar{S}) \\ &{}+(\gamma_{01}-\gamma_{00})T + (\gamma _{11}- \gamma_{10})DT\\ &{} + (\gamma_{21}-\gamma_{20}) (S-\bar{S})T \\ &{}+ (\gamma_{31}-\gamma _{30})D(S-\bar{S})T + \bigl( \tilde{u}_0 + (\tilde{u}_1-\tilde{u}_0)T\bigr) \\ =:& \beta_0 + \beta_1T + \beta_2D + \beta_3(S-\bar{S}) + \beta_4D(S-\bar{S}) \\ &{}+ \beta_5DT + \beta_6(S-\bar{S})T + \beta_7D(S-\bar{S})T + \tilde{u}. \end{aligned}$$
Now, it is possible to determine whether the treatment effect varies between T=1 and T=0. The difference follows by a DiD approach
$$\begin{aligned} &\bigl[(y_1|D=1)-(y_1|D=0)\bigr] - \bigl[(y_0|D=1)-(y_0|D=0) \bigr] \\ &\quad {}= (\gamma _{11}-\gamma_{10})+(\gamma_{31}- \gamma_{30}) (S-\bar{S}) \\ &\quad {}= \beta _5+\beta_7(S-\bar{S}) \end{aligned}$$
under the assumption that the disturbance term does not change between the periods. The hypothesis of a time-invariant break cannot be rejected if DT and \(D(S-\bar{S})T\) have no statistical influence on y.
The fuzzy RD assumes that the propensity score function of treatment P(D=1|S) is discontinuous with a jump at \(\bar{S}\)
$$\begin{aligned} P(D_i=1|S_i) =& \left \{ \begin{array}{l@{\quad }l} g_1(S_i) & \mbox{if}\ S_i\ge\bar{S}\\ g_0(S_i) & \mbox{if}\ S_i< \bar{S}, \end{array} \right . \end{aligned}$$
where it is assumed that \(g_{1}(\bar{S})>g_{0}(\bar{S})\). Therefore, treatment is more likely if \(S_i\ge\bar{S}\). In principle, the functions \(g_1(S_i)\) and \(g_0(S_i)\) are arbitrary, e.g. a polynomial of pth order can be assumed, but the values have to be within the interval [0;1] and different values at \(\bar{S}\) are necessary.
The conditional mean of D that depends on S is
$$\begin{aligned} E(D_i|S_i) =& P(D_i=1|S_i)\\ =& g_0(S_i) + \bigl(g_1(S_i)-g_0(S_i) \bigr)T_i, \end{aligned}$$
where \(T_{i}=1(S_{i}\ge\bar{S})\) is a dummy indicating the point where the mean is discontinuous. If a polynomial of pth order is assumed, the interaction variables \(S_{i}T_{i}, S_{i}^{2}T_{i}\cdots S_{i}^{p}T_{i}\) and the dummy \(T_i\) are instruments for \(D_i\). The simplest case is to use only \(T_i\) as an instrument if \(g_1(S_i)\) and \(g_0(S_i)\) are distinct constants.
We can determine the treatment effect around \(\bar{S}\)
$$\begin{aligned} \lim_{\Delta\rightarrow0}\frac{E(y_i|\bar{S} < S_i < \bar{S} + \Delta) -E(y_i|\bar{S} - \Delta< S_i < \bar{S})}{E(D_i|\bar{S} < S_i < \bar{S} + \Delta) -E(D_i|\bar{S} - \Delta< S_i < \bar{S})}. \end{aligned}$$
The empirical analogue is the Wald (1940) estimator, which was first developed for the case of measurement errors
$$\begin{aligned} \frac{(\bar{y}|\bar{S}<S_i<\bar{S}+\Delta)-(\bar{y}|\bar{S}-\Delta <S_i<\bar{S})}{ (\bar{D}|\bar{S}<S_i<\bar{S}+\Delta)-(\bar{D}|\bar{S}-\Delta<S_i<\bar{S})}. \end{aligned}$$
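The Wald estimator for the fuzzy RD case can be sketched directly from sample means within a small window around the threshold (Python/numpy; function name and the window argument delta are hypothetical):

```python
import numpy as np

def fuzzy_rd_wald(y, D, S, cutoff, delta):
    """Sketch of the Wald estimator for fuzzy RD: the jump in the mean outcome
    at the cutoff divided by the jump in the treatment probability, computed
    from observations within +/- delta of the cutoff."""
    above = (S >= cutoff) & (S < cutoff + delta)
    below = (S < cutoff) & (S >= cutoff - delta)
    num = y[above].mean() - y[below].mean()    # outcome discontinuity
    den = D[above].mean() - D[below].mean()    # treatment-probability discontinuity
    return num / den
```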
QTE and RD analysis both allow the determination of variable causal effects, though with different intentions. A further possibility is a separate estimation for subgroups, e.g. for industries or regions.
Applications: Some New Estimates of Cobb-Douglas Production Functions
This section presents some estimates of production functions based on IAB establishment panel data. The empirical analysis is restricted to the period 2006–2010. The analysis starts in 2006 because information on company-level pacts (CLPs) was collected in the IAB establishment panel for the first time in that year, and many of the following applications deal with CLPs. The methods of Sect. 2 are applied. The intention of Sect. 3 is to illustrate that the discussed methods work with implemented STATA programmes. It is not discussed whether the applied methods are best for the given data set and the substantive problems at hand. From a didactic perspective, the paper is always concerned with only one issue at a time, and different suggestions to solve the problem are compared. The results can be found in Tables 1–10.
Table 1 Estimates of Cobb-Douglas production functions under alternative determination of standard errors using hc1, hc3, bootstrap, jackknife and cluster-robust estimates
Table 1 focuses on alternative estimates of standard errors—see Sects. 2.1.1–2.1.3—of Cobb-Douglas production functions (CDF) in logarithmic form with the input factors lnL and lnK. Conventional standard error estimates can be found, for comparison, in Table 3, column 1. The small standard deviations and therefore the large t-values are remarkable. Although the cluster-robust standard errors in Table 1, column 5, are larger, they are still far too low. This is due to unobserved heterogeneity. Fixed effects estimates can partially solve this problem, as can be seen in the Appendix, Table 15.
The estimated coefficients in columns 1–3 and 5 of Table 1 are identical. Estimates with hc2 and hc4—not presented in the tables—deviate only slightly from those with hc1. This could mean that it is not necessary to distinguish between hc1 and hc4. However, one might guess that stronger differences emerge if the sample is small. Empirical investigations in which only 10, 1 and 0.1 percent of the original sample size is used do not support this presumption. The jackknife estimates of standard errors and t-values are also not far from the heteroskedasticity-consistent estimates with hc1 and hc3. The nearness to the estimates with hc3 is plausible because the latter is only a slightly simplified version of what one obtains by employing the jackknife technique. Furthermore, Table 1 demonstrates that bootstrap and cluster-robust estimates of the t-values differ most strongly for the input factor labor (lnL), measured by the number of employees in the firm. Capital (lnK), approximated by the sum of investments of the last four years, has evidently larger cluster-robust estimates of standard errors than those from the other methods.
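Since the STATA programmes themselves are not reproduced here, the following hedged R sketch indicates how such alternative standard errors can be obtained; the data frame dat with lnY, lnL, lnK and a firm identifier id is hypothetical:

library(sandwich)
library(lmtest)
fit <- lm(lnY ~ lnL + lnK, data = dat)
coeftest(fit)                                     # conventional standard errors
coeftest(fit, vcov = vcovHC(fit, type = "HC1"))   # heteroskedasticity-consistent, hc1
coeftest(fit, vcov = vcovHC(fit, type = "HC3"))   # hc3, close to the jackknife
coeftest(fit, vcov = vcovCL(fit, cluster = ~id))  # cluster-robust standard errors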
An extended version of the Cobb-Douglas function in Table 1 is presented in Table 2. The latter estimates show smaller coefficients and smaller t-values for the input factors labor and capital. The major intention of Table 2 is to demonstrate that also in this example there is—as maintained in Sect. 2.1.4—a clear relationship between \(\bar{D}\), the mean of a dummy used as an independent variable, and the estimated standard errors. The nearer \(\bar{D}\) is to 0.5, the smaller is the standard error. The results in Table 2 cannot be generalized, in contrast to those in Table 11, because the standard error of a dummy is not determined by its mean alone. Each regressor has a specific influence on the dependent variable independent of the regressor's variance.
Table 2 OLS estimates of an extended CDF with Bernoulli distributed regressors
Outliers—see Sect. 2.1.5—may have strong effects on coefficient and standard error estimates. However, estimates do not react sensitively to all outliers. This can be demonstrated by comparing the results with and without outliers. Table 3 presents an example for simple Cobb-Douglas functions in columns 1 and 2. An observation in column 2 is defined as an outlier if \(|\hat{u}^{*}|>3\). The coefficients in columns 1 and 2 are very similar, while the differences in the standard errors are more evident. The differences are enlarged under a wider definition of an outlier, e.g. if 3 is substituted by 2. The picture also becomes clearer if observations with high leverage are eliminated—see column 3. Coefficients and standard errors in columns 1 and 3 reveal a clear disparity for both input factors. This result is not unexpected, but the consequence is ambiguous. Is column 1 or 3 preferable? If all observations with strong leverage are due to measurement errors, the decision speaks in favor of the estimates in column 3. As no information on this question is available, both estimates may be useful.
Table 3 OLS estimates of CDFs with and without outliers, t-values in parentheses; dependent variable: logarithm of sales—lnY
Column 4 extends the consideration to outliers following Hadi (1992). The squared difference between individual regressor values and the mean of all regressors—here lnL and lnK—is determined for each observation, weighted by the estimated covariance matrix—see Sect. 2.1.5. The decision whether establishment i is an outlier is then based on the Mahalanobis distance. MOD, the vector of multiple outlier dummies (\(MOD_i=1\) if i is an outlier; =0 otherwise), is incorporated as an additional regressor. The estimates show that outliers have a significant effect on the output variable lnY. The coefficients and the t-values in columns 2 and 4 are very similar. This is a hint that the outliers defined via \(\hat{u}^{*}\) are mainly driven by large deviations of the regressor values. From \(\hat{u}^{*}\) alone it is unclear whether the values of the dependent variable or the independent variables are responsible for the fact that an observation is an outlier.
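The two outlier concepts can be sketched in R as follows; the data frame and the chi-square cutoff for the Mahalanobis distance are illustrative assumptions, and the Hadi (1992) procedure is in fact iterative, so this is only a simplified version:

fit <- lm(lnY ~ lnL + lnK, data = dat)
rs  <- rstudent(fit)                               # studentized residuals u*
fit_no_out <- lm(lnY ~ lnL + lnK, data = dat[abs(rs) <= 3, ])   # drop |u*| > 3
X   <- as.matrix(dat[, c("lnL", "lnK")])
md  <- mahalanobis(X, colMeans(X), cov(X))         # squared Mahalanobis distances
dat$MOD <- as.numeric(md > qchisq(0.975, df = 2))  # multiple outlier dummy (cutoff is an assumption)
fit_mod <- lm(lnY ~ lnL + lnK + MOD, data = dat)   # outlier dummy as additional regressor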
As it is not obvious whether the outliers are due to measurement errors that should be eliminated or whether they are unusual but systematically induced observations that should be accounted for, the parameters can only be partially identified. Therefore, Table 4 does not only present confidence intervals for the two extreme cases (column 1: all outliers are induced by specific events; column 2: all outliers are due to random measurement errors). Additionally, column 3 displays the confidence interval (CI) based on Stoye's method. The results show that the lower and upper coefficient estimates of lnL by Stoye lie within the estimated coefficients of columns 1 and 2. The upper coefficient is nearer to that of column 2 and the lower one is nearer to that of column 1. We do not find the same pattern for the input factor lnK. In this case Stoye's \(\hat{\beta}_{\ln K;u}\) deviates more from that in column 2 than from that in column 1, and for \(\hat{\beta}_{\ln K;l}\) we find the opposite result. Stoye's intervals (\(\Delta\hat{\beta}_{\ln L}= \hat{\beta}_{\ln L;u}-\hat{\beta}_{\ln L;l}\); \(\Delta\hat{\beta}_{\ln K}= \hat{\beta}_{\ln K;u}-\hat{\beta}_{\ln K;l}\)) are shorter than those with or without outliers. In other words, the estimates are more precise.
Table 4 Confidence intervals (CI) of output elasticities of labor and capital based on a Cobb-Douglas production function, estimated with and without outliers, Stoye's confidence interval at partially identified parameters; dependent variable: logarithm of sales—lnY
The next tables present estimates of alternative methods to determine causal effects. First, the difference-in-differences (DiD) approach is estimated. Results can be found in Table 5. The coefficient of the interaction variable CLP∗D2009 in column 1 is significantly different from zero. This means that the difference in sales between firms with a company-level pact (CLP) adopted in 2009 and firms without such a pact changes between 2009 and the years before (2006–2008). The adoption of a CLP in the year of the Great Recession is associated with lower sales than in the years before if an unconditional DiD specification is used. In column 2 the sign changes and the effect of the interaction variable is insignificant if an extended CDF is estimated. This approach is preferred because in the former the influence of the input factors is partially attributed to the causal effect. Now, no influence of the adoption of a CLP on sales in 2009 can be detected. One could argue that the estimates in column 1 lead to significant results more readily than those in column 2 because the sample in the former is larger. This argument is not compelling: if we draw a random sample of 63.83 percent, so that in column 1 the sample size is n=20,489, the interaction effect is −0.2939 and the significance is preserved (t=−2.26). If CLPs change labor and capital productivity, we should not incorporate lnL and lnK in a conditional DiD. In other words, in this case we should not control for these variables before treatment.
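A minimal R sketch of the unconditional and conditional DiD specifications; the variables CLP (treatment dummy) and D2009 (period dummy) follow the notation in the text, while the data frame itself is hypothetical:

did_uncond <- lm(lnY ~ CLP * D2009, data = dat)               # unconditional DiD
did_cond   <- lm(lnY ~ CLP * D2009 + lnL + lnK, data = dat)   # conditional on the input factors
summary(did_cond)                                             # CLP:D2009 is the DiD (causal) effect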
Table 5 Unconditional and conditional DiD estimates with company-level pact (CLP) effects; dependent variable: logarithm of sales—lnY
Matching procedures are alternative methods to determine causal effects. They are suggested when there is no control over the assignment of treatment conditions, i.e. when in the basic equation y=Xβ+αD+u the dichotomous treatment variable D and the disturbance term u are correlated and the ignorable treatment assignment assumption is violated. In the example of the CDF it is questionable whether this condition is fulfilled for CLPs. As alternatives, Mahalanobis metric matching (MM) without propensity score and nearest neighbor matching (NNM) with caliper are applied, presented in Table 6, columns 2 and 3, respectively. The latter method matches without replacement: once a treated case is matched to a non-treated case, both cases are removed from the pool. The former method allows one control case to be used as a match for several treated cases. Therefore, the total number of observations in the nearest neighbor matching is larger than in column 2. We find that the CLP effect on sales is insignificant in both cases, but the CLP coefficient of the MM estimates exceeds that of the NNM by far. The estimates of the partial elasticities of production are very similar across the three estimates in Table 6. The insignificance of the CLP effect confirms the result of column 2 of Table 5. If the DiD estimator of column 2 in Table 5 is applied after matching, the causal effect is—not unexpectedly—also insignificant. The probvalue is 0.182 if the MM procedure is used and 0.999 under the NNM procedure.
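The two matching procedures could, for example, be implemented with the MatchIt package in R as sketched below; the covariates and the caliper value are illustrative assumptions, not the exact specification used for Table 6:

library(MatchIt)
mm  <- matchit(CLP ~ lnL + lnK, data = dat, method = "nearest",
               distance = "mahalanobis", replace = TRUE)             # Mahalanobis metric matching
nnm <- matchit(CLP ~ lnL + lnK, data = dat, method = "nearest",
               distance = "glm", caliper = 0.05, replace = FALSE)    # NN matching with caliper
fit_mm <- lm(lnY ~ lnL + lnK + CLP, data = match.data(mm), weights = weights)
summary(fit_mm)                                                      # CLP effect after matching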
Table 6 Estimates of CDFs with CLP effects using matching procedures; dependent variable: logarithm of sales—lnY
The previous estimates have demonstrated that company-level pacts (CLPs) have no statistically significant influence on output. We cannot be sure that this result also holds for subgroups of firms. One way to test this is to conduct quantile estimates. As presented in Sect. 2.2, four methods can be applied to determine quantile treatment effects (QTE). The CLP effects on sales can be found in Table 7, where the results for five quantiles (q=0.1,0.3,0.5,0.7,0.9) are presented. In contrast to the previous estimations, most CLP effects are significant in columns 1–4 of Table 7. Firpo considers the simplest case without control variables under the assumption that the adoption of a company-level pact is exogenous. The estimated coefficients in column 1 (F) seem oversized. The same follows from the Frölich-Melly approach, where CLP is instrumented by a short-time work dummy (column 3—F-M). Other available instruments like opening clauses, collective bargaining, works councils or research and development within the firm do not evidently change the results. One reason for the overestimated coefficients may be neglected determinants of output that correlate with CLP. The estimates of columns 2 (K-B) and 4 (A-A-I) support this hypothesis.
Table 7 Quantile estimates of CLP effects; dependent variable: logarithm of sales—lnY
From the perspective of expected CLP coefficients, the conventional quantile estimator, the Koenker-Bassett approach, with lnL and lnK as regressors seems best. However, the ranking of the size of the coefficients within column 2 seems unexpected: the smaller the quantile, the larger is the estimated coefficient. This could mean that CLPs are advantageous for small firms. However, it is possible that small firms with productivity advantages due to CLPs face relatively high costs of adopting a CLP. In this case the higher propensity of large firms to introduce a CLP is consistent with a higher productivity of small firms.
The coefficients of the Abadie-Angrist-Imbens approach, a combination of Frölich-Melly's and Koenker-Bassett's models, are also large, but not as large as in columns 1 and 3.
Possibly, all estimates in columns 1–4 of Table 7 are biased and inconsistent. This is the case when CLP and non-CLP firms differ fundamentally due to unobserved variables. To avoid this problem, the QTE and matching approaches are combined. Based on the matching of Table 6, the QTE can be estimated analogously to columns 1–4 in Table 7. In columns 5 and 6 only two combinations are presented, namely MM+K-B and MM+A-A-I. We find that the ranking and the size of the coefficients are plausible in column 5. The coefficients in column 6 are smaller than in column 4, but the identified causal effects still seem too high. The most important result is the following: the CLP effects are significant for higher quantiles, i.e. for q=0.9 in column 5 and for q=0.7 and q=0.9 in column 6. However, the median estimators (q=0.5) of CLP effects in columns 5 and 6, which can be compared with the estimates of column 2 in Table 6, are insignificant. Quantile estimators thus highlight information that cannot be revealed by the other treatment methods, i.e. those in Tables 5 and 6. The estimations of the other six combinations (MM+F, MM+F-M, NNM+F, NNM+K-B, NNM+F-M, NNM+A-A-I)—not presented in the tables—are less plausible. The ranking of the size of the coefficients is inconsistent in the light of theoretical and practical experience.
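Of the four QTE estimators, the conventional Koenker-Bassett case can be sketched with the quantreg package in R; the endogeneity-robust estimators (Firpo, Frölich-Melly, Abadie-Angrist-Imbens) require specialized routines and are not shown. The data frame and variable names are again hypothetical:

library(quantreg)
fit_q <- rq(lnY ~ lnL + lnK + CLP, tau = c(0.1, 0.3, 0.5, 0.7, 0.9), data = dat)
summary(fit_q)   # one CLP coefficient per quantile, analogous to column 2 of Table 7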
The final treatment method discussed in Sect. 2.2 is the regression discontinuity (RD) design. This approach exploits information on the rules determining treatment. The probability of receiving treatment is a discontinuous function of one or more variables, where treatment is triggered by an administrative definition or an organizational rule.
In a first example using a sharp RD design, it is analyzed whether a structural break in the logarithm of output (lnY) is evident at an estimated probability of 0.5 that a company-level pact (CLP) exists. For this purpose, a probit model is estimated with profit situation, working-time account, total wages per year and works council as determinants of CLP. All coefficients are significantly different from zero—not shown in the tables. The estimated probability Pr(CLP) is then plotted against lnY, based on a fractional polynomial model over the entire range (0<Pr(CLP)<1) and on two linear models split into Pr(CLP)≤0.5 and Pr(CLP)>0.5. The graphs are presented in Fig. 1.
Fig. 1 Regression discontinuity of CLP probability
A structural break seems evident. Two problems have to be checked: First, is the break due to a nonlinear shape, and second, is the break significant? The answer to the first question is yes, because the shape over the range 0<Pr(CLP)<1 is obviously nonlinear when a fractional polynomial is assumed. The answer to the second question is given by a t-test—cf. Sect. 2.2—based on
$$\begin{aligned} y =& \gamma_0 + \gamma_1D\_\mathrm{Pr}( \mathrm{CLP}) + \gamma_2\bigl(\mathrm{Pr}(\mathrm{CLP})-\overline { \mathrm{Pr}(\mathrm{CLP})}\bigr) \\ &{} + \gamma_3D\_\mathrm{Pr}(\mathrm{CLP})\cdot\bigl( \mathrm{Pr}(\mathrm{CLP)}-\overline{\mathrm{Pr}(\mathrm{CLP})}\bigr) + u \\ =:& \gamma_0 + \gamma_1D\_\mathrm{Pr}( \mathrm{CLP}) + \gamma_2c\mathrm{Pr}(\mathrm{CLP}) \\ &{} + \gamma_3D\_ \mathrm{Pr}(\mathrm{CLP})\cdot c \mathrm{Pr}(\mathrm{CLP}) + u, \end{aligned}$$
$$\begin{aligned} D\_\mathrm{Pr}(\mathrm{CLP}) = \left \{ \begin{array}{l@{\quad }l} 1 &\mbox{if}\ \mathrm{Pr}(\mathrm{CLP}) \le 0.5\\ 0 &\mbox{otherwise}. \end{array} \right . \end{aligned}$$
The null that there is no break has to be rejected (\(\hat{\gamma }_{1}=-3.96\); t=−6.87; probvalue=0.000) as can be seen in Table 8.
Table 8 Testing for structural break of CLP effects between Pr(CLP)≤0.5 and Pr(CLP)>0.5
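A hedged R sketch of this two-step procedure; the determinants of CLP are abbreviated with hypothetical variable names (profit, wta, wages, woco) and the data frame is illustrative:

pr  <- glm(CLP ~ profit + wta + wages + woco,
           family = binomial(link = "probit"), data = dat)   # first step: probit for Pr(CLP)
dat$PrCLP  <- fitted(pr)
dat$D_Pr   <- as.numeric(dat$PrCLP <= 0.5)                   # dummy as defined above
dat$cPrCLP <- dat$PrCLP - mean(dat$PrCLP)                    # centered estimated probability
break_fit  <- lm(lnY ~ D_Pr * cPrCLP, data = dat)            # gamma_1 is the coefficient of D_Pr
summary(break_fit)                                           # its t-test checks for the break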
The estimates in Table 8 cannot tell us whether the output jump at Pr(CLP)=0.5 is a general phenomenon or whether the Great Recession of 2008/09 is responsible. To test this, the combined method of RD and DiD—derived in Sect. 2.2—is employed, and the results are presented in Table 9. The estimates show that the output jump does not change significantly between 2006/2007 and 2008/2010. The influence of D_Pr(CLP)⋅T and that of D_Pr(CLP)⋅cPr(CLP)⋅T on lnY is insignificant. Therefore, we conclude that the break is of a general nature.
Table 9 Testing for differences in structural break of CLP effects between Pr(CLP)≤0.5 and Pr(CLP)>0.5 in 2006/07 and 2008/10
Two further examples are presented in Figs. 2 and 3. The Institut für Mittelstandsforschung defines small firms as those with fewer than 10 employees and up to 1 million Euro in sales per year. The analogous definition of middle-size firms is fewer than 500 employees and up to 50 million Euro in sales per year. A sharp regression discontinuity design is applied to test whether the first and the second part of the definition are consistent. In other words, based on a Cobb-Douglas production function with only one input factor, the number of employees, it is tested whether there exists a structural break in sales at the 1 million Euro border for small firms between 9 and 10 employees. Figure 2 suggests that for small firms there is indeed a sales break around 1 million Euro per year.
Fig. 2 Regression discontinuity of small firms
Fig. 3 Regression discontinuity of middle-size firms
A t-test analogous to the first example yields weak significance (\(\hat{\gamma}_{1}=-13.8667\); t=−1.61; probvalue=0.107). The same procedure for middle-size firms—see Fig. 3—leads to the following results.
Apparently, a break exists. However, the first part of the definition of middle-size firms by the Institut für Mittelstandsforschung is not compatible with the second part: the break in sales at 500 employees is not 50 million Euro per year but around 150 million Euro. Furthermore, the visual result might be due to a nonlinear relationship, as the fractional polynomial estimation over the entire range suggests. The t-test does not reject the null (\(\hat{\gamma}_{1}=-8977\); t=−0.54; probvalue=0.588). The conclusion from Figs. 2 and 3 is that a graphical representation without the polynomial shape as a comparison and without a test for a structural break can lead to misinterpretation.
The final example uses a fuzzy regression discontinuity design. It is analyzed whether the CLP effects on the logarithm of sales (lnY=ln(sales/10000)) differ between the East and West German federal states. The graphical representation can be found in Figs. 4a and 4b. The former shows the disparities in the level of sales per year and the latter those of Pr(CLP)—here measured by the relative frequency of firms with a CLP among all firms in a German federal state.
Fig. 4 (a) Regression discontinuity of ln(sales). (b) Regression discontinuity of treatment CLP
Although clear differences are detected for both characteristics (lnY, Pr(CLP)), we cannot be sure that these disparities are significant and whether the CLP effects are smaller or larger in West Germany. This is checked by a Wald test in Table 10. We find that the CLP effects on lnY (−0.8749/−0.0571=15.3165) are significantly higher in the West German federal states (z=4.29). When the interpretation focuses on the dummy "East Germany" as an instrument for a dummy "CLP", we should note that the former is not a proper instrument because the output lnY differs between East and West Germany independently of a CLP.
Table 10 Fuzzy regression discontinuity between East and West German federal states (GFS)—Wald test for structural break of company-level pact (CLP) effects on sales; jump at GFS>0; dependent variable: logarithm of sales—lnY
Many factors, such as heteroskedasticity, clustering, the basic probability of qualitative regressors, outliers and only partially identified parameters, may cause estimated standard errors based on classical methods to be biased. The applications show that the estimates under the suggested modifications do not always deviate much from those of the classical methods.
The development of new procedures is ongoing. In particular, the field of treatment methods has been extended. It is not always obvious which method is preferable to determine the causal effect. As the results evidently differ, it is necessary to develop a framework that helps to decide which method is most appropriate in typical situations. We observe a tendency away from the estimation of average effects; the focus has shifted to distributional topics. Quantile analysis helps to investigate differences between subgroups of the population. This is important because economic measures do not have the same influence on heterogeneous establishments and individuals. A combination of quantile regression with matching procedures can improve the determination of causal effects. Further combinations of treatment methods seem helpful. Difference-in-differences estimates should be linked with matching procedures and regression discontinuity designs. And regression discontinuity split by quantiles can also lead to new insights.
Empirical economics has been governed by econometric methods for many years. During the last 20 years, contents and major questions in this field have changed strongly. Therefore, methods were modified and completely new methods were developed. In comparison to conventional approaches, attention is paid to peculiarities of the data, to the specification of the estimating approach, to unobserved heterogeneity, to endogeneity and to causal effects. Real data are often not compatible with the assumptions of classical methods. If the latter are used nevertheless, this can lead to a misinterpretation of the results. We have to ask whether the results are correct. Is it really possible to interpret the estimated effects as causal, or are these only statistical artifacts, which are irrelevant or even counterproductive for policy measures? In order to avoid this, the practitioner has to be familiar with the wide range of existing methods for empirical investigations. The user has to know the assumptions of the methods and whether the application allows adequate conclusions with the given information. It is necessary to check the robustness of the results with alternative methods and specifications.
This paper presents a selective review of econometric methods and demonstrates by applications that the methods work. In the first part, methodological problems relating to standard errors and treatment effects are discussed. First, heteroskedasticity- and cluster-robust estimates are presented. Second, peculiarities of Bernoulli distributed regressors, outliers and only partially identified parameters are revealed. Approaches to the improvement of standard error estimates under heteroskedasticity differ in the weighting of residuals. Other procedures use the estimated disturbances to create a larger number of artificial samples in order to obtain better estimates, and yet others use nonlinear information. Cluster-robust estimates try to solve the Moulton problem: standard errors that are too low for observations within clusters are adjusted. This objective is only partially achieved. We should be cautious when comparing the effects of dummy variables on an endogenous variable, because the more the mean of a dummy deviates from 0.5, the higher is its standard error. Outliers, i.e. unusual observations that are due to systematic measurement errors or extraordinary events, may have an enormous influence on the estimates. The suggested approaches to detect outliers vary with respect to the measurement concept and do not necessarily indicate whether outliers should be accounted for in the empirical analysis. New methods for partially identified parameters may be helpful in this context. Under uncertainty about whether outliers should be eliminated, the degree of precision can be increased.
Four principles to estimate causal effects are in focus: difference-in-differences (DiD) estimators, matching procedures, quantile treatment effects (QTE) analysis and regression discontinuity designs. The DiD models distinguish between conditional and unconditional approaches. The range of popular matching procedures is wide and the methods differ evidently. They aim to find statistical twins, i.e. to homogenize the characteristics of observations from the treatment and the control group. Until now, the application of QTE analysis has been relatively rare in practice. Four types of models are important in this context. The user has to decide whether the treatment variable is exogenous or endogenous and whether additional control variables are incorporated or not. Regression discontinuity (RD) designs separate sharp and fuzzy RD methods. The distinction is whether an observation is assigned to the treatment or the control group directly by an observable continuous variable or indirectly via the probability and the mean of treatment, respectively, conditional on this variable.
In the second part of the paper, the different methods are applied to estimates of Cobb-Douglas production functions using IAB establishment panel data. Some heteroskedasticity-consistent estimates show similar results, while cluster-robust estimates differ strongly. Dummy variables as regressors with a mean near 0.5 reveal, as expected, smaller variances of the coefficient estimators than others. Not all outliers have a strong effect on the significance. Methods for partially identified parameters yield more efficient estimates than traditional procedures.
The four discussed treatment effects methods are applied to the question of whether company-level pacts have a significant effect on production output. Unconditional DiD estimators and estimates without matching display significantly positive effects. In contrast to this result, we cannot find the same if conditional DiD or matching estimates based on the Mahalanobis metric are applied. This outcome is formulated more precisely under quantile regression: the higher the quantile, the stronger is the tendency towards positive and significant effects. Sharp regression discontinuity estimates display a jump at the probability of 0.5 that an establishment has a company-level pact. No specific influence can be detected during the Great Recession. Fuzzy regression discontinuity estimates reveal that the output effect of company-level pacts is significantly lower in East than in West Germany. A combined application of the four principles for determining treatment effects leads to some interesting new insights. We determine joint DiD and matching estimates as well as estimates of the former together with regression discontinuity designs. Finally, matching is combined with quantile regression.
Empirical economic research has been driven substantially by econometric methods for many years. During the last 20 years, contents and research questions in empirical economics have changed strongly. As a consequence, many methods were modified or completely new ones were developed. Compared with traditional approaches, more attention is paid to the peculiarities of the data, to the specification of the estimated model, to unobserved heterogeneity, to endogeneity and to causal effects. Real data are predominantly not compatible with the assumptions of classical methods. If the latter are nevertheless used, misinterpretations of the results are often the consequence. One has to ask how reliable the conclusions are. Can the estimation results really be interpreted causally, or have merely statistical associations emerged that are irrelevant or even counterproductive for policy recommendations? To prevent this, the practitioner has to be familiar with the spectrum of existing methods for empirical investigations. He must know which assumptions underlie the respective methods and whether their application allows adequate conclusions given the available information. He should check the robustness of the results by employing comparable methods.
The aim of this paper is to provide an overview of selectively chosen econometric methods and to demonstrate by applications how they work. Methodological problems concerning standard errors and treatment effects are treated. First, heteroskedasticity- and cluster-robust estimation is considered, followed by a discussion of problems with Bernoulli distributed regressors, outliers and partially identified parameters. Suggested approaches for improving standard errors in the presence of heteroskedasticity differ in the weighting of the residuals. Other procedures exploit the estimated disturbances to generate artificially a larger number of samples, on whose basis better estimates of the standard errors are obtained, or they make use of existing nonlinearities. Cluster-robust estimates aim to solve the Moulton problem: standard errors that are too small in the presence of similar observations grouped in clusters are corrected. The suggested approaches achieve this only incompletely. A phenomenon not discussed so far—that dummy variables as regressors lead to higher standard errors the further their mean is from 0.5—calls for caution when comparing the precision of the influence of different [0;1] regressors. Outliers, i.e. unusual observations that are mainly due to systematic measurement errors or unusual events, can have substantial effects on the estimation results. The suggested approaches to detect outliers vary with respect to the measurement concept and do not necessarily indicate whether they should be accounted for in the empirical analysis. Newer approaches for only partially identified parameters can be helpful here, as they increase the degree of precision under uncertainty about whether outliers should be removed or not.
Among the procedures for determining treatment effects, four principles are in focus: difference-in-differences estimators, matching procedures, the analysis of treatment effects with quantile regressions, and regression discontinuity designs. For difference-in-differences estimators, one has to distinguish whether additional control variables have to be taken into account or not. The spectrum of the recently very popular matching procedures, which aim to homogenize treatment and control groups in order to filter out statistical twins, has on the one hand become quite extensive and on the other hand exhibits methodologically important differences. The use of quantile regressions to capture heterogeneous causal effects is still comparatively rare. Methodologically, one has to distinguish whether the treatment variable is regarded as exogenous or endogenous and whether further control variables are considered or not. For regression discontinuity designs, the distinction is whether the assignment to the treatment or control group is based solely on an observed continuous variable or whether unobserved variables are also involved.
The discussion of the various procedures, which initially focuses purely on methodology, is complemented in the second part of this paper by applications to Cobb-Douglas production functions using IAB establishment panel data. Different heteroskedasticity-consistent estimation procedures lead to similar results for the standard errors; cluster-robust estimates show clearer deviations. Dummy variables as regressors with a mean near 0.5 lead to smaller variances of the coefficient estimators than dummies with lower or higher means. Not all outliers have a strong influence on significance. Newer methods for dealing with the problem of only partially identified parameters lead to more efficient estimates than traditional procedures.
The four discussed treatment-effect procedures are applied to the question of whether company-level pacts have a significant effect on production output. In contrast to unconditional difference-in-differences estimators and estimators without matching, conditional difference-in-differences estimators and matching estimators based on the Mahalanobis metric yield positive but insignificant effects. The latter result has to be qualified in the framework of the quantile treatment effect analysis: the higher the considered quantile, the stronger is the tendency towards significantly positive effects. A simple regression discontinuity analysis shows a structural break at a probability of 0.5 that an establishment has agreed on a company-level pact. No specific effects can be identified during the Great Recession of 2008/09. Fuzzy regression discontinuity estimates reveal that the output effect of company-level pacts is significantly lower in East Germany than in West Germany. A combined application of the four basic principles for determining causal effects leads to interesting new insights. Among other things, difference-in-differences estimators are linked with matching procedures; the former are also discussed in combination with regression discontinuity designs and the latter in combination with quantile regressions.
Abadie, A., Angrist, J., Imbens, G.: Instrumental variables estimates of the effect of subsidized training on the quantiles of trainee earnings. Econometrica 70, 91–117 (2002)
Ai, C., Norton, E.C.: Interaction terms in logit and probit models. Econ. Lett. 80, 123–129 (2003)
Angrist, J., Pischke, J.-S.: Mostly Harmless Econometrics—an Empiricist's Companion. Princeton University Press, Princeton (2009)
Belsley, D.A., Kuh, E., Welsch, R.E.: Regression Diagnostics—Identifying Influential Data and Sources of Collinearity. Wiley, New York (1980)
Cameron, A.C., Miller, D.L.: Robust inference with clustered data. In: Ullah, A.C., Giles, D.E.A. (eds.) Handbook of Empirical Economics and Finance, pp. 1–28 (2010)
Chernozhukov, V., Hong, H., Tamer, E.: Estimation and confidence regions for parameter sets in econometric models. Econometrica 75, 1243–1284 (2007)
Cribari-Neto, F., da Silva, W.D.: A new heteroskedasticity-consistent covariance matrix estimator for the linear regression model. AStA Adv. Stat. Anal. 95, 129–146 (2011)
Cribari-Neto, F., Souza, T.C., Vasconcellos, K.L.P.: Inference under heteroskedasticity and leveraged data. Commun. Stat., Theory Methods 36, 1877–1888 (2007)
Firpo, S.: Efficient semiparametric estimation of quantile treatment effects. Econometrica 75, 259–276 (2007)
Frölich, M., Melly, B.: Unconditional Quantile Treatment Effects Under Endogeneity. Mimeo (2012)
Goldstein, H.: Multilevel Statistical Models, Kendall's Library of Statistics, 3rd edn. Arnold, London (2003)
Guo, S., Fraser, M.W.: Propensity Score Analysis. Sage Publications, Thousand Oaks (2010)
Hadi, A.S.: Identifying multiple outliers in multivariate data. J. R. Stat. Soc. B 54, 761–771 (1992)
Hamermesh, D.S.: The craft of labormetrics. Ind. Labor Relat. Rev. 53, 363–380 (2000)
Imbens, G.W., Manski, C.F.: Confidence intervals for partially identified parameters. Econometrica 72, 1845–1857 (2004)
Koenker, R., Bassett, G.: Regression quantiles. Econometrica 46, 33–50 (1978)
Krämer, W.: The cult of statistical significance—what economists should and should not do to make their data talk. J. Appl. Soc. Sci. Stud. 131, 455–468 (2011)
Leamer, E.E.: Sensitivity analyses would help. Am. Econ. Rev. 75, 308–313 (1985)
MacKinnon, J.G., White, H.: Some heteroskedasticity consistent covariance matrix estimators with improved finite sample properties. J. Econom. 29, 305–325 (1985)
Moulton, B.R.: Random group effects and the precision of regression estimates. J. Econom. 32, 385–397 (1986)
Moulton, B.R.: Diagnostic tests for group effects in regression analysis. J. Bus. Econ. Stat. 6, 275–282 (1987)
Moulton, B.R.: An illustration of a pitfall in estimating the effects of aggregate variables on micro units. Rev. Econ. Stat. 72, 334–338 (1990)
Puhani, P.: The treatment, the cross difference, and the interaction term in nonlinear 'difference-in-differences' models. Econ. Lett. 115, 85–87 (2012)
Raudenbush, S.W., Bryk, A.S.: Hierarchical Linear Models, 2nd edn. Sage Publications, Thousand Oaks (2002)
Romano, J.P., Shaikh, A.M.: Inference for the identified set in partially identified econometric models. Econometrica 78, 169–211 (2010)
Rosenbaum, P.R., Rubin, D.B.: Constructing a control group using multivariate matched sampling methods that incorporate the propensity score. Am. Stat. 39, 33–38 (1985)
Stoye, J.: More on confidence intervals for partially identified parameters. Econometrica 77, 1299–1315 (2009)
Wald, A.: The fitting of straight lines if both variables are subject to error. Ann. Math. Stat. 11, 284–300 (1940)
White, H.: A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica 48, 817–838 (1980)
Wooldridge, J.M.: Cluster-sample methods in applied econometrics. Am. Econ. Rev. 93(PaP), 133–138 (2003)
Woutersen, T.: A Simple Way to Calculate Confidence Intervals for Partially Identified Parameters. Mimeo (2009)
Ziliak, S., McCloskey, D.: The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice and Lives. University of Michigan Press, Michigan (2008)
I wish to thank an anonymous reviewer for his constructive suggestions and the participants of the Nutzerkonferenz in Nürnberg for helpful comments.
Institut für Empirische Wirtschaftsforschung, Leibniz Universität Hannover, Königsworther Platz 1, 30167, Hannover, Germany
Olaf Hübler
Correspondence to Olaf Hübler.
Table 11 OLS estimates of Cobb-Douglas functions with artificial dummies (DV.) as regressor; dependent variable: logarithm of sales
Table 12 OLS estimates of Cobb-Douglas functions with an artificial dummy (D.) determined from a rectangular distributed random variable as regressor. Results are average values of 300 estimates; dependent variable: logarithm of sales
Table 13 OLS estimates of Cobb-Douglas functions with company-level pact dummy (CLP) as regressor, decreasing shares of n(CLP=1)/n; dependent variable: logarithm of sales
Table 14 OLS estimates of Cobb-Douglas functions with works council dummy (WOCO) as regressor, decreasing shares of n(WOCO=1)/n—randomly determined based on a rectangular distribution; dependent variable: logarithm of sales
Table 15 Different CDF estimates, t-values in parentheses; dependent variable: logarithm of sales—lnY
Hübler, O. Estimation of standard errors and treatment effects in empirical economics—methods and applications. J Labour Market Res 47, 43–62 (2014). https://doi.org/10.1007/s12651-013-0135-0
Keywords: Standard errors, DiD estimators, Quantile regressions, Regression discontinuity
Are a zero-truncated Poisson and basic Poisson nested or non-nested?
I've seen plenty that discusses whether a basic Poisson regression is a nested version of a zero-inflated Poisson regression. For instance this site argues that it is, since the latter includes extra parameters to model additional zeroes, but otherwise includes the same Poisson regression parameters as the former, though the page does include a reference that disagrees.
What I can't find information about is whether a zero-truncated Poisson and a basic Poisson are nested. If the zero-truncated Poisson is just a Poisson with the extra stipulation that the probability of a zero count is zero, then I guess it sounds like they could be, but I was hoping for a more definitive answer.
The reason I'm wondering is that it will affect whether I should use Vuong's test (for non-nested models), or a more basic chi-square test based on the difference in loglikelihoods (for nested models).
Wilson (2015) talks about whether a Vuong test is appropriate for comparing the zero-inflated regression with the basic one, but I can't find a source that discusses zero-truncated data.
poisson-regression zero-inflation
Just come across this now. To avoid confusion, I am the Wilson of Wilson (2015) referenced in the original question, which asks whether the Poisson and truncated Poisson models are nested, non-nested, etc. Slightly simplifying, a smaller model is nested in a larger model if the larger model reduces to the smaller one when a subset of its parameters is fixed at stated values; two models are overlapping if they both reduce to the same model when subsets of their respective parameters are fixed to certain values; they are non-nested if, no matter how the parameters are fixed, one cannot reduce to the other. According to this definition the truncated Poisson and standard Poisson are non-nested.

HOWEVER, and this is a point that seems to have been overlooked by many, Vuong's distributional theory refers to STRICTLY nested, STRICTLY non-nested, and STRICTLY overlapping models, "strictly" referring to the addition of six restrictions to the basic definition of nested etc. These restrictions are not exactly simple, but they do, among other things, mean that Vuong's results about the distribution of log-likelihood ratios are not applicable in cases where models/distributions are nested at a boundary of a parameter space (as is the case with the Poisson/zero-inflated Poisson with an identity link for the zero-inflation parameter) or when one model tends to the other as a parameter tends to infinity, as is the case with the Poisson/zero-inflated Poisson when a logit link is used to model the zero-inflation parameter. Vuong advances no theory about the distribution of log-likelihood ratios in these circumstances. Unfortunately, this is the case with the Poisson and truncated Poisson distributions: one tends to the other as the parameter tends to infinity. To see this, note that the ratio of the pmfs of the Poisson and truncated Poisson distributions is 1-exp(-lambda), which tends to 1 as lambda tends to infinity. Thus the two distributions are not strictly non-nested, or strictly anything for that matter, and Vuong's theory is not applicable.
The following R code will simulate the distribution of the Poisson and truncated Poisson log-likelihood ratios. It requires the VGAM package.
library(VGAM)                                        # provides rpospois() and the pospoisson family
n <- 30
lambda1 <- 1
H <- rep(999, 10000)
for (i in 1:10000) {
  y <- rpospois(n, lambda1)                          # zero-truncated Poisson sample
  fit1 <- vglm(y ~ 1, pospoisson)                    # zero-truncated Poisson fit
  fit2 <- glm(y ~ 1, family = poisson(link = "log")) # ordinary Poisson fit
  H[i] <- logLik(fit1) - logLik(fit2)                # log-likelihood ratio
}
hist(H, col = "lemonchiffon")
Pauljw11
The basic Poisson can be thought of as nested inside a more general form:
$p(x) = (1-p)\frac{\text{e}^{-\lambda}\lambda^x}{x!} + p1(x=0)$
When $p = 0$, we have the basic Poisson. When $p = -\exp\{-\lambda\}/(1 -\exp\{-\lambda\})$, we have the zero-truncated Poisson. When $-\exp\{-\lambda\}/(1 -\exp\{-\lambda\}) < p < 0$, we have a zero-reduced Poisson. When $0 < p < 1$, we have a zero-inflated Poisson, and we have a degenerate distribution at $p = 1$.
So it seems to me that the nested version of the Vuong test, or the chi-square as you suggest, would be appropriate in your case. Note, though, that the chi-square can have problems due to the small probabilities of "large" (relative to $\lambda$) observations. You'd probably want to use a bootstrap to get the p-value for the chi-square statistic instead of relying on the asymptotics unless you've got rather a lot of data.
jbowman
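Following the bootstrap suggestion at the end of this answer, one rough way to obtain a p-value for the log-likelihood-ratio statistic is a parametric bootstrap under the basic Poisson null. This is only a sketch (it again assumes the VGAM package and an observed vector y of positive counts), and it sidesteps rather than resolves the boundary and limit issues raised in the other answer:

library(VGAM)
lr_stat <- function(y) {
  as.numeric(2 * (logLik(vglm(y ~ 1, pospoisson)) -
                  logLik(glm(y ~ 1, family = poisson))))
}
obs_lr  <- lr_stat(y)                                # observed statistic
lam_hat <- exp(coef(glm(y ~ 1, family = poisson)))   # MLE of lambda under the null
boot_lr <- replicate(999, {
  yb <- rpois(length(y), lam_hat)
  lr_stat(yb[yb > 0])                                # the truncated fit needs positive counts
})
mean(boot_lr >= obs_lr)                              # bootstrap p-value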
Thanks @jbowman - that's the sort of more rigorous answer I was hoping for. I'm unclear though: I thought the whole point of a Vuong test was for non-nested models, so even though it goes beyond my original post, could you provide a little more information about the "nested version of the Vuong test". To be clear about the source of my confusion: up until this moment I was only aware of the vuong function in package pscl in R which says it's for non-nested models. I just googled and found function vuongtest in package nonnest2 which includes an argument 'nested'. Is that it? – Justin Aug 28 '16 at 18:50
Yes, that is. Actually, the Wikipedia page en.wikipedia.org/wiki/Vuong%27s_closeness_test on the Vuong test is mildly helpful (often it's not so much) in describing the difference. – jbowman Aug 28 '16 at 19:06
NB Both the Poisson & the zero-truncated Poisson are special cases of the distribution you've defined. One isn't nested in the other. So you can't use Wilks' theorem to derive an asymptotic chi-squared distribution for twice the log likelihood ratio, whichever you consider to be the null hypothesis. (I think there are some regularity conditions for the Vuong test too.) – Scortchi - Reinstate Monica♦ Aug 30 '16 at 10:53
@Scortchi I am curious about the definition of "nested" you are applying. Although I don't disagree with your conclusion, I come to it from a slightly different point of view: yes, the Poisson is nested within this family (because it arises by restricting to $p=0$) but various conclusions about asymptotic distributions of MLE parameter estimates for $p$ do not apply because this value of $p$ lies on the boundary of the family. Am I missing some important distinction? – whuber♦ Apr 15 '17 at 20:55
@whuber, I was going to comment/provide an answer about the same point. The referenced link does note: "... although the chi-square distribution may need some adjustment because the restriction is on the boundary of the parameter space" – Ben Bolker Apr 15 '17 at 20:58
Evidence for a prolonged Permian–Triassic extinction interval from global marine mercury records
Jun Shen1,2, Jiubin Chen3,4, Thomas J. Algeo1,5,6, Shengliu Yuan3, Qinglai Feng1, Jianxin Yu5, Lian Zhou1, Brennan O'Connell2 & Noah J. Planavsky2
Nature Communications volume 10, Article number: 1563 (2019)
Element cycles
Palaeoceanography
The latest Permian mass extinction, the most devastating biocrisis of the Phanerozoic, has been widely attributed to eruptions of the Siberian Traps Large Igneous Province, although evidence of a direct link has been scant to date. Here, we measure mercury (Hg), assumed to reflect shifts in volcanic activity, across the Permian-Triassic boundary in ten marine sections across the Northern Hemisphere. Hg concentration peaks close to the Permian-Triassic boundary suggest coupling of biotic extinction and increased volcanic activity. Additionally, Hg isotopic data for a subset of these sections provide evidence for largely atmospheric rather than terrestrial Hg sources, further linking Hg enrichment to increased volcanic activity. Hg peaks in shallow-water sections were nearly synchronous with the end-Permian extinction horizon, while those in deep-water sections occurred tens of thousands of years before the main extinction, possibly supporting a globally diachronous biotic turnover and protracted mass extinction event.
The mass extinction at the end of the Permian, ~252 million years ago, was the largest biocrisis of the Phanerozoic Eon and featured ~90% of marine invertebrate taxa going extinct in a geologically short time interval (~61 ± 48 kyr1,2,3). The main cause of the latest Permian mass extinction (LPME) is generally thought to be linked to severe environmental perturbations caused by eruptions of the Siberian Traps Large Igneous Province (LIP)4,5. Although the near-synchronous occurrence of increased volcanic activity and the LPME is well established1,3,6, geochemical evidence of a direct relationship between the LPME and the Siberian Traps LIP has been generated from only for a few marine sites in Arctic Canada and southern China (e.g., refs. 7,8,9,10,11).
Mercury enrichments in marine sedimentary successions represent a promising tool for identifying periods of enhanced volcanic activity. The modern global volcanic Hg flux is 76 ± 30 tons per year, accounting for 20–40% of total natural emissions of mercury12 (Supplementary Note 1). Combustion of coal can also result in a substantial Hg flux to the atmosphere, given its generally high Hg abundances (~500–1000 ppb13), and has been a major anthropogenic source of Hg throughout the industrial era14. However, volcanic activity (including direct gaseous emissions as well as magmatic intrusion into organic-rich sediments) was the major source of Hg to the atmosphere prior to the industrial era12,14. Massive volcanic eruptions (including LIPs) have frequently been associated with a spike in sedimentary Hg concentrations at geological boundaries, e.g., the Emeishan LIP (~260 Ma) at the Guadalupian–Lopingian Boundary10, the Siberian Traps (~252 Ma) at the Permian–Triassic boundary (PTB)7,8,11, the Central Atlantic Magmatic Province (CAMP, ~201 Ma) at the Triassic–Jurassic Boundary15,16, and the Deccan Traps LIP (~66 Ma) near the Cretaceous–Paleogene Boundary17,18.
Hg concentration spikes synchronous with the LPME have been reported from PTB sections at Festningen, Buchanan Lake, Shangsi, Daxiakou, and Meishan D7,8,10,11 (Fig. 1). These data suggest large inputs of volcanic Hg into the regions of northern Laurentia and the South China Craton8,11, though they are not sufficient to establish the timing of volcanic fluxes relative to the extinction horizon on a global scale. In this study, we measured Hg concentrations in 10 marine PTB sections and analyzed Hg isotopes in three of these sections (from different areas and depositional settings) to investigate if mass-dependent and mass-independent Hg isotope fractionations can provide insight into the origin and transport vectors of Hg during the Permian–Triassic transition.
Mercury concentration profiles (Hg/TOC) of study sections. a Bálvány, b Meishan D, c Xiakou, d Xinmin, e Kejiao, f Ursula Creek, g Opal Creek, h Gujo-Hachiman, i Ubara, and j Akkamori 2. The red vertical double-arrow lines in Xiakou, Ubara, and Akkamori-2 sections represent time gaps between the initial Hg spike and the LPME. Section locations shown on Early Triassic (~250 Ma) global paleogeographic map (adapted from Ron Blakey, http://jan.ucc.nau.edu/~rcb7/). Triangles represent the sections analyzed in this study (green, purple, and blue represent sections from Paleo-Tethys, Neo-Tethys, and Panthalassic oceans, respectively), and black circles show previously investigated sections in northern Laurentia and the South China area (see text). A. Albaillella; C. Clarkina; de. A. degradans; H. Hindeodus; H.l.-C.m. H. latidentatus-C. meishanensis; H.p-I.i = H. parvus-I. isarcica; I. = Isarcicella; m. = C. meishanensis; N. Neoalbaillella; p. H. parvus; sh. Mesogondolella sheni; tr. A. triangularis; yin. C. yini. Ch. Changhsingian, Gr. Griesbachian; St (sub)stage; F formation; B bed; Z conodont zone (all sections except Gujo-Hachiman), and radiolarian zone (Gujo-Hachiman). Sections: AK Akkamori-2; BL Buchanan Lake; BN Bálvány; DXK Daxiakou; F Festningen; GH Gujo-Hachiman; KJ Kejiao; M Mud; MS Meishan; OC Opal Creek; SS Shangsi; UB Ubara; UC Ursula Creek; XM Xinmin; XK Xiakou. BSB boundary shale beds, PTO Paleo-Tethys Ocean, NTO Neo-Tethys Ocean. LPME latest Permian mass extinction, PTB Permian–Triassic boundary. Note: full geochemical data are in Supplementary Figures 2–11. Source data are provided as a Source Data file
Study sections
The 10 PTB sections chosen for this study have a wide geographic distribution (Fig. 1) and represent a range of depositional water depths, including shallow (shelf or platform, <100 m), intermediate (deep shelf to upper slope, 100–1000 m), and deep water settings (abyssal, >2000 m) (see Shen et al.19; Supplementary Note 2 and Supplementary Fig. 1). These sections are assigned to four marine regions, including two deep-shelf sections (Opal Creek and Ursula Creek) in northeastern Panthalassa; one shelf section (Bálvány) in the western Paleo-Tethys; one shallow-shelf (Meishan) and three deep-shelf to slope sections (Xiakou, Xinmin, and Kejiao) in the eastern Paleo-Tethys (South China); and three abyssal sections (Gujo-Hachiman, Akkamori-2, and Ubara) in central Panthalassa (Fig. 1). These sections are correlated to each other based on conodont and radiolarian biostratigraphy, as well as carbon-isotope chemostratigraphy (see Methods). For most study sections, Hg analysis was constrained to the narrow PTB interval (ca. ±1 Myr, Supplementary Fig. 1), but some sections were analyzed over larger stratigraphic intervals, extending downsection to Wuchiapingian-age strata (Gujo-Hachiman) or upsection to Dienerian-age strata (Ursula Creek) (Supplementary Fig. 1). Three sections representing shallow (Meishan D), intermediate (Xiakou), and deep (Gujo-Hachiman) sites were chosen for Hg isotope analysis. For each study section, a detailed description and additional geochemical profiles are given in Supplementary Note 2 (Supplementary Figs. 2–11).
Mercury concentrations
A total of 391 samples were analyzed for Hg concentrations, with 28–59 analyses per section (Figs. 1, 2). Because Hg is hosted mainly by organic matter20, Hg concentrations were normalized to total organic carbon (TOC) in order to discern enrichments independent of variations in TOC21. Background Hg/TOC ratios range from 20 at Gujo-Hachiman to 95 at Kejiao, with a mean of ~50 for the full dataset (Fig. 2a; note all Hg/TOC values have units of ppb/%). Relative to these baseline values, all 10 study sections exhibit a pronounced increase in Hg/TOC ratios close to the LPME and extending upsection some distance into the Lower Triassic (Figs. 1, 2). In order to assess secular variation in Hg enrichment, mean Hg/TOC values were calculated for pre-enrichment, enrichment, and post-enrichment intervals in each study section. The latter two values are expressed as enrichment factors (EF) relative to pre-enrichment Hg levels, where Hg-EF = (Hg/TOC)/(Hg/TOC)bg, with 'bg' representing background (pre-enrichment) values. The enrichment interval extends from the sharp rise in Hg/TOC values close to the LPME upsection into the lowermost Triassic until the point that Hg/TOC falls to <1/2 of peak values. These operationally defined intervals correspond approximately to the pre-extinction, extinction, and post-extinction phases of the PTB crisis in shallow and intermediate sections (see below), where the extinction phase is delineated by the first and second extinction pulses at Meishan D22,23. We chose not to define enrichment intervals based on the extinction phases per se because the second extinction pulse (of lowermost Triassic age) has not been identified in many of the study sections.
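The normalization and enrichment-factor calculation described above can be expressed compactly; the following R lines are a minimal sketch with hypothetical column names, not the authors' processing script:

dat$Hg_TOC <- dat$Hg_ppb / dat$TOC_pct                 # Hg normalized to TOC (ppb/%)
bg         <- mean(dat$Hg_TOC[dat$interval == "pre"])  # pre-enrichment (background) mean
dat$Hg_EF  <- dat$Hg_TOC / bg                          # enrichment factor relative to background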
Mercury enrichment levels relative to latest Permian mass extinction. Average Hg/TOC ratios by study section for the pre-enrichment interval (a), and average enrichment factors (Hg-EFs) for the enrichment (b) and post-enrichment intervals (c). Vertical whiskers represent standard deviation (SD) ranges. The numbers in panel a represent the mean values of each section. Data for Buchanan Lake and Festningen are from refs. 8,10. The Opal Creek section lacked pre-LPME samples, so a pre-enrichment value of 60 ppb/% was assumed based on the mean values of the Festningen and Ursula Creek sections. See Fig. 1 for section abbreviations and Supplementary Figure 1 for depositional water depths. Refer to Supplementary Note 2 for detailed section descriptions. Source data are provided as a Source Data file
The pre-enrichment interval shows the lowest average Hg/TOC ratios, the enrichment interval the highest average ratios, and the post-enrichment interval intermediate and spatially more variable ratios (Fig. 2). For the enrichment interval, the largest EFs are observed at Akkamori-2 (8.2), Buchanan Lake (7.6), and Xinmin (5.7), while all 10 study sections (as well as two previously reported sections) exhibit EFs of >2 (Fig. 2b). For the post-enrichment interval, Buchanan Lake and Meishan D yield lower yet still significant EFs (4.3 and 2.8), but all of the remaining sections show declines in Hg-EF to <2 (Fig. 2c). Peak Hg/TOC levels are observed significantly below the LPME in the two deep-ocean sections at Akkamori-2 (‒0.5 m) and Ubara (‒0.3 m), implying the onset of Hg enrichment ~100 and ~50 kyr prior to the LPME, respectively, based on the age models for these sections1,2 (see Methods). The slope section at Xiakou exhibits an Hg/TOC peak preceding the LPME by ~20 kyr (Figs. 1, 3). In contrast, most shallow-water sections exhibit no gap between the Hg/TOC peak and the mass extinction horizon (e.g., both are in Bed 25 of the Meishan section). The one exception to this deep-vs.-shallow pattern is the deepwater Gujo-Hachiman section, where the initial Hg enrichment coincided with the mass extinction horizon, although we cannot rule out the possibility that this apparent coincidence is an artifact, e.g., of stratigraphic condensation or a hiatus around the LPME or, more simply, of low sampling resolution in this section.
Mercury isotopes of selected study sections. Profiles of the ratio of mercury to total organic carbon (Hg/TOC), mass-dependent fractionation (δ202Hg), and mass-independent fractionation (Δ199Hg) for three sections: a Meishan D (triangles represent data from ref. 8); b Xiakou; and c Gujo-Hachiman. The red dashed lines and shaded rectangles are the background values of δ202Hg (‒0.65‰) and Δ199Hg (0‰), respectively. A. Albaillella; C. Clarkina; de. A. degradans; H.l.-C.m. H. latidentatus-C. meishanensis; I. Isarcicella; N. Neoalbaillella; p. H. parvus; tr. A. triangularis. Gr. Griesbachian, Tr. Triassic; Se Series, St (sub)stage, F formation, B bed, Z conodont zone (all sections except Gujo-Hachiman), and radiolarian zone (Gujo-Hachiman). LPME latest Permian mass extinction, PTB Permian–Triassic boundary. The horizontal bars of the isotope profiles indicate standard deviation (2σ) values, which are smaller than the symbol size for some samples. Source data are provided as a Source Data file
Mercury isotopes
Hg isotopes were measured for sample subsets from Meishan D (n = 15), Xiakou (n = 21), and Gujo-Hachiman (n = 24). The Hg isotope profiles (δ202Hg tracking mass-dependent fractionation, and Δ199Hg tracking mass-independent fractionation) show pronounced secular changes around the LPME and PTB horizons (Fig. 3). At Meishan D, Beds 25 and 28 exhibit negative excursions in both δ202Hg (to ‒1.3‰ from background values of ‒0.4 to ‒0.6‰; Fig. 3a) and Δ199Hg (to ‒0.02‰ from background values of 0 to +0.08‰; Fig. 3a). Grasby et al. 8 reported similar secular trends at Meishan over a more limited stratigraphic interval (1.0 m vs. 3.5 m in the present study) (Fig. 3a). Although the number of samples from Xiakou (also called Daxiakou) is limited, two negative δ202Hg excursions are present in the C. meishanensis and I. isarcica Zones22 (to <‒1.4‰ from background values of ca. ‒0.6‰; Fig. 3b). Δ199Hg shows more limited deviations (to ca. +0.1‰ from background values of 0 to +0.07‰; Fig. 3b). These δ202Hg and Δ199Hg profiles are similar to profiles for Xiakou in an earlier study11. At Gujo-Hachiman, δ202Hg shows fluctuating but low pre-extinction values (ca. ‒2.1 to ‒1.0‰) followed by a sharp rise above the LPME (to ‒0.8 to 0‰; Fig. 3c). The Δ199Hg profile shows nearly the opposite pattern, with fluctuating but high pre-extinction values (+0.12 to +0.35‰) decreasing to ‒0.02 to +0.14‰ above the LPME (Fig. 3c). It should be noted that, owing to low sedimentation rates at Gujo-Hachiman24, the interval >40 cm below the LPME is stratigraphically older than the studied intervals at Meishan and Xiakou. The lower δ202Hg and higher positive Δ199Hg values of lower to middle Changhsingian strata at Gujo-Hachiman thus represent stable deep-marine sedimentation prior to the end-Permian crisis.
The accumulation of Hg in sediments can be influenced by various factors including redox conditions, organic carbon burial fluxes, and clay mineral content16,20,25,26,27. Reducing conditions promote the formation of organic-Hg complexes and Hg-sulfides, which are likely to be the dominant forms of Hg in marine sediments20,26,27. Hg can become enriched in clay minerals under certain chemical conditions (e.g., elevated Eh) through adsorption of sparingly soluble Hg(OH)228,29. However, Hg adsorbed onto organic matter is the dominant form of Hg in most aquatic systems20. Small mass-dependent fractionations (MDF) may result from physical–chemical–biological processes during Hg uptake in marine sediments, but the lack of mass-independent fractionation (MIF) during these processes renders Hg isotope systematics (especially MIF) a powerful tracer of Hg provenance30,31. Hg-isotopic MIF variation in Phanerozoic sedimentary successions has been interpreted in terms of source changes rather than diagenetic effects8,15,32.
In most of the study sections, Hg exhibits a stronger correlation to TOC (r ranging from +0.55 to +0.95) than to sulfur (S) (r mostly <+0.45) or aluminum (Al) (r ranging from +0.20 to +0.84) (Supplementary Fig. 12; note that all r values are significant at p < 0.01). This strong correlation supports organic matter as the dominant Hg substrate. Although there is pronounced variation in TOC concentrations in most sections, both raw and TOC-normalized Hg concentrations (i.e., Hg/TOC) show systematic stratigraphic trends in the 10 study sections, suggesting that elevated Hg fluxes to the sediment were not simply due to increased organic matter burial. Furthermore, increases in Hg/TOC around the LPME are not related to changes in sediment lithology, as samples containing <1% Al (i.e., carbonates) and those containing >1% Al (i.e., marls and shales) show nearly identical patterns of secular Hg/TOC variation in all profiles despite paleoenvironmental differences (Supplementary Fig. 13). Thus, we infer that the large increases in Hg/TOC observed around the LPME reflect a large increase in Hg fluxes to the ocean followed by rapid Hg removal to the sediment, reflecting the short residence time of Hg in the atmosphere–ocean system.
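The host-phase argument rests on pairwise Pearson correlations between Hg and TOC, S, and Al for each section. A minimal sketch of that comparison is given below; the data columns are synthetic placeholders (the actual per-section scatter plots are in Supplementary Fig. 12).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic per-sample measurements for one hypothetical section (n = 40)
toc = rng.uniform(0.2, 2.0, 40)                 # wt%
hg  = 50 * toc + rng.normal(0, 10, 40)          # ppb; TOC-hosted by construction
s   = rng.uniform(0.1, 1.5, 40)                 # wt%
al  = rng.uniform(1.0, 8.0, 40)                 # wt%

for name, x in [("TOC", toc), ("S", s), ("Al", al)]:
    r, p = stats.pearsonr(hg, x)
    verdict = "significant" if p < 0.01 else "not significant"
    print(f"Hg vs {name}: r = {r:+.2f} ({verdict} at p < 0.01)")
```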
The sharp peaks of Hg/TOC that first appear near the LPME horizon (~251.94 Ma) continue upsection in each study section for stratigraphic intervals corresponding to ~50–200 kyr. This period also corresponds to the peak of the end-Permian mass extinction, characterized by major perturbations to global biogeochemical cycles and terrestrial and marine ecosystems1,33,34 (Fig. 4). This timeframe is also consistent with the interval of large-scale intrusion of Siberian Traps magmas into organic-rich sediments of the Tunguska Basin during the intrusive sill-complex phase of Burgess et al. 9. The Hg/TOC peaks, therefore, are likely to be tied, in part, to the onset of heating of subsurface organic-rich sediments by sill intrusions of the Siberian Traps LIP rather than to the onset of flood basalt eruptions3,9. However, the relationship of Hg emissions to LIP activity is not well understood at present35.
Relationship of mercury records to PTB marine ecosystem perturbations. Hg/TOC values from all study sections, biodiversity variations70,71, and inorganic carbon isotopes72. C. Clarkina, cha. C. changxingensis, dien. Neospathodus dieneri, k.-d. Neoclarkina krystyni-N. discreta, ku. Sweetospathodus kummeli, m. C. meishanensis, Nv. Novispathodus, p.-s. Hindeodus parvus-Isarcicella staeschi, w.-s. C. wangi-C. subcarinata, yin. C. yini; Gri. Griesbachian, Dien. Dienerian, LPME latest Permian mass extinction. Geochronologic and biozonation data modified from ref. 19, and Hg/TOC data of Buchanan Lake and Meishan D from ref. 8. Four samples with Hg/TOC ratios > 1000 ppb/% (Buchanan Lake = 3, Meishan D = 1) are marked by an arrow
Hg/TOC ratios exhibit only a weak relationship to distance from the Siberian Traps LIP but a strong relationship to depositional water depth (Fig. 1). Paleogeographically, sections from NE Panthalassa have higher average Hg/TOC ratios during the enrichment interval (85 ± 67 ppb/%) relative to sections from the Paleo-Tethys (62 ± 40 ppb/%) or the central Panthalassic Ocean (30 ± 21 ppb/%; Fig. 2a, b). With regard to water depths, average Hg/TOC ratios for the pre-enrichment interval are 26 ± 14, 82 ± 60, and 27 ± 19 ppb/% for shallow, intermediate, and deep sections, respectively (Fig. 2a). Thus, intermediate-depth sections show higher background Hg/TOC values (by a factor of nearly 3) than either surface or deep-ocean sections, implying elevated aqueous Hg concentrations in the upper thermocline region (~200–500 m) of Late Permian oceans. Average EFs during the enrichment interval are 3.4 ± 0.7, 4.6 ± 1.8, and 4.9 ± 2.9 for shallow, intermediate, and deep sections, respectively (Fig. 2b), indicating that the pulse of Hg released during the PTB crisis was preferentially transferred out of the surface ocean and into deeper waters. Hg enrichment in shallow-water settings during the Toarcian (~183 Ma) was inferred to have been the result of intense terrestrial runoff36, although this is likely not the case for the present study sections owing to distinctly greater mercury enrichments at intermediate-depth relative to shallow-water settings. Instead, this pattern is similar to the Hg loading in the thermocline of modern oceans, which results from adsorption of Hg onto sinking organic particles and downward transfer through the biological pump37. However, other factors (e.g., the amount and type of organic matter) may also have influenced the depth-dependent distribution of Hg in the study sections.
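The depth-dependent means and standard deviations quoted above are simple group statistics over the section-level values. A sketch of that aggregation (the water-depth classes follow the section descriptions given earlier; the section values themselves are placeholders, with the published numbers plotted in Fig. 2 and listed in the Source Data file):

```python
import pandas as pd

# Placeholder pre-enrichment Hg/TOC means by section (ppb/%); illustrative only.
df = pd.DataFrame({
    "section": ["MS", "BN", "XK", "XM", "KJ", "UC", "OC", "GH", "AK", "UB"],
    "depth_class": ["shallow", "shallow",
                    "intermediate", "intermediate", "intermediate",
                    "intermediate", "intermediate",
                    "deep", "deep", "deep"],
    "hg_toc_bg": [38, 15, 95, 60, 90, 70, 60, 20, 32, 28],
})

summary = df.groupby("depth_class")["hg_toc_bg"].agg(["mean", "std", "count"])
print(summary.round(1))
```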
There is a distinct difference in the timing of initial Hg enrichment relative to the LPME horizon between the shallow-water and deep-water study sites. At shallow-water locales, the spike in Hg enrichments and faunal turnover are nearly synchronous, whereas the deep-water locales show a large time lag between the initial Hg pulse and faunal turnover. Hg/TOC peaks are ~0.5 and ~0.3 m below the LPME in the deepwater Akkamori-2 and Ubara sections, representing at least a 50–100 kyr lag (Fig. 1; see Methods for age models). A smaller time gap (~20 kyr) between Hg enrichments and the LPME horizon is inferred for the intermediate-depth Xiakou section.
The synchronicity of the Hg enrichments and the extinction horizon in shallow-water sections might be related to sediment homogenization by bioturbation. However, in key sections Hg enrichments occur predominantly in sediments with limited fabric disruption38,39, indicating that the offsets in Hg enrichments and the extinction horizon are not linked to bioturbation. For instance, sediment homogenization at Meishan is limited to 2–4 cm just below the extinction horizon (Bed 25) and is largely lacking above the LPME40. The pelagic sections from Japan also exhibit strong primary sedimentary fabric preservation with only limited evidence of bioturbation39,41.
Mercury isotopes can be used to track the source and depositional pathways of mercury into marine sediments (see Blum et al. 30 and references therein) given that the two main Hg sources to the oceans, i.e., terrestrial runoff and atmospheric deposition of Hg(II), have different isotopic signatures30,31. Mercury has a complex biogeochemical cycle and undergoes transformations that may induce MDF (δ202Hg) and/or MIF (Δ199Hg) of Hg isotopes30. Volcanogenic Hg has δ202Hg values between ‒2‰ and 0‰42,43, and its MDF can be influenced by a wide range of physical, chemical, and biological processes. MIF, in contrast, occurs predominantly through photochemical processes8,30. Hg emitted by arc volcanoes or hydrothermal systems does not appear to have undergone significant MIF (~0‰), although a relatively limited number of settings have been studied to date. Coal combustion commonly leads to release of Hg with negative δ202Hg and Δ199Hg values43,44. Alternatively, photoreduction of Hg(II) complexed by reduced sulfur ligands in the photic zone can also generate negative MIF45. However, Hg enrichments and negative MIF records in the present study units cannot be due exclusively to oceanic anoxia near the PTB, because Hg enrichments are measured in diverse redox environments and the Hg is hosted mainly by organic matter rather than sulfides.
The near-zero Δ199Hg values (mostly 0‰ to +0.10‰) for the pre-LPME interval at Meishan D and Xiakou may reflect photochemical reduction of Hg or the mixing of terrestrial and atmospheric sources of Hg43 (Fig. 3). However, the lower to middle Changhsingian interval at Gujo-Hachiman (the stratigraphic equivalents of which were not sampled in the Meishan D and Xiakou sections) exhibits distinctly elevated Δ199Hg compositions, ranging from +0.10‰ to +0.35‰, which are typical of marine sediments30 and consistent with photoreduction of aqueous HgII26,43. All three sections (especially the pelagic Gujo-Hachiman section) exhibit near-zero, although somewhat variable, Δ199Hg values during and following the LPME, which are consistent with predominantly volcanic and/or thermogenic (i.e., coal-derived) Hg inputs.
MDF (δ202Hg) profiles for the study sections show roughly similar patterns: Meishan D and Xiakou yield background (pre-LPME and post-PTB) values of ca. ‒0.50‰, whereas the stratigraphically older part of the Gujo-Hachiman section shows more negative pre-LPME values, ranging from ‒0.80‰ to ‒2.30‰ with a mean of ‒1.50‰ (Fig. 3). All three sections show increased variability in δ202Hg around the LPME, with Meishan D and Xiakou each possibly displaying two negative spikes. These excursions in MDF support a change in the source or cycling of marine Hg close to the LPME, although the exact nature of the controlling processes is uncertain. For the pre-LPME interval at Gujo-Hachiman, the large positive MIF and negative MDF signatures imply a dominant atmospheric transport pathway30,46. The small positive MIF and negative MDF signatures of the Meishan D and Xiakou sections may indicate mixed atmospheric and terrestrial sources, with possible Hg inputs from land plants owing to increased Hg loadings in terrestrial ecosystems.
Our new Hg-isotopic results yield insights beyond those of earlier Hg studies of the PTB. Grasby et al. 8 inferred that δ202Hg-Δ199Hg values were consistent with Hg sourced mainly from volcanic activity for a deep slope section in the Canadian Arctic (Buchanan Lake), and a combination of atmospheric inputs and terrestrial runoff for a nearshore section in China (Meishan D). Although our minimum MIF values are much less negative than those reported by Grasby et al. 8, our data for Meishan D also support a mixture of terrestrial and atmospheric Hg sources. We infer that changes around the LPME in the deep-ocean Gujo-Hachiman section (near-zero to weakly positive Δ199Hg values, a concurrent increase of MDF, and strong Hg enrichments) are evidence of atmospheric inputs of Hg (i.e., from volcanic emissions as well as volcanic-related thermogenic sources such as coal combustion) to the open ocean thousands of kilometers distant from riverine fluxes. Overall, the trends in δ202Hg-Δ199Hg values are consistent with massive inputs of Hg from volcanic emissions and/or combustion of Hg-bearing organic-rich sediments by the Siberian Traps LIP.
The LPME coincided with the onset of sill complex formation of the Siberian Traps LIP9, indicating that the initial Hg enrichments near the LPME in PTB sections were also coincident with the emplacement of those sills. Hg profiles can provide high-resolution records of volcanic activity given the short residence time of Hg in the atmosphere and oceanic water column (<2 years and <1000 years, respectively)37,47. Compared to the synchronicity of Hg peaks and the LPME in shallow-water sections, the observed time gaps of ~50 to 100 kyr between the initial appearance of Hg peaks and the LPME in pelagic deep-water sections (Akkamori-2 and Ubara) may support a diachronous marine extinction event. This conclusion, however, depends on the geological synchronicity of the Hg peaks, which in turn depends on the age model and the placement of the LPME in each section (see Methods). A protracted extinction model has also been proposed based on the differential timing of sponge extinctions relative to the LPME in the Arctic region48 and radiolarian extinctions in the Nanpanjiang Basin49,50.
A diachronous extinction event would provide new insights into the long-debated influence of various 'kill mechanisms', e.g., hypercapnia51,52, thermal stress53, and oxygen and sulfide stresses54,55. The effects of hypercapnia and thermal stress should be nearly synchronous, as heat and carbon dioxide are fairly evenly distributed through atmospheric and marine circulation on 1–2 kyr time scales56. Moreover, the effects of hypercapnia should be coincident with peak Hg enrichments and peak outgassing (assuming the two are equivalent) given that silicate and marine weathering will begin to draw down atmospheric carbon dioxide following the onset of a carbon injection (e.g., refs. 57,58). This is consistent with the synchronous increase in atmospheric Hg and CO2 during the end-Triassic crisis15. In contrast, ocean anoxia can develop over a wide range of time scales, depending on initial local oxygen concentrations, baseline nutrient levels, and the extent and rate of nutrient release into the marine system from enhanced weathering and positive feedbacks associated with the P cycle59,60. For anoxia to develop in deep-ocean settings (e.g., extensive anoxia in deep-marine settings near the LPME24,61), greater nutrient loading (e.g., P, Fe) is needed than for shelf settings62. Thus, the timing of Hg enrichment relative to biotic turnover across different marine environments (assuming a volcanogenic origin for the Hg) provides new evidence for oxygen stress, rather than extreme temperatures or hypercapnia, as the critical driver of Earth's largest mass extinction event. It should also be noted that elevated temperatures reduce oxygen saturation levels in seawater and cause the metabolic effects of low oxygen to become more severe63.
Mercury enrichments near the LPME horizon in continental shelf, continental slope, and abyssal marine sections, combined with Hg isotopes (δ202Hg–Δ199Hg), provide evidence for a massive increase in volcanic-related Hg emissions during the Permian–Triassic biotic crisis. This study provides direct geochemical evidence from marine sections for near global-scale volcanic effects linking the Siberian Traps LIP to the PTB crisis. Relative to pre-LPME background values, Hg-EFs rose by factors of 3–8 during the mass extinction event before returning to near-background levels in the Early Triassic. Hg/TOC ratios are significantly higher (by a factor of nearly 3) in intermediate-depth sections relative to surface and deep-ocean sections prior to the PTB crisis, reflecting a general concentration of Hg within the upper thermocline region through the action of the biological pump. Further, with current placements of the LPME horizon in each section, stratigraphic differences between the initial spike of Hg concentrations and the LPME represent a time gap that provides evidence of a globally diachronous mass extinction event. Specifically, the extinction horizon in deep-water sections (e.g., Akkamori-2 and Ubara) postdated peak volcanogenic Hg inputs by ~50 to 100 kyr, whereas it was nearly synchronous in shallow-water sections. Because of feedbacks in the marine oxygen cycle, sulfide and oxygen stresses would have developed over thousands or even tens of thousands of years after the peak of volcanic outgassing. A lag between peak volcanogenic Hg inputs and biotic turnover is likely when ecosystem destabilization is caused by oxygen stress, in contrast to the geologically rapid response expected if extreme temperatures or hypercapnia were the main kill mechanism. In summary, evidence for a protracted extinction interval provides new support for oxygen and sulfide stresses as the main kill mechanism over a large swath of the ocean in response to Siberian Traps LIP volcanism.
Sample preparation and elemental analyses
Samples were trimmed to remove visible veins and weathered surfaces and pulverized to ~200 mesh in an agate mortar. Aliquots of each sample were prepared for different analytical procedures. Major element concentrations for the Kejiao section were determined by wavelength-dispersive X-ray fluorescence (XRF) analysis of fused glass beads using an XRF-1800 in the State Key Laboratory of Biogeology and Environmental Geology at the China University of Geosciences-Wuhan. Major element analyses of the remaining study sections had been undertaken previously in the context of other studies.
TOC concentrations for all sections except the Akkamori-2 and Ubara sections were measured using an Eltra 2000 C-S analyzer at the University of Cincinnati. Data quality was monitored via multiple analyses of USGS SDO-1 standard, yielding an analytical precision (2σ) of ±2.5% of reported values for TOC. TOC for Akkamori-2 and Ubara was measured at Yale University using a Delta Plus. Data quality was monitored via multiple analyses of a Low Organic Content Soil Standard (B2152) and a Medium Organic Content Soil Standard (B2178), yielding a long-term (two-year) standard deviation of ±0.06%. An aliquot of each sample was digested in 2 N HCl at 50 °C for 6 h to dissolve carbonate minerals, and the residue was analyzed for TOC and non-acid-volatile sulfur (NAVS); total inorganic carbon (TIC) and acid-volatile sulfur (AVS) were obtained by difference.
Mercury concentrations and isotopes
Hg concentrations (391) were determined at the School of Earth Sciences of China University of Geosciences-Wuhan (Xinmin (32), Kejiao (36)), the Analytical and Stable Isotope Center of Yale University (Ursula Creek (37), Opal Creek (53), Bálvány (31), Akkamori-2 (59), and Ubara (28)), and the State Key Laboratory of Environmental Geochemistry, Institute of Geochemistry, Chinese Academy of Sciences, Guiyang (Meishan D (41), Xiakou (41), and Gujo-Hachiman (33)). At Yale University, Hg concentrations were analyzed for 120-mg aliquots of sample using a Direct Mercury Analyzer (DMA80). Data quality was monitored via multiple analyses of the MESS-3 standard, yielding an analytical precision (2σ) of ±0.5% of reported values for Hg. One replicate sample and standard were analyzed for every 10 samples. Ten replicate samples were analyzed for Hg concentrations at all three laboratories, yielding variations in reported values of less than ±5% despite some differences in methods and instrumentation.
Analysis of Hg concentrations and isotopic compositions in China followed procedures described in recent similar studies64,65. To ensure low blanks, all Teflon and glassware were acid-cleaned before use. Teflon materials including bottles and fittings were cleaned in a similar manner and air-dried for 24 h in a fume hood. We prepared the 0.2-M BrCl solution by mixing concentrated HCl with KBrO3 powders (>99%, ACS reagent, Aldrich, USA) at 250 °C for 12 h. We used a SnCl2 solution (from ACS reagent, Aldrich, USA) for Hg reduction prior to measurement of Hg concentrations by cold vapor atomic fluorescence spectroscopy (CVAFS) and Hg isotopes by MC–ICP–MS. We prepared a 0.2 g mL−1 NH2OH·HCl solution for BrCl neutralization, and the reductants were bubbled for 6 h with Hg-free N2 to remove trace levels of Hg.
All analyses of Hg isotopes (Meishan D (n = 15), Xiakou (n = 21), and Gujo-Hachiman (n = 24)) were carried out at the State Key Laboratory of Environmental Geochemistry, Institute of Geochemistry, Chinese Academy of Sciences, Guiyang. We extracted and concentrated Hg using a previously described double-combustion and trapping dual-stage protocol65. We used the thallium standard NIST SRM 997 (20 ng/mL Tl in 3% HNO3) to correct for instrumental mass bias and the international Hg standard NIST SRM 3133 to monitor analytical precision and accuracy64,65. We used UM-Almadén as a secondary laboratory Hg reference standard, together with the soil reference material GBW07405 (National Center for Standard Materials, Beijing, China). Ten percent of the samples were analyzed in duplicate, giving a total of 20 replicate analyses. The Hg-trapping solution (a mixture of 4 M HNO3 and 1.3 M HCl) was diluted to a final acid concentration of ~20% and stored at 4 °C for subsequent isotope measurements. Procedural blanks were negligible (<0.13 ng, n = 8) relative to the amount of Hg in the samples (>20 ng). Recovery was near-complete (98 ± 4%, 2 SD), indicating that no significant Hg isotope fractionation occurred during the pre-concentration procedure. Volatile Hg(0) generated by SnCl2 reduction in a cold-vapor generation system was introduced into the plasma of the Nu MC–ICP–MS with Ar as a carrier gas. We used the standard-sample bracketing method for all samples.
Hg isotopic results are expressed as δ values in units of per mille (‰) relative to the bracketed NIST 3133 Hg standard, as follows:
$$\delta^{202}\mathrm{Hg} = \left[ \frac{\left({}^{202}\mathrm{Hg}/{}^{198}\mathrm{Hg}\right)_{\mathrm{sample}}}{\left({}^{202}\mathrm{Hg}/{}^{198}\mathrm{Hg}\right)_{\mathrm{standard}}} - 1 \right] \times 1000\ \text{‰}$$
Any Hg-isotopic value that does not follow theoretical MDF is considered an isotopic anomaly caused by MIF. MIF values are reported in "capital delta (Δ)" notation (in per mille) and are calculated as the deviation of the measured δ199Hg from the value predicted from δ202Hg by the MDF law:
$$\Delta^{199}\mathrm{Hg} = \delta^{199}\mathrm{Hg} - 0.252 \times \delta^{202}\mathrm{Hg}$$
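Both notations can be computed directly from measured isotope ratios. The following sketch implements the two definitions above; the numerical inputs are placeholders rather than measurements from this study.

```python
def delta202(r_sample, r_standard):
    """Mass-dependent fractionation (delta202Hg, permil) relative to NIST SRM 3133,
    where r = 202Hg/198Hg."""
    return (r_sample / r_standard - 1) * 1000

def capital_delta199(d199, d202):
    """Mass-independent fractionation (Delta199Hg, permil): deviation of the
    measured delta199Hg from the value predicted from delta202Hg by the MDF law."""
    return d199 - 0.252 * d202

# Placeholder values for illustration only
d202 = delta202(2.9611, 2.9630)      # about -0.64 permil
d199 = -0.15                         # measured delta199Hg, permil
print(f"delta202Hg = {d202:.2f} permil, "
      f"Delta199Hg = {capital_delta199(d199, d202):+.2f} permil")
```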
The long-term measurements of GBW07405 yielded mean values of ‒1.79 ± 0.08‰, ‒0.30 ± 0.04‰, ‒0.01 ± 0.02‰, and ‒0.28 ± 0.03‰ for δ202Hg, Δ199Hg, Δ200Hg, and Δ201Hg (2 SD, n = 11), respectively, in agreement with previous studies64. Repeated measurements (n = 15) of the standard UM-Almadén Hg yielded mean δ202Hg, Δ199Hg, Δ200Hg, and Δ201Hg values of ‒0.54 ± 0.11‰, ‒0.01 ± 0.03‰, 0.00 ± 0.05‰, and 0.00 ± 0.05‰ (2σ), respectively, also in agreement with previous studies64,66,67. These 2σ values were adopted as the analytical uncertainty for samples that were analyzed only once.
Section correlations and time models
A high-resolution stratigraphic correlation framework among the study sections was generated on the basis of detailed biostratigraphic and chemostratigraphic data, including conodont zonations (for shallow-water carbonate settings), radiolarian zonations (for deep-water chert settings), and carbon isotope profiles. A few key features that distinguish the LPME include: (1) a lithologic change (generally toward more siliciclastic-rich compositions), (2) the first appearance of the conodont C. meishanensis at the base of the LPME, and (3) a pronounced negative carbon isotope excursion. The increased clay content of the beds immediately overlying the LPME has been attributed to intensification of chemical weathering on land and increased terrigenous fluxes to the ocean68. A ~2‰ to 6‰ negative excursion of both carbonate and organic carbon isotopes is associated with the LPME globally69. For the studied sections, lithological changes were obvious near the LPME, e.g., a transition from limestone to mudstone/volcanic ash beds in shallow-depth and intermediate-depth settings, and siliceous claystone to shales in the deep-water sections (Supplementary Fig. 14). For the shallow-depth and intermediate-depth sections, conodont zonations typically included the Clarkina changxingensis changxingensis–C. deflecta, C. yini, Hindeodus praeparvus (pre-LPME), C. meishanensis (LPME to PTB), H. parvus, Isarcicella staeschei, and I. isarcica zones (post-PTB). The LPME was placed at the base of the C. meishanensis Zone. The deep-water sections (Akkamori-2 and Ubara) yielded diagnostic species of radiolarians (e.g., Albaillella cf. triangularis) and conodonts (e.g., C. changxingensis and C. subcarinata) for the uppermost Permian (pre-LPME), as well as the conodont index taxon Hindeodus parvus, whose first appearance datum marks the base of the Triassic System (Supplementary Fig. 14). In addition, all sections yielded a well-defined negative carbon isotope excursion within the LPME-to-PTB interval (δ13Ccarb for carbonate-dominated sections, and δ13Corg for chert-dominated and mudstone-dominated sections), except for Xinmin, which is characterized by numerous volcanic ashes through the boundary interval (Supplementary Fig. 14).
The timescale used in this study, which was modified from ref. 19, is based on a combination of radiometric dating of key stratigraphic boundaries (e.g., LPME, PTB) in each section1,2 and the relative durations of conodont zones. For shallow-depth and intermediate-depth sections with detailed conodont zonations, this approach allowed development of highly detailed age models. For the deep-water sections (Akkamori-2 and Ubara), age models were constructed on the basis of two age anchor points (251.94 Ma for the LPME and 251.90 Ma for the PTB) together with astrochronological analysis of sedimentation rates. At Akkamori-2, the 0.75-m interval between the LPME and PTB represents ~40 kyr2. Assuming a ~4× increase in sedimentation rates from Upper Permian cherts to Lower Triassic shales in the central Panthalassic Ocean based on astronomical cycles68, the time span between the onset of the Hg peak (~0.7 m below the LPME) and the LPME is thus ~100 kyr.
The authors declare that the main data supporting the findings of this study are available within the Source Data file. Extra data are available from the corresponding author upon request.
Shen, S. Z. et al. Calibrating the end-Permian mass extinction. Science 334, 1367–1372 (2011).
Burgess, S. D., Bowring, S. & Shen, S. Z. High-precision timeline for Earth's most severe extinction. Proc. Natl Acad. Sci. USA 111, 3316–3321 (2014).
Burgess, S. D. & Bowring, S. A. High-precision geochronology confirms voluminous magmatism before, during, and after Earth's most severe extinction. Sci. Adv. 1, e1500470 (2015).
Svensen, H. et al. Siberian gas venting and the end-Permian environmental crisis. Earth Planet. Sci. Lett. 277, 490–500 (2009).
Bond, D. P. G. & Wignall, P. B. Large igneous provinces and mass extinctions: an update. Geol. Soc. Am. Spec. Pap. 505, 29–55 (2014).
Reichow, M. K. et al. The timing and extent of the eruption of the Siberian Traps large igneous province: implications for the end-Permian environmental crisis. Earth Planet. Sci. Lett. 277, 9–20 (2009).
Sanei, H., Grasby, S. E. & Beauchamp, B. Latest Permian mercury anomalies. Geology 40, 63–66 (2012).
Grasby, S. E. et al. Isotopic signatures of mercury contamination in latest Permian oceans. Geology 45, 55–58 (2017).
Burgess, S., Muirhead, J. & Bowring, S. Initial pulse of Siberian Traps sills as the trigger of the end-Permian mass extinction. Nat. Commun. 8, 164 (2017).
Grasby, S. E., Beauchamp, B., Bond, D. P., Wignall, P. B. & Sanei, H. Mercury anomalies associated with three extinction events (Capitanian crisis, latest Permian extinction and the Smithian/Spathian extinction) in NW Pangea. Geol. Mag. 153, 285–297 (2016).
Wang, X. et al. Mercury anomalies across the end Permian mass extinction in South China from shallow and deep water depositional environments. Earth Planet. Sci. Lett. 496, 159–167 (2018).
Pyle, D. M. & Mather, T. A. The importance of volcanic emissions for the global atmospheric mercury cycle. Atmos. Environ. 37, 5115–5124 (2003).
Yudovich, Y. E. & Ketris, M. Mercury in coal: a review: Part 1. Geochemistry. Int. J. Coal Geol. 62, 107–134 (2005).
Pirrone, N. et al. Global mercury emissions to the atmosphere from anthropogenic and natural sources. Atmos. Chem. Phys. 10, 5951–5964 (2010).
Thibodeau, A. M. et al. Mercury anomalies and the timing of biotic recovery following the end-Triassic mass extinction. Nat. Commun. 7, 11147 (2016).
Percival, L. M. et al. Mercury evidence for pulsed volcanism during the end-Triassic mass extinction. Proc. Natl. Acad. Sci. USA 114, 7929–7934 (2017).
Font, E. et al. Mercury anomaly, Deccan volcanism, and the end-Cretaceous mass extinction. Geology 44, 171–174 (2016).
Sial, A. N. et al. Mercury enrichment and Hg isotopes in Cretaceous–Paleogene boundary successions: Links to volcanism and palaeoenvironmental impacts. Cretaceous Res. 66, 60–81 (2016).
Shen, J. et al. Marine productivity changes during the end-Permian crisis and Early Triassic recovery. Earth-Sci. Rev. 149, 136–162 (2015).
Ravichandran, M. Interactions between mercury and dissolved organic matter–a review. Chemosphere 55, 319–331 (2004).
Percival, L. et al. Globally enhanced mercury deposition during the end-Pliensbachian extinction and Toarcian OAE: a link to the Karoo–Ferrar Large Igneous Province. Earth Planet. Sci. Lett. 428, 267–280 (2015).
Shen, J. et al. Two pulses of oceanic environmental disturbance during the Permian–Triassic boundary crisis. Earth Planet. Sci. Lett. 443, 139–152 (2016).
Song, H. J., Wignall, P. B., Tong, J. N. & Yin, H. F. Two pulses of extinction during the Permian–Triassic crisis. Nat. Geosci. 6, 52–56 (2013).
Algeo, T. J. et al. Spatial variation in sediment fluxes, redox conditions, and productivity in the Permian–Triassic Panthalassic Ocean. Palaeogeogr. Palaeoclimatol. Palaeoecol. 308, 65–83 (2011).
Benoit, J. M., Gilmour, C. C., Mason, R. P. & Heyes, A. Sulfide controls on mercury speciation and bioavailability to methylating bacteria in sediment pore waters. Environ. Sci. Technol. 33, 951–957 (1999).
Gehrke, G. E., Blum, J. D. & Meyers, P. A. The geochemical behavior and isotopic composition of Hg in a mid-Pleistocene western Mediterranean sapropel. Geochim. Cosmochim. Acta 73, 1651–1665 (2009).
Shen, J. et al. Mercury in marine Ordovician/Silurian boundary sections of South China is sulfide-hosted and non-volcanic in origin. Earth Planet. Sci. Lett. 551, 130–140 (2019).
Farrah, H. & Pickering, W. F. The sorption of mercury species by clay minerals. Water Air Soil Pollut. 9, 23–31 (1978).
Kongchum, M., Hudnall, W. H. & Delaune, R. Relationship between sediment clay minerals and total mercury. J. Environ. Sci. Health A 46, 534–539 (2011).
Blum, J. D., Sherman, L. S. & Johnson, M. W. Mercury isotopes in earth and environmental sciences. Annu. Rev. Earth Planet. Sci. 42, 249–269 (2014).
Chen, J. B. et al. Isotopic evidence for distinct sources of mercury in lake waters and sediments. Chem. Geol. 426, 33–44 (2016).
Thibodeau, A. M. & Bergquist, B. A. Do mercury isotopes record the signature of massive volcanism in the marine sedimentary record? Geology 45, 95–96 (2017).
Jin, Y. G. et al. Pattern of marine mass extinction near the Permian–Triassic boundary in South China. Science 289, 432–436 (2000).
Korte, C. & Kozur, H. W. Carbon-isotope stratigraphy across the Permian–Triassic boundary: a review. J. Asian Earth Sci. 39, 215–235 (2010).
Percival, L. M. et al. Does large igneous province volcanism always perturb the mercury cycle? Comparing the records of Oceanic Anoxic Event 2 and the end-Cretaceous to other Mesozoic events. Am. J. Sci. 318, 799–860 (2018).
Them, T. II et al. Terrestrial sources as the primary delivery mechanism of mercury to the oceans across the Toarcian Oceanic Anoxic Event (Early Jurassic). Earth Planet. Sci. Lett. 507, 62–72 (2019).
Zhang, Y. X., Jaeglé, L. & Thompson, L. Natural biogeochemical cycle of mercury in a global three-dimensional ocean tracer model. Glob. Biogeochem. Cycles 28, 553–570 (2014).
Wignall, P. B. & Newton, R. Contrasting deep-water records from the Upper Permian and Lower Triassic of South Tibet and British Columbia: evidence for a diachronous mass extinction. Palaios 18, 153–167 (2003).
Kakuwa, Y. Evaluation of palaeo-oxygenation of the ocean bottom across the Permian–Triassic boundary. Glob. Planet. Change 63, 40–56 (2008).
Chen, Z. Q. et al. Complete biotic and sedimentary records of the Permian–Triassic transition from Meishan section, South China: ecologically assessing mass extinction and its aftermath. Earth-Sci. Rev. 149, 67–107 (2015).
Takahashi, S., Yamakita, S., Suzuki, N., Kaiho, K. & Ehiro, M. High organic carbon content and a decrease in radiolarians at the end of the Permian in a newly discovered continuous pelagic section: a coincidence? Palaeogeogr. Palaeoclimatol. Palaeoecol. 271, 1–12 (2009).
Zambardi, T., Sonke, J. E., Toutain, J. P., Sortino, F. & Shinohara, H. Mercury emissions and stable isotopic compositions at Vulcano Island (Italy). Earth Planet. Sci. Lett. 277, 236–243 (2009).
Yin, R. S. et al. Mercury isotopes as proxies to identify sources and environmental impacts of mercury in sphalerites. Sci. Rep. 6, 18686 (2016).
Biswas, A., Blum, J. D., Bergquist, B. A., Keeler, G. J. & Xie, Z. Natural mercury isotope variation in coal deposits and organic soils. Environ. Sci. Technol. 42, 8303–8309 (2008).
Zheng, W., Gilleaudeau, G. J., Kah, L. & Anbar, A. D. Mercury isotope signatures record photic euxinia in the Mesoproterozoic ocean. Proc. Natl Acad. Sci. USA 115, 10594–10599 (2018).
Chen, J. B., Hintelmann, H., Feng, X. B. & Dimock, B. Unusual fractionation of both odd and even mercury isotopes in precipitation from Peterborough, ON, Canada. Geochim. Cosmochim. Acta 90, 33–46 (2012).
Gill, G. A. & Fitzgerald, W. F. Vertical mercury distributions in the oceans. Geochim. Cosmochim. Acta 52, 1719–1728 (1988).
Algeo, T. J. et al. Evidence for a diachronous Late Permian marine crisis from the Canadian Arctic region. Geol. Soc. Am. Bull. 124, 1424–1448 (2012).
Yin, H. F., Feng, Q. L., Lai, X. L., Baud, A. & Tong, J. N. The protracted Permo-Triassic crisis and multi-episode extinction around the Permian–Triassic boundary. Glob. Planet. Change 55, 1–20 (2007).
Shen, J. et al. Volcanic perturbations of the marine environment in South China preceding the latest Permian mass extinction and their biotic effects. Geobiology 10, 82–103 (2012).
Knoll, A. H., Bambach, R., Canfield, D. & Grotzinger, J. Comparative Earth history and Late Permian mass extinction. Science 273, 452–457 (1996).
Knoll, A. H., Bambach, R. K., Payne, J. L., Pruss, S. & Fischer, W. W. Paleophysiology and end-Permian mass extinction. Earth Planet. Sci. Lett. 256, 295–313 (2007).
Joachimski, M. M. et al. Climate warming in the latest Permian and the Permian-Triassic mass extinction. Geology 40, 195–198 (2012).
Wignall, P. B. & Twitchett, R. J. Oceanic anoxia and the end Permian mass extinction. Science 272, 1155 (1996).
Grice, K. et al. Photic zone euxinia during the Permian–Triassic superanoxic event. Science 307, 706–709 (2005).
Winguth, A. M. & Maier-Reimer, E. Causes of the marine productivity and oxygen changes associated with the Permian–Triassic boundary: a reevaluation with ocean general circulation models. Mar. Geol. 217, 283–304 (2005).
Uchikawa, J. & Zeebe, R. E. Influence of terrestrial weathering on ocean acidification and the next glacial inception. Geophys. Res. Lett. 35, L23608 (2008).
Penman, D. E. et al. An abyssal carbonate compensation depth overshoot in the aftermath of the Palaeocene-Eocene Thermal Maximum. Nat. Geosci. 9, 575–580 (2016).
Van Cappellen, P. & Ingall, E. D. Benthic phosphorus regeneration, net primary production, and ocean anoxia: a model of the coupled marine biogeochemical cycles of carbon and phosphorus. Paleoceanography 9, 677–692 (1994).
Reinhard, C. T. et al. Evolution of the global phosphorus cycle. Nature 541, 386–389 (2017).
Takahashi, S. et al. Bioessential element-depleted ocean following the euxinic maximum of the end-Permian mass extinction. Earth Planet. Sci. Lett. 393, 94–104 (2014).
Meyer, K. M., Ridgwell, A. & Payne, J. L. The influence of the biological pump on ocean chemistry: implications for long-term trends in marine redox chemistry, the global carbon cycle, and marine animal ecosystems. Geobiology 14, 207–219 (2016).
Reinhard, C. T., Planavsky, N. J., Olson, S. L., Lyons, T. W. & Erwin, D. H. Earth's oxygen cycle and the evolution of animal life. Proc. Natl. Acad. Sci. USA 113, 8933–8938 (2016).
Chen, J. B., Hintelmann, H. & Dimock, B. Chromatographic pre-concentration of Hg from dilute aqueous solutions for isotopic measurement by MC-ICP-MS. J. Anal. Atom. Spectrom. 25, 1402 (2010).
Huang, Q. et al. An improved dual-stage protocol to pre-concentrate mercury from airborne particles for precise isotopic measurement. J. Anal. Atom. Spectrom. 30, 957–966 (2015).
Blum, J. D. & Johnson, M. W. Recent developments in mercury stable isotope analysis. Rev. Mineral. Geochem. 82, 733–757 (2017).
Blum, J. D. & Bergquist, B. A. Reporting of variations in the natural isotopic composition of mercury. Anal. Bioanal. Chem. 388, 353–359 (2007).
Algeo, T. J. & Twitchett, R. J. Anomalous Early Triassic sediment fluxes due to elevated weathering rates and their biological consequences. Geology 38, 1023–1026 (2010).
Algeo, T. J., Chen, Z. Q., Fraiser, M. L. & Twitchett, R. J. Terrestrial–marine teleconnections in the collapse and rebuilding of Early Triassic marine ecosystems. Palaeogeogr. Palaeoclimatol. Palaeoecol. 308, 1–11 (2011).
Chen, Z. Q. & Benton, M. J. The timing and pattern of biotic recovery following the end-Permian mass extinction. Nat. Geosci. 5, 375–383 (2012).
Payne, J. L. et al. Large perturbations of the carbon cycle during recovery from the end-Permian extinction. Science 305, 506–509 (2004).
We thank Jinling Liu, Brad Erkkila, and Jonas Karosas for analytical support at the China University of Geosciences and Yale Analytical and Stable Isotope Center, respectively. This research was supported by Foundation for Innovative Research Groups of the National Natural Science Foundation of China (41821001), Natural Science Foundation of China (41602022, 41625012, U1301231, 41773112, 41473007, and 41572005), State Key R&D Project (2016YFA0601100), 111 Project (B08030), State Special Fund from Ministry of Science and Technology (2016ZX05060, 2017ZX05036002), the MOST Special Fund from the State Key Laboratory of Geological Processes and Mineral Resources, China University of Geosciences-Wuhan (MSFGPMR201702, MSFGPMR02, MSFGPMR201602), and the Fundamental Research Funds for the Central Universities, China University of Geosciences-Wuhan (CUG160625). Research by T.J.A. is supported by the China University of Geosciences-Wuhan (SKL-GPMR program GPMR201301 and SKL-BGEG program BGL21407). J.S. also gratefully acknowledges financial support from China Scholarship Council for funding to visit Yale University, as well as funding from Yale Institute for Biospheric Studies (YIBS) for mercury analysis. N.J.P. acknowledges the Packard Foundation. This work is a contribution to IGCP Projects 572 and 630.
State Key Laboratory of Geological Processes and Mineral Resources, China University of Geosciences, 430074, Wuhan, Hubei, China
Jun Shen, Thomas J. Algeo, Qinglai Feng & Lian Zhou
Department of Geology and Geophysics, Yale University, New Haven, CT, 06520-8109, USA
Jun Shen, Brennan O'Connell & Noah J. Planavsky
State Key Laboratory of Environmental Geochemistry, Institute of Geochemistry, Chinese Academy of Sciences, Guiyang, 550002, China
Jiubin Chen & Shengliu Yuan
Institute of Surface-Earth System Science, Tianjin University, 92 Weijin Road, 300072, Nankai, Tianjin, China
Jiubin Chen
State Key Laboratory of Biogeology and Environmental Geology, China University of Geosciences, 430074, Wuhan, Hubei, China
Thomas J. Algeo & Jianxin Yu
Department of Geology, University of Cincinnati, Cincinnati, OH, 45221-0013, USA
Thomas J. Algeo
J.S. and J.C. conceived the study and designed it with S.Y. and N.J.P. T.J.A. contributed samples; J.S. undertook field and laboratory work at Meishan D and Xiakou sections; J.C. and S.Y. analyzed and interpreted Hg concentrations and isotopes of Meishan D, Xiakou, and Gujo-Hachiman sections; J.S., T.J.A., and N.J.P. wrote the paper with significant input from Q.F., J.Y., L.Z., and B.O.C.
Correspondence to Jun Shen.
Journal peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work.
Shen, J., Chen, J., Algeo, T.J. et al. Evidence for a prolonged Permian–Triassic extinction interval from global marine mercury records. Nat Commun 10, 1563 (2019). https://doi.org/10.1038/s41467-019-09620-0
February 2012, 6(1): 69-94. doi: 10.3934/amc.2012.6.69
Singularities of symmetric hypersurfaces and Reed-Solomon codes
Antonio Cafure 1, Guillermo Matera 2 and Melina Privitelli 2
Instituto del Desarrollo Humano, Universidad Nacional de General Sarmiento, J.M. Gutiérrez 1150, Los Polvorines (B1613GSX) Buenos Aires, Argentina, and, Ciclo Básico Común, Universidad de Buenos Aires, Ciudad Universitaria, Pabellón III (1428) Buenos Aires, Argentina, and, National Council of Research and Technology (CONICET), Buenos Aires, Argentina
Instituto del Desarrollo Humano, Universidad Nacional de General Sarmiento, J.M. Gutiérrez 1150, Los Polvorines (B1613GSX) Buenos Aires, Argentina, and National Council of Research and Technology (CONICET), Buenos Aires, Argentina
Received: October 2010; Revised: December 2011; Published: January 2012
We determine conditions on $q$ for the nonexistence of deep holes of the standard Reed-Solomon code of dimension $k$ over $\mathbb F_q$ generated by polynomials of degree $k+d$. Our conditions rely on the existence of $q$-rational points with nonzero, pairwise-distinct coordinates of a certain family of hypersurfaces defined over $\mathbb F_q$. We show that the hypersurfaces under consideration are invariant under the action of the symmetric group of permutations of the coordinates. This allows us to obtain critical information concerning the singular locus of these hypersurfaces, from which the existence of $q$-rational points is established.
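For intuition, the deep-hole property can be verified by brute force on a toy instance of the standard Reed-Solomon code. The sketch below (parameters $q=7$ and $k=3$ are chosen only for tractability and are not taken from the paper) enumerates all codewords, i.e., evaluations over $\mathbb F_q$ of polynomials of degree at most $k-1$, and checks that a word generated by a polynomial of degree exactly $k$ attains the covering radius $q-k$; this is the classical deep-hole baseline for the degree-$k+d$ words considered in the paper.

```python
from itertools import product

q, k = 7, 3                       # toy parameters; the paper treats general q, k, d
points = range(q)                 # evaluation set of the standard code: all of F_q

def evaluate(coeffs):
    """Evaluate a polynomial (coeffs[i] multiplies x^i) at every point of F_q."""
    return tuple(sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
                 for x in points)

# All codewords: evaluations of the q^k polynomials of degree <= k - 1.
codewords = [evaluate(c) for c in product(range(q), repeat=k)]

def error_distance(word):
    """Hamming distance from `word` to the nearest codeword."""
    return min(sum(a != b for a, b in zip(word, cw)) for cw in codewords)

# Received word generated by the degree-k polynomial u(x) = x^k.
u = evaluate((0,) * k + (1,))
print(f"error distance of x^{k}: {error_distance(u)}; covering radius q - k = {q - k}")
```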
Keywords: Reed-Solomon codes, deep holes, finite fields, singular hypersurfaces, rational points, symmetric polynomials.
Mathematics Subject Classification: Primary: 11G25, 14G15; Secondary: 14G50, 05E0.
Citation: Antonio Cafure, Guillermo Matera, Melina Privitelli. Singularities of symmetric hypersurfaces and Reed-Solomon codes. Advances in Mathematics of Communications, 2012, 6 (1) : 69-94. doi: 10.3934/amc.2012.6.69
15-09-2020 | Production Process | Issue 5-6/2020 Open Access
3D modeling and simulation of thermal effects during profile grinding
Production Engineering > Issue 5-6/2020
C. Schieber, M. Hettig, M. F. Zaeh, C. Heinzel
\(A_c\): Total contact area between workpiece and wheel
\(a_e\): Depth of cut
\(c_c\): Specific heat capacity of the grinding wheel material
\(c_w\): Specific heat capacity of the workpiece material
\(d_s\): Grinding wheel diameter
\(d(x)\): Local grinding wheel diameter in x-direction
\(F_n\): Normal grinding force component
\(F_t\): Tangential grinding force component
\(F_{ti}\): Differential tangential force at \(l_i\)
\(h_{max}\): Maximum height of the V-groove circular segment
\(h(x)\): Local height of the V-groove circular segment in x-direction according to the variable radius
\(k_c\): Thermal conductivity of the grinding wheel abrasive layer
\(k_w\): Thermal conductivity of the workpiece material
\(l_g\): Maximal geometric contact length
\(l_g(x)\): Local geometric contact length in x-direction
\(P\): Total grinding power
\(P_i\): Power produced at the location \(l_i\)
\(q\): Total heat flux
\(q_w\): Heat flux transferred into the workpiece
\(\varDelta s\): Normal cutting depth
\(r\): Variable V-groove radius at the deepest point
\(v_c\): Cutting speed
\(v_{fluid}\): Fluid flow rate
\(v_{ft}\): Tangential feed speed
\(\alpha\): Angle of the side surface of the V-groove
\(\alpha(x)\): Local angle of the circular ground surface in the V-groove
\(\epsilon\): Heat partition ratio between heat flux into the workpiece and total heat flux
\(\rho_c\): Mass density of the grinding wheel abrasive layer
\(\rho_w\): Mass density of the workpiece material
\(Q'_w\): Specific material removal rate
\(q_{ti}\): Heat flux generated in segment \(l_i\)
\(l_i\): Segment i of the geometric contact length \(l_g\)
The machining process induces a change of the material properties in a defined layer. It can be characterized by a surface and subsurface effect on and below the newly created surface [ 1 ]. During the grinding process, the subsurface is significantly affected by external mechanical, thermal and chemical influences on the material. The mechanical and thermal influences cause a change due to the forces and temperatures occurring during grinding in the area of the inner boundary layer and can have a structural depth effect of up to several millimeters. This significantly changes the properties of the material's boundary layer, such as hardness, microstructure and residual stress [ 2 ].
During machining with geometrically undefined cutting edges, the material is first elastically and plastically deformed by the mechanical load up to chip shearing [ 3 ]. The mechanically induced plastic deformation can result in a hardening of the subsurface area and an induction of residual compressive stresses. This is due to the fact that grain cutting edges penetrating the material initially cause elastic deformation and then plastic deformation as the cutting edge engages, causing the material to be stretched parallel and orthogonal to the cutting direction and thus subjecting it to tensile load stress [ 4 ]. After the cutting edge engagement, the external load on the material decreases and the elastic deformations recede, so that residual compressive stresses result in the plastically deformed material.
The thermal load on the edge zone material during grinding depends essentially on how the heat generated in the process is distributed among the active partners (grinding wheel, workpiece, cooling lubricant and chips) and how it is partitioned in the contact zone. The heat distribution is influenced significantly by the grinding conditions, the abrasive, the material to be machined and the type and supply conditions of the cooling lubricant, resulting in a range of the conduction of the heat via the workpiece of 5–84% [ 5 ]. This can lead to high local temperatures in the surface and subsurface of the workpiece, which can cause changes in hardness and microstructure. The influence on the properties of the surface and subsurface regions is then manifested by tempering and new hardening zones (white layers), so that the residual stress state is influenced as a result of the volumetric changes in the material microstructure. In the case of an inhomogeneous distribution, this results in changes in dimension and shape of the workpiece [ 6 ]. In general, a distinction must be made between thermal and transformation stresses when thermally induced internal stresses are generated: If the material cools down from the workpiece core towards the surface, tensile residual stresses occur in the surface and subsurface areas, assuming no structural transformation is achieved [ 7 ]. When an external thermal load is applied, compressive load stresses are initially induced due to the volume expansion caused by the heat, and are then reduced by plastic compression of the material when the yield point is exceeded. During cooling, the material contracts and tensile residual stresses result in the plastically compressed area [ 8 ]. Both experimental and simulative observations of the heat generated during the profile grinding process are still a challenging issue in academic and industrial research. Up to now, mainly analyses and modeling of surface grinding processes have been conducted. However, the models cannot be simply transferred to profiled workpieces. In particular, linear guide rails with their V-profiles are often finished with a profiled grinding wheel to improve surface quality and geometric properties.
There is also an influence on the boundary zone that results in a distortion of the workpiece. Decisive influencing factors are the shape of the heat flux, the maximum temperature reached in the workpiece and the distribution of the temperature field [ 9 ]. The most critical point in describing these factors, especially with regard to modeling the process to predict distortions, is the exact local calculation of the contact length for different depths of cut, which are responsible for the specific energy absorptions and grinding heat flux in the grinding zone [ 10 ]. This paper focuses on the calculation of the contact length, on the construction of an analytical heat source model for the three-dimensional machining of the profiled workpieces and on the application possibilities for the prediction of the workpiece distortion. For this purpose, a newly developed model for the calculation of the local contact length is combined with the previously used theoretical models from Jaeger [ 11 ]. Experiments in the literature have shown that the heat flux distribution along the feed direction is triangular due to the proportional increase in the material removal rate [ 12 ]. These validated results regarding the heat flux were taken up and combined with the local variation of the contact length [ 13 ]. Furthermore, Guo and Malkin describe how a segmented energy distribution within the grinding zone can be calculated for face grinding [ 5 ]. This can also be transferred to a workpiece with a V-groove by modifying the described workpiece geometry.
Many of the existing research activities focus on the description of the heat flux distribution patterns of the grinding process for simple geometries, which can usually be reduced to two-dimensional simulation models [ 14 ]. The lack of three-dimensional models of profile grinding processes makes thermal analysis difficult, especially with regard to the occurrence of distortion. The following analytical results serve as a starting point for the implementation of such a grinding model.
Table 1: Composition (% weight) and phase change properties (critical cooling time between 1023 K and 723 K, austenitizing temperature) of AISI 4140 steel [ 15 ]
2 Experimental setup
2.1 Material and geometry
The finite element method (FEM) offers the possibility to map the initial state and the grinding process steps one after another by using several connected simulation studies. Within the scope of the research work described here, the simulation software COMSOL Multiphysics from COMSOL Inc. was used to implement the developed analytical model. Both the thermal and the mechanical influences can be simulated in a 3D model to compare the resulting deformations with the real material behavior. The steel 42CrMo4 (AISI 4140) with the dimensions 23 mm \(\times\) 38 mm \(\times\) 250 mm (h \(\times\) w \(\times\) l), which is frequently utilized in manufacturing linear guide rails, was used as the initial material. The composition and phase change properties are shown in Table 1. To obtain a reference free of residual stress, a soft-machined and heat-treated [QT 200 \(^\circ\)C (55HRC)] steel was used. A right-angled V-groove (h \(\times\) w = 8.67 mm \(\times\) 19 mm, 2 mm radius in the groove base) was ground centrally over the entire length of the workpiece. The workpiece was clamped on a magnetic clamping plate for the grinding process.
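As a quick consistency check of the stated groove geometry (a derived plausibility argument, not taken from the source, writing \(h_{groove}\) for the groove depth): a sharp right-angled V-groove of 19 mm width would be 19/2 = 9.5 mm deep, and a fillet of radius r = 2 mm tangent to both 45\(^\circ\) flanks raises the deepest point by \(r(\sqrt{2}-1)\), so that

$$\begin{aligned} h_{groove} = 9.5\,\mathrm{mm} - 2\,\mathrm{mm}\cdot (\sqrt{2}-1) \approx 9.5\,\mathrm{mm} - 0.83\,\mathrm{mm} \approx 8.67\,\mathrm{mm}, \end{aligned}$$

in agreement with the groove height quoted above.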
2.2 Mathematical heat flux model
Crucial for the modeling of the mechanical forces acting on the grinding wheel is the calculation of the stress applied to the surface for a two-dimensional, linear-elastic, isotropic material model. The third dimension can be disregarded due to the symmetry of the workpiece and the absence of deformations in x-direction. COMSOL Multiphysics allows the calculation of the solutions of the FE model with reference to the feed speed \(v_{ft}\) via a time-dependent study. Measurements can be used to determine the typical load values for the tangential component in z-direction and the normal component in y-direction.
The geometric contact length \(l_g\) (Eq. 1), over which the grinding wheel applies its mechanical load within the V-groove, is calculated from the following relation according to the Jaeger temperature model as used in FE models [ 16 ]:
$$\begin{aligned} l_g=\sqrt{a_e \cdot d_s} \end{aligned}$$ (1)
The variables \(a_e\) and \(d_s\) indicate the depth of cut and the diameter of the grinding wheel. This description serves only as an approximate initial assumption for a surface grinding process that can be represented two-dimensionally. Due to the V-groove for linear guide rails present here, the model must be extended. Both the local depth of cut perpendicular to the material and the local diameter of the grinding wheel decrease from the lowest point of the V-groove to the outermost point.
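For orientation, a worked example of Eq. 1 (using the maximum wheel diameter and the largest depth of cut quoted later in the text, \(d_s = 400\) mm and \(a_e = 0.8\) mm):

$$\begin{aligned} l_g=\sqrt{a_e \cdot d_s}=\sqrt{0.8\,\mathrm{mm}\cdot 400\,\mathrm{mm}}\approx 17.9\,\mathrm{mm}. \end{aligned}$$

This value only holds at the deepest point of the groove; towards the flanks both the normal depth of cut and the effective wheel diameter, and with them the contact length, decrease.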
Fig. 1: A representation of the variable depth of cut
Figure 1 shows a representation of the defined depths of cut and angles. The angle \(\alpha\) is the tilt of the side surfaces in the V-groove in relation to the horizontal surfaces. The location-dependent cutting depth perpendicular to the side surfaces is described by \(\varDelta s\). Due to the exact adaptation of the groove to the shape of the grinding wheel, which allows a simplified and at the same time more accurate simulation, the side surfaces and the groove base with radius r can be regarded separately in this geometry. For the following calculations, the coordinate origin lies exactly at the deepest point of the groove. The curved groove base extends in the x-direction from \(-r\cdot \sin {(\alpha )}\) to \(r\cdot \sin {(\alpha )}\). Based on simple transformations of the circle equation, there are two universally valid equations for calculating the depth of cut perpendicular to the surfaces within the V-groove:
$$\begin{aligned} \varDelta s(\alpha )=a_e \cdot \sin {(\alpha )}, \end{aligned}$$ (2)
for \(|x|>r \cdot \sin {(\alpha )}\) on the side surfaces and
$$\begin{aligned} \varDelta s(\alpha (x))=a_e \cdot \cos {\left( -\arctan {\left( \frac{x}{\sqrt{r^2-x^2}}\right) }\right) } \end{aligned}$$ (3)
for \(|x| \le r\cdot \sin {(\alpha )}\) at the vertex area.
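As a numerical illustration with the experimental values \(\alpha = 45^\circ\) and \(a_e = 0.8\) mm (a check derived from the equations above, not quoted from the source), Eq. 2 gives on the side surfaces

$$\begin{aligned} \varDelta s = 0.8\,\mathrm{mm}\cdot \sin {(45^\circ )} \approx 0.57\,\mathrm{mm}, \end{aligned}$$

which matches the normal cutting depth reported for this depth of cut in Sect. 4; at the deepest point of the groove (x = 0), Eq. 3 reduces to \(\varDelta s = a_e\).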
In addition to the depth of cut, the grinding wheel diameter is also relevant for calculating the location-dependent contact length. The geometrically determined reduction of the grinding wheel diameter can in turn be described by the height h of the circular segment in the vertex area, depending on the radius r and the length of the segment, and by the angle \(\alpha\) of the groove side surfaces (pictured in Fig. 2).
Fig. 2: Vertex area of the workpiece with defined reference coordinates in x- and y-direction and the circular segment heights h and \(h_{max}\)
Equations 4 and 5 provide the height h of the circular segment:
$$\begin{aligned} h_{max}=r \cdot (1-\cos (\alpha )) \end{aligned}$$ (4)
$$\begin{aligned} h(x)=r \cdot \left( 1-\cos {\left( \arcsin {\left( \frac{x}{r}\right) }\right) }\right) . \end{aligned}$$ (5)
Starting from the maximum grinding wheel diameter \(d_s\), the diameter can be reduced depending on the position x. The local diameter d(x) is described by Eqs. 6 and 7:
$$\begin{aligned} d(x)= d_s-2 \cdot h_{max}-2\cdot (x-r \cdot \sin {(\alpha )}) \cdot \tan {(\alpha )} \end{aligned}$$ (6)
for \(|x|>r \cdot \sin {(\alpha )}\) on the side surfaces with maximum circle segment height \(h_{max}\) and
$$\begin{aligned} d(x)= d_s-2\cdot r \cdot \left( 1-\cos {\left( \arcsin {\left( \frac{x}{r}\right) }\right) }\right) \end{aligned}$$ (7)
for \(|x| \le r\cdot \sin {(\alpha )}\) at the vertex area.
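A brief numerical check with the experimental values \(r = 2\) mm, \(\alpha = 45^\circ\) and \(d_s = 400\) mm (derived here for illustration): Eq. 4 yields \(h_{max} = 2\,\mathrm{mm}\cdot (1-\cos {45^\circ }) \approx 0.59\) mm, and at the transition point \(x = r\cdot \sin {(\alpha )} \approx 1.41\) mm both Eq. 6 and Eq. 7 give the same local diameter,

$$\begin{aligned} d(r\cdot \sin {(\alpha )}) = d_s - 2\cdot h_{max} \approx 400\,\mathrm{mm} - 1.17\,\mathrm{mm} \approx 398.8\,\mathrm{mm}, \end{aligned}$$

so the piecewise description is continuous across the two regions.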
By inserting Eqs. 2, 3, 6 and 7 into Eq. 1, one obtains the locally varying contact length \(l_g\) for the grinding wheel contact with the side surfaces and the groove base:
$$\begin{aligned} l_g(x)&=\sqrt{d_s-2\cdot h_{max}-2\cdot (x-r\cdot \sin {(\alpha )} )\cdot \tan {(\alpha )}} \\&\quad \cdot \sqrt{a_e\cdot \sin {(\alpha )}} \end{aligned}$$ (8)
on the side surfaces (\(|x|>r \cdot \sin {(\alpha )}\)) and
$$\begin{aligned} l_g(x)&=\sqrt{d_s-2\cdot r \cdot \left( 1-\cos {\left( \arcsin {\left( \frac{x}{r}\right) }\right) }\right) } \\&\quad \cdot \sqrt{a_e \cdot \cos {\left( -\arctan {\left( \frac{x}{\sqrt{r^2-x^2}}\right) }\right) }} \end{aligned}$$ (9)
at the vertex area (\(|x| \le r\cdot \sin {(\alpha )}\)).
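The piecewise relations in Eqs. 2–9 can be condensed into a short numerical sketch (Python/NumPy; a minimal illustration using the experimental values quoted in the surrounding text, r = 2 mm, d_s = 400 mm, \(\alpha = 45^\circ\) and \(a_e = 0.8\) mm; all variable and function names are chosen here for illustration and do not appear in the original work):

import numpy as np

# Parameter values quoted in the text (assumed here for illustration)
d_s, r, alpha, a_e = 400.0, 2.0, np.deg2rad(45.0), 0.8   # mm, mm, rad, mm

def normal_depth(x):
    """Normal cutting depth Delta_s(x) according to Eqs. 2 and 3."""
    x = abs(x)
    if x <= r * np.sin(alpha):          # vertex (groove base) area
        return a_e * np.cos(np.arctan(x / np.sqrt(r**2 - x**2)))
    return a_e * np.sin(alpha)          # side surfaces

def local_diameter(x):
    """Local grinding wheel diameter d(x) according to Eqs. 6 and 7."""
    x = abs(x)
    if x <= r * np.sin(alpha):
        return d_s - 2.0 * r * (1.0 - np.cos(np.arcsin(x / r)))
    h_max = r * (1.0 - np.cos(alpha))   # Eq. 4
    return d_s - 2.0 * h_max - 2.0 * (x - r * np.sin(alpha)) * np.tan(alpha)

def contact_length(x):
    """Local geometric contact length l_g(x) according to Eqs. 8 and 9."""
    return np.sqrt(normal_depth(x) * local_diameter(x))

for x in (0.0, 1.0, 2.0, 5.0, 9.5):     # positions across the half groove width in mm
    print(f"x = {x:4.1f} mm  ->  l_g = {contact_length(x):5.2f} mm")

At the deepest point this reproduces \(l_g(0) = \sqrt{a_e\cdot d_s} \approx 17.9\) mm and decreases towards the outer edge of the groove, in line with Fig. 3.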
These calculations are valid for analyses of the contact lengths during grinding for any side surface angles and groove base radii (Fig. 3). In the experiments, a radius of \(r = 2\) mm and a maximum grinding wheel diameter of \(d_s= 400\) mm were used for the angle \(\alpha = 45^\circ\). Based on this, further considerations can now be made to calculate the heat flux into the workpiece. Rowe describes the heat flux \(q_w\) as a function of the grinding power P and the contact area \(A_c\) [ 17 ]:
$$\begin{aligned} q_w=\frac{P}{A_c}. \end{aligned}$$ (10)
Fig. 3: The workpiece and the calculated contact length (z-direction) dependent on x for a depth of cut of \(a_e= 0.8\) mm
The total contact area \(A_c\) is the integral of the contact length \(l_g(x)\) over the width of the surface in the V-groove. Here it must be considered that the contact length, which is dependent on x, refers only to the length in x-direction and does not yet take the whole groove shape into account. The function of the contact length must therefore first be projected onto the length of the V-groove surfaces. After the projection, the resulting function can be integrated and provides the actual contact area. The total heat flux can then be calculated according to Eq. 10 for the corresponding depth of cut \(a_e\). The grinding forces measured by means of the force measurement platform, which was used during the experiments for each grinding process, are also necessary for the calculation of the grinding power P and thus for the total heat flux [ 18 ]. P is determined by
$$\begin{aligned} P=F_t(v_c\pm v_{ft}). \end{aligned}$$ (11)
Since the partition \(\epsilon\) of the total heat flux flowing into the workpiece is approximately constant for a grinding process with cooling lubricant in the pores of the grinding wheel, the following Eq. 12 can be used to determine the average energy partitioning [ 19 ]:
$$\begin{aligned} \epsilon =\left\{ 1+\left[ \frac{(k\rho c)_c v_c}{(k\rho c)_w v_{ft}}\right] ^{\frac{1}{2}}\right\} ^{-1}. \end{aligned}$$ (12)
In addition to the already defined velocities of the grinding wheel and the workpiece, the specific heat, thermal conductivity and density of the grinding wheel material (c) and the workpiece material (w) are used here. The average heat flux per unit area \(q_w=\frac{\epsilon P}{A_c}\) going into the workpiece is derived from Eqs. 11 and 12.
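Continuing the sketch above (it reuses np, r, alpha and contact_length defined there), the projection onto the groove surfaces, the total contact area and the average heat flux of Eqs. 10–12 can be illustrated as follows. Note that Eq. 12 yields \(\epsilon = 0.5\) when the effusivity-speed products of wheel and workpiece are equal. The half groove width, the force, the cutting speed and the heat partition value used below are placeholder assumptions for illustration only, not values from Tables 2 and 3:

from scipy.integrate import quad

def surface_stretch(x):
    """Projection factor from the x-axis onto the actual groove surface."""
    x = abs(x)
    if x <= r * np.sin(alpha):
        return r / np.sqrt(r**2 - x**2)   # circular groove base
    return 1.0 / np.cos(alpha)            # inclined side surfaces

half_width = 9.5                          # assumed half groove width in mm
A_c, _ = quad(lambda x: contact_length(x) * surface_stretch(x),
              -half_width, half_width, limit=200)   # total contact area in mm^2

F_t  = 50.0                               # placeholder tangential force in N
v_c  = 35.0e3                             # placeholder cutting speed in mm/s
v_ft = 1500.0 / 60.0                      # feed speed of 1500 mm/min in mm/s
P    = F_t * (v_c + v_ft) / 1.0e3         # grinding power in W, Eq. 11
eps  = 0.5                                # placeholder heat partition ratio, Eq. 12
q_w  = eps * P / A_c                      # average heat flux into the workpiece in W/mm^2
print(f"A_c = {A_c:.1f} mm^2, P = {P:.0f} W, q_w = {q_w:.2f} W/mm^2")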
Table 2: The measured grinding forces from the experiments and the calculated heat flux into the workpiece, listing for each depth of cut \(a_e\) (mm) the tangential force \(F_t\) (N), the normal force \(F_n\) (N), the workpiece feed speed \(v_{ft}\) (mm/min), the grinding power P (W) and the contact area \(A_c\) (mm\(^2\))
Table 2 shows the measured forces, the grinding power and the calculated contact areas as a function of the depth of cut and the parameters used in Table 3.
Table 3: Technical specifications and material properties of the wet grinding experiments of a \(90^\circ\)-V-profiled workpiece with a Baystate A60H16VCF2 grinding wheel, comprising the maximum wheel diameter \(d_{max}\) (mm), the cutting speed \(v_c\), the fluid flow rate \(v_{fluid}\) (l/min), the workpiece properties \(c_w\) (J/kgK), \(k_w\) (W/mK) and \(\rho _w\) (kg/m\(^3\)), the wheel properties \(c_c\) (J/kgK), \(k_c\) (W/mK) and \(\rho _c\) (kg/m\(^3\)), and the depths of cut \(a_e\) = 0.2, 0.4, 0.6, 0.7 and 0.8 mm
Due to the given shape of the workpiece, the total heat flux cannot be distributed evenly over the contact surface. To ensure that the subsequent heat flux simulation using FEM is successful, the heat source model from Jiang et al. was used [ 20 ]. Their model describes the division of the contact surface along the z-direction into n segments of the same length and calculates the corresponding heat flux for each of these segments. In this way, the power of segment i of the contact area, with \(0\le i \le n\), is calculated as:
$$\begin{aligned} P_i=F_{ti}\cdot v_c. \end{aligned}$$ (13)
Fig. 4: The sketch of the meshed FE model, the triangular heat flux and the segmentation along the z-direction normal to the grinding surfaces within the V-groove
Thus, the heat flux, which is generated in the segment \(l_i\), is calculated according to Jiang et al. [ 20 ] using
$$\begin{aligned} q_{ti}=\frac{P_i}{bl_g (x)/i}. \end{aligned}$$ (14)
It is assumed that the model of Guo et al. provides the average energy partitioning together with its proven triangular heat flux profile (pictured in Fig. 4). This model, previously formulated only in two dimensions along the feed direction (z-direction) for a constant contact length \(l_g\), was transferred to the profiled component by segmenting the whole contact area, in particular the locally varying contact length \(l_g(x)\), within the V-groove; this segmented heat source was then transferred to the simulation as described in Sect. 3.
3 Numerical simulation
COMSOL Multiphysics offers the possibility to simulate the initial state and the grinding process. The analytical model was transferred to an FE simulation and simulated with reference to all relevant parameters. For this purpose, it was necessary not only to determine the parameters used, such as feed speed, cutting depth with corresponding specific material removal rates \(Q'_w\) (Fig. 5), cutting speed and coolant flow, but also to measure other occurring parameters such as the forces and the power during the experiments. By means of the measurements of a force-measuring platform, on which the workpiece rests during grinding, typical load values for the tangential components and the normal component in y- and z-direction can be determined.
Fig. 5: Depths of cut with corresponding specific material removal rates \(Q'_w\) for a tangential feed speed of 1500 mm/min
The starting material is AISI 4140 and consists of pure, low-stress martensite at the beginning of the simulation. This is in accordance with the experiment. Due to the wide temperature range that occurs during grinding, an exact representation of the resulting microstructural transformations is necessary. In parts of the workpiece, the austenitizing temperature of 723 \(^\circ\)C is exceeded. The result is a microstructural transformation within the upper material layers. Diffusion-controlled transformations are produced, for example from austenite to bainite, which can be mapped by the Johnson–Mehl–Avrami algorithm [ 21 ]. Due to the rapid cooling caused by the cooling lubricant, the formation of a layer by means of the non-diffusion controlled lattice shear is dominant at the surface. Therefore, the Koistinen–Marburger algorithm can be used for the determination of the temperature- and microstructure-dependent material properties [ 22 ]. For the calibration of these material-specific transformation algorithms, time-temperature transformation (TTT), continuous cooling transformation (CCT) and time-temperature austenitization (TTA) diagrams are used [ 23 ]. Based on these and the percentage calculations of the microstructures, the relevant 42CrMo4 material parameters for the FE simulation can be calculated. For this purpose, the temperature gradients in the material determined from heating and cooling phases must be taken into account. This procedure allows a detailed analysis of the regions and microstructure changes critical for the distortion formation.
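For reference, the standard textbook forms of these two relations are reproduced here with generic coefficients k, n, \(\alpha _{KM}\) and martensite start temperature \(M_s\) (the specific coefficients used in the simulation are not given in this text): the Johnson–Mehl–Avrami relation for the diffusion-controlled transformed phase fraction and the Koistinen–Marburger relation for the martensite fraction,

$$\begin{aligned} X(t) = 1 - \exp {\left( -k\,t^{\,n}\right) }, \qquad f_M(T) = 1 - \exp {\left( -\alpha _{KM}\,(M_s - T)\right) }. \end{aligned}$$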
Apart from the phase transformations, which were imported via the COMSOL material interface, the symmetry conditions were exploited before meshing. For this purpose, a symmetry surface was defined as a boundary condition after dividing the workpiece geometry along the deepest point of the V-groove in z-direction, using both the 'Structural Mechanics Module' and the 'Heat Transfer Module'. Points at the upper end edges were defined as fixed constraints to simulate the unclamping from the magnetic clamping plate after the grinding process and to ensure free deformation. Essential for the deformation is the definition of a plasticity model that is accurate for the material. This model was added as an additional module to the 'Structural Mechanics Module'. Following assumptions proven in the literature, 'small plastic strains' with a von Mises yield function were used. For the simplified linear isotropic hardening model, the values for the initial yield stress as well as for the isotropic tangent modulus (about 190 GPa for high-strength tempered steel) were taken from the material database and from the experimental results of Ellermann [ 24 ]. The defined linear-elastic isotropic material together with the modulus of elasticity and the coefficient of thermal expansion provides the basis for the solid mechanics. With \(\tau _{xy}\) as the shear stress, the maximum and minimum stresses in the x-y-plane are given by \(\sigma _{max,min}=\frac{\sigma _x+\sigma _y}{2}\pm \sqrt{\left( \frac{\sigma _x-\sigma _y}{2}\right) ^2+\tau _{xy}^2}\), where \(\sigma _x\) denotes the stress in x-direction and \(\sigma _y\) the stress in y-direction [ 25 ].
Besides the 'Structural Mechanics Module', the 'Heat Transfer in Solids Module' was linked via the multiphysics function. The model for a 3D heat source, which was analytically derived in the previous chapters, was calculated from the force and power measurements of the real grinding experiments before its implementation in COMSOL. The energy partitioning was then performed by building a triangular analytical ramp function that moves along the z-direction through the V-groove as a function of time, thus mapping the thermal process of grinding for constant feed and cutting speeds. The model, which has been extended here to three dimensions, enables a more detailed distortion analysis for profiled components. Up to now, grinding processes have only been reduced to 2D models, allowing an approximation of the workpiece distortions. The calculated heat flow for the used depths of cut enters the workpiece as a time-dependent general inward heat flux. The share of the heat flux entering the workpiece was determined proportionally via \(\epsilon\). In addition to the heat flux conditions, further boundary conditions caused by the use of the cooling lubricant are also important. Parallel to the pure heat input generated by the deformation and friction on the surface, the cooling effect of the coolant is taken into account by defining a heat transfer coefficient, based on the formula of Hadad, of \(h = 24 \cdot 10^3\) \(\frac{{\text{ W }}}{{\text{ m }}^2{\text{ K }}}\) on the side surfaces and of \(h = 6 \cdot 10^3\) \(\frac{{\text{ W }}}{{\text{ m }}^2{\text{ K }}}\) behind the grinding wheel within the V-groove [ 26 ]. An overview of the simulation properties used can be found in Fig. 6.
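The moving triangular ramp can be written compactly. The following minimal sketch (same Python convention as the sketches above; whether the peak lies at the leading or the trailing edge of the contact zone is a modeling choice that is not spelled out here, so the orientation below is an assumption) illustrates a heat flux profile of length l_c travelling along z at the feed speed:

def triangular_flux(z, t, q_peak, l_c, v_ft):
    """Triangular heat flux of contact length l_c moving along z at feed speed v_ft.
    The flux rises linearly from zero at the leading edge to q_peak at the trailing edge."""
    xi = v_ft * t - z          # distance behind the current leading edge
    if 0.0 <= xi <= l_c:
        return q_peak * xi / l_c
    return 0.0

In the simulation the local contact length l_g(x) takes the place of l_c for each x position across the groove.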
Meshing was performed using tetrahedral elements. The tetrahedron density is very fine in the contact zone, i.e. within the V-groove, and coarser on the side surfaces in order to save computing time and to achieve a good element ratio. After meshing and modeling of the solid mechanics and the heat flow, the PARDISO solver was used via the time-dependent study function, which allows COMSOL to display the corresponding temperature and stress distribution at each time step.
Fig. 6: The defined conditions of the numerical simulation of the grinding process
4 Results and discussion
4.1 Shape measurements
Fig. 7: Comparison of the measured and the simulated distortions for specific material removal rates \(Q'_w\) defined in Fig. 5 with a tangential feed speed of 1500 mm/min
To demonstrate the validity of the developed analytical model, the results of the FE simulation were compared with experimental data. For this purpose, the workpiece was measured once before and once after the grinding process. In each case, 13 measuring points were defined at 20 mm intervals on the underside of the workpiece on two lines along the entire length of the workpiece. These lines are each located 4 mm from the side surfaces. A coordinate measuring machine with a precision error of about 1–3 \(\upmu {\text{ m }}\) scanned the position of these defined measuring points. Three measurements were made for each parameter set, which allowed averaging over both the three samples and the two defined measurement lines. Thus, the measurements before and after the grinding process resulted in an averaged peak-to-valley difference \(\varDelta\)PV. This difference describes the distortion caused by the tensile stresses introduced due to the heat input during grinding [ 27 ]. Due to the short minimum distance of 9.5 mm from the V-groove flank to the vertical side surface of the workpiece and the strong flow of the cooling lubricant, the integration of thermocouples was not possible. These would have led to a considerable falsification of the temperature and distortion measurements, since the relevant tensile stress induction and the heat flux occur close to the V-groove surface. The inclusion of thermocouples would have caused thermal scattering and thereby an unintended disturbance of the symmetry. As a consequence, further tests were aimed exclusively at distortion analysis, distortion prediction and possible distortion compensation of profile-ground workpieces. By using the calculated analytical heat flux model for a variation of different depths of cut, as already shown in Table 2, this distortion can be simulated according to Sect. 3 based on COMSOL Multiphysics.
4.2 Validation of the simulation approach
Fig. 8: Temperature simulated with the new analytical heat flux distribution model for a typical profile grinding experiment with a depth of cut of \(a_e=0.8\) mm
Figure 7 reveals both the experimental and simulated \(\varDelta\)PV values. The green bars show the averaged experimental distortions together with the errors from averaging over three tests and two measurement lines. The blue bars represent the distortion values obtained from the simulation using the presented analytical model. For a better representation of the uncertainties of the simulations and the experiments, the relative error values were also plotted. For an error estimation between numerical and experimental results, Al-Hashimi et al. describe suitable methodologies [ 28 ]. The simulated and measured data on form deviation agree well. All simulated results for the distortion are within an acceptable error range of the experimental data. If the depth of cut is raised, the form deviation increases. This is due to the increase in grinding power or the location-dependent contact length and thus an increase in heat flux into the workpiece. This leads to an increment in the induced tensile stresses on the workpiece surface. It is noticeable that for cutting depths higher than 0.6 mm the simulated distortions are lower than those measured experimentally. For example, at \(a_e=0.7\) mm, an experimentally determined distortion of 195 \(\upmu\)m was measured, whereas the simulation shows a distortion value of 185 \(\upmu\)m. This corresponds to a difference of 10 \(\upmu\)m and can be explained by the fact that the FE analysis considered mainly the thermal influences during the process. These have the greatest influence on the material structure at low cutting depths. At higher cutting depths, on the other hand, the mechanical load increases directly in the grinding zone, resulting in less distortion in the simulated results. Due to the additionally increased mechanical interactions between the abrasive grains and the workpiece at higher depths of cut, locally limited plastic flow leads to increased residual compressive stresses. In addition, with the thermal compressive stresses generated near the surface, an increased plastic flow under pressure is caused, which is partly restricted by thermal expansion of hotter material closer to the surface and partly by cooler material below the surface. After the subsequent cooling, strong tensile residual stresses occur at the highly compressed surface, increased by the mechanical load of the abrasive grains. These are further increased by the metallurgical transformations occurring during the heating and cooling cycle [ 29 ]. The lower distortion in the simulation is also due to the fact that the plate is elastically bent in the finite element model due to the specific source force. The source force perpendicular to the cross-section of the plastically deformed surface layer, as a measure of the force per unit width of the profiled workpiece, allows predicting the shape deviations of hardened workpieces with simplified geometry. Based on the Reissner theory, source stresses and source forces can be used to determine proportional form deviations which occur during grinding as input for a simple linear-elastic finite element simulation of complex workpieces [ 30 ]. The cross-section is deformed proportionally to the absolute value of the specific source force. On the other hand, the calculation of the specific source force from the beam theory confirms the measured peak-to-valley values. According to the beam theory, the mechanical energy is stored by the bending deformation.
Less stored energy due to the bending deformation results in less deformation in the simulation. Because of this, the difference becomes larger with increasing source forces. However, the magnitude of the distortion is well predicted with the implemented heat source model. In addition to the direct distortion data, the thermomechanical simulation also provides the temperature distribution at a certain point in time during the simulation. This temperature distribution is specific to the developed analytical 3D heat flux model. On the one hand, it can be read off for any point within the material whether the critical cooling time has been exceeded; on the other hand, it can also be determined whether the austenitization temperature has been exceeded at a certain depth in the workpiece.
The maximum temperature reached on the V-groove surface of \(T_{max}=729\) \(^\circ\)C for a cutting depth of \(a_e=0.8\) mm can be seen in Fig. 8. This corresponds to a normal cutting depth of \(\varDelta s=0.57\) mm. Measurements were taken at different, evenly spaced depths below a point on a V-groove side flank. Looking at the shapes of the temperature curves, they are almost identical to the ones obtained by Jiang et al. [ 20 ]. The comparison can be made to that extent since the grinding of the side flanks of the V-groove is equivalent to a plane grinding with a smaller cutting depth. First, the temperature remains constant at room temperature until the grinding wheel passes over the measuring point and the maximum temperature is reached directly in the contact zone. The material subsequently cools down much faster directly on the surface than in the interior. This is caused by the massive circulation of the cooling lubricant around the workpiece. Estimations can also be made from the simulations for the temperature curves shown here. In contrast to simulations with a uniformly distributed heat source, a triangular heat source can, according to the literature and previous research results, be used to simulate much more precise temperature curves over time, which more accurately represent the real process [ 31 ]. For this reason, errors of under 6% are also obtained for the temperature curves computed here with the simulated triangular heat source described in the previous chapters. The error is mainly influenced by the existing geometry. The simulation accuracy of the model suffers from non-planar surfaces of the workpiece. However, with the model of variable contact length presented here, the error, which depends proportionally on the square root of the contact length, has been reduced.
5 Conclusion
This paper presents the research results of an extended three-dimensional heat source model and a thermometallurgical FE simulation. Both lead to a better understanding, modeling and prediction of distortions during profile grinding of long steel components. Decisive for the distortions are the mechanical loads on the microstructure and the heat flux on the uppermost material layer. The more precise analysis of the V-groove geometry with regard to final finishing processes of linear guide rails made it possible to extend the surface grinding models by calculating the local contact lengths for different depths of cut. With the analytical model, it is now possible to build a thermal model for V-grooves with any flank angle, replacing the \(45^\circ\) used here. Based on these models, the workpiece distortions can be simulated more accurately. The comparison of the experiments with the results of the distortion simulation based on the new heat flux model and the phase transformation effects has shown that accurate distortion predictions can be made. A precise analysis of the stresses and the extended parameter variations will now form the basis for a subsequent more detailed distortion analysis and for distortion compensation strategies by externally forced introduction of the compressive and tensile residual stresses. If relevant parameters and values are compared with the simulations by means of the experimental validation, detailed statements can be made about temperature curves and stresses. Possible adjustments can be made and investigated in the model. In addition, the experimental determination of the tangential contact load in the contact zone offers the possibility to draw conclusions about the heat flux density distribution. By assembling simulated partial models based on thermal and mechanical pre- or post-treatments, the complete manufacturing process consisting of the distortion and the distortion compensation will be represented in the future work of this research project.
The authors would like to thank the German Research Foundation (DFG) for funding the project HE 3276/7-1—ZA 288/64-1: "Distortion Engineering during Grinding by computer-aided Design of Distortion Compensation Strategies". The results presented in this paper were gained in the course of this project.
References
1. Huapan X, Zhi C, Hairong W, Jiuhong W, Nan Z (2018) Effect of grinding parameters on surface roughness and subsurface damage and their evaluation in fused silica. Opt Express 26(4):4638–4655
2. Jinyu Z, Cheng WG, Jie PH (2017) Effects of grinding parameters on residual stress of 42CrMo steel surface layer in grind-hardening. In: Proceedings of the International Symposium on Mechanical Engineering and Material Science (ISMEMS 2017), Paris, France, 17–19 November 2017. Atlantis Press
3. Bin Z, Song Z, Jianfeng L (2019) Influence of surface functional parameters on friction behavior and elastic-plastic deformation of grinding surface in mixed lubrication state. Proc Inst Mech Eng Part J J Eng Tribol 233(6):870–883
4. Mohammad R, Pascal M (2020) Investigation on surface integrity of steel DIN 100Cr6 by grinding using CBN tool. Procedia CIRP 87:192–197
5. Changsheng G, Stephen M (2000) Energy partition and cooling during grinding. J Manufact Process 2(3):151–157
6. George ET, Maurice AHH, Tatsuo I (2002) Handbook of residual stress and deformation of steel. ASM International, Materials Park
7. Zishan D, Gaoxiang S, Miaoxian G, Xiaohui J, Beizhi L, Steven YL (2020) Effect of phase transition on micro-grinding-induced residual stress. J Mater Process Technol 281:116647
8. Haifa S, Hédi H (2012) A new analytical model of heat flux distribution to compute residual stresses in the case of external cylindrical grinding process. Adv Mater Res 445:125–130
9. Li B, Zhu D, Pang J, Yang J (2011) Quadratic curve heat flux distribution model in the grinding zone. Int J Adv Manufact Technol 54(9–12):931–940
10. Kim NK, Guo C, Malkin S (1997) Heat flux distribution and energy partition in creep-feed grinding. CIRP Ann 46(1):227–232
11. Jaeger JC (1942) Moving sources of heat and the temperature of sliding contacts. Proc R Soc New South Wales 76:203–224
12. Jin T, Yi J, Li P (2017) Temperature distributions in form grinding of involute gears. Int J Adv Manufact Technol 88(9–12):2609–2620
13. Kim H-J, Kim N-K, Kwak J-S (2006) Heat flux distribution model by sequential algorithm of inverse heat transfer for determining workpiece temperature in creep feed grinding. Int J Mach Tools Manufact 46(15):2086–2093
14. Kountanya R, Guo C (2018) Force and temperature modeling in 5-axis grinding. Procedia Manufact 26:521–529
15. Marks LS, Baumeister T (1978) Marks' standard handbook for mechanical engineers, 8th edn. McGraw-Hill, New York
16. Mamalis AG, Manolakos DE, Markopoulos AP, Kundrák J, Gyáni K (2003) Thermal modelling of surface grinding using implicit finite element techniques. Int J Adv Manufact Technol 21(12):929–934
17. Rowe WB, Morgan MN, Batako A, Jin T (2003) Energy and temperature analysis in grinding. In: Ford DK (ed) Laser metrology and machine performance VI, WIT Transactions on Engineering Sciences, vol 44. WIT Press, pp 3–24
18. Foeckerer T, Zaeh MF, Zhang OB (2013) A three-dimensional analytical model to predict the thermo-metallurgical effects within the surface layer during grinding and grind-hardening. Int J Heat Mass Transf 56(1–2):223–237
19. Guo C, Malkin S (1995) Analysis of energy partition in grinding. J Eng Ind 117(1):55
20. Jiang J, Ge P, Sun S, Wang D, Wang Y, Yang Y (2016) From the microscopic interaction mechanism to the grinding temperature field: an integrated modelling on the grinding process. Int J Mach Tools Manufact 110:27–42
21. Avrami M (1941) Kinetics of phase change. III: Granulation, phase change, and microstructure. J Chem Phys 9(2):177–184
22. Koistinen DP, Marburger RE (1959) A general equation prescribing the extent of the austenite-martensite transformation in pure iron-carbon alloys and plain carbon steels. Acta Metallurgica 7:59–60
23. American Society for Metals (1977) Atlas of isothermal transformation and cooling transformation diagrams. ASM, Metals Park, Ohio
24. Ellermann A (2013) The Bauschinger effect in quenched and tempered, bainitic and normalized states of the steels 42CrMoS4 and 100Cr6 (German), vol 15. Kassel University Press, Kassel
25. Hosford WF (2010) Mechanical behavior of materials, vol 2. Cambridge University Press, Cambridge
26. Hadad M, Sadeghi B (2012) Thermal analysis of minimum quantity lubrication-MQL grinding process. Int J Mach Tools Manufact 63:1–15
27. Borchers F, Meyer H, Heinzel C, Meyer D, Epp J (2020) Development of surface residual stress and surface state of 42CrMo4 in multistage grinding. Procedia CIRP 87:198–203
28. Al-Hashimi SAM, Madhloom HM, Khalaf RM, Nahi TN, Al-Ansari NA (2017) Flow over broad crested weirs: comparison of 2D and 3D models. J Civ Eng Arch 11(8). https://doi.org/10.17265/1934-7359/2017.08.005
29. Malkin S, Guo C (2007) Thermal analysis of grinding. CIRP Ann 56(2):760–782
30. Brinksmeier E, Sölter J (2009) Prediction of shape deviations in machining. CIRP Ann 58(1):507–510
31. Rowe WB (2001) Temperature case studies in grinding including an inclined heat source model. Proc Inst Mech Eng Part B J Eng Manufact 215(4):473–491
Mathematical Biosciences and Engineering, 2006, 3(1): 189-204. doi: 10.3934/mbe.2006.3.189.
70K20, 76E30, 34D20, 37B25, 70K15, 93D30, 92A15, 35K57.
A nonlinear $L^2$-stability analysis for two-species population dynamics with dispersal
Salvatore Rionero
1. University of Naples Federico II, Department of Mathematics and Applications ''R. Caccioppoli", Complesso Universitario Monte S. Angelo. Via Cinzia, 80126 Napoli
Abstract
The nonlinear $L^2$-stability (instability) of the equilibrium states of two-species population dynamics with dispersal is studied. The obtained results are based on (i) the rigorous reduction of the $L^2$-nonlinear stability to the stability of the zero solution of a linear binary system of ODEs and (ii) the introduction of a particular Liapunov functional V such that the sign of $\frac{dV}{dt}$ along the solutions is linked directly to the eigenvalues of the linear problem.
Keywords: Liapunov direct method; nonlinear stability; reaction diffusion equations; two-species population dynamics
Citation: Salvatore Rionero. A nonlinear $L^2$-stability analysis for two-species population dynamics with dispersal. Mathematical Biosciences and Engineering, 2006, 3(1): 189-204. doi: 10.3934/mbe.2006.3.189
Copyright Info: 2006, Salvatore Rionero, licensee AIMS Press. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0)
Recreational cannabis legalization has had limited effects on a wide range of adult psychiatric and psychosocial outcomes
Stephanie M. Zellers, J. Megan Ross, Gretchen R. B. Saunders, Jarrod M. Ellingson, Tasha Walvig, Jacob E. Anderson, Robin P. Corley, William Iacono, John K. Hewitt, Christian J. Hopfer, Matt K. McGue, Scott Vrieze
Journal: Psychological Medicine, First View
Published online by Cambridge University Press: 05 January 2023, pp. 1-10
The causal impacts of recreational cannabis legalization are not well understood due to the number of potential confounds. We sought to quantify possible causal effects of recreational cannabis legalization on substance use, substance use disorder, and psychosocial functioning, and whether vulnerable individuals are more susceptible to the effects of cannabis legalization than others.
We used a longitudinal, co-twin control design in 4043 twins (N = 240 pairs discordant on residence), first assessed in adolescence and now age 24–49, currently residing in states with different cannabis policies (40% resided in a recreationally legal state). We tested the effect of legalization on outcomes of interest and whether legalization interacts with established vulnerability factors (age, sex, or externalizing psychopathology).
In the co-twin control design accounting for earlier cannabis frequency and alcohol use disorder (AUD) symptoms respectively, the twin living in a recreational state used cannabis on average more often (βw = 0.11, p = 1.3 × 10⁻³), and had fewer AUD symptoms (βw = −0.11, p = 6.7 × 10⁻³) than their co-twin living in a non-recreational state. Cannabis legalization was associated with no other adverse outcome in the co-twin design, including cannabis use disorder. No risk factor significantly interacted with legalization status to predict any outcome.
Recreational legalization was associated with increased cannabis use and decreased AUD symptoms but was not associated with other maladaptations. These effects were maintained within twin pairs discordant for residence. Moreover, vulnerabilities to cannabis use were not exacerbated by the legal cannabis environment. Future research may investigate causal links between cannabis consumption and outcomes.
A COMPARISON OF EXPLICIT RUNGE–KUTTA METHODS
Numerical analysis: Ordinary differential equations
STEPHEN J. WALTERS, ROSS J. TURNER, LAWRENCE K. FORBES
Journal: The ANZIAM Journal, First View
Published online by Cambridge University Press: 19 October 2022, pp. 1-23
Recent higher-order explicit Runge–Kutta methods are compared with the classic fourth-order (RK4) method in long-term integration of both energy-conserving and lossy systems. By comparing quantity of function evaluations against accuracy for systems with and without known solutions, optimal methods are proposed. For a conservative system, we consider positional accuracy for Newtonian systems of two or three bodies and total angular momentum for a simplified Solar System model, over moderate astronomical timescales (tens of millions of years). For a nonconservative system, we investigate a relativistic two-body problem with gravitational wave emission. We find that methods of tenth and twelfth order consistently outperform lower-order methods for the systems considered here.
Using machine learning with intensive longitudinal data to predict depression and suicidal ideation among medical interns over time
Adam G. Horwitz, Shane D. Kentopp, Jennifer Cleary, Katherine Ross, Zhenke Wu, Srijan Sen, Ewa K. Czyz
Published online by Cambridge University Press: 30 September 2022, pp. 1-8
Use of intensive longitudinal methods (e.g. ecological momentary assessment, passive sensing) and machine learning (ML) models to predict risk for depression and suicide has increased in recent years. However, these studies often vary considerably in length, ML methods used, and sources of data. The present study examined predictive accuracy for depression and suicidal ideation (SI) as a function of time, comparing different combinations of ML methods and data sources.
Participants were 2459 first-year training physicians (55.1% female; 52.5% White) who were provided with Fitbit wearable devices and assessed daily for mood. Linear [elastic net regression (ENR)] and non-linear (random forest) ML algorithms were used to predict depression and SI at the first-quarter follow-up assessment, using two sets of variables (daily mood features only, daily mood features + passive-sensing features). To assess accuracy over time, models were estimated iteratively for each of the first 92 days of internship, using data available up to that point in time.
ENRs using only the daily mood features generally had the best accuracy for predicting mental health outcomes, and predictive accuracy within 1 standard error of the full 92 day models was attained by weeks 7–8. Depression at 92 days could be predicted accurately (area under the curve >0.70) after only 14 days of data collection.
Simpler ML methods may outperform more complex methods until passive-sensing features become better specified. For intensive longitudinal studies, there may be limited predictive value in collecting data for more than 2 months.
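A rough sketch of the iterative prediction scheme described above: an elastic net is refit on the mood features available up to each day and scored against the end-of-quarter outcome. The simulated data, feature summaries and threshold are invented for illustration, scikit-learn is assumed to be available, and the authors' actual pipeline will differ.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_interns, n_days = 500, 92

# Toy daily mood scores and a binary "depression at day 92" outcome (simulated, not real data).
daily_mood = rng.normal(6.0, 1.5, size=(n_interns, n_days))
outcome = (daily_mood[:, -30:].mean(axis=1) + rng.normal(0, 1, n_interns) < 5.5).astype(int)

for day in range(7, n_days + 1, 7):  # re-estimate weekly instead of daily, for brevity
    # Features: summary statistics of the mood observed up to this day only.
    X = np.column_stack([
        daily_mood[:, :day].mean(axis=1),
        daily_mood[:, :day].std(axis=1),
        daily_mood[:, :day].min(axis=1),
    ])
    X_tr, X_te, y_tr, y_te = train_test_split(X, outcome, test_size=0.3, random_state=0)
    model = ElasticNet(alpha=0.01, l1_ratio=0.5).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict(X_te))
    print(f"data through day {day:2d}: AUC = {auc:.3f}")
```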
GaLactic and Extragalactic All-sky Murchison Widefield Array survey eXtended (GLEAM-X) I: Survey description and initial data release
Murchison Widefield Array
N. Hurley-Walker, T. J. Galvin, S. W. Duchesne, X. Zhang, J. Morgan, P. J. Hancock, T. An, T. M. O. Franzen, G. Heald, K. Ross, T. Vernstrom, G. E. Anderson, B. M. Gaensler, M. Johnston-Hollitt, D. L. Kaplan, C. J. Riseley, S. J. Tingay, M. Walker
Journal: Publications of the Astronomical Society of Australia / Volume 39 / 2022
Published online by Cambridge University Press: 23 August 2022, e035
We describe a new low-frequency wideband radio survey of the southern sky. Observations covering 72–231 MHz and Declinations south of $+30^\circ$ have been performed with the Murchison Widefield Array "extended" Phase II configuration over 2018–2020 and will be processed to form data products including continuum and polarisation images and mosaics, multi-frequency catalogues, transient search data, and ionospheric measurements. From a pilot field described in this work, we publish an initial data release covering 1,447 $\mathrm{deg}^2$ over $4\,\mathrm{h}\leq \mathrm{RA}\leq 13\,\mathrm{h}$ , $-32.7^\circ \leq \mathrm{Dec} \leq -20.7^\circ$ . We process twenty frequency bands sampling 72–231 MHz, with a resolution of 2′–45′′, and produce a wideband source-finding image across 170–231 MHz with a root mean square noise of $1.27\pm0.15\,\mathrm{mJy\,beam}^{-1}$ . Source-finding yields 78,967 components, of which 71,320 are fitted spectrally. The catalogue has a completeness of 98% at ${{\sim}}50\,\mathrm{mJy}$ , and a reliability of 98.2% at $5\sigma$ rising to 99.7% at $7\sigma$ . A catalogue is available from Vizier; images are made available via the PASA datastore, AAO Data Central, and SkyView. This is the first in a series of data releases from the GLEAM-X survey.
102 Characterization of Hub and Spoke Facilities for Study of Surgical Care within United States Health Systems
JCTS 2022 Abstract Collection
Kristy K. Broman, Elizabeth Ross, Rob Weech-Maldonado, Smita Bhatia
Journal: Journal of Clinical and Translational Science / Volume 6 / Issue s1 / April 2022
Published online by Cambridge University Press: 19 April 2022, p. 1
OBJECTIVES/GOALS: An increasing number of hospitals and provider groups are consolidating into larger health systems, which hold potential to improve access to and quality of surgical cancer care through clinical integration across sites. In order to study clinical integration, we sought to develop: METHODS/STUDY POPULATION: Hospital data from the American Hospital Association were merged with data from the Agency for Healthcare Research and Quality's Compendium of United States Health Systems. For each health system with more than one acute care hospital, the hospital with the highest surgical volume (inpatient and outpatient) was categorized as the hub hospital while all other hospitals were categorized as spokes. We evaluated the concentration of case volumes at hub versus spoke hospitals and compared characteristics of these hospitals and their surrounding communities using univariate and multivariable logistic regression analyses. RESULTS/ANTICIPATED RESULTS: Within 624 health systems containing 3,554 hospitals, 355 hospitals were characterized as hub hospitals and had 2,645 affiliated spoke hospitals (median 17 spokes per hub, range 2-151). Hub hospitals performed a median of 68% of all surgical cases (25th-75th percentile 44-87%) and were concentrated in metropolitan (88.5%) and urban areas (11.5%) with none in rural areas; spoke hospitals were located in metropolitan (67%), urban (28%) and rural (5%) areas. On multivariable analysis, spoke hospitals were more often located in rural and small urban counties (OR 9.49, CI 4.57-19.70) and took care of a higher percentage of patients with less than high school education (OR 1.06 for each 1% increase, CI 1.03-1.10) but with lower poverty rates (OR 0.90 for each 1% increase in % poverty, CI 0.86-0.95). DISCUSSION/SIGNIFICANCE: For integrated health systems with multiple acute care hospitals, surgical volume is highest at a single hub hospital, supporting use of a hub-spoke taxonomy. Patient populations in counties with hub versus spoke hospitals differ in urban-rural location, poverty rates, and education level, which may impact access to quality care.
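A small sketch of the hub/spoke categorization step, assuming a pandas DataFrame with hypothetical column names (system_id, hospital_id, surgical_volume): within each multi-hospital system, the highest-volume hospital is labelled the hub and all others spokes. This is illustrative only, not the authors' code or data.

```python
import pandas as pd

# Hypothetical hospital-level data: health system identifier and combined surgical volume.
hospitals = pd.DataFrame({
    "system_id": ["A", "A", "A", "B", "B", "C", "C", "C", "C"],
    "hospital_id": range(9),
    "surgical_volume": [5200, 800, 450, 3100, 2900, 7000, 600, 900, 300],
})

# Keep only systems with more than one acute care hospital.
multi = hospitals.groupby("system_id").filter(lambda g: len(g) > 1).copy()

# The highest-volume hospital within each system is the hub; all others are spokes.
multi["role"] = "spoke"
hub_idx = multi.groupby("system_id")["surgical_volume"].idxmax()
multi.loc[hub_idx, "role"] = "hub"

# Share of each system's surgical volume concentrated at its hub hospital.
hub_share = (multi[multi["role"] == "hub"].set_index("system_id")["surgical_volume"]
             / multi.groupby("system_id")["surgical_volume"].sum())
print(multi)
print(hub_share.rename("hub_volume_share"))
```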
Preincision versus postincision frequent door openings during total joint arthroplasty
Danielle N. Davis, Lexie K. Ross, Zihan Feng, Ryan Imber, Craig Hogan, Heather L. Young
Journal: Antimicrobial Stewardship & Healthcare Epidemiology / Volume 2 / Issue 1 / 2022
Published online by Cambridge University Press: 12 January 2022, e2
Cervical lymphadenopathy following coronavirus disease 2019 vaccine: clinical characteristics and implications for head and neck cancer services
JLO Coronavirus Collection
A K Abou-Foul, E Ross, M Abou-Foul, A P George
Journal: The Journal of Laryngology & Otology / Volume 135 / Issue 11 / November 2021
Published online by Cambridge University Press: 16 September 2021, pp. 1025-1030
Print publication: November 2021
Patients with coronavirus disease vaccine associated lymphadenopathy are increasingly being referred to healthcare services. This work is the first to report on the incidence, clinical course and imaging features of coronavirus disease vaccine associated cervical lymphadenopathy, with special emphasis on the implications for head and neck cancer services.
This was a retrospective cohort study of all patients referred to our head and neck cancer clinics between 16 December 2020 and 12 March 2021. The main outcomes measured were the proportion of patients with vaccine-associated cervical lymphadenopathy, and the clinical and imaging characteristics.
The incidence of vaccine-associated cervical lymphadenopathy referrals was 14.8 per cent (n = 13). Five patients (38.5 per cent) had abnormal-looking enlarged and rounded nodes with increased vascularity. Only seven patients (53.9 per cent) reported full resolution within an average of 3.1 ± 2.3 weeks.
Coronavirus disease vaccine associated cervical lymphadenopathy can mimic malignant lymphadenopathy and therefore might prove challenging to diagnose and manage correctly. Healthcare services may encounter a significant increase in referrals.
ROME, PARTHIA, WAR AND PEACE - (J.M.) Schlude Rome, Parthia, and the Politics of Peace. The Origins of War in the Ancient Middle East. Pp. xvi + 221, ills, maps. London and New York: Routledge, 2020. Cased, £120, US$155. ISBN: 978-0-8153-5370-6.
Steven K. Ross
Journal: The Classical Review / Volume 71 / Issue 2 / October 2021
Published online by Cambridge University Press: 06 July 2021, pp. 491-493
Print publication: October 2021
Utilizing bycatch camera-trap data for broad-scale occupancy and conservation: a case study of the brown hyaena Parahyaena brunnea
Kathryn S. Williams, Ross T. Pitman, Gareth K. H. Mann, Gareth Whittington-Jones, Jessica Comley, Samual T. Williams, Russell A. Hill, Guy A. Balme, Daniel M. Parker
Journal: Oryx / Volume 55 / Issue 2 / March 2021
Print publication: March 2021
With human influences driving populations of apex predators into decline, more information is required on how factors affect species at national and global scales. However, camera-trap studies are seldom executed at a broad spatial scale. We demonstrate how uniting fine-scale studies and utilizing camera-trap data of non-target species is an effective approach for broadscale assessments through a case study of the brown hyaena Parahyaena brunnea. We collated camera-trap data from 25 protected and unprotected sites across South Africa into the largest detection/non-detection dataset collected on the brown hyaena, and investigated the influence of biological and anthropogenic factors on brown hyaena occupancy. Spatial autocorrelation had a significant effect on the data, and was corrected using a Bayesian Gibbs sampler. We show that brown hyaena occupancy is driven by specific co-occurring apex predator species and human disturbance. The relative abundance of spotted hyaenas Crocuta crocuta and people on foot had a negative effect on brown hyaena occupancy, whereas the relative abundance of leopards Panthera pardus and vehicles had a positive influence. We estimated that brown hyaenas occur across 66% of the surveyed camera-trap station sites. Occupancy varied geographically, with lower estimates in eastern and southern South Africa. Our findings suggest that brown hyaena conservation is dependent upon a multi-species approach focussed on implementing conservation policies that better facilitate coexistence between people and hyaenas. We also validate the conservation value of pooling fine-scale datasets and utilizing bycatch data to examine species trends at broad spatial scales.
Recommendations for Patients with Complex Nerve Injuries during the COVID-19 Pandemic
Kristine M. Chapman, Michael J. Berger, Christopher Doherty, Dimitri J. Anastakis, Heather L. Baltzer, Kirsty Usher Boyd, Sean G. Bristol, Brett Byers, K. Ming Chan, Cameron J.B. Cunningham, Kristen M. Davidge, Jana Dengler, Kate Elzinga, Jennifer L. Giuffre, Lisa Hadley, A Robertson Harrop, Mahdis Hashemi, J. Michael Hendry, Kristin L. Jack, Emily M. Krauss, Timothy J. Lapp, Juliana Larocerie, Jenny C. Lin, Thomas A. Miller, Michael Morhart, Christine B. Novak, Russell O'Connor, Jaret L. Olsen, Benjamin R. Ritsma, Lawrence R. Robinson, Douglas C. Ross, Christiaan Schrag, Alexander Seal, David T. Tang, Jessica Trier, Gerald Wolff, Justin Yeung
Journal: Canadian Journal of Neurological Sciences / Volume 48 / Issue 1 / January 2021
Published online by Cambridge University Press: 27 August 2020, pp. 50-55
4146 Establishment of Screening Methods for G6PD Deficiency – Translational and Clinical Applications
Christian Gomez, Ingrid C. Espinoza, Kerri A. Harrison, Fremel J. Backus, Krishna K. Ayyalasomayajula, Kim G. Adcock, Lisa M. Stempak, Richard L. Summers, Leigh Ann Ross, Larry Walker
Journal: Journal of Clinical and Translational Science / Volume 4 / Issue s1 / June 2020
Published online by Cambridge University Press: 29 July 2020, p. 108
OBJECTIVES/GOALS: To develop feasible screening methods for activity of the enzyme Glucose-6-phosphate dehydrogenase (G6PD) with point of care applicability. METHODS/STUDY POPULATION: Current knowledge establishes the relevance of G6PD as a critical therapeutic determinant for effective antimalarial therapy due to the occurrence of mutations that lead to post-treatment severe adverse effects. We present our findings on development of cost-effective point-of-care screening methodologies to ascertain G6PD deficiency. RESULTS/ANTICIPATED RESULTS: Using Patient Cohort Explorer and data from the Department of Pathology, we established the prevalence of G6PD deficiency at the University of Mississippi Medical Center, Jackson, MS as high as 11.8% (African-American males in all population, n = 2518). Next, for selection of potential target groups, we set up a protocol for recruitment of volunteers based on ethnic background, parental ethnicity, and medical history. G6PD activity was evaluated using point of care methods [Trinity Biotech test or CareSTART Biosensor], and Gold Standard quantitative spectrophotometric assay (LabCorp). Determinations in >20 subjects have shown comparable concordance. If used with a conservative interpretation of the signal, the Trinity Biotech test showed superior potential for use in the field relative to the CareSTART Biosensor. DISCUSSION/SIGNIFICANCE OF IMPACT: We established the prevalence of G6PD deficiency in our medical center. We have also set up tests for point-of-care assessment of G6PD. Pending evaluation of the relative test performance, we will be in a position to screen individuals and select them for a prospective clinical trial to evaluate the safety of antimalarial agents in the scope of G6PD deficiency.
Two cases of imported respiratory diphtheria in Edinburgh, Scotland, October 2019
Lucy Li, Daniella Ross, Katherine Hill, Sarah Clifford, Louise Wellington, Colin Sumpter, Naomi J. Gadsby, Jennifer Crane, Karen F. Macsween, Katie L. Hopkins, Norman K. Fry, Oliver Koch, Janet Stevenson
Journal: Epidemiology & Infection / Volume 148 / 2020
Published online by Cambridge University Press: 15 May 2020, e143
We report two cases of respiratory toxigenic Corynebacterium diphtheriae infection in fully vaccinated UK born adults following travel to Tunisia in October 2019. Both patients were successfully treated with antibiotics and neither received diphtheria antitoxin. Contact tracing was performed following a risk assessment but no additional cases were identified. This report highlights the importance of maintaining a high index of suspicion for re-emerging infections in patients with a history of travel to high-risk areas outside Europe.
A case series and literature review on patients with rhinological complications secondary to the use of cocaine and levamisole
R J Green, Q Gardiner, K Vinod, R Oparka, P D Ross
Journal: The Journal of Laryngology & Otology / Volume 134 / Issue 5 / May 2020
Print publication: May 2020
Levamisole is an increasingly common cutting agent used with cocaine. Both cocaine and levamisole can have local and systemic effects on patients.
A retrospective case series was conducted of patients with a cocaine-induced midline destructive lesion or levamisole-induced vasculitis, who presented to a Dundee hospital or the practice of a single surgeon in Paisley, from April 2016 to April 2019. A literature review on the topic was also carried out.
Nine patients from the two centres were identified. One patient appeared to have levamisole-induced vasculitis, with raised proteinase 3, perinuclear antineutrophil cytoplasmic antibodies positivity and arthralgia which improved on systemic steroids. The other eight patients had features of a cocaine-induced midline destructive lesion.
As the use of cocaine increases, ENT surgeons will see more of the complications associated with it. This paper highlights some of the diagnostic issues and proposes a management strategy as a guide to this complex patient group. Often, multidisciplinary management is needed.
MP33: Provincial spread of buprenorphine/naloxone initiation in emergency departments for opioid agonist treatment: a quality improvement initiative
P. McLane, K. Scott, K. Yee, Z. Suleman, K. Dong, E. Lang, S. Fielding, J. Deol, J. Fanaeian, A. Olmstead, M. Ross, K. Low, H. Hair, C. Biggs, M. Ghosh, R. Tanguay, B. Holroyd
Journal: Canadian Journal of Emergency Medicine / Volume 22 / Issue S1 / May 2020
Published online by Cambridge University Press: 13 May 2020, p. S54
Background: Since January 1, 2016, 2358 people have died from opioid poisoning in Alberta. Buprenorphine/naloxone (bup/nal) is the recommended first-line treatment for opioid use disorder (OUD) and this treatment can be initiated in emergency departments and urgent care centres (EDs). Aim Statement: This project aims to spread a quality improvement intervention to all 107 adult EDs in Alberta by March 31, 2020. The intervention supports clinicians to initiate bup/nal for eligible individuals and provide rapid referrals to OUD treatment clinics. Measures & Design: Local ED teams were identified (administrators, clinical nurse educators, physicians and, where available, pharmacists and social workers). Local teams were supported by a provincial project team (project manager, consultant, and five physician leads) through a multi-faceted implementation process using provincial order sets, clinician education products, and patient-facing information. We used administrative ED and pharmacy data to track the number of visits where bup/nal was given in ED, and whether discharged patients continued to fill any opioid agonist treatment (OAT) prescription 30 days after their index ED visit. OUD clinics reported the number of referrals received from EDs and the number attending their first appointment. Patient safety event reports were tracked to identify any unintended negative impacts. Evaluation/Results: We report data from May 15, 2018 (program start) to September 30, 2019. Forty-nine EDs (46% of 107) implemented the program and 22 (45% of 49) reported evaluation data. There were 5385 opioid-related visits to reporting ED sites after program adoption. Bup/nal was given during 832 ED visits (663 unique patients): 7 visits in the 1st quarter the program operated, 55 in the 2nd, 74 in the 3rd, 143 in the 4th, 294 in the 5th, and 255 in the 6th. Among 505 unique discharged patients with 30-day follow-up data available, 319 (63%) continued to fill any OAT prescription after receiving bup/nal in ED. 16 (70%) of 23 community clinics provided data. EDs referred patients to these clinics 440 times, and 236 referrals (54%) attended their first follow-up appointment. Available data may under-report program impact. 5 patient safety events have been reported, with no harm or minimal harm to the patient. Discussion/Impact: Results demonstrate effective spread and uptake of a standardized provincial ED-based early medical intervention program for patients who live with OUD.
Changes in weight and weight-related quality of life in a multicentre, randomized trial of aripiprazole versus standard of care
Ronette L. Kolotkin, Patricia K. Corey-Lisle, Ross D. Crosby, Hong J. Kan, Robert D. McQuade
Journal: European Psychiatry / Volume 23 / Issue 8 / December 2008
This is a secondary analysis of clinical trial data collected in 12 European countries. We examined changes in weight and weight-related quality of life among community patients with schizophrenia treated with aripiprazole (ARI) versus standard of care (SOC), consisting of other marketed atypical antipsychotics (olanzapine, quetiapine, and risperidone).
Five-hundred and fifty-five patients whose clinical symptoms were not optimally controlled and/or experienced tolerability problems with current medication were randomized to ARI (10–30 mg/day) or SOC. Weight and weight-related quality of life (using the IWQOL-Lite) were assessed at baseline, and weeks 8, 18 and 26. Random regression analysis across all time points using all available data was used to compare groups on changes in weight and IWQOL-Lite. Meaningful change from baseline was also assessed.
Participants were 59.7% male, with a mean age of 38.5 years (SD 10.9) and mean baseline body mass index of 27.2 (SD 5.1). ARI participants lost an average of 1.7% of baseline weight in comparison to a gain of 2.1% by SOC participants (p < 0.0001) at 26 weeks. ARI participants experienced significantly greater increases in physical function, self-esteem, sexual life, and IWQOL-Lite total score. At 26 weeks, 20.7% of ARI participants experienced meaningful improvements in IWQOL-Lite score, versus 13.5% of SOC participants. A clinically meaningful change in weight was also associated with a meaningful change in quality of life (p < 0.001). A potential limitation of this study was its funding by a pharmaceutical company.
Compared to standard of care, patients with schizophrenia treated with aripiprazole experienced decreased weight and improved weight-related quality of life over 26 weeks. These changes were both statistically and clinically significant.
P0174 - Mindfulness-based cognitive therapy reduces depression symptoms in people with a traumatic brain injury: Results from a pilot study
M. Bedard, M. Felteau, S. Marshall, S. Dubois, B. Weaver, C. Gibbons, K. Morris, S. Ross, B. Parker
Journal: European Psychiatry / Volume 23 / Issue S2 / April 2008
Published online by Cambridge University Press: 16 April 2020, p. S243
Background and Aims:
Major depression is a significant problem for people with a traumatic brain injury (TBI) and its treatment remains difficult. A promising approach to treat depression is Mindfulness-based cognitive therapy (MBCT), a relatively new therapeutic approach rooted in mindfulness-based stress reduction (MBSR) and cognitive behavioral therapy (CBT). We conducted this study to examine the effectiveness of MBCT in reducing depression symptoms among people who have a TBI.
Twenty individuals diagnosed with major depression were recruited from a rehabilitation clinic and completed the 8-week MBCT intervention. Instruments used to measure depression symptoms included: BDI-II, PHQ-9, HADS, SF-36 (Mental Health subscale), and SCL-90 (Depression subscale). They were completed at baseline and post-intervention.
All instruments indicated a statistically significant reduction in depression symptoms post-intervention (p < .05). For example, the total mean score on the BDI-II decreased from 25.2 (9.8) at baseline to 18.2 (11.7) post-intervention (p=.001). Using a PHQ threshold of 10, the proportion of participants with a diagnosis of major depression was reduced by 59% at follow-up (p=.012).
Most participants reported reductions in depression symptoms after the intervention such that many would not meet the criteria for a diagnosis of major depression. This intervention may provide an opportunity to address a debilitating aspect of TBI and could be implemented concurrently with more traditional forms of treatment, possibly enhancing their success. The next step will involve the execution of multi-site, randomized controlled trials to fully demonstrate the value of the intervention.
484 – Depression and Suicidality in First Episode Psychosis
R. Upthegrove, M. Birchwood, K. Brunet, R. McCollum, L. Jones, K. Ross
Journal: European Psychiatry / Volume 28 / Issue S1 / 2013
A clearer understanding of the ebb and flow of depression and suicidal thinking in the early phase of psychosis, and how this relates to other symptom dimensions, is essential for developing interventions to reduce risk. The studies presented here investigate whether depression and suicidal thinking are predictable, how they relate to the early course of psychotic symptoms and develop over time.
Ninety-two patients with first-episode psychosis recruited from the Birmingham Early Intervention Service completed measures of depression (including prodromal depression), positive and negative symptoms, self-harm, duration of untreated psychosis, insight and illness appraisals. Follow-up took place over 12 months.
Depression occurred in 80% of patients at one or more phases of illness; a combination of depression and suicidal thinking was present in 63%. Depression in the prodromal phase was the most significant predictor of future depression and acts of self-harm. Post-psychotic depression unheralded by previous depressive episodes was rare. Depression and suicidal thinking in the acute and post-psychotic phases are associated with higher levels of loss and shame, and subordination to persecutors and malevolent voices.
Depression early in the emergence of a psychosis is fundamental to the development of future depression and suicidal thinking, and related to the personal significance and impact of positive symptom dimensions. Efforts to predict and reduce depression and self-harm in psychosis may need to target this early phase to reduce later risk.
Reproductive efficiency in the developing world
John Ross, Anrudh K Jain
Journal: Journal of Biosocial Science / Volume 52 / Issue 5 / September 2020
This study proposes a measure of reproductive losses starting from conception to age 15 as an assessment of childbearing 'efficiency'. It is suggested that losses are due to miscarriages, abortions, stillbirths and deaths to age 15. Data were drawn from various sources for seven regions embracing 129 developing countries. Mortality is an important loss in severely disadvantaged regions, especially in sub-Saharan Africa, but the abortion rates are lower there. This is reversed in the more advanced regions, where mortality is low but abortion rates are higher. Total losses numerically depend upon the rates in combination with the numbers of conceptions. The general 'efficiency' in moving from conception to a surviving child aged 15 was estimated. The abortion component of wastage has apparently not improved over time, but the mortality component has done so. Abortion rates are found to drive reproductive efficiency downwards; but efficiency is positively correlated with contraceptive use once abortion is controlled for. This implies that as efficiency is improved more couples gain confidence to turn to contraceptive use to avoid unplanned pregnancies and births.
A naturalistic, long-term follow-up of purging disorder
K. Jean Forney, Ross D. Crosby, Tiffany A. Brown, Kelly M. Klein, Pamela K. Keel
Journal: Psychological Medicine / Volume 51 / Issue 6 / April 2021
Published online by Cambridge University Press: 15 January 2020, pp. 1020-1027
Print publication: April 2021
The DSM-5 introduced purging disorder (PD) as an other specified feeding or eating disorder characterized by recurrent purging in the absence of binge eating. The current study sought to describe the long-term outcome of PD and to examine predictors of outcome.
Women (N = 84) who met research criteria for PD completed a comprehensive battery of baseline interview and questionnaire assessments. At an average of 10.24 (3.81) years follow-up, available records indicated all women were living, and over 95% were successfully located (n = 80) while over two-thirds (n = 58) completed follow-up assessments. Eating disorder status, full recovery status, and level of eating pathology were examined as outcomes. Severity and comorbidity indicators were tested as predictors of outcome.
Although women experienced a clinically significant reduction in global eating pathology, 58% continued to meet criteria for a DSM-5 eating disorder at follow-up. Only 30% met established criteria for a full recovery. Women reported significant decreases in purging frequency, weight and shape concerns, and cognitive restraint, but did not report significant decreases in depressive and anxiety symptoms. Quality of life was impaired in the physical, psychological, and social domains. More severe weight and shape concerns at baseline predicted meeting criteria for an eating disorder at follow-up. Other baseline severity indicators and comorbidity did not predict the outcome.
Results highlight the severity and chronicity of PD as a clinically significant eating disorder. Future work should examine maintenance factors to better adapt treatments for PD.
Scanning Electron Microscope 3D Surface Reconstruction via Optimization
Yasamin Sartipi, Aidan Ross, Weiwei Zhang, Samuel Norris, Hesham El-Sherif, Christopher K. Anand, Nabil D. Bassim
Journal: Microscopy and Microanalysis / Volume 25 / Issue S2 / August 2019
Published online by Cambridge University Press: 05 August 2019, pp. 224-225
Print publication: August 2019
How would you show that the Riemann Zeta function, $\zeta(s) < 0$ for $s \in (0,1)$?
So far I have that along the critical strip
\begin{align} \zeta(s) &= \left(\frac{2^{s-1}}{2^{s-1}-1}\right)\phi(s)\\ &= \left(\frac{1}{1-2^{1-s}}\right)\phi(s)\\ &= \left(\frac{1}{1-2^{1-s}}\right)\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n^s} \end{align}
where $\phi(s)$ is Euler's alternating zeta function (which converges for $s > 0$). How would you show that this is always negative when $s \in (0,1)$?
sequences-and-series number-theory analytic-number-theory riemann-zeta
Yiorgos S. Smyrlis
Pablo
If $s\in(0,1)$ then $\frac{1}{1-2^{1-s}}<0$, while $$\sum_{n=1}^{+\infty}\frac{(-1)^{n+1}}{n^s}$$ is positive due to Leibniz' criterion.
Jack D'Aurizio
$\begingroup$ Jack, your answers are always the best, thank you so much! $\endgroup$ – Pablo Aug 25 '14 at 8:27
$\begingroup$ (+1) This is the way I usually show this, but your answer prodded me to bring out slightly heavier artillery. $\endgroup$ – robjohn♦ Aug 25 '14 at 9:29
Note that $$ \varphi(s)=\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n^s}=\sum_{k=1}^\infty \left(\frac{1}{(2k-1)^s}-\frac{1}{(2k)^s}\right)>0, $$ as $$ \frac{1}{(2k-1)^s}>\frac{1}{(2k)^s}. $$
Yiorgos S. Smyrlis
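A quick numerical sanity check of the sign argument in the answers above, in plain Python: the paired series for $\phi(s)$ is summed directly and the prefactor $1/(1-2^{1-s})$ applied. This is purely illustrative; convergence is slow for small $s$, so the values are only rough.

```python
def eta(s, terms=100_000):
    # Dirichlet eta (Euler's alternating zeta), summed as positive pairs 1/(2k-1)^s - 1/(2k)^s.
    # Convergence is slow for small s, so the values below are only rough.
    return sum((2 * k - 1) ** (-s) - (2 * k) ** (-s) for k in range(1, terms + 1))

for s in (0.1, 0.25, 0.5, 0.75, 0.9):
    e = eta(s)
    z = e / (1.0 - 2.0 ** (1.0 - s))  # zeta(s) = eta(s) / (1 - 2^(1-s)) on 0 < s < 1
    print(f"s = {s:.2f}: eta(s) ~ {e:+.4f} (positive), zeta(s) ~ {z:+.4f} (negative)")
```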
Here is an approach using the integral for $\zeta(z)\Gamma(z)$ and a simple analytic continuation.
Waxing complex: For $\mathrm{Re}(z)\gt1$, we have $$ \begin{align} \zeta(z)\Gamma(z) &=\int_0^\infty\frac{t^{z-1}}{e^t-1}\mathrm{d}t\\ &=\frac1{z-1}\int_0^\infty\frac{t}{e^t-1}\mathrm{d}t^{z-1}\\ &=\frac1{z-1}\int_0^\infty t^{z-1}\frac{1+(t-1)e^t}{(e^t-1)^2}\mathrm{d}t\tag{1} \end{align} $$ Since $\frac{\mathrm{d}}{\mathrm{d}t}\left[1+(t-1)e^t\right]=te^t\ge0$ for $t\ge0$, the integrand is positive. The integral in $(1)$ also converges for $\mathrm{Re}(z)\gt0$ and is analytic. Therefore, $(1)$ represents the analytic continuation of $\zeta(z)\Gamma(z)$ to $\mathrm{Re}(z)\gt0$.
Back to the real world: Since $\Gamma(z)\gt0$ for $z\gt0$, $(1)$ says that $\zeta(z)\lt0$ for $0\lt z\lt1$.
robjohn♦
$\begingroup$ (+1) The next volume of "Mathematics made difficult" will be yours :D $\endgroup$ – Jack D'Aurizio Aug 25 '14 at 9:35
$\begingroup$ Thanks. However, showing that $\zeta(s)=\frac1{1-2^{1-s}}\sum\limits_{n=1}^\infty\frac{(-1)^{n-1}}{n^s}$ for $0\lt s\lt1$, requires some analytic continuation. One approach is detailed in the beginning of this answer. $\endgroup$ – robjohn♦ Aug 25 '14 at 9:54
June 2019, 39(6): 3149-3177. doi: 10.3934/dcds.2019130
On substitution tilings and Delone sets without finite local complexity
Jeong-Yup Lee 1,2 and Boris Solomyak 3
Department of Mathematics Education, Catholic Kwandong University, Gangneung, Gangwon 210-701, Korea
KIAS, 85 Hoegiro, Dongdaemun-gu, Seoul, 02455, Korea
Department of Mathematics, Bar-Ilan University, Ramat-Gan 52900, Israel
* Corresponding author: Jeong-Yup Lee
Received April 2018; revised November 2018; published February 2019
We consider substitution tilings and Delone sets without the assumption of finite local complexity (FLC). We first give a sufficient condition for tiling dynamical systems to be uniquely ergodic and a formula for the measure of cylinder sets. We then obtain several results on their ergodic-theoretic properties, notably absence of strong mixing and conditions for existence of eigenvalues, which have number-theoretic consequences. In particular, if the set of eigenvalues of the expansion matrix is totally non-Pisot, then the tiling dynamical system is weakly mixing. Further, we define the notion of rigidity for substitution tilings and demonstrate that the result of [29] on the equivalence of four properties: relatively dense discrete spectrum, being not weakly mixing, the Pisot family, and the Meyer set property, extends to the non-FLC case, if we assume rigidity instead.
Keywords: Non-FLC, Meyer sets, discrete spectrum, Pisot family, weak mixing.
Mathematics Subject Classification: Primary: 37B50; Secondary: 52C23.
Citation: Jeong-Yup Lee, Boris Solomyak. On substitution tilings and Delone sets without finite local complexity. Discrete & Continuous Dynamical Systems - A, 2019, 39 (6) : 3149-3177. doi: 10.3934/dcds.2019130
[1] S. Akiyama, M. Barge, V. Berthé, J.-Y. Lee and A. Siegel, On the Pisot substitution conjecture, in Mathematics of Aperiodic Order (eds. J. Kellendonk, D. Lenz and J. Savinien), Progr. Math., 309, Birkhäuser/Springer, Basel, (2015), 33–72. doi: 10.1007/978-3-0348-0903-0_2.
[2] M. Baake and U. Grimm, Aperiodic Order, Vol. 1. A Mathematical Invitation. With a Foreword by Roger Penrose, Encyclopedia of Mathematics and its Applications, 149. Cambridge University Press, Cambridge, 2013. doi: 10.1017/CBO9781139025256.
[3] M. Baake and U. Grimm, Aperiodic Order, Vol. 2. Crystallography and Almost Periodicity. With a foreword by Jeffrey C. Lagarias, Encyclopedia of Mathematics and its Applications, 166. Cambridge University Press, Cambridge, 2017.
[4] M. Baake and D. Lenz, Dynamical systems on translation bounded measures: Pure point dynamical and diffraction spectra, Ergod. Th. & Dynam. Sys., 24 (2004), 1867-1893. doi: 10.1017/S0143385704000318.
[5] M. Baake and D. Lenz, Spectral notions of aperiodic order, Discrete Contin. Dyn. Syst., 10 (2017), 161-190. doi: 10.3934/dcdss.2017009.
[6] M. Baake and R. V. Moody, Self-similar measures for quasi-crystals, in Directions in Mathematical Quasicrystals (eds. M. Baake and R. V. Moody), CRM Monograph series, 13, AMS, Providence RI, (2000), 1–42.
[7] J. Bellissard, D. J. L. Herrmann and M. Zarrouati, Hulls of aperiodic solids and gap labeling theorems, in Directions in Mathematical Quasicrystals (eds. M. Baake and R. V. Moody), CRM Monograph Series, 13, AMS, Providence RI (2000), 207–258.
[8] D. Damanik and D. Lenz, Linear repetitivity. I. Uniform subadditive ergodic theorems and applications, Discrete Comput. Geom., 26 (2001), 411-428. doi: 10.1007/s00454-001-0033-z.
[9] L. Danzer, Inflation species of planar tilings which are not of locally finite complexity, Proc. Steklov Inst. Math., 239 (2002), 118-126.
[10] J. M. G. Fell, A Hausdorff topology for the closed subsets of a locally compact non-Hausdorff space, Proc. Amer. Math. Soc., 13 (1962), 472-476. doi: 10.1090/S0002-9939-1962-0139135-6.
[11] N. P. Frank, Tilings with infinite local complexity, in Mathematics of Aperiodic Order (eds. J. Kellendonk, D. Lenz, J. Savinien), Progr. Math., 309, Birkhäuser/Springer, Basel, (2015), 223–257. doi: 10.1007/978-3-0348-0903-0_7.
[12] N. P. Frank and E. A. Robinson, Jr., Generalized β-expansions, substitution tilings, and local finiteness, Trans. Amer. Math. Soc., 360 (2008), 1163-1177. doi: 10.1090/S0002-9947-07-04527-8.
[13] N. P. Frank and L. Sadun, Topology of (some) tiling spaces without finite local complexity, Discrete Contin. Dyn. Syst., 23 (2009), 847-865. doi: 10.3934/dcds.2009.23.847.
[14] N. P. Frank and L. Sadun, Fusion: A general framework for hierarchical tilings of $ {\mathbb R}^d$, Geom. Dedicata, 171 (2014), 149-186. doi: 10.1007/s10711-013-9893-7.
[15] N. P. Frank and L. Sadun, Fusion tilings with infinite local complexity, Topology Proc., 43 (2014), 235-276.
[16] D. Frettlöh and C. Richard, Dynamical properties of almost repetitive Delone sets, Discrete Contin. Dyn. Syst., 34 (2014), 531-556. doi: 10.3934/dcds.2014.34.531.
[17] A. Hof, On diffraction by aperiodic structures, Comm. Math. Phys., 169 (1995), 25-43. doi: 10.1007/BF02101595.
[18] R. Kenyon, Self-replicating tilings, Symbolic dynamics and its application (New Haven, CT, 1991), Contemp. Math., 135, Amer. Math. Soc., Providence, RI, (1992), 239–263. doi: 10.1090/conm/135/1185093.
[19] R. Kenyon, Rigidity of planar tilings, Invent. Math., 107 (1992), 637-651. doi: 10.1007/BF01231905.
[20] R. Kenyon, Inflationary tilings with similarity structure, Comment. Math. Helv., 69 (1994), 169-198. doi: 10.1007/BF02564481.
[21] R. Kenyon, The construction of self-similar tilings, Geometric and Funct. Anal., 6 (1996), 471-488. doi: 10.1007/BF02249260.
[22] I. Környei, On a theorem of Pisot, Publ. Math. Debrecen, 34 (1987), 169-179.
[23] J. C. Lagarias, Mathematical quasicrystals and the problem of diffraction, in Directions in Mathematical Quasicrystals (eds. M. Baake and R. V. Moody), CRM Monograph Series, Vol. 13, AMS, Providence, RI, (2000), 61–93.
[24] J. C. Lagarias and P. A. B. Pleasants, Repetitive Delone sets and quasicrystals, Ergod. Th. & Dynam. Sys., 23 (2003), 831-867. doi: 10.1017/S0143385702001566.
[25] J. C. Lagarias and Y. Wang, Substitution Delone sets, Discrete Comput. Geom., 29 (2003), 175-209. doi: 10.1007/s00454-002-2820-6.
[26] J.-Y. Lee, R. V. Moody and B. Solomyak, Pure point dynamical and diffraction spectra, Ann. Henri Poincaré, 3 (2002), 1003–1018. doi: 10.1007/s00023-002-8646-1.
[27] J.-Y. Lee, R. V. Moody and B. Solomyak, Consequences of pure point diffraction spectra for multiset substitution systems, Discrete Comput. Geom., 29 (2003), 525-560. doi: 10.1007/s00454-003-0781-z.
[28] J.-Y. Lee and B. Solomyak, Pure point diffractive substitution Delone sets have the Meyer property, Discrete Comput. Geom., 39 (2008), 319-338. doi: 10.1007/s00454-008-9054-1.
[29] J.-Y. Lee and B. Solomyak, Pisot family self-affine tilings, discrete spectrum, and the Meyer property, Discrete Contin. Dyn. Syst., 32 (2012), 935-959. doi: 10.3934/dcds.2012.32.935.
[30] D. Lenz and P. Stollmann, Delone dynamical systems and associated random operators, in Operator Algebras and Mathematical Physics (eds. J. M. Combes, G. Elliott, G. Nenciu, H. Siedentop and S. Stratila), Theta, Bucharest, (2003), 267–285.
[31] C. Mauduit, Caractérisation des ensembles normaux substitutifs, Invent. Math., 95 (1989), 133-147. doi: 10.1007/BF01394146.
[32] D. Mauldin and S. Williams, Hausdorff dimension in graph directed constructions, Trans. Amer. Math. Soc., 309 (1988), 811-829. doi: 10.1090/S0002-9947-1988-0961615-4.
[33] B. Mossé, Puissances de mots et reconnaissabilité des points fixes d'une substitution, Theor. Comp. Sci., 99 (1992), 327-334. doi: 10.1016/0304-3975(92)90357-L.
[34] P. Müller and C. Richard, Ergodic properties of randomly coloured point sets, Canad. J. Math., 65 (2013), 349-402. doi: 10.4153/CJM-2012-009-7.
[35] B. Praggastis, Numeration systems and Markov partitions from self-similar tilings, Trans. Amer. Math. Soc., 351 (1999), 3315-3349. doi: 10.1090/S0002-9947-99-02360-0.
[36] M. Queffelec, Substitution Dynamical Systems - Spectral Analysis, 2nd edition, Lecture Notes in Math., 1294, Springer, Berlin, 2010. doi: 10.1007/978-3-642-11212-6.
[37] C. Radin, Space tilings and substitutions, Geom. Dedicata, 55 (1995), 257-264. doi: 10.1007/BF01266317.
[38] C. Radin and M. Wolff, Space tilings and local isomorphism, Geom. Dedicata, 42 (1992), 355-360. doi: 10.1007/BF02414073.
[39] E. A. Robinson, Jr., Symbolic dynamics and tilings of $ {\mathbb R}^d$, in Symbolic Dynamics and Its Applications, Proc. Sympos. Appl. Math., 60, Amer. Math. Soc., Providence, RI, (2004), 81–119. doi: 10.1090/psapm/060/2078847.
[40] B. Solomyak, Dynamics of self-similar tilings, Ergod. Th. & Dynam. Sys., 17 (1997), 695-738. Corrections to 'Dynamics of self-similar tilings', Ibid. 19 (1999), 1685. doi: 10.1017/S0143385797084988.
[41] B. Solomyak, Nonperiodicity implies unique composition for self-similar translationally finite tilings, Discrete Comput. Geom., 20 (1998), 265-279. doi: 10.1007/PL00009386.
[42] B. Solomyak, Eigenfunctions for substitution tiling systems, in Probability and Number Theory-Kanazawa 2005, Adv. Stud. Pure Math., 49 (2007), 433–454.
[43] W. Thurston, Groups, Tilings, and Finite State Automata, AMS lecture notes, 1989.
[44] P. Walters, An Introduction to Ergodic Theory, Springer, 1982.
Figure 1. Prototiles of the Frank-Robinson substitution tiling without FLC
Figure 2. A patch of the tiling from Example 6.4 for $a = 2-\sqrt{2}$. The dots in the figure indicate the representative points of tiles
Figure 3. Modification of Kenyon's example. The figure shows a patch of the substitution tiling in the case of $a = 2-\sqrt{2}$. The dots in the figure indicate the representative points of tiles
Innovative tools and modeling methodology for impact prediction and assessment of the contribution of materials on indoor air quality
V. Desauziers, D. Bourdin, P. Mocho and H. Plaisance
Heritage Science 2015, 3:28
© Desauziers et al. 2015
The combination of more and more airtight buildings and the emission of formaldehyde and other volatile organic compounds (VOCs) by building, decoration and furniture materials leads to lower indoor air quality. Hence, indoor air quality is an important challenge not only for public health but also for the preservation of cultural heritage, for example for artworks in museum showcases and other cultural objects. Indeed, some VOCs such as organic acids or carbonyl compounds may play a role in the degradation of some metallic objects or historic papers. Thus, simple and cost-effective sampling tools are required to meet the recent and growing demand for on-site diagnosis of indoor air quality, including the identification and ranking of emission sources.
To this end, we developed new tools based on passive sampling (Solid-Phase Micro Extraction, SPME) to measure carbonyl compounds (including formaldehyde) and other VOCs, both in indoor air and at the material/air interface. On the one hand, the coupling of SPME with a specially designed emission cell allows the screening and the quantification of the VOCs emitted by building, decoration or furniture materials. On the other hand, indoor air is simply analysed using new vacuum vial sampling combined with VOC pre-concentration by SPME. These alternative sampling methods are energy-free, compact, silent and easy to implement for on-site measurements. They show satisfactory analytical performance, as detection limits range from 0.05 to 0.1 µg m−3 with an average Relative Standard Deviation (RSD) of 18 %. They have already been applied to the monitoring of indoor air quality and building material emissions over a 6-month period. The data obtained were in agreement with the predictions of a physical monozonal model which considers building materials as both VOC sources and sinks, together with the air exchange rate, in a single room ("box model").
Results are promising, even if more data are required to complete validation, and the model could be envisaged as a predictive tool for indoor air quality. This new integrated approach involving measurements and modeling could be easily transposed to historic environments and to the preservation of cultural heritage.
Graphical abstract:
On-site sampling of VOCs material emissions by using an innovative passive device (DOSEC).
SPME
The impact of building and decoration materials on indoor air quality (IAQ) is now well known and recognized [1, 2]. For many Volatile Organic Compounds (VOCs) found in indoor environments (formaldehyde, α-pinene,…), the main sources are located inside the building [3]. Moreover, the development of low-energy buildings, which promotes more and more airtight constructions, tends to raise indoor pollutant concentration levels. Therefore, indoor air quality has become a major public health issue and, in France, new legislation was implemented. The labeling of all building materials according to their VOC emissions has been in effect since 2013 (decree 2011-321, 23 March 2011), and the compulsory measurement of some pollutants in public buildings (formaldehyde and benzene) is being considered. In the near future, museums and libraries might be concerned.
The preservation of cultural heritage is also challenging, as VOCs and carbonyl compounds may damage artwork exposed to the confined atmosphere of showcases. In this context, relevant tools are needed to perform on-site indoor air diagnosis, including emission source identification and monitoring.
The proposed methodological approach includes a diagnosis step involving new methods relying on passive solid-phase microextraction (SPME). This technique is particularly relevant for sensitive environments (e.g. historic buildings, showcases displaying artwork, etc.) because it is non-invasive, easy to use and noiseless. Two SPME sampling methods were developed to study nine VOCs, both in indoor air and at the material/air interface [4, 5], to highlight and quantify emission or sink effects and then identify and rank material sources. The ability to measure the surface concentration of building materials in situ makes it possible to predict indoor air quality through modeling approaches. The model developed here was adapted from box models, which are the most widely applied to indoor environments [6]. As a decision-support tool, the model could help in the selection of low-emission materials and the optimization of air exchange [7, 8].
This methodology was applied to recent buildings, and some examples are presented in this paper. Although the first applications aimed to support IAQ management in households and public buildings, the methodology could easily be transposed to cultural heritage issues, for example to libraries, museums or galleries housed in new buildings. In this case, indoor VOCs differ from those found in old buildings but may also influence the preservation of cultural objects.
SPME methods
VOCs are concentrated on an SPME fibre, which is then directly desorbed into the injector of a gas chromatograph (GC) coupled to mass spectrometry (MS) for analysis [9]. A PDMS-DVB SPME fibre (Supelco, Bellafonte, PA, USA) treated with O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine hydrochloride (PFBHA) was especially developed for GC–MS analysis of carbonyl compounds including formaldehyde [10]. As SPME is a passive sampler, the amount of pollutant adsorbed on the fibre is directly proportional to the product of the concentration of the pollutant and the exposure time, a product called the "exposure dose" and expressed in µg m−3 min [11, 12].
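A toy numerical illustration of the exposure-dose relation, assuming a linear uptake model m ≈ k × C × t with a hypothetical calibration constant k; the numbers are placeholders, not calibrated values from this work.

```python
# Toy illustration of the passive-sampling "exposure dose" relation m = k * C * t.
# k is a fibre- and compound-specific calibration constant; the value below is an
# arbitrary placeholder, not a calibrated figure from this work.
k = 2.0e-4            # ng per (ug m^-3 min), assumed
exposure_time = 20.0  # min, as used for the vial sampling in this work

def concentration_from_mass(mass_ng, k_cal=k, t_min=exposure_time):
    """Back-calculate an air concentration (ug m^-3) from the adsorbed mass (ng)."""
    return mass_ng / (k_cal * t_min)

measured_mass = 0.12  # ng, hypothetical GC-MS quantification of the fibre
print(f"exposure dose ~ {measured_mass / k:.0f} ug m^-3 min")
print(f"estimated concentration ~ {concentration_from_mass(measured_mass):.0f} ug m^-3")
```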
Air sampling was performed in 250 mL glass vials provided by Entech Instruments (Simi Valley, CA, USA) equipped with SPME-adapters [13, 14]. The vials were cleaned with wet nitrogen and evacuated until 10 mtorr before sampling thanks to a 3100A Canister Cleaner (Entech Instruments). On site, they were filled with air and then stored no longer than 2 days at room temperature (20 °C). Then, the SPME fibre was introduced into the vial for 20 min prior to its thermal desorption and analysis by GC–MS.
For material emissions, passive sampling was chosen as it represents an interesting means for field investigations requiring a large number of sampling points. This sampling is made in static mode by diffusion of chemicals from the material surface to a trapping medium, inside a closed air volume. The emission rate can be determined as described in Eq. 1 (from Fick's first law under steady-state conditions) [15]:
$$F = -D\frac{\mathrm{d}C}{\mathrm{d}x} = -D\frac{C_a - C_{as}}{L}$$
where $F$ is the emission rate of the target VOC (µg m−2 s−1), $D$ the diffusion coefficient (m2 s−1), $C_a$ the concentration in indoor air (µg m−3), $C_{as}$ the gas-phase concentration at the material surface (µg m−3) and $L$ the thickness of the gas-phase boundary layer (m). In this study, the passive sampler was a home-made cylindrical glass cell inspired by previous work [12]. This "Device for On-Site Emission Control" (DOSEC) was optimized for SPME coupling and will be fully described in future papers. The sampling involves two steps: first, VOC diffusion from the material to the gas phase (DOSEC headspace) and second, after introduction of the SPME fibre into the DOSEC, VOC transfer from the gas phase to the fibre coating. Assuming equilibrium is reached in the DOSEC, the headspace concentration can be assimilated to the gas-phase concentration at the material surface, $C_{as}$ (Eq. 1). Thus, the quantity measured is the concentration at the material/air interface, expressed in µg m−3.
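A minimal sketch of Eq. 1 in code, computing the emission rate F from the surface and bulk-air concentrations; the diffusion coefficient and boundary-layer thickness are assumed order-of-magnitude values, not measured quantities from this study.

```python
def emission_rate(D, c_air, c_surface, L):
    """Eq. 1: F = -D * (C_a - C_as) / L, returned in ug m^-2 s^-1."""
    return -D * (c_air - c_surface) / L

# Assumed order-of-magnitude values (not measured in this study):
D = 1.0e-5         # m^2 s^-1, typical gas-phase diffusion coefficient
L = 5.0e-3         # m, thickness of the boundary layer
c_air = 10.0       # ug m^-3, bulk indoor air concentration
c_surface = 125.0  # ug m^-3, concentration at the material surface (e.g. DOSEC reading)

F = emission_rate(D, c_air, c_surface, L)
print(f"emission rate F = {F:.2e} ug m^-2 s^-1")  # positive F: the material acts as a source
```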
After sampling, the fibres were then stored up to 3 days in stainless steel tubes [16] and were analyzed by GC–MS.
Chromatographic analysis and target VOCs
The SPME fibres were analyzed on a Varian 3800 gas chromatograph coupled with a 1200Q quadrupole mass spectrometer (MS) (Varian, Les Ulis, France). The PTV injection port was equipped with a 0.75 mm i.d. liner and was operated at 250 °C. Acquisition was made in single ion monitoring (SIM) and scan modes.
The method was especially developed to identify and quantify nine VOCs which were selected from the compounds listed in the French regulation for material emission labeling (decree 2011-321, 23 March 2011). As buildings and showcases may contain wood-based materials, hexanal and α-pinene were included in the compound list. The target VOCs were: formaldehyde, acetaldehyde, toluene, tetrachloroethylene, p-xylene, 1,2-dichlorobenzene, styrene, hexanal and α-pinene.
Modeling method
The mass-balance model aims to simulate average indoor air pollutant concentration as a function of outdoor concentration, building characteristics (volume, air exchange rate…) and indoor sources/sinks. It is a single zone model which considers the room (or other closed environment) as a zone where pressure, temperature, and pollutant concentration are homogeneous. The mass balance of a VOC i is written according to the following equation (Eq. 2) where materials are considered both as VOC sources and sinks from indoor air.
$$\frac{\partial C_i}{\partial t} = \sum_{j=1}^{m} Q_{ij} + \lambda C_{iout} - \lambda C_i$$
where $C_i$ is the average indoor air concentration of the pollutant i (µg m−3), $Q_{ij}$ the contribution of the material j to the IAQ (source or sink of pollutant i) (µg m−3 s−1), $\lambda$ the outdoor air exchange rate (s−1), $C_{iout}$ the average outdoor air concentration of pollutant i (µg m−3), t the time and m the total number of materials within the room. In Fig. 1, $C_{sij}$ is the air concentration of the pollutant i at the surface of the material j.
Fig. 1 Buildings and rooms studied: a meeting room; b classroom; c living room.
At the material/air interface, VOC mass transfer can be expressed as:
$$Q_{ij} = h_{ij}\frac{A_j}{V}\,(C_{sij} - C_i)$$
where $h_{ij}$ is the convective mass transfer coefficient of pollutant i through the boundary layer over the material j (determined from empirical relationships [8]), $A_j$ the surface area of the material j and $V$ the volume of the room.
Substituting (Eq. 3) into (Eq. 2), under steady state conditions, we finally obtain (Eq. 4) [8]:
$$C_i = \frac{\sum_{j=1}^{m} h_{ij}\frac{A_j}{V}\,C_{sij} + \lambda C_{iout}}{\sum_{j=1}^{m} h_{ij}\frac{A_j}{V} + \lambda}$$
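A minimal sketch of the steady-state balance of Eq. 4: the room-average concentration is predicted from per-material surface concentrations, surface areas, convective mass-transfer coefficients, the air exchange rate and the outdoor level. All inputs below are illustrative assumptions, not the measured values of this study.

```python
def steady_state_concentration(materials, volume, lam, c_out):
    """Eq. 4: C_i = (sum(h*A/V*C_s) + lam*C_out) / (sum(h*A/V) + lam).

    materials: list of (h [m s^-1], A [m^2], C_s [ug m^-3]) tuples, one per surface.
    volume in m^3, lam (air exchange rate) in s^-1, c_out in ug m^-3.
    """
    source = sum(h * A / volume * c_s for h, A, c_s in materials)
    sink = sum(h * A / volume for h, A, _ in materials)
    return (source + lam * c_out) / (sink + lam)

# Illustrative classroom-like inputs (assumed, not the measured values of this study):
materials = [
    (5.0e-4, 60.0, 30.0),   # flooring
    (5.0e-4, 25.0, 125.0),  # desks
    (5.0e-4, 3.0, 125.0),   # interactive board
]
lam = 0.5 / 3600.0  # 0.5 air changes per hour, converted to s^-1
c_pred = steady_state_concentration(materials, volume=180.0, lam=lam, c_out=2.0)
print(f"predicted indoor concentration: {c_pred:.1f} ug m^-3")
```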
Air exchange rate measurement
Air exchange rate was determined from the elimination kinetic of injected CO2 according to the method described in ASTM standards [17].
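A small sketch of the tracer-decay estimate: after CO2 injection, the excess concentration decays as C(t) = C0 exp(-λt), so λ is minus the slope of a log-linear fit. The data points below are simulated, not measurements from the buildings studied.

```python
import numpy as np

# Simulated excess CO2 above the outdoor background after injection (ppm); not measured data.
t_hours = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
noise = 1.0 + 0.02 * np.random.default_rng(1).normal(size=t_hours.size)
excess_co2 = 1500.0 * np.exp(-0.6 * t_hours) * noise

# Log-linear fit: ln C(t) = ln C0 - lambda * t, so the air exchange rate is minus the slope.
slope, intercept = np.polyfit(t_hours, np.log(excess_co2), 1)
print(f"estimated air exchange rate: {-slope:.2f} h^-1")
```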
Three new buildings (constructed less than 2 years before the measurement campaigns) were studied: a meeting room in an office building (Fig. 1a), a classroom in a high school built according to the HQE® (High Environmental Quality) French label (Fig. 1b) and the living room of an unoccupied and non-furnished house (Fig. 1c). All the buildings are located in the south-west of France and measurements were made when the rooms were unoccupied in order to consider only material sources of VOCs. The high school, which is studied here in detail, is located in a rural area, near a pine forest. The presented results will mostly concern the classroom, for which the sampling campaigns began just after the building delivery and took place every 2 weeks over a period of 6 months.
Table 1 presents the building materials and the furniture of the classroom. All VOC surface concentrations were determined using the DOSEC; indoor and outdoor air were sampled using the vacuum vials. All the building materials were rated "A+" by the new French regulation on sanitary labeling (i.e. an exposure concentration of less than 10 µg m−3 of formaldehyde).
Table 1 Building materials and furniture of the classroom, with their surface areas (m²) and French sanitary labeling ratings (listed items include melamine resin panels, varnished beech, and desks (top + bottom) in laminate + melamine resin; some materials are not concerned by the sanitary labeling or their labeling was not supplied by the architect).
Analytical performance
The limits of detection (LOD) and of quantification (LOQ) (Table 2) were determined for the target VOCs analyzed by GC–MS. LOD and LOQ were evaluated for signal-to-noise ratios of 3 and 10, respectively. They correspond to a 20 min extraction of standard gas in vacuum vials. It can be pointed out that the DOSEC performance is comparable to that of the vial, as the device volumes are similar.
Table 2 Limits of detection (LOD) and limits of quantification (LOQ), in µg m−3, determined for SPME–GC–MS analysis of the target VOCs (including 1,2-dichlorobenzene, tetrachloroethylene, acetaldehyde and hexanal).
All LOD and LOQ values were below the µg m−3 level, and the average RSD (relative standard deviation) over 6 replicates was 18%.
SPME air sampling was compared to standard methods for three model compounds (formaldehyde, α-pinene and styrene) identified in the indoor air of the unoccupied house. The temperature was 17.6 °C and the relative humidity 64%. For formaldehyde, the standard method consists of active sampling on a cartridge filled with Florisil® treated with 2,4-dinitrophenylhydrazine (DNPH), which specifically reacts with carbonyls. The cartridge is eluted with acetonitrile and the resulting extract is analyzed by high performance liquid chromatography with UV detection (HPLC–UV) [18]. For the other VOCs, active sampling is carried out on a Tenax® tube further analyzed by thermal desorption coupled to GC–MS [19]. Results presented in Table 3 show that there is no significant difference between SPME and the standard methods for the compounds investigated. Therefore, SPME is relevant and could be considered as an alternative to standard methods, even though a full validation remains to be performed.
Table 3 Comparison of SPME and standard methods for the analysis of formaldehyde, α-pinene and styrene in indoor air. Columns: SPME C ± SD (µg m−3, n = 3) and standard method C ± SD (µg m−3, n = 3), where C is the concentration, SD the standard deviation and n the number of replicates; reported values include 101.5 ± 21.0, 103.3 ± 8.1 and 1.3 ± 0.3 µg m−3.
Identification of material sources in the classroom
As formaldehyde is an important pollutant of indoor air, the presented results will focus on this compound.
Fig. 2 presents the identification of the formaldehyde material sources in the classroom studied. The surface concentrations obtained by DOSEC sampling are reported for each material at the beginning of the sampling campaign (September 2012) and at the end, 6 months later (March 2013). The lines indicate the indoor air concentrations for the two sampling periods, showing a twofold decrease within the first 6 months after the building delivery (from 13.5 µg m−3 in September 2012 to 5.5 µg m−3 in March 2013). Material emissions, which are the main sources of VOCs, also decreased significantly during this period. As expected, formaldehyde is mostly emitted by wood-based materials. The surface concentration of the desk underside is particularly high (125 µg m−3). The high emission of the interactive board (125 µg m−3) is more surprising: it can be explained by the board's coating of melamine resin, which contains formaldehyde. Another unexpected result is the emission of the PVC flooring, which normally does not emit formaldehyde; the adhesive or other products (underlayment…) used for the floor are presumably responsible. This result demonstrates the importance of performing in situ surface measurements in order to take into account the way the material is implemented.
Formaldehyde surface concentrations of the building materials and furniture of the classroom at the beginning and at the end of the sampling period.
Material source ranking
Based on the previous results, the material sources were ranked at the beginning and at the end of the sampling period. To rank these data, the surface concentration Cas_i of each material i was weighted by its surface S_i in the room (Cas_i × S_i) and expressed as a percentage of the total material contribution (∑(Cas_i × S_i)). The results for formaldehyde are presented in Fig. 3. Two main material sources are identified: the floor and the desks. As the desks are made of particle board, they are obviously a significant source of formaldehyde. If source reduction were to be proposed, replacing the desks would certainly be easier to implement than removing the floor. Figure 3 also indicates that the source ranking is the same at the beginning of the sampling campaign (September 2012) and 4 months later (January 2013). Despite these numerous formaldehyde sources, the impact on indoor air quality is limited thanks to an efficient air exchange rate (3.3 h−1). The formaldehyde concentrations determined throughout the sampling period did not exceed 15 µg m−3 in indoor air and are well below the guide value of 30 µg m−3 that could be imposed by future French legislation.
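The ranking computation itself is straightforward; the following R sketch illustrates it with hypothetical surface concentrations and areas (not the measured values of Table 1 or Fig. 3).

```r
# Ranking formaldehyde sources: each material's surface concentration Cas_i
# weighted by its area S_i and expressed as a share of the total contribution.
# Concentrations and areas below are hypothetical placeholders.
materials <- data.frame(
  name = c("PVC flooring", "Desks", "Interactive board", "Ceiling"),
  Cas  = c(40, 125, 125, 15),   # surface concentration (ug m-3)
  S    = c(60, 20, 3, 60)       # surface area (m2)
)
contrib <- materials$Cas * materials$S
materials$share_pct <- 100 * contrib / sum(contrib)
materials[order(-materials$share_pct), ]
```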
Formaldehyde source ranking in the classroom in September 2012 and in January 2013.
Identification of adsorption/desorption
Another advantage of the DOSEC is the possibility to highlight VOC adsorption/desorption on material surfaces. An example is given for α-pinene in Fig. 4. The main source of this compound is located outside the classroom, in the adjacent hall, where the walls are covered with pine panels (see picture in Fig. 4). The concentration decreased in the classroom whereas the source (the indoor air of the hall) remained nearly constant (the hall concentration was not measured during week 49). These data suggest that α-pinene, and hence other VOCs, can be deposited on building material surfaces, which may then constitute VOC sinks. Supporting this hypothesis, Fig. 5 shows a clear deposit of α-pinene on the PVC floor over the course of the measurement campaign: at the beginning, the α-pinene concentration in the classroom air was 58 times higher than the surface concentration of the PVC flooring; this ratio then strongly decreased to reach 0.2 during week 4, meaning a concentration five times higher at the floor surface than in air.
α-Pinene concentrations in the indoor air of classroom and hall and in the outdoor air—picture of the hall.
Evolution of the ratio "concentration in air/concentration at the floor surface" for α-pinene.
VOC deposition on material surfaces is also reported in the literature, where sorption processes are described through laboratory chamber tests [20, 21]. These processes are rarely identified in on-site studies. The main reported influencing factors are the boiling point and the chemical properties of the compound, the physical properties of the material, such as its surface area, and the environmental conditions [20]. Hence, Jorgensen [22] showed that α-pinene is better adsorbed by surface materials than toluene, which is more volatile, and also demonstrated the adsorption of α-pinene on PVC flooring.
IAQ modeling
Fig. 6 presents the first modeling applications to formaldehyde in the different buildings studied. The indoor air concentrations were predicted using the model described in the experimental section. The input data were the surface concentrations of all the building and furniture materials, the outdoor air concentrations and the air exchange rates. These results are promising, but the modeling should be further developed and validated with more experimental trials. It could become a useful decision-making tool for IAQ management (ventilation conditions, selection of low-emission materials).
Correlation between predicted and measured formaldehyde concentrations in different building indoor air.
Simple, sensitive and non-destructive methods to analyze VOCs and formaldehyde in indoor air and at the material/air interface were developed. They allow in situ measurements of materials in their real environments, taking into account the conditions of their implementation. DOSEC measurements also permit source identification and ranking, as well as the quantification of adsorption/desorption processes at material surfaces. A predictive model using these new measurements as input data was also developed as a decision-making tool. While the first applications aimed to support IAQ management in new buildings, the methodology is easily transposable to cultural heritage: to evaluate IAQ in old and new buildings (e.g. staff and visitor exposure), to study IAQ in showcases (modeling for design support, impact of building materials on IAQ), and finally to study the impact of IAQ on sensitive materials such as artworks, papers, paints, textiles, furniture or other cultural objects.
ASTM:
American Society for Testing and Materials
DNPH:
2,4-dinitro phenyl hydrazine
DOSEC:
device for on-site emission control
GC–MS:
gas chromatography–mass spectrometry
HPLC–UV:
high performance liquid chromatography–ultra violet detection
HQE:
French label for high environmental quality buildings
IAQ:
indoor air quality
LOD:
limit of detection
LOQ:
limit of quantification
PDMS-DVB:
Poly dimethyl siloxane–divinyl benzene
PVC:
poly vinyl chloride
PTV:
programmable temperature vaporizer
SIM:
single ion monitoring
SPME:
solid phase microextraction
VOC:
volatile organic compound
VD was the coordinator of this work as the supervisor of the PhD thesis of DB, participated to the sampling campaigns and drafted the manuscript. DB carried out the experimental part of the study (sampling campaigns and laboratory analysis) and data exploitation within the framework of her PhD thesis. PM was the co-supervisor of DB's PhD and developed the monozonal box model. HP contributed in results interpretation. All authors read and approved the final manuscript.
The authors thank Nobatek and Christophe Cantau for his contribution to the selection of the buildings studied and information on building materials. They also acknowledge the buildings' managers for allowing us access to their buildings.
Compliance with ethical guidelines
Competing interest The authors declare that they have no competing interest.
Centre des Matériaux (C2MA), Ecole des mines d'Alès, Hélioparc, 2 avenue P. Angot, 64000 Pau, France
Nobatek, 67 avenue de Mirambeau, 64600 Anglet, France
Laboratoire Thermique Energétique et Procédés (LaTEP), Université de Pau et des Pays de l'Adour, rue Jules Ferry, 64000 Pau, France
Missia DA, Dimitriou E, Michael N, Tolis EI, Bartzis JG (2010) Indoor exposure from building material emissions: a field study. Atmos Environ 44:4388–4395
Hodgson A, Rudd AF (2010) Volatile organic compounds and emission rates in new manufactured and site-built houses. Indoor Air 10:178–192
Brown SK (2002) Volatile organic pollutants in new and established buildings in Melbourne, Australia. Indoor Air 12:1255–1263
Nicolle J, Desauziers V, Mocho P (2008) Solid phase microextraction sampling for a rapid and simple on-site evaluation of volatile organic compounds emitted from building materials. J Chromatogr A 1208:10–15
Nicolle J, Desauziers V, Mocho P (2009) Optimization of FLEC-SPME for field passive sampling of VOCs emitted from solid building materials. Talanta 80:730–737
National Research Council (1981) Indoor pollutants. National Academy Press, Washington DC
Yamashita S, Kume K, Horiike T, Honma N, Fusaya M, Ohura T et al (2010) A simple method for screening emission sources of carbonyl compounds in indoor air. J Hazard Mat 178:370–376
Bourdin D, Mocho P, Desauziers V, Plaisance H (2014) Formaldehyde emission behavior of building materials: on-site measurements and modeling approach to predict indoor air pollution. J Hazard Mat 280:164–173
Arthur CL, Pawliszyn J (1990) Solid-phase microextraction with thermal desorption using fused-silica optical fibers. Anal Chem 62:2145–2148
Bourdin D, Desauziers V (2014) Development of SPME on-fiber derivatization for the sampling of formaldehyde and other carbonyl compounds in indoor air. Anal Bioanal Chem 406:317–328
Tuduri L, Desauziers V, Fanlo JL (2003) A simple calibration procedure for volatile organic compounds sampling in air with adsorptive solid-phase microextraction fibres. Analyst 128:1028–1032
Larroque V, Desauziers V, Mocho P (2006) Comparison of two solid-phase microextraction methods for the quantitative analysis of VOCs in indoor air. Anal Bioanal Chem 386:1457–1464
Desauziers V, Auguin B (2010) Device and method for gas sampling. French patent no. 1003271
Desauziers V, Auguin B (2012) SPME-adapter: a rapid sampling for the analysis of VOCs traces in air. In: Techniques de l'ingénieur Editions, IN 149
Shinohara N, Fujii M, Yamasaki A, Yanagisawa Y (2007) Passive flux sampler for measurement of formaldehyde emission rates. Atmos Environ 41:4018–4028
Larroque V, Desauziers V, Mocho P (2006) Study of preservation of polydimethylsiloxane/carboxen solid-phase microextraction fibres before and after sampling of volatile organic compounds in indoor air. J Chromatogr A 1124:106–111
ASTM E741–11 (2011) Standard test method for determining air change in a single zone by means of a tracer gas dilution. ASTM International, West Conshohocken
NF ISO 16000-3 (December 2011) Indoor air—determination of formaldehyde and other carbonyl compounds in indoor air and test chamber air—part 3: active sampling method. AFNOR standard
NF ISO 16000-6 (March 2012) Indoor air—part 6: determination of volatile organic compounds in indoor and test chamber air by active sampling on Tenax TA® sorbent, thermal desorption and gas chromatography using MS or MS/FID. AFNOR standard
Zhang JS, Chen Q, Yang X (2002) A critical review on studies of volatile organic compound (VOC) sorption on building materials. ASHRAE Trans 108:162–174
Tichenor BA, Guo Z, Dunn JE, Sparks LE, Mason MA (1991) The interaction of vapor phase organic compounds with indoor sinks. Indoor Air 1:23–35
Jorgensen RB (2007) Sorption of VOCs on material surfaces as the deciding factor when choosing a ventilation strategy. Build Environ 12:1913–1920
Methodology Article
Significance evaluation in factor graphs
Tobias Madsen1, 2,
Asger Hobolth2,
Jens Ledet Jensen3 and
Jakob Skou Pedersen†1, 2
BMC Bioinformatics 2017 18:199
Factor graphs provide a flexible and general framework for specifying probability distributions. They can capture a range of popular and recent models for the analysis of genomics data as well as data from other scientific fields. Owing to the ever larger data sets encountered in genomics and the multiple-testing issues accompanying them, accurate significance evaluation is of great importance. We here address the problem of evaluating the statistical significance of observations from factor graph models.
Two novel numerical approximations for the evaluation of statistical significance are presented: first a method using importance sampling, second a saddlepoint approximation based method. We develop algorithms to efficiently compute the approximations and compare them to naive sampling and the normal approximation. The individual merits of the methods are analysed both from a theoretical viewpoint and with simulations. A guideline for choosing between the normal approximation, saddlepoint approximation and importance sampling is also provided. Finally, the applicability of the methods is demonstrated with examples from cancer genomics, motif analysis and phylogenetics.
The applicability of saddlepoint approximation and importance sampling is demonstrated on known models in the factor graph framework. Using the two methods we can substantially reduce computational cost without compromising accuracy. This contribution allows analyses of large datasets in the general factor graph framework.
Significance evaluation
Factor graph
Saddlepoint approximation
Factor graphs are a graphical model formalism, able to capture both Bayesian networks and Markov networks, i.e. directed and undirected graphical models [1]. Graphical models enjoy widespread use in genomics, in diverse areas such as genetics, integrative genomics and comparative genomics [2–4]. A range of well-known bioinformatical models, such as position weighted matrices, hidden Markov models, hierarchical models and phylogenetic models can all be cast into the factor graph formalism. Therefore the overall return from efficient algorithms and methods operating on factor graphs is high.
Signals in data are often associated with large deviations from a null (noise) model. The amount of deviation is quantified with a score, such as the odds-ratio, i.e. the ratio of the probability of an observation under a foreground model to its probability under a null model. The odds-ratio is a popular choice of score, but in statistical practice other scores can be preferred, either because they are more robust, easier to compute, easier to interpret or simply because there is no explicit foreground model. Irrespective of the chosen score function, an important question is the statistical significance of the score, i.e. what is the probability that a score as high or higher is generated from the null model.
In the present paper we consider the problem of evaluating the statistical significance of rare events defined over factor graphs, a problem which is generally NP-hard even in the special case where all variables are independent of one another [5, 6]. Accordingly, it is important to formulate numerical approximations instead of exact methods. We have developed two approximation methods, one based on importance sampling, the other on a saddlepoint approximation. Both methods rely on novel algorithms for their efficient evaluation. The merits of the individual methods are assessed both theoretically and in a simulation study.
The applicability of the methods is demonstrated with four models used in different areas of bioinformatics. First, we consider the Poisson binomial distribution. This model has a number of applications, among others in cancer driver detection, where it is used to find regions of the genome that contain a surprisingly high number of somatic mutations [7]. Second, the ubiquitous position weight matrix model for motif description is discussed. The literature on PWM models also contains proofs that the problem of evaluating the significance of a motif match score defined by a PWM under a genomic background represented by a first-order Markov model is NP-hard [5, 6]. This also shows that the more general class of problems is NP-hard. The third model is higher-order Markov chains. Again a Markov chain is an extremely versatile model with applications both inside and outside of bioinformatics. Here we focus on a recent use in modelling sequence motifs, where parameters in a higher-order Markov chain are learned in a regularized fashion [8]. Finally we look at phylogenetic models and models of nucleotide substitution. Phylogenetic models are interesting in their own right but also serve to illustrate the use on models with more complex dependency structures. We use the framework to evaluate if a position in an alignment column shows evidence of evolutionary conservation. Though simplified this is conceptually similar to the widely used phyloP-score, measuring evolutionary conservation and acceleration [9].
For many probabilistic models it can be computationally expensive, if not intractable, to evaluate the statistical significance of an observation. Even for the models where an efficient computational scheme exists it is often time-consuming to derive and implement. Given the genericity of the factor graph formalism, we believe that the methods proposed here will aid the analysis of data using a wide range of models. We have implemented the importance sampling and saddlepoint approximation methods in a freely available R-package called dgRaph and provide code for the examples discussed in the paper. For efficiency the core algorithms are implemented in C++ using the Rcpp R-package [10]. dgRaph also contains methods for training factor graph models using the EM algorithm; this, however, is not a focus of the current paper, where we treat models and parameters as given.
Despite the fact that the saddlepoint approximation was conceived as far back as 1954 [11], it has only seen sporadic use in genomics [12, 13]. We find that there are ample opportunities to apply saddlepoint approximation in genomics, but its intimidating appearance may have prevented more widespread application. By supplying an R-package we hope to reduce the barriers towards the use of saddlepoint approximation.
In applications of importance sampling, the proposal distribution is often picked based on experience, calibration or experimentation. By pointing out similarities between saddlepoint approximation and importance sampling and tying them to the existing literature, we can advise the choice of proposal distribution in importance sampling on factor graphs. Applying this more principled approach could lead to the discovery of more efficient importance sampling schemes for particular problems.
Throughout the paper we will work with factor graphs [14, 15]. Importantly, both directed (Bayesian networks) and undirected (Markov random fields) graphical models can be cast into the factor graph formalism ([16], ch. 8). A factor graph is a bipartite graph consisting of variable nodes, \(\mathcal {X}\), and factor nodes, \(\mathcal {A}\) (Fig. 1). There can only be edges between variables and factors. To every factor node, a, we associate a potential, f_a(·), which is a non-negative function of the neighbouring variables, x_a. The factor graph induces a probability measure over the variables
$$ P(x) \propto \prod\limits_{a \in \mathcal{A}}f_{a}\left(x_{a}\right). $$
A factor graph with two variables. The probability function is p(x_1, x_2) ∝ f_a(x_1) f_b(x_1, x_2). It is customary to shade observed variables and leave latent variables unshaded. To calculate the marginal probability of the observed variables, we need to sum out the latent variables. The sum-product algorithm does that efficiently, taking advantage of the conditional independence structure of the graph
If \(\sum _{\mathcal {X}}\prod _{a \in \mathcal {A}}f_{a}\left (x_{a}\right) = 1\) we will say that the factor graph is normalised, and the proportionality in (1) can be replaced with equality. The sum-product algorithm, the main algorithm for calculating likelihoods and marginals, operates on graphs free of undirected cycles and with finite state spaces [15]. We will therefore limit ourselves to cycle-free graphs with finite state spaces. As continuous distributions can be discretized, this does not present a major limitation.
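As a toy illustration of Eq. (1) and the example of Fig. 1, the marginal of the observed variable can be obtained by brute-force summation over the latent variable; the sum-product algorithm performs the same computation efficiently on larger graphs. The potentials below are arbitrary.

```r
# Two-variable factor graph (Fig. 1): p(x1, x2) proportional to f_a(x1) * f_b(x1, x2).
# Brute-force marginalisation of the latent variable x1 (toy potentials).
K  <- 3                                   # number of states per variable
fa <- c(1, 2, 1)                          # factor over x1
fb <- matrix(runif(K * K), K, K)          # factor over (x1, x2)

joint <- sweep(fb, 1, fa, `*`)            # unnormalised p(x1, x2)
Z     <- sum(joint)                       # normalising constant
p_x2  <- colSums(joint) / Z               # marginal of the observed variable x2
p_x2
```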
We define a score of an observation, x, as:
$$ S(x) = \sum\limits_{a\in\mathcal{A}} g_{a}\left(x_{a}\right), $$
where \(\left \{g_{a}(\cdot)\right \}_{a\in \mathcal {A}}\) is a collection of functions. Given a null model, \(\left (\mathcal {X}, \mathcal {A}, \left (f_{a}\right)_{a\in \mathcal {A}}\right)\), we are interested in determining how often extreme scores occur, that is we will address the problem of evaluating significance, P(S(x)>t).
It has been shown that even the simpler subclass of this problem where the variables are independent, i.e. where each variable node forms a connected component together with its neighbouring factor nodes, is NP-hard (see section Markov Chains), yet an exact solution can be obtained with a method known as convolution ([17], Ch. 7). The convolution approach may be generalized to the dependence scenarios that factor graphs can represent, but not without significant additional bookkeeping, rendering the method intractable even for problems of modest size. In this light we investigate a number of approximation methods, namely naive sampling, importance sampling, normal approximation and saddlepoint approximation.
In many real world problems, for instance in genomics, interesting findings often have significance, z = P(S(x) > t), in the order of 10^−6 or smaller. Whereas an absolute error of e.g. 10^−5 is more than good enough for a probability in the order of 10^−2, it is inadequate for a probability in the order of 10^−6. Generally, if \(\hat {z}\) denotes our estimate, then to establish the order of magnitude of z we need a small relative error, \(\left |z-\hat {z}\right |/{z}\), rather than a small absolute error, \(\left | z - \hat {z} \right |\).
A number of different scores can be employed; indeed, the examples will give an idea of the flexibility that Eq. (2) offers in devising the scoring scheme. Two choices deserve special attention. First, consider the case where we have a null model P_bg(·) and define S(x) = −log(P_bg(x)). With this choice of score a high value is equivalent to a small likelihood, indicating an observation that is unlikely under the model. We can write
$$ S(x) = -\log(P(x)) = \sum\limits_{a\in\mathcal{A}}-\log\left(\,f_{a}(x_{a})\right) $$
and it can be seen immediately that S(x) is indeed of the form (2) with g_a(x_a) = −log(f_a(x_a)).
Second, if instead we want to compare a background model to a foreground model, P_fg(·), we can define the score from the ratio of the probabilities under the two models:
$$ S(x) = \log\left(\frac{P^{fg}(x)}{P^{bg}(x)}\right) = \sum\limits_{a\in\mathcal{A}}\log\left(\frac{f^{fg}_{a}(x_{a})}{f^{bg}_{a}(x_{a})}\right). $$
Again this is on the form (2) with
$$ g_{a}\left(x_{a}\right)=\log\left(\frac{f^{fg}_{a}\left(x_{a}\right)}{f^{bg}_{a}\left(x_{a}\right)}\right). $$
Sampling based methods
In the following we introduce three different approximation methods for significance evaluation. At the end of the section we highlight similarities and differences.
The first method is importance sampling (IS) using the following class of proposal distributions parameterised by α:
$$ \tilde{P}_{\alpha}(x) \propto P(x)\exp\left(\alpha \sum\limits_{a\in \mathcal{A}}g_{a}\left(x_{a}\right)\right). $$
As α increases the corresponding proposal distributions will generate higher scores more frequently. Note that by taking α=0, IS is reduced to naive sampling. With the particular choice of \(\left \{g_{a}\right \}_{a \in \mathcal {A}}\) from Eq. (5), the proposal distributions have the form
$$ \tilde{P}_{\alpha}(x) \propto P_{bg}(x)^{1-\alpha}P_{fg}(x)^{\alpha}. $$
Here the parameter α gradually skews the proposal distribution from the background distribution (α=0) to the foreground distribution (α=1) and beyond.
Due to the factorisation properties, the proposal distributions generally have a particularly simple form
$$ \begin{aligned} \tilde{P}_{\alpha}(x) & \propto P(x)\exp\left(\alpha \sum\limits_{a\in \mathcal{A}}g_{a}\left(x_{a}\right)\right) \\ & = \prod_{\mathcal{A}} f_{a}\left(x_{a}\right) \exp\left(\alpha g_{a}\left(x_{a}\right)\right) \\ & = \prod_{\mathcal{A}} \tilde{f}_{a,\alpha}(x_{a}), \end{aligned} $$
where \(\tilde {f}_{a,\alpha } = f_{a}(x_{a}) \exp \left (\alpha g_{a}(x_{a})\right)\). This is again an (unnormalised) factor graph model with the same structure. The marginal distribution for each variable and for each set of variables neighbouring the same factor node can be found with the sum-product algorithm. Using a method reminiscent of the forward sampling method used for Bayesian networks ([1], p. 488-489), we can generate samples, x_1, x_2, …, x_N, from the proposal distribution (8) (Additional file 1: Figure S1). The weight of each sample is \(w_{i} = w(x_{i}) = {P(x_{i})}/{\tilde {P}_{\alpha }(x_{i})}\) and the score is s_i = S(x_i). The IS estimate of the significance is then,
$$P(S > t) \approx \frac 1N\sum_{i=1}^{N}w_{i}\mathbb{I}\left(s_{i} > t\right). $$
As with sampling in general the variance of the estimate is \(\mathcal {O}(1/N) \), yet the choice of α is critical to the performance of IS in practice. One natural choice of α is such that the mean score under the importance sampling distribution is equal to the score threshold, t, i.e. \({\mathbb {E}_{\alpha }\left [ S \right ] = t}\).
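Given scores and weights of the generated samples, the estimator is simply a weighted indicator mean; a minimal R sketch (the inputs are assumed to have been produced by the sampling procedure described above, not by any specific package function):

```r
# Importance sampling estimate of P(S > t) from N proposal samples.
# s : vector of sample scores s_i, w : vector of importance weights w_i.
is_tail <- function(s, w, t) {
  z_hat <- mean(w * (s > t))
  se    <- sd(w * (s > t)) / sqrt(length(s))   # Monte Carlo standard error
  c(estimate = z_hat, se = se)
}
```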
Using proposal distributions of the form (7) has been explored previously in sequence analysis; the same idea is applied to hidden Boltzmann models, a generalization of hidden Markov models, in [18]. This theory enables computation of significance statistics over sequences of arbitrary length, whereas we generalize to arbitrary structures. We will see later that this particular class of proposal distributions is in fact an example of exponential tilting ([19], pp. 129-131), an idea tightly linked to the method of saddlepoint approximation that we explore next. In [18] it is recommended to pick α using calibration, requiring sampling with multiple different values of α. Below we provide a method for choosing α that in many cases has the property of logarithmic efficiency (see Efficiency) and can be computed efficiently.
Analytical approximations
We now derive two analytical approximations. First the conceptually simpler normal approximation and second the saddlepoint approximation.
Consider a random variable S with density function f_S(s) and define the moment generating function (mgf), \(\varphi (\theta) = \mathbb {E}\left [e^{\theta S} \right ]\), and the cumulant generating function (cgf), κ(θ) = log φ(θ). The exponential family generated by S is defined by
$$ f(s;\theta) = \exp(\theta s - \kappa(\theta))f_{S}(s). $$
The probability measures in this family are called the exponentially tilted measures. The following important identities connect the mean and variance of distributions in this family to the cumulant generating function, see e.g. ([20], p. 6):
$$ \mathbb{E}_{\theta}\left[ S\right] =\kappa^{\prime} (\theta) \quad \text{and} \quad \mathbb{V}_{\theta}\left[ S\right] =\kappa^{\prime\prime} (\theta). $$
In a normal approximation the score distribution is approximated by a normal distribution having the same mean and variance. These quantities can be found using the cumulant generating function:
$$ m_{s} = \mathbb{E}\left[S(x)\right] = \kappa^{\prime}(0) $$
$$ v_{s} = \mathbb{V}\left[S(x)\right] = \kappa^{\prime\prime}(0). $$
The tail estimate is then:
$$ P(S > t) \approx 1 - \Phi\left(\frac{t - m_{s}}{\sqrt{v_{s}}}\right) $$
where Φ is the distribution function of the standard normal distribution. The normal approximation generally has quite poor performance in the tail of the distribution, as we will show later.
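A minimal sketch of the normal approximation, assuming the mean and variance of the score have already been obtained from the cumulant generating function (the numbers in the call are toy values):

```r
# Normal approximation to the tail P(S > t), with mean m_s = kappa'(0)
# and variance v_s = kappa''(0).
normal_tail <- function(t, m_s, v_s) {
  pnorm(t, mean = m_s, sd = sqrt(v_s), lower.tail = FALSE)
}
normal_tail(t = 30, m_s = 10, v_s = 25)
```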
Saddlepoint approximation (SA) is another analytical approximation, one that has better performance in the tails ([20], ch. 4; [21]). SA is typically used for independent variables or in weak dependence scenarios [22], but we have developed algorithms that allow its evaluation on general factor graphs. Along with introducing SA, which might be unfamiliar to many readers, we will also indicate where these algorithms are used.
SA proceeds by choosing the parameter, θ=θ(t), called the saddle-point, such that the mean under f(s;θ(t)) is t, that is
$$ \mathbb{E}_{\theta(t)}\left[ S\right] = \kappa^{\prime}(\theta(t)) = t. $$
We want to evaluate the tail probability
$$ \begin{array}{ll} P(S > t) &= \int^{\infty}_{t}f_{S}(s) ds \\ & = \int^{\infty}_{t} \exp\Big(-\theta(t)s+\kappa(\theta(t))\Big)f(s;\theta(t)) ds. \end{array} $$
Now approximate f(s;θ(t)) with a normal distribution having the same mean, t = κ′(θ(t)), and variance, v ≡ κ″(θ(t)). Then we have
$$\begin{array}{*{20}l} P(S > t) &\approx \int^{\infty}_{t} \exp(-\theta(t)s+\kappa(\theta(t)))\\ &\quad\times\frac{1}{\sqrt{2\pi v}}\exp(-\frac{1}{2v}(s-t)^{2}) ds \\ &= \varphi(\theta(t)) \int^{\infty}_{t} \frac{1}{\sqrt{2\pi v}}\exp\left(-\frac{(s-t+\theta(t)v)^{2}}{2v} \right.\\ &\quad+\left. \frac{1}{2}\theta(t)^{2}v-\theta(t)t\right) ds \\ &= \varphi(\theta(t))\exp(-t\theta(t))\exp\left(\frac{\theta(t)^{2}v}{2}\right)\\ &\quad\times\left[ 1-\Phi(\theta(t)\sqrt{v})\right]. \end{array} $$
In order to obtain the saddlepoint approximation we need to solve (14) and compute κ″(θ(t)). It turns out that both κ′ and κ″ can be calculated exactly with extensions of the standard message passing algorithm (Additional file 1: Figure S9). We solve (14) numerically using Newton-Raphson and then proceed to calculate κ″(θ(t)).
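The following R sketch implements the approximation above for a user-supplied cumulant generating function and its first two derivatives; the Newton-Raphson starting point and iteration count are arbitrary choices for illustration, not part of the method description, and the sketch assumes t lies above the mean so that θ > 0.

```r
# Saddlepoint approximation of P(S > t), given closures for the cumulant
# generating function kappa and its first two derivatives kappa1, kappa2.
saddlepoint_tail <- function(t, kappa, kappa1, kappa2, theta0 = 0, iter = 50) {
  theta <- theta0
  for (i in seq_len(iter)) {                  # Newton-Raphson for kappa'(theta) = t
    theta <- theta - (kappa1(theta) - t) / kappa2(theta)
  }
  v <- kappa2(theta)
  # phi(theta) * exp(-t*theta) * exp(theta^2 * v / 2) * (1 - Phi(theta * sqrt(v)))
  exp(kappa(theta) - t * theta + theta^2 * v / 2) *
    pnorm(theta * sqrt(v), lower.tail = FALSE)
}
```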
Importance sampling vs. saddlepoint approximation
Importance sampling and saddlepoint approximation are more similar than they appear at first glance. Looking again at (9), f(s;0) is the density function of S(x) with x ∼ P; similarly, f(s;θ) is the density function of S(x) with x being distributed according to
$$ \begin{array}{ll} f(x;\theta) &= \exp(\theta s(x) - \kappa(\theta))P_{bg}(x) \\ &= \varphi(\theta)^{-1}\exp\left(\theta \sum\limits_{a\in \mathcal{A}}g_{a}(x_{a}) \right)P(x). \\ \end{array} $$
We see that we recover (6) and that importance sampling and saddlepoint approximation are essentially just two strategies for evaluating (15): either sampling f(s;θ) indirectly through f(x;θ), or approximating f(s;θ) by a normal distribution. The above analysis also suggests that a good choice of α for importance sampling around t is the saddlepoint, κ′(α) ≈ t. We will call importance sampling using this strategy for choosing α saddlepoint guided importance sampling (SG-IS).
Poisson-binomial
As a first example we discuss the Poisson-binomial distribution. The Poisson-binomial distribution arises as the number of successes in N independent but not necessarily identically distributed Bernoulli trials. Let p_1, …, p_N be a set of probabilities and \(\{Y_{n}\}_{n=1}^{N}\) be independent with Y_n ∼ Bernoulli(p_n). Then \({S = \sum _{n=1}^{N}Y_{n}}\) is Poisson-binomial distributed. In the case where p_n = p the Poisson-binomial reduces to the regular binomial distribution.
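Because the Y_n are independent, the cumulant generating function of S and its first two derivatives are available in closed form; a minimal sketch that can be combined with the generic saddlepoint sketch given earlier (the Beta(1,100) probabilities mirror the example of Fig. 2a, but the threshold in the commented call is arbitrary):

```r
# Cumulant generating function of the Poisson-binomial score S = sum(Y_n),
# Y_n ~ Bernoulli(p_n), and its first two derivatives.
poibin_cgf <- function(p) {
  list(
    kappa  = function(th) sum(log(1 - p + p * exp(th))),
    kappa1 = function(th) sum(p * exp(th) / (1 - p + p * exp(th))),
    kappa2 = function(th) {
      q <- p * exp(th) / (1 - p + p * exp(th))
      sum(q * (1 - q))
    }
  )
}

# Hypothetical example: N = 1000 success probabilities drawn from Beta(1, 100)
set.seed(1)
p   <- rbeta(1000, 1, 100)
cgf <- poibin_cgf(p)
# saddlepoint_tail(t = 30, cgf$kappa, cgf$kappa1, cgf$kappa2)  # tail estimate
```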
The model has seen widespread use in a variety of fields, including genomics, forensics, psychometrics and ecology [7, 23–25]. As an example, Melton et al. [7] consider regional somatic mutation status in cancer samples. A logistic regression model is used to determine the mutation rate at each locus for each sample. They then identify cancer drivers by testing whether a given genomic region has a surprisingly high number of mutated samples.
We compute the tail of a Poisson-binomial using SA and using a fast Fourier transform based method (DFT-CF) [26] as implemented in the R-package poibin (Fig. 2a). In the simple case with p_i = p we also compare with the exact binomial probabilities (Additional file 1: Figure S10). All comparisons are qualitatively alike: the saddlepoint and DFT-CF methods give identical results for most of the tail. The saddlepoint approximation is not suited for calculating large (non-significant) p-values (>0.1). On the other hand, the DFT-CF method experiences numerical underflow for small p-values (<10^−13). Large p-values are typically not of interest and can usually be computed efficiently by other means.
a The tail of a Poisson-binomial distribution where each p_i is drawn independently from a Beta(1,100) and N=1000. The saddlepoint approximation tracks the exact distribution perfectly. The Poisson-binomial is computed with the poibin R-package. b A comparison of the computation time for the two algorithms. The DFT-CF method has quadratic run-time complexity whereas the saddlepoint method has linear run-time complexity
An additional argument for preferring saddlepoint approximation over DFT-CF is the run-time complexity. Although DFT-CF uses the fast Fourier transform, the required preprocessing step makes it an \(\mathcal {O}(N^{2})\) algorithm. In contrast, the saddlepoint approximation scales linearly with N, having complexity \(\mathcal {O}(N)\) (Fig. 2b, Additional file 1: Table S1).
For many applications it is attractive to assign a different score, s_n, to each event, Y_n, leading to a new score of the form \(S = \sum _{n=1}^{N}s_{n}Y_{n}\). Using a different score, and thus a different test statistic, can increase the statistical power of the test. The DFT-CF method does not readily generalize to such scoring schemes, whereas this is immediately achieved with SA and SG-IS.
PWMs
Our next two examples revolve around sequence motifs. We consider analysis of motifs with both the classical position weighted matrix model and a more recent Bayesian motif model.
Consider a simplistic DNA model, where DNA is a sequence of letters, x_1⋯x_L, from a four-letter alphabet. The x_i's are independent and identically distributed and we let p_j = P(x_i = j) for j ∈ {A,C,G,T}. A motif is (for our purpose) a fixed-length subsequence of DNA that exhibits a specific pattern. This pattern can be described by a probability distribution (f_ji)_{j∈{A,C,G,T}} at each position i ∈ {1,…,N} and is typically represented in a position weighted matrix (PWM), which is a 4×N matrix, M, where M(j,i) = f_ji.
If we think of the DNA model as the background model and the motif as the foreground model, the log score for a subsequence x_m⋯x_{m+N−1} of length N is simply:
$$S\left(x_{m}\cdots x_{m+N-1}\right) = \sum\limits_{i=0}^{N-1} \log \frac{f_{{ix}_{m+i}}}{p_{x_{m+i}}}. $$
This is a sum of independent random variables, and the motif model can be encoded in a rather simple factor graph, where each variable has its own potential (Fig. 3b). The significance can be evaluated using discretization and dynamic programming. These computations can be accelerated using heuristics such as branch-and-bound; still, the problem remains NP-hard [5, 6].
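A minimal sketch of the log-odds match score for a single window; the PWM and background probabilities below are toy values, not an actual JASPAR motif.

```r
# Log-odds match score of a DNA window against a PWM:
# sum over positions of log(f_{i, x_i} / p_{x_i}).
score_window <- function(window, pwm, bg) {
  # window: character vector of bases, length ncol(pwm)
  # pwm   : 4 x N matrix of foreground probabilities, rows named A, C, G, T
  # bg    : named vector of background probabilities
  idx <- match(window, rownames(pwm))
  sum(log(pwm[cbind(idx, seq_along(window))] / bg[window]))
}

# Toy 3-position motif (columns sum to 1) and uniform background
pwm <- matrix(c(0.70, 0.10, 0.10, 0.10,
                0.10, 0.10, 0.70, 0.10,
                0.25, 0.25, 0.25, 0.25),
              nrow = 4, dimnames = list(c("A", "C", "G", "T"), NULL))
bg  <- c(A = 0.25, C = 0.25, G = 0.25, T = 0.25)
score_window(c("A", "G", "T"), pwm, bg)
```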
a Sequence logo for the CTCF binding motif. The larger the letters, the higher the fold-enrichment compared to the background distribution. b The PWM model represented as a factor graph. Note that since the nucleotides are considered independent of one another, no variable nodes are connected. c The approximations to the tail obtained from IS, SA and the method from the TFMPvalue package. d The relative difference between significance estimates from TFMPvalue and IS and SA, respectively, for all JASPAR Vertebrate motifs. The differences for the CTCF motif are indicated with yellow stars
As an illustration we analyse 1080 motifs from the JASPAR database [27]. Sequence motifs are often represented with so-called sequence logos that show the log2 fold enrichment of a given base relative to the background (Fig. 3 a).
We calculate the significance over a range of scores using both SA and IS and compare with the estimates of the significance obtained from the TFMPvalue software package [5]. Here we show the estimates for the CTCF motif (Fig. 3c). As a means of evaluating the difference between the approximations, we compute the relative difference at a number of quantiles and take the average of their absolute values. By this measure all three methods agree well: IS shows relative differences in the order of 10% with 1000 samples and without tuning α. The relative differences for SA decrease with motif length and are typically less than 10% for motifs longer than 10 basepairs. The motifs where the saddlepoint approximation performs poorly have a strong preference for a single base at each site. For these motifs the score matrix has similar contributions at each site, causing the score distribution to have a discrete nature that is not well approximated by SA (Fig. 3d and Additional file 1: Figure S13).
The Poisson-binomial and the PWM models can be seen as special cases of a more general class of models with variables taking a discrete set of values. In the supplement we state a theorem giving conditions under which the saddlepoint approximation has uniform relative error \(\mathcal {O}(1/N)\) for this class of models (see Additional file 1: Section xiv). We then give sufficient conditions for both the Poisson-binomial and the PWM model. Although this result concerns the limiting behaviour of the approximation, it has been demonstrated that the saddlepoint approximation has remarkably small error even for small N [21].
TFMPvalue has two modules for p-value computation. The first calculates the exact p-value and the other rounds the score-matrix to increase computational speed. The exact p-value computation module in the TFM software has exponential computational time complexity (Additional file 1: Figure S11) therefore we only compare with the approximate p-value calculation from TFM.
The approximate TFMPvalue computation is \(\mathcal {O}\left (N^{2}\right)\), but faster in practice due to the branch-and-bound heuristic. Again computing saddlepoint approximation is roughly \(\mathcal {O}(N)\). For shorter motifs this does not make any practical difference, but for longer motifs (>20 bp) the difference can be sizeable depending on the exact problem and the desired level of accuracy (Additional file 1: Figure S12). In the next section we will show that the SA and IS methods can be applied to richer motif models, where the convolution methods can not easily be adapted.
Higher-order Markov chains
First- and higher-order Markov chains are another application domain of SA and SG-IS. In a recent paper, Siebert et al. [8] argue convincingly for replacing PWM motif models with higher-order Markov chains using a Bayesian prior (BaMMs).
A PWM model assumes that each base in a motif is independent. In contrast Markov chains are able to capture context dependent nucleotide frequencies at the expense of more parameters. Siebert et al. overcome the challenge of training the parameter rich models by employing a Bayesian model, where the prior shrinks the high-order parameters towards their lower-order counterparts for contexts rarely encountered in the training data.
BaMMs outperformed PWMs in discriminating ChIP-seq peak sequences from simulated background sequences of the same length and tri-mer composition. Including flanking regions widens the gap between BaMMs and PWMs in terms of predictive power. This is possibly explained by two modes of DNA-protein binding specificity: base readout and shape readout. In base readout the protein recognizes the DNA sequence. This form of binding specificity is dominant in the core motif and is reasonably well captured by PWMs. In shape readout the protein recognizes the shape of the DNA; the DNA shape is in turn determined by motifs showing high neighbour correlation [28].
Due to the large-scale nature of motif-detection accurate p-value evaluation is important. As PWMs are Markov chains of order zero, we are again dealing with an NP-hard problem, making it natural to look for approximate methods.
We obtain a BaMM for the CTCF transcription factor binding motif in MCF-7 cell lines (see Additional file 1: Section x). Second- and higher-order Markov chains contain cycles and are therefore not immediately suited for the framework. However, by compounding variables an n-th order Markov chain can be represented as a first-order Markov chain (Fig. 4a).
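A sketch of the compounding construction for a second-order chain over the DNA alphabet: pair-states (x_{i-1}, x_i) define a first-order transition matrix in which a move from (a,b) to (b,c) has probability P(c | a,b). The conditional probabilities used here are uniform placeholders, not the trained BaMM parameters.

```r
# Compound a second-order Markov chain over {A,C,G,T} into a first-order
# chain on pair-states. trans2[a, b, c] = P(x_i = c | x_{i-2} = a, x_{i-1} = b);
# uniform placeholder probabilities are used for illustration.
bases  <- c("A", "C", "G", "T")
trans2 <- array(1/4, dim = c(4, 4, 4), dimnames = list(bases, bases, bases))

pairs <- as.matrix(expand.grid(prev = bases, cur = bases, stringsAsFactors = FALSE))
P1 <- matrix(0, nrow = 16, ncol = 16,
             dimnames = list(paste0(pairs[, 1], pairs[, 2]),
                             paste0(pairs[, 1], pairs[, 2])))
for (i in seq_len(16)) {
  for (j in seq_len(16)) {
    # a move (a,b) -> (b,c) is allowed only if the pair-states overlap
    if (pairs[i, 2] == pairs[j, 1]) {
      P1[i, j] <- trans2[pairs[i, 1], pairs[i, 2], pairs[j, 2]]
    }
  }
}
rowSums(P1)   # each row sums to 1, confirming a valid first-order chain
```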
a 2nd- and higher-order Markov chains contain cycles, but a higher-order Markov chain can be viewed as a first-order Markov chain by compounding variables; we thereby obtain a tree-structured graph. b Simulating 10^6 sequences of the same length as our motif, we estimated the significance and the corresponding 95% confidence interval, shown in shaded grey. We compare this with SA and IS using α=0.5 and 10^4 samples. c We simulated 10^4 sequences of length 200 bp and calculated the maximum motif match score over all offsets; the 95% confidence interval is shown in shaded grey. To calculate the significance of this maximum, we used the calculation from a single match and employed a Poisson approximation
First, the significance of the log-odds score of a single match is determined using SA and IS. Second, the accuracy of the approximations is verified using deep naive sampling, generating 10^6 background sequences of the same length as the motif (16 bp) with a homogeneous second-order Markov model. Comparing the approximations to the estimates obtained from deep naive sampling, we see that they track each other perfectly (Fig. 4b).
Another classification task of interest is identifying longer sequences containing the motif. We simulate 10^4 sequences of length 200 bp, again with a homogeneous second-order Markov model. Within a 200 bp sequence a motif of length k can start at 200−k+1 positions (offsets). We consider the maximum of the log-odds scores obtained from evaluating a motif match at all offsets. To calculate the significance of this maximum we use the estimate of the significance for a single match and employ a Poisson approximation [29] (see Additional file 1: Section x). The Poisson approximation is typically valid if the sequence we search is sufficiently long and the motif is not of low complexity (i.e. not highly repetitive). Again we observe that the SA and IS methods combined with the Poisson approximation provide a good approximation of the statistical significance (Fig. 4c).
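The exact form of the Poisson approximation used in [29] is not reproduced here; one common form treats the number of offsets exceeding the threshold as Poisson with mean equal to the number of offsets times the single-match tail probability, which is what the following sketch assumes.

```r
# Poisson approximation (one common form) for the maximum match score over
# all offsets: P(max > t) ~ 1 - exp(-L * p_single), where L is the number of
# offsets and p_single = P(S > t) for a single match.
pval_max_score <- function(p_single, n_offsets) {
  1 - exp(-n_offsets * p_single)
}
pval_max_score(p_single = 1e-4, n_offsets = 200 - 16 + 1)  # 16 bp motif in a 200 bp sequence
```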
Phylogenetic trees
Our final example is derived from molecular evolution. A phylogenetic tree represents the relationships among species, with each leaf representing a species and internal nodes representing common ancestors.
Evolutionary conservation manifests itself as a slower than normal substitution rate. At the population level, purifying selection maintains phenotypic function by constraining the evolutionary process, effectively preventing some mutations from being fixed as substitutions. Evolutionary conservation therefore reflects the presence of functional constraints. Using a fixed phylogenetic tree and a model for nucleotide substitution we can calculate the expected number of substitutions along the branches of the tree given the present-day sequences. We can then evaluate whether this number is significantly lower than expected. This is conceptually similar to the widely used phyloP score, although that method is more sophisticated, modelling and accounting for clade-specific mutation rates and indels [9].
We perform our analysis on a phylogenetic tree with 11 leaves, corresponding to present-day sequences (Fig. 5a; for a detailed description of the phylogenetic tree see Additional file 1: Section xii). A phylogenetic tree model can be cast into a factor graph where leaf and internal nodes are variable nodes and branches are factor nodes (Fig. 5b). Assuming the Jukes-Cantor substitution model, we can calculate the transition probabilities and the expected number of substitutions conditional on the end points of each branch. These are exactly the matrices needed in order to compute the expected number of substitutions conditional on the present-day sequences. Note that we are not limited to the Jukes-Cantor model; these matrices can be computed for any substitution rate matrix [30].
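For reference, the Jukes-Cantor transition probabilities for a branch of length d (expected substitutions per site) have a simple closed form; a minimal sketch of the per-branch factor potentials (the conditional expected-substitution bookkeeping of [30] is not reproduced here):

```r
# Jukes-Cantor transition probability matrix for a branch of length d
# (expected substitutions per site); used as the factor potential on a branch.
jc_transition <- function(d) {
  same <- 1/4 + 3/4 * exp(-4 * d / 3)
  diff <- 1/4 - 1/4 * exp(-4 * d / 3)
  P <- matrix(diff, 4, 4, dimnames = list(c("A", "C", "G", "T"),
                                          c("A", "C", "G", "T")))
  diag(P) <- same
  P
}
jc_transition(0.1)   # rows sum to 1
```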
a A phylogenetic tree with 11 present-day sequences. A single alignment column with a high degree of identity across sequences indicates evolutionary conservation. b A phylogenetic tree can easily be converted to a factor graph, shown here for a phylogenetic tree with only 3 species. Note that the common ancestors are typically not available for sequencing and their sequences are unknown; the internal variables are therefore unshaded, indicating hidden variables. c The distribution of the conditional expectation of the number of substitutions over the whole phylogenetic tree, given the present-day sequences. The distribution is obtained by simulating 10^5 times. d We use IS to estimate the tail of the distribution by sampling n=1,000 scores. Two different α-values were used: 0, corresponding to naive sampling, and 1.5. Note that naive sampling has much wider confidence bands in the tail compared to importance sampling
The distribution of the conditional expectation of the number of substitutions is obtained by simulating 10^5 alignment columns (Fig. 5c). As we are testing for evolutionary conservation, a low number of expected substitutions is significant; testing for acceleration is, however, easily done by instead regarding a high number of expected substitutions as significant. While the actual number of substitutions is evidently an integer, the conditional expectation can be any non-negative number. Note also that even for complete sequence identity the expected number of substitutions is non-zero, as multiple substitutions at the same site can annul each other. Observing fewer than 14 expected substitutions is a moderately rare event, but using IS we can get a good handle on these probabilities (Fig. 5d).
In the present example the chosen score factorizes neatly according to (2), but this would not have been the case had we chosen a likelihood-based score. As opposed to the previous examples, this example contains latent variables, and the log likelihood therefore does not factorize as in (3). This is not a problem for the IS procedure, where we can still simulate data from the full data distribution and then calculate the likelihood. For the SA method there is no immediate solution.
In the following two sections we address the question of efficiency; basically establishing and evaluating appropriate measures of the quality of our approximations, both in terms of accuracy and computational cost.
Normal and saddlepoint approximations
The error bounds typically given for the normal and saddlepoint approximations, are derived for sums of i.i.d. variables or Markov chains [21, 22]. We review a few of these results. As i.i.d. variables and Markov chains are special cases in our setup, they will inform us on the behaviour we should expect in the general case.
For the normal approximation the Berry-Esseen theorem [31] provides an upper bound on the absolute error. Consider a sequence X_1, X_2, …, X_N of i.i.d. variables having mean μ and variance σ². Set \(S=\sum _{i=1}^{N} X_{i}\); then
$$\sup\limits_{x \in \mathbb{R}}\left| P(S < t) - \Phi\left(\frac{t-N\mu}{\sigma\sqrt{N}}\right) \right| = \mathcal{O}\left(\frac{1}{\sqrt{N}}\right). $$
However the relative error is not bounded, which in most cases can be ascertained by considering a t of order N.
On the other hand, the saddlepoint approximation has relative error of order \(\mathcal {O}(1/N)\) [20]. This bound holds for homogeneous Markov models and, under mild regularity conditions, for the Poisson-binomial and PWM models (see Additional file 1: Section xiv). As opposed to the normal approximation, the saddlepoint approximation recognizes bounded variables in the sense that (14) has no solution if t is outside the range of S.
To study the behaviour of the saddlepoint approximation in the general case, we conducted a simulation study. We investigated how the complexity of the graph, the degree of independence between the contributions from each factor, and the size of the graph affect the quality of the approximation. The graphs were chosen as balanced trees (Additional file 1: Figure S2) and such that the contribution to the sum (2) from each factor had the same marginal distribution. The complexity is adjusted by the degree of the variable nodes in the tree. The degree of independence is measured by the variance ratio, the ratio of the variance of the score to the sum of variances from each factor (i.e. the variance we would have seen if each contribution were independent)
$$ VR = \frac{\mathbb{V}\left[ \sum_{a \in \mathcal{A}}S_{a}(X_{a}) \right]}{\sum_{a \in \mathcal{A}}\mathbb{V}\left[ S_{a}(X_{a}) \right]}. $$
For a more detailed description see Additional file 1: Section ii.
First note that as we go to smaller percentiles the errors of the saddlepoint approximation remain stable, whereas those of the normal approximation increase. This parallels the situation for i.i.d. variables (Fig. 6b, Additional file 1: Figure S3). As expected, the relative error decreases with the size of the graph (Fig. 6a); note however that the errors do not seem to converge to zero. We believe this is explained by the discrete nature of the scores: there exists a correction factor to the saddlepoint approximation in the case where the variables take values on a lattice:
$$ K(\theta, \alpha) = \frac{\alpha\left|\theta\right|}{1-\exp\left(-\alpha\left|\theta\right|\right)}, $$
a We investigate the quality of the approximation as a function of graph size, conditioned on the degree of independence between variables as measured by the variance ratio (18). Here we found the 1%-quantile in a Markov chain using importance sampling. We then found the saddlepoint approximation of the tail probability at this particular point and plot the relative error as a function of the length of the Markov chain. For details see Additional file 1: Section ii. b The relative error measured at different quantiles for both SA and the normal approximation, under the same range of conditions as above. We see that while the errors remain stable for SA, they increase for the normal approximation as we move to smaller percentiles
where α is the distance between two consecutive points of the lattice. Generally, log-odds scores will not take values on a lattice; still, as the correction factor is larger than 1, it suggests that the tail probability is underestimated, and more so for large θ, which explains the only near convergence to zero. It is further observed that the convergence is slower for more complex graphs, i.e. graphs having many nodes with high degree, and that there appears to be an optimal amount of correlation between the contributions from each factor in the graph (Additional file 1: Figures S4 and S5).
Both naive sampling and importance sampling give unbiased estimates. We are therefore concerned with the variance of our estimate and not the bias. Statements about the variance are typically phrased in an asymptotic setup. Let {P_n} denote a family of probability distributions, where P_n is derived from a factor graph with n factor nodes. Assume also that the contribution of each factor to (2) is identically distributed with mean μ. Consider now z_n = P_n(S > n(μ+ε)) and let Z_n be the estimate of z_n obtained from a single sample. We will say that the class of estimators, \(\{Z_{n}\}_{n=1}^{\infty }\), has bounded relative error if
$$\limsup_{n \rightarrow \infty} \frac{\mathbb{V}(Z_{n})}{z_{n}^{2}} < \infty. $$
For technical reasons one often considers the slightly weaker criterion of logarithmic efficiency namely,
$$\limsup\limits_{n \rightarrow \infty} \frac{\mathbb{V}(Z_{n})}{z_{n}^{2-\epsilon}} = 0, \qquad \forall \epsilon > 0. $$
The relationship between naive and importance sampling resembles that between the normal and saddlepoint approximations: in the case where the contributions to (2) are also independent, SG-IS has logarithmic efficiency, as proven in [19]. For naive sampling the absolute error tends to zero, but the relative error tends to infinity. We have strong reasons to believe that if certain regularity conditions on the correlation between neighbouring variables are imposed, logarithmic efficiency also holds in the more general case of factor graphs with bounded degree. We are currently working on a proof; this is, however, beyond the scope of the current work.
Computational speed
Making a direct comparison of the computational speed of evaluating the saddlepoint and normal approximations on one hand and doing importance sampling on the other is not meaningful, as the accuracy of the importance sampling depends both on the number of samples and the choice of tuning parameter α.
Furthermore, the three methods behave differently when it comes to evaluating a range of points rather than a single point. The additional computing time for importance sampling is negligible as long as the points have roughly similar significance, so that a single batch of samples generated with one α value can be used. Similarly, for the normal approximation the mean and the variance of the score need to be calculated only once, and the computational cost of evaluating the normal distribution is again negligible. The saddlepoint approximation does get slightly faster per evaluation with consecutive evaluations, as the Newton-Raphson procedure can be initiated with the previous saddlepoint; still, there is a linear cost in the number of evaluations (Additional file 1: Figures S7 and S8).
All three methods scale linearly with the number of nodes in the graph (Additional file 1: Figure S6). This suggests that we can formulate a rule-of-thumb regarding the number of points we need to evaluate before importance sampling becomes faster than SA. The benchmarks show that one evaluation of the saddlepoint approximation takes about the same time as generating 40 importance samples.
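As a rough illustration of that rule of thumb (the 40-samples-per-evaluation figure is the benchmark number quoted above, and the importance sampling budget below is an assumed example value, not a recommendation), one could compare the two total costs as follows:

```python
def cheaper_method(n_points, is_samples_per_point,
                   is_samples_per_sa_evaluation=40):
    # Benchmark rule of thumb: one saddlepoint evaluation costs roughly as
    # much as generating 40 importance samples, and a single batch of
    # importance samples can be reused across points of similar significance.
    sa_cost = n_points * is_samples_per_sa_evaluation  # in "sample units"
    is_cost = is_samples_per_point                      # one shared batch
    return "importance sampling" if is_cost < sa_cost else "saddlepoint"

print(cheaper_method(n_points=1, is_samples_per_point=10_000))     # saddlepoint
print(cheaper_method(n_points=1000, is_samples_per_point=10_000))  # importance sampling
```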
In conclusion, the saddlepoint approximation is accurate and relatively fast for a single evaluation. If we have to do multiple evaluations, importance sampling is preferable. If speed is the main concern and we need to evaluate a large range of scores, we can use the normal approximation to obtain rough estimates.
The saddlepoint approximation was originally conceived by Daniels as far back as 1954 [11]. Although it has found uses in some areas of biomedical science, e.g. survival analysis [32], its intimidating look may have prevented widespread use in genomics, where we believe there is ample opportunity to apply it. The R-package we have developed contains methods not only for building and training models, but also for applying the saddlepoint approximation and importance sampling algorithms. We thereby hope to lower the barriers to the use of saddlepoint approximations.
SG-IS was derived by noting similarities between importance sampling and the saddlepoint approximation. The literature contains proofs that this importance sampling scheme is, in a certain respect, optimal [19]. Taking this more principled approach to designing importance sampling distributions is likely to lead more quickly to effective importance sampling schemes.
A direction of research that can be pursued further is how to deal with latent variables: as briefly discussed in the context of phylogenetic trees, the log-odds score does not factorize when we have latent variables. It is therefore not possible to calculate the moment generating function and its derivatives efficiently using the algorithms we use here, which prevents the use of the saddlepoint approximation. Importance sampling will, however, still work, simply by using the tilted distribution on the full data distribution.
The methods and algorithms have been phrased in terms of the factor graph formalism throughout the paper. As factor graphs can capture both directed (Bayesian network) and undirected (Markov random field) models, the theory applies to both of them. Bayesian networks in particular have gained much popularity as a tool for integrating the vast array of molecular profiling experiments. The general framework of factor graphs is a powerful tool to analyze such data.
In the current paper we have presented saddlepoint approximation and importance sampling based methods for evaluating significance in factor graphs. Efficient algorithms were developed for computing the first and second order statistics required to derive the saddlepoint approximation, making it feasible for large graphs. We also provide an adaptation of the forward-sampling algorithm tailored to factor graphs, needed for importance sampling.
We further reviewed the theoretical properties of the two methods. As most results are derived for independent, identically distributed variables, a simulation study was performed to confirm that many of the properties still hold in a range of dependence scenarios. Further, we compared the computational speed of the methods to give rough guidelines for deciding between the two.
We demonstrated the utility of the methods by considering four different bioinformatics applications. The examples were chosen to show that current models can make use of the methods, but also to point forward to new applications. First we looked at the Poisson-binomial model; despite, or perhaps because of, being the simplest of the models, it has numerous uses. At the same time, it appears that the algorithms currently used for analysing the Poisson-binomial model are not state of the art. For exact computation, an adaptation of the algorithm implemented in the TFMPvalue R-package [5] is likely to outperform the DFT-CF method. We showed that our approximation methods were able to compute the significance to a high accuracy.
Two motif examples were given, both to show that the problem we are solving is NP-hard and to provide useful methods to the motif-analysis field. These methods are especially likely to prove valuable for long and complex motifs such as nucleosome binding motifs.
The phylogenetic example was of a more complex nature than the other examples; it was intended to highlight the flexibility of the methods rather than to compete with existing approaches. Yet it is qualitatively similar to the phyloP score. With the availability of massive multiple alignments, such as the UCSC 100-way vertebrate alignment and the coming results of the Genome 10K project [33], there should be ample opportunity to apply these methods.
BaMM: Bayesian Markov model
DFT-CF: Discrete Fourier transform of characteristic function
EM: Expectation maximization
PWM: Position-weight matrix
SG-IS: Saddlepoint guided importance sampling
VR: Variance ratio
We would like to thank Johanna Bertl, Qianyun Guo and Malene Juul Rasmussen for their valuable comments and feedback on this manuscript.
This work was supported by the Sapere Aude program from the Danish Councils for Independent Research, Medical Sciences (FSS) and by a Danish Strategic Research Council grant to Center for Computational and Applied Transcriptomics (COAT). The funding bodies had no role in the design of the study, the collection, analysis, and interpretation of data or in writing the manuscript.
The datasets analysed during the current study are available from the UCSC Genome Browser, http://hgdownload.cse.ucsc.edu/goldenPath/hg19/encodeDCC/wgEncodeAwgTfbsUniform/, and the JASPAR database, http://jaspar.genereg.net.
JSP and TM conceived the project with critical input from AH and JLJ. TM and JSP developed the dgRaph software. JLJ conceived the saddlepoint approximation and assisted in statistical analysis. AH designed and supplied algorithms for the phylogenetic example. TM carried out computational analysis with input from AH, JLJ and JSP. TM wrote the initial draft of the manuscript. AH was responsible for a critical revision of the initial draft. TM and JLJ conceived the algorithmic contributions. All authors read and approved the manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Additional file 1 Supplementary material. This file contains extended method descriptions and supplementary figures. Additionally, there is a vignette accompanying the dgRaph R-package, as well as vignettes covering each of the models used in the results section. (PDF 699 kb)
Department of Molecular Medicine, Aarhus University, Palle Juul-Jensens Boulevard 99, Aarhus, Denmark
Bioinformatics Research Center, Aarhus University, C.F. Møllers Allé 8, Aarhus, Denmark
Department of Mathematics, Aarhus University, Ny Munkegade 118, Aarhus, Denmark
1. Koller D, Friedman N. Probabilistic Graphical Models: Principles and Techniques. Cambridge: MIT Press; 2009.
2. Lauritzen SL, Sheehan NA. Graphical models for genetic analyses. Stat Sci. 2003; 18(4):489-514.
3. Ni Y, Stingo FC, Baladandayuthapani V. Integrative Bayesian network analysis of genomic data. Cancer Informat. 2014; 13(Suppl 2):39.
4. Gronau I, Arbiza L, Mohammed J, Siepel A. Inference of natural selection from interspersed genomic elements based on polymorphism and divergence. Mol Biol Evol. 2013; 30(5).
5. Touzet H, Varré J-S. Efficient and accurate p-value computation for Position Weight Matrices. Algorithms Mol Biol. 2007; 2(1):15.
6. Zhang J, Jiang B, Li M, Tromp J, Zhang X, Zhang MQ. Computing exact p-values for DNA motifs. Bioinformatics. 2007; 23(5):531-7.
7. Melton C, Reuter JA, Spacek DV, Snyder M. Recurrent somatic mutations in regulatory regions of human cancer genomes. Nat Genet. 2015; 47(7):710-6.
8. Siebert M, Söding J. Bayesian Markov models consistently outperform PWMs at predicting motifs in nucleotide sequences. Nucleic Acids Res. 2016; 44(13):6055-69.
9. Pollard KS, Hubisz MJ, Rosenbloom KR, Siepel A. Detection of nonneutral substitution rates on mammalian phylogenies. Genome Res. 2010; 20(1):110-21.
10. Eddelbuettel D, François R, Allaire J, Chambers J, Bates D, Ushey K. Rcpp: Seamless R and C++ integration. J Stat Softw. 2011; 40(8):1-18.
11. Daniels HE. Saddlepoint approximations in statistics. Ann Math Stat. 1954; 25(4):631-50.
12. Stojmirović A, Yu YK. Robust and accurate data enrichment statistics via distribution function of sum of weights. Bioinformatics. 2010; 26(21):2752-9.
13. Hyrien O, Chen R, Mayer-Pröschel M, Noble M. Saddlepoint approximations to the moments of multitype age-dependent branching processes, with applications. Biometrics. 2010; 66(2):567-77.
14. Frey BJ. Graphical Models for Machine Learning and Digital Communication. Cambridge: MIT Press; 1998.
15. Kschischang FR, Frey BJ, Loeliger HA. Factor graphs and the sum-product algorithm. IEEE Trans Inf Theory. 2001; 47(2):498-519.
16. Bishop CM. Pattern Recognition and Machine Learning (Information Science and Statistics). New York: Springer; 2006.
17. Grinstead CM, Snell JL. Introduction to Probability, 2nd Edition. Providence: Am Math Soc; 2012.
18. Newberg LA. Error statistics of hidden Markov model and hidden Boltzmann model results. BMC Bioinforma. 2009; 10(1):212.
19. Asmussen S, Glynn PW. Stochastic Simulation: Algorithms and Analysis. New York: Springer; 2007.
20. Barndorff-Nielsen OE, Cox DR. Asymptotic Techniques for Use in Statistics. Boca Raton: Chapman and Hall; 1989.
21. Jensen JL. Saddlepoint Approximations. Oxford: Oxford University Press; 1995.
22. Jensen JL. On a saddlepoint approximation to the Markov binomial distribution. Braz J Probab Stat. 2013; 27:150-61.
23. Isaacson J, Schwoebel E, Shcherbina A, Ricke D, Harper J, Petrovick M, Bobrow J, Boettcher T, Helfer B, Zook C, et al. Robust detection of individual forensic profiles in DNA mixtures. Forensic Sci Int Genet. 2015; 14:31-7.
24. González J, Wiberg M, von Davier AA. A note on the Poisson's binomial distribution in item response theory. Appl Psychol Meas. 2016; 40(4):302-10.
25. Sellman S, Säterberg T, Ebenman B. Pattern of functional extinctions in ecological networks with a variety of interaction types. Theor Ecol. 2016; 9(1):83-94.
26. Hong Y. On computing the distribution function for the Poisson binomial distribution. Comput Stat Data Anal. 2013; 59:41-51.
27. Mathelier A, Fornes O, Arenillas DJ, Chen C-y, Denay G, Lee J, Shi W, Shyr C, Tan G, Worsley-Hunt R, et al. JASPAR 2016: A major expansion and update of the open-access database of transcription factor binding profiles. Nucleic Acids Res. 2016; 44(D1):110-5.
28. Rohs R, Jin X, West SM, Joshi R, Honig B, Mann RS. Origins of specificity in protein-DNA recognition. Ann Rev Biochem. 2010; 79:233-69.
29. Arratia R, Goldstein L, Gordon L. Two moments suffice for Poisson approximations: the Chen-Stein method. Ann Probab. 1989; 17(1):9-25.
30. Hobolth A, Jensen JL. Summary statistics for endpoint-conditioned continuous-time Markov chains. J Appl Probab. 2011; 48(4):911-24.
31. Berry AC. The accuracy of the Gaussian approximation to the sum of independent variates. Trans Am Math Soc. 1941; 49(1):122-36.
32. Huzurbazar S, Huzurbazar AV. Survival and hazard functions for progressive diseases using saddlepoint approximations. Biometrics. 1999; 55(1):198-203.
33. Koepfli KP, Paten B, O'Brien SJ. The Genome 10K Project: a way forward. Annu Rev Anim Biosci. 2015; 3(1):57-111.
2.7: Solve Compound Inequalities
By the end of this section, you will be able to:
Solve Compound Inequalities with "and"
Solve Compound Inequalities with "or"
Solve Applications with Compound Inequalities
Before you get started, take this readiness quiz.
Simplify: \(\frac{2}{5}(x+10)\).
If you missed this problem, review [link].
Simplify: \(−(x−4)\).
Now that we know how to solve linear inequalities, the next step is to look at compound inequalities. A compound inequality is made up of two inequalities connected by the word "and" or the word "or." For example, the following are compound inequalities.
\[\begin{array} {lll} {x+3>−4} &{\text{and}} &{4x−5\leq 3} \\ {2(y+1)<0} &{\text{or}} &{y−5\geq −2} \\ \end{array} \nonumber\]
COMPOUND INEQUALITY
A compound inequality is made up of two inequalities connected by the word "and" or the word "or."
To solve a compound inequality means to find all values of the variable that make the compound inequality a true statement. We solve compound inequalities using the same techniques we used to solve linear inequalities. We solve each inequality separately and then consider the two solutions.
To solve a compound inequality with the word "and," we look for all numbers that make both inequalities true. To solve a compound inequality with the word "or," we look for all numbers that make either inequality true.
Let's start with compound inequalities with "and." Our solution will be the numbers that are solutions to both inequalities, known as the intersection of the two inequalities. Think of the intersection of two streets: the part where the streets overlap belongs to both streets.
To find the solution of the compound inequality, we look at the graphs of each inequality and then find the numbers that belong to both graphs—where the graphs overlap.
For the compound inequality \(x>−3\) and \(x\leq 2\), we graph each inequality. We then look for where the graphs "overlap". The numbers that are shaded on both graphs, will be shaded on the graph of the solution of the compound inequality. See Figure \(\PageIndex{1}\).
Figure \(\PageIndex{1}\)
We can see that the numbers between \(−3\) and \(2\) are shaded on both of the first two graphs. They will then be shaded on the solution graph.
The number \(−3\) is not shaded on the first graph, and so, since it is not shaded on both graphs, it is not included on the solution graph.
The number two is shaded on both the first and second graphs. Therefore, it is shaded on the solution graph.
This is how we will show our solution in the next examples.
Example \(\PageIndex{1}\)
Solve \(6x−3<9\) and \(2x+9\geq 3\). Graph the solution and write the solution in interval notation.
\(6x−3<9\) and \(2x+9\geq 3\)
Step 1. Solve each inequality.
\(6x−3<9\) and \(2x+9\geq 3\)
\(6x<12\) and \(2x\geq −6\)
\(x<2\) and \(x\geq −3\)
Step 2. Graph each solution. Then graph the numbers that make both inequalities true. The final graph will show all the numbers that make both inequalities true, that is, the numbers shaded on both of the first two graphs.
Step 3. Write the solution in interval notation: \([−3,2)\)
All the numbers that make both inequalities true are the solution to the compound inequality.
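For readers who want to check such an answer by computer, here is a small sketch using the sympy library (an illustration only, not part of the original lesson): each inequality is solved over the reals, and the "and" is taken as the intersection of the two solution sets.

```python
from sympy import S, Symbol, solveset

x = Symbol('x', real=True)

sol1 = solveset(6*x - 3 < 9, x, domain=S.Reals)    # x < 2
sol2 = solveset(2*x + 9 >= 3, x, domain=S.Reals)   # x >= -3

# "and" means the intersection of the two solution sets.
print(sol1.intersect(sol2))   # the interval [-3, 2)
```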
Solve the compound inequality. Graph the solution and write the solution in interval notation: \(4x−7<9\) and \(5x+8\geq 3\).
SOLVE A COMPOUND INEQUALITY WITH "AND."
Solve each inequality.
Graph each solution. Then graph the numbers that make both inequalities true.
This graph shows the solution to the compound inequality.
Write the solution in interval notation.
Solve \(3(2x+5)\leq 18\) and \(2(x−7)<−6\). Graph the solution and write the solution in interval notation.
\(3(2x+5)\leq 18\) and \(2(x−7)<−6\)
Solve each inequality.
\(6x+15\leq 18\) and \(2x−14<−6\)
\(6x\leq 3\) and \(2x<8\)
\(x\leq \frac{1}{2}\) and \(x<4\)
Graph each solution. Then graph the numbers that make both inequalities true.
Write the solution in interval notation: \((−\infty, \frac{1}{2}]\)
Solve the compound inequality. Graph the solution and write the solution in interval notation: \(2(3x+1)\leq 20\) and \(4(x−1)<2\).
Solve the compound inequality. Graph the solution and write the solution in interval notation: \(5(3x−1)\leq 10\) and \(4(x+3)<8\).
Solve \(\frac{1}{3}x−4\geq −2\) and \(−2(x−3)\geq 4\). Graph the solution and write the solution in interval notation.
\(\frac{1}{3}x−4\geq −2\) and \(−2(x−3)\geq 4\)
Solve each inequality.
\(\frac{1}{3}x−4\geq −2\) and \(−2x+6\geq 4\)
\(\frac{1}{3}x\geq 2\) and \(−2x\geq −2\)
\(x\geq 6\) and \(x\leq 1\)
Graph each solution. Then graph the numbers that make both inequalities true.
There are no numbers that make both inequalities true. This is a contradiction, so there is no solution.
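The same computer check as before (again just an illustration with sympy) makes the contradiction visible: the intersection of the two solution sets is empty.

```python
from sympy import S, Symbol, solveset

x = Symbol('x', real=True)

sol1 = solveset(x/3 - 4 >= -2, x, domain=S.Reals)      # x >= 6
sol2 = solveset(-2*(x - 3) >= 4, x, domain=S.Reals)    # x <= 1

print(sol1.intersect(sol2))   # EmptySet: no number satisfies both
```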
Solve the compound inequality. Graph the solution and write the solution in interval notation: \(\frac{1}{4}x−3\geq −1\) and \(−3(x−2)\geq 2\).
Solve the compound inequality. Graph the solution and write the solution in interval notation: \(\frac{1}{5}x−5\geq −3\) and \(−4(x−1)\geq −2\).
Sometimes we have a compound inequality that can be written more concisely. For example, \(a<x\) and \(x<b\) can be written simply as \(a<x<b\) and then we call it a double inequality. The two forms are equivalent.
DOUBLE INEQUALITY
A double inequality is a compound inequality such as \(a<x<b\). It is equivalent to \(a<x\) and \(x<b\).
\[\text{Other forms:} \quad \begin{array} {lllll} {a<x<b} &{\text{is equivalent to }} &{a<x} &{\text{and}} &{x<b} \\ {a\leq x\leq b} &{\text{is equivalent to }} &{a\leq x} &{\text{and}} &{x\leq b} \\ {a>x>b} &{\text{is equivalent to }} &{a>x} &{\text{and}} &{x>b} \\ {a\geq x\geq b} &{\text{is equivalent to }} &{a\geq x} &{\text{and}} &{x\geq b} \\ \end{array} \nonumber\]
To solve a double inequality we perform the same operation on all three "parts" of the double inequality with the goal of isolating the variable in the center.
Example \(\PageIndex{10}\)
Solve \(−4\leq 3x−7<8\). Graph the solution and write the solution in interval notation.
\(-4 \leq 3x - 7 < 8\)
Add 7 to all three parts. \( -4 \,{\color{red}{+\, 7}} \leq 3x - 7 \,{\color{red}{+ \,7}} < 8 \,{\color{red}{+ \,7}}\)
Simplify. \( 3 \le 3x < 15 \)
Divide each part by three. \( \dfrac{3}{\color{red}{3}} \leq \dfrac{3x}{\color{red}{3}} < \dfrac{15}{\color{red}{3}} \)
Simplify. \( 1 \leq x < 5 \)
Graph the solution.
Write the solution in interval notation. \( [1, 5) \)
When written as a double inequality, \(1\leq x<5\), it is easy to see that the solutions are the numbers caught between one and five, including one, but not five. We can then graph the solution immediately as we did above.
Another way to graph the solution of \(1\leq x<5\) is to graph both the solution of \(x\geq 1\) and the solution of \(x<5\). We would then find the numbers that make both inequalities true as we did in previous examples.
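As a quick machine check of that same idea (illustrative only, using sympy's reduce_inequalities), the double inequality can be handed over as the conjunction of its two halves:

```python
from sympy import Symbol, reduce_inequalities

x = Symbol('x', real=True)

# Treat -4 <= 3x - 7 < 8 as the "and" of its two halves.
print(reduce_inequalities([-4 <= 3*x - 7, 3*x - 7 < 8], x))   # (1 <= x) & (x < 5)
```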
Solve the compound inequality. Graph the solution and write the solution in interval notation: \(−5\leq 4x−1<7\).
Solve the compound inequality. Graph the solution and write the solution in interval notation: \(−3<2x−5\leq 1\).
To solve a compound inequality with "or", we start out just as we did with the compound inequalities with "and"—we solve the two inequalities. Then we find all the numbers that make either inequality true.
Just as the United States is the union of all of the 50 states, the solution will be the union of all the numbers that make either inequality true. To find the solution of the compound inequality, we look at the graphs of each inequality, find the numbers that belong to either graph and put all those numbers together.
To write the solution in interval notation, we will often use the union symbol, \(\cup\), to show the union of the solutions shown in the graphs.
SOLVE A COMPOUND INEQUALITY WITH "OR."
Solve each inequality.
Graph each solution. Then graph the numbers that make either inequality true.
Write the solution in interval notation.
Solve \(5−3x\leq −1\) or \(8+2x\leq 5\). Graph the solution and write the solution in interval notation.
\(5−3x\leq −1\) or \(8+2x\leq 5\)
Solve each inequality.
\(5−3x\leq −1\) or \(8+2x\leq 5\)
\(−3x\leq −6\) or \(2x\leq −3\)
\(x\geq 2\) or \(x\leq −\frac{3}{2}\)
Graph the numbers that make either inequality true.
Write the solution in interval notation: \((−\infty,−\frac{3}{2}]\cup[2,\infty)\)
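A corresponding sympy sketch (again only an illustration) takes the union rather than the intersection of the two solution sets:

```python
from sympy import S, Symbol, solveset

x = Symbol('x', real=True)

sol1 = solveset(5 - 3*x <= -1, x, domain=S.Reals)   # x >= 2
sol2 = solveset(8 + 2*x <= 5, x, domain=S.Reals)    # x <= -3/2

# "or" means the union of the two solution sets.
print(sol1.union(sol2))   # (-oo, -3/2] U [2, oo)
```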
Solve the compound inequality. Graph the solution and write the solution in interval notation: \(1−2x\leq −3\) or \(7+3x\leq 4\).
Solve \(\frac{2}{3}x−4\leq 3\) or \(\frac{1}{4}(x+8)\geq −1\). Graph the solution and write the solution in interval notation.
\(\frac{2}{3}x−4\leq 3\) or \(\frac{1}{4}(x+8)\geq −1\)
Solve each inequality.
\(3(\frac{2}{3}x−4)\leq 3(3)\) or \(4⋅\frac{1}{4}(x+8)\geq 4⋅(−1)\)
\(2x−12\leq 9\) or \(x+8\geq −4\)
\(2x\leq 21\) or \(x\geq −12\)
\(x\leq \frac{21}{2}\) or \(x\geq −12\)
Graph the numbers that make either inequality true.
The solution covers all real numbers.
\((−\infty ,\infty )\)
Solve the compound inequality. Graph the solution and write the solution in interval notation: \(\frac{3}{5}x−7\leq −1\) or \(\frac{1}{3}(x+6)\geq −2\).
Solve the compound inequality. Graph the solution and write the solution in interval notation: \(\frac{3}{4}x−3\leq 3\) or \(\frac{2}{5}(x+10)\geq 0\).
Situations in the real world also involve compound inequalities. We will use the same problem solving strategy that we used to solve linear equation and inequality applications.
Recall the problem solving strategies are to first read the problem and make sure all the words are understood. Then, identify what we are looking for and assign a variable to represent it. Next, restate the problem in one sentence to make it easy to translate into a compound inequality. Last, we will solve the compound inequality.
Due to the drought in California, many communities have tiered water rates. There are different rates for Conservation Usage, Normal Usage and Excessive Usage. The usage is measured in the number of hundred cubic feet (hcf) the property owner uses.
During the summer, a property owner will pay $24.72 plus $1.54 per hcf for Normal Usage. The bill for Normal Usage would be between or equal to $57.06 and $171.02. How many hcf can the owner use if he wants his usage to stay in the normal range?
Identify what we are looking for. The number of hcf he can use and stay in the "normal usage" billing range.
Name what we are looking for. Let \(x=\) the number of hcf he can use.
Translate to an inequality. Bill is $24.72 plus $1.54 times the number of hcf he uses or \(24.72+1.54x\).
His bill will be between or equal to $57.06 and $171.02.
\(57.06 \leq 24.72 + 1.54x \leq 171.02 \)
Solve the inequality.
\(57.06 \leq 24.72 + 1.54x \leq 171.02\)
\(57.06 \,{\color{red}{- \,24.72}}\leq 24.72 \,{\color{red}{- \,24.72}} + 1.54x \leq 171.02 \,{\color{red}{- \,24.72}}\)
\( 32.34 \leq 1.54x \leq 146.3\)
\( \dfrac{32.34}{\color{red}{1.54}} \leq \dfrac{1.54x}{\color{red}{1.54}} \leq \dfrac{146.3}{\color{red}{1.54}}\)
\( 21 \leq x \leq 95 \)
Answer the question. The property owner can use \(21–95\) hcf and still fall within the "normal usage" billing range.
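A one-off arithmetic check of those boundary values (plain Python, shown only for illustration):

```python
# Bill = base charge + rate per hcf; invert the formula at both ends of the range.
base, rate = 24.72, 1.54
low_bill, high_bill = 57.06, 171.02

low_hcf = (low_bill - base) / rate
high_hcf = (high_bill - base) / rate
print(round(low_hcf, 2), round(high_hcf, 2))   # 21.0 95.0
```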
Due to the drought in California, many communities now have tiered water rates. There are different rates for Conservation Usage, Normal Usage and Excessive Usage. The usage is measured in the number of hundred cubic feet (hcf) the property owner uses.
During the summer, a property owner will pay $24.72 plus $1.32 per hcf for Conservation Usage. The bill for Conservation Usage would be between or equal to $31.32 and $52.12. How many hcf can the owner use if she wants her usage to stay in the conservation range?
The homeowner can use \(5–20\) hcf and still fall within the "conservation usage" billing range.
During the winter, a property owner will pay $24.72 plus $1.54 per hcf for Normal Usage. The bill for Normal Usage would be between or equal to $49.36 and $86.32. How many hcf will he be allowed to use if he wants his usage to stay in the normal range?
The homeowner can use \(16–40\) hcf and still fall within the "normal usage" billing range.
Access this online resource for additional instruction and practice with solving compound inequalities.
How to solve a compound inequality with "and"
Solve each inequality.
Graph each solution. Then graph the numbers that make both inequalities true. This graph shows the solution to the compound inequality.
Write the solution in interval notation.
A double inequality is a compound inequality such as \(a<x<b\). It is equivalent to \(a<x\) and \(x<b.\)
Other forms: \[\begin{align*} a<x<b & & \text{is equivalent to} & & a<x\;\text{and}\;x<b \\
a≤x≤b & & \text{is equivalent to} & & a≤x\;\text{and}\;x≤b \\
a>x>b & & \text{is equivalent to} & & a>x\;\text{and}\;x>b \\
a≥x≥b & & \text{is equivalent to} & & a≥x\;\text{and}\;x≥b \end{align*}\]
How to solve a compound inequality with "or"
Solve each inequality.
Graph each solution. Then graph the numbers that make either inequality true. This graph shows the solution to the compound inequality.
Write the solution in interval notation.
About 5% of the power of a 100 W light bulb is converted to visible radiation. What is the average intensity of visible radiation
(a) at a distance of 1 m from the bulb?
(b) at a distance of 10 m?
Assume that the radiation is emitted isotropically and neglect reflection.
Power rating of bulb, P = 100 W
It is given that about 5% of its power is converted into visible radiation.
$\therefore$ Power of visible radiation,
$P^{\prime}=\frac{5}{100} \times 100=5 \mathrm{~W}$
Hence, the power of visible radiation is 5W.
(a) Distance of a point from the bulb, d = 1 m
Hence, intensity of radiation at that point is given as:
$I=\frac{P^{\prime}}{4 \pi d^{2}}$
$=\frac{5}{4 \pi(1)^{2}}=0.398 \mathrm{~W} / \mathrm{m}^{2}$
(b) Distance of a point from the bulb, d1 = 10 m
$I=\frac{P^{\prime}}{4 \pi\left(d_{1}\right)^{2}}$
$=\frac{5}{4 \pi(10)^{2}}=0.00398 \mathrm{~W} / \mathrm{m}^{2}$
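A quick numerical check of these two values (plain Python, added only for illustration):

```python
from math import pi

p_visible = 0.05 * 100.0   # 5 W of visible radiation, emitted isotropically

def intensity(distance_m):
    # Average intensity = power spread over a sphere of radius d.
    return p_visible / (4 * pi * distance_m ** 2)

print(round(intensity(1.0), 3))    # 0.398 W/m^2
print(round(intensity(10.0), 5))   # 0.00398 W/m^2
```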
July 2016, Volume 15, Issue 4
Large time behavior of a conserved phase-field system
Ahmed Bonfoh and Cyril D. Enyi
We investigate the large time behavior of a conserved phase-field system that describes the phase separation in a material with viscosity effects. We prove a well-posedness result, the existence of the global attractor and its upper semicontinuity, when the heat capacity tends to zero. Then we prove the existence of inertial manifolds in one space dimension, and for the case of a rectangular domain in two space dimension. We also construct robust families of exponential attractors that converge in the sense of upper and lower semicontinuity to those of the viscous Cahn-Hilliard equation. Continuity properties of the intersection of the inertial manifolds with bounded absorbing sets are also proven. This work extends and improves some recent results proven by A. Bonfoh for both the conserved and non-conserved phase-field systems.
Ahmed Bonfoh, Cyril D. Enyi. Large time behavior of a conserved phase-field system. Communications on Pure & Applied Analysis, 2016, 15(4): 1077-1105. doi: 10.3934/cpaa.2016.15.1077.
Nonlinear noncoercive Neumann problems
Leszek Gasiński, Liliana Klimczak and Nikolaos S. Papageorgiou
We consider nonlinear, nonhomogeneous and noncoercive Neumann problems with a Carathéodory reaction which is either $(p-1)$-superlinear near $\pm\infty$ (without satisfying the Ambrosetti-Rabinowitz condition that is usual in such cases) or $(p-1)$-sublinear near $\pm\infty$. Using variational methods and Morse theory (critical groups), we prove two existence theorems.
Leszek Gasiński, Liliana Klimczak, Nikolaos S. Papageorgiou. Nonlinear noncoercive Neumann problems. Communications on Pure & Applied Analysis, 2016, 15(4): 1107-1123. doi: 10.3934/cpaa.2016.15.1107.
Nodal solutions for nonlinear Schrödinger equations with decaying potential
Zuji Guo
This paper concerns the following nonlinear Schrödinger equations: \begin{eqnarray} \left\{ \begin{array}{ll} \displaystyle -\varepsilon^2\Delta u +V(x)u= |u|^{p_+-2}u^++|u|^{p_--2}u^-,\ x\in\mathbb{R}^N,\\ \lim\limits_{|x|\rightarrow\infty}u(x)=0, \\ \end{array} \right. \end{eqnarray} where $N\geq 3$ and $2 < p_{\pm} < \frac{2N}{N-2}$. We obtain nodal solutions for the above nonlinear Schrödinger equations with decaying and vanishing potential at infinity, i.e., $\lim\limits_{|x|\rightarrow\infty}V(x)=0$.
Zuji Guo. Nodal solutions for nonlinear Schrödinger equations with decaying potential. Communications on Pure & Applied Analysis, 2016, 15(4): 1125-1138. doi: 10.3934/cpaa.2016.15.1125.
Existence and uniqueness for $\mathbb{D}$-solutions of reflected BSDEs with two barriers without Mokobodzki's condition
Imen Hassairi
In this paper, we are interested in the problem of existence and uniqueness of a solution which belongs to class $\mathbb{D}$ for a backward stochastic differential equation with two strictly separated continuous reflecting barriers in the case when the data are $\mathbb{L}^1$-integrable and with generator satisfying the Lipschitz property. The main idea is to use the notion of local solution to obtain the global one.
Imen Hassairi. Existence and uniqueness for $\mathbb{D}$-solutions of reflected BSDEs with two barriers without Mokobodzki's condition. Communications on Pure & Applied Analysis, 2016, 15(4): 1139-1156. doi: 10.3934/cpaa.2016.15.1139.
On the $L^p-$ theory of Anisotropic singular perturbations of elliptic problems
Ogabi Chokri
In this article we give an extension of the $L^2-$theory of anisotropic singular perturbations for elliptic problems. We study a linear and some nonlinear problems involving $L^{p}$ data ($1 < p < 2$). Convergences in pseudo Sobolev spaces are proved for weak and entropy solutions, and rate of convergence is given in cylindrical domains.
Ogabi Chokri. On the $L^p-$ theory of Anisotropic singular perturbations of elliptic problems. Communications on Pure & Applied Analysis, 2016, 15(4): 1157-1178. doi: 10.3934/cpaa.2016.15.1157.
On the initial boundary value problem for certain 2D MHD-$\alpha$ equations without velocity viscosity
Jitao Liu
This paper is concerned with the initial boundary value problem of certain 2D MHD-$\alpha$ equations without velocity viscosity over a bounded domain with smooth boundary. We show that the equations have a unique global smooth solution $(u,b)$ for $W^{4,p}\times H^4$ initial data and physical boundary condition.
Jitao Liu. On the initial boundary value problem for certain 2D MHD-$\alpha$ equations without velocity viscosity. Communications on Pure & Applied Analysis, 2016, 15(4): 1179-1191. doi: 10.3934/cpaa.2016.15.1179.
Transition fronts in nonlocal Fisher-KPP equations in time heterogeneous media
Wenxian Shen and Zhongwei Shen
The present paper is devoted to the study of transition fronts of nonlocal Fisher-KPP equations in time heterogeneous media. We first construct transition fronts with exact decaying rates as the space variable tends to infinity and with prescribed interface location functions, which are natural generalizations of front location functions in homogeneous media. Then, by the general results on space regularity of transition fronts of nonlocal evolution equations proven in the authors' earlier work ([25]), these transition fronts are continuously differentiable in space. We show that their space partial derivatives have exact decaying rates as the space variable tends to infinity. Finally, we study the asymptotic stability of transition fronts. It is shown that transition fronts attract those solutions whose initial data decays as fast as transition fronts near infinity and essentially above zero near negative infinity.
Wenxian Shen, Zhongwei Shen. Transition fronts in nonlocal Fisher-KPP equations in time heterogeneous media. Communications on Pure & Applied Analysis, 2016, 15(4): 1193-1213. doi: 10.3934/cpaa.2016.15.1193.
Least energy solutions of nonlinear Schrödinger equations involving the half Laplacian and potential wells
Miaomiao Niu and Zhongwei Tang
In this paper, we are concerned with the existence of least energy solutions of nonlinear Schrödinger equations involving the half Laplacian \begin{eqnarray} (-\Delta)^{1/2}u(x)+\lambda V(x)u(x)=u(x)^{p-1}, u(x)\geq 0, \quad x\in R^N, \end{eqnarray} for sufficiently large $\lambda$, $2 < p < \frac{2N}{N-1}$ for $N \geq 2$. $V(x)$ is a real continuous function on $R^N$. Using variational methods we prove the existence of least energy solution $u(x)$ which localize near the potential well int$(V^{-1}(0))$ for $\lambda$ large. Moreover, if the zero sets int$(V^{-1}(0))$ of $V(x)$ include more than one isolated components, then $u_\lambda(x)$ will be trapped around all the isolated components. However, in Laplacian case, when the parameter $\lambda$ large, the corresponding least energy solution will be trapped around only one isolated component and become arbitrary small in other components of int$(V^{-1}(0))$. This is the essential difference with the Laplacian problems since the operator $(-\Delta)^{1/2}$ is nonlocal.
Miaomiao Niu, Zhongwei Tang. Least energy solutions of nonlinear Schrödinger equations involving the half Laplacian and potential wells. Communications on Pure & Applied Analysis, 2016, 15(4): 1215-1231. doi: 10.3934/cpaa.2016.15.1215.
Persistence of the hyperbolic lower dimensional non-twist invariant torus in a class of Hamiltonian systems
Lei Wang, Quan Yuan and Jia Li
We consider a class of nearly integrable Hamiltonian systems with Hamiltonian being $H(\theta,I,u,v)=h(I)+\frac{1}{2}\sum_{j=1}^{m}\Omega_j(u_j^2-v_j^2)+f(\theta,I,u,v)$. By introducing external parameter and KAM methods, we prove that, if the frequency mapping has nonzero Brouwer topological degree at some Diophantine frequency, the hyperbolic invariant torus with this frequency persists under small perturbations.
Lei Wang, Quan Yuan, Jia Li. Persistence of the hyperbolic lower dimensional non-twist invariant torus in a class of Hamiltonian systems. Communications on Pure & Applied Analysis, 2016, 15(4): 1233-1250. doi: 10.3934/cpaa.2016.15.1233.
On a parabolic Hamilton-Jacobi-Bellman equation degenerating at the boundary
Daniele Castorina, Annalisa Cesaroni and Luca Rossi
We derive the long time asymptotic of solutions to an evolutive Hamilton-Jacobi-Bellman equation in a bounded smooth domain, in connection with ergodic problems recently studied in [1]. Our main assumption is an appropriate degeneracy condition on the operator at the boundary. This condition is related to the characteristic boundary points for linear operators as well as to the irrelevant points for the generalized Dirichlet problem, and implies in particular that no boundary datum has to be imposed. We prove that there exists a constant $c$ such that the solutions of the evolutive problem converge uniformly, in the reference frame moving with constant velocity $c$, to a unique steady state solving a suitable ergodic problem.
Daniele Castorina, Annalisa Cesaroni, Luca Rossi. On a parabolic Hamilton-Jacobi-Bellman equation degenerating at the boundary. Communications on Pure & Applied Analysis, 2016, 15(4): 1251-1263. doi: 10.3934/cpaa.2016.15.1251.
The lifespan of solutions to semilinear damped wave equations in one space dimension
Kyouhei Wakasa
In the present paper, we consider the initial value problem for semilinear damped wave equations in one space dimension. Wakasugi [7] has obtained an upper bound of the lifespan for the problem only in the subcritical case. On the other hand, D'Abbicco $\&$ Lucente $\&$ Reissig [3] showed a blow-up result in the critical case. The aim of this paper is to give an estimate of the upper bound of the lifespan in the critical case, and to show the optimality of the upper bound. Also, we derive an estimate of the lower bound of the lifespan in the subcritical case, which shows the optimality of the upper bound in [7]. Moreover, we show that the critical exponent changes when the initial data are odd functions.
Kyouhei Wakasa. The lifespan of solutions to semilinear damped wave equations in one space dimension. Communications on Pure & Applied Analysis, 2016, 15(4): 1265-1283. doi: 10.3934/cpaa.2016.15.1265.
The Nehari manifold for fractional systems involving critical nonlinearities
Xiaoming He, Marco Squassina and Wenming Zou
We study the combined effect of concave and convex nonlinearities on the number of positive solutions for a fractional system involving critical Sobolev exponents. With the help of the Nehari manifold, we prove that the system admits at least two positive solutions when the pair of parameters $(\lambda,\mu)$ belongs to a suitable subset of $R^2$.
Xiaoming He, Marco Squassina, Wenming Zou. The Nehari manifold for fractional systems involving critical nonlinearities. Communications on Pure & Applied Analysis, 2016, 15(4): 1285-1308. doi: 10.3934/cpaa.2016.15.1285.
Soliton solutions for a quasilinear Schrödinger equation with critical exponent
Wentao Huang and Jianlin Xiang
This paper is concerned with the existence of soliton solutions for a quasilinear Schrödinger equation in $R^N$ with critical exponent, which appears from modelling the self-channeling of a high-power ultrashort laser in matter. By working with a perturbation approach which was initially proposed in [26], we prove that the given problem has a positive ground state solution.
Wentao Huang, Jianlin Xiang. Soliton solutions for a quasilinear Schrödinger equation with critical exponent. Communications on Pure & Applied Analysis, 2016, 15(4): 1309-1333. doi: 10.3934/cpaa.2016.15.1309.
Higher integrability of weak solution of a nonlinear problem arising in the electrorheological fluids
Zhong Tan and Jianfeng Zhou
In this paper, we study the Dirichlet problem arising in the electrorheological fluids \begin{eqnarray} \begin{cases} -{\rm div}\ a(x,Du)=k(u^{\gamma-1}-u^{\beta-1}) & x\in \Omega, \\ u=0 & x\in \partial \Omega, \end{cases} \end{eqnarray} where $\Omega$ is a bounded domain in $R^n$ and ${\rm div}\ a(x,Du)$ is a $p(x)$-Laplace type operator with $1<\beta<\gamma<\inf_{x\in \Omega} p(x)$, $p(x)\in(1,2]$. By establishing a reversed Hölder inequality, we show that for any suitable $\gamma,\beta$, the weak solution of the previous equation with bounded $p(x)$ energy satisfies $|Du|^{p(x)}\in L_{\text{loc}}^{\delta}$ for some $\delta>1$.
Zhong Tan, Jianfeng Zhou. Higher integrability of weak solution of a nonlinear problem arising in the electrorheological fluids. Communications on Pure & Applied Analysis, 2016, 15(4): 1335-1350. doi: 10.3934/cpaa.2016.15.1335.
A multiplicity result for some Kirchhoff-type equations involving exponential growth condition in $\mathbb{R}^2 $
Sami Aouaoui
In this paper, we consider the existence and multiplicity of sign-changing solutions to some Kirchhoff-type equation involving a nonlinear term with exponential growth. In a first result, we prove the existence of at least three solutions: one solution is positive, one is negative and the third one is sign-changing. The existence of infinitely many sign-changing solutions is proved as our second result in this work. Our method is mainly based on invariant sets of descending flow in the framework of classical critical point theory.
Sami Aouaoui. A multiplicity result for some Kirchhoff-type equations involving exponential growth condition in $\mathbb{R}^2$. Communications on Pure & Applied Analysis, 2016, 15(4): 1351-1370. doi: 10.3934/cpaa.2016.15.1351.
On well-posedness of the plasma-vacuum interface problem: the case of non-elliptic interface symbol
Yuri Trakhinin
We consider the plasma-vacuum interface problem in a classical statement when in the plasma region the flow is governed by the equations of ideal compressible magnetohydrodynamics, while in the vacuum region the magnetic field obeys the div-curl system of pre-Maxwell dynamics. The local-in-time existence and uniqueness of the solution to this problem in suitable anisotropic Sobolev spaces was proved in [17], provided that at each point of the initial interface the plasma density is strictly positive and the magnetic fields on either side of the interface are not collinear. The non-collinearity condition appears as the requirement that the symbol associated to the interface is elliptic. We now consider the case when this symbol is not elliptic and study the linearized problem, provided that the unperturbed plasma and vacuum non-zero magnetic fields are collinear on the interface. We prove a basic a priori $L^2$ estimate for this problem under the (generalized) Rayleigh-Taylor sign condition $[\partial q/\partial N]<0$ on the jump of the normal derivative of the unperturbed total pressure satisfied at each point of the interface. By constructing an Hadamard-type ill-posedness example for the frozen coefficients linearized problem we show that the simultaneous failure of the non-collinearity condition and the Rayleigh-Taylor sign condition leads to Rayleigh-Taylor instability.
Yuri Trakhinin. On well-posedness of the plasma-vacuum interface problem: the case of non-elliptic interface symbol. Communications on Pure & Applied Analysis, 2016, 15(4): 1371-1399. doi: 10.3934/cpaa.2016.15.1371.
Parabolic problems with general Wentzell boundary conditions and diffusion on the boundary
Davide Guidetti
We show a result of maximal regularity in spaces of Hölder continuous functions, concerning linear parabolic systems with dynamic or Wentzell boundary conditions and an elliptic diffusion term on the boundary.
Davide Guidetti. Parabolic problems with general Wentzell boundary conditions and diffusion on the boundary. Communications on Pure & Applied Analysis, 2016, 15(4): 1401-1417. doi: 10.3934/cpaa.2016.15.1401.
On the viscous Cahn-Hilliard-Navier-Stokes equations with dynamic boundary conditions
Laurence Cherfils and Madalina Petcu
In the present article we study the viscous Cahn-Hilliard-Navier-Stokes model, endowed with dynamic boundary conditions, from the theoretical and numerical point of view. We start by deducing results on the existence, uniqueness and regularity of the solutions for the continuous problem. Then we propose a space semi-discrete finite element approximation of the model and we study the convergence of the approximate scheme. We also prove the stability and convergence of a fully discretized scheme, obtained using the semi-implicit Euler scheme applied to the space semi-discretization proposed previously. Numerical simulations are also presented to illustrate the theoretical results.
Laurence Cherfils, Madalina Petcu. On the viscous Cahn-Hilliard-Navier-Stokes equations with dynamic boundary conditions. Communications on Pure & Applied Analysis, 2016, 15(4): 1419-1449. doi: 10.3934/cpaa.2016.15.1419.
Nonexistence of traveling wave solutions, exact and semi-exact traveling wave solutions for diffusive Lotka-Volterra systems of three competing species
Chiun-Chuan Chen and Li-Chang Hung
In reaction-diffusion models describing the interaction between the invading grey squirrel and the established red squirrel in Britain, Okubo et al. ([19]) found that in certain parameter regimes, the profiles of the two species in a wave propagation solution can be determined via a solution of the KPP equation. Motivated by their result, we employ an elementary approach based on the maximum principle for elliptic inequalities coupled with estimates of a total density of the three species to establish the nonexistence of traveling wave solutions for Lotka-Volterra systems of three competing species. Applying our estimates to the May-Leonard model, we obtain upper and lower bounds for the total density of a solution to this system. For the existence of traveling wave solutions to the Lotka-Volterra three-species competing system, we find new semi-exact solutions by virtue of functions other than hyperbolic tangent functions, which are used in constructing one-hump exact traveling wave solutions in [2]. Moreover, new two-hump semi-exact traveling wave solutions different from the ones found in [1] are constructed.
Chiun-Chuan Chen, Li-Chang Hung. Nonexistence of traveling wave solutions, exact and semi-exact traveling wave solutions for diffusive Lotka-Volterra systems of three competing species. Communications on Pure & Applied Analysis, 2016, 15(4): 1451-1469. doi: 10.3934/cpaa.2016.15.1451.
Threshold asymptotic behaviors for a delayed nonlocal reaction-diffusion model of mistletoes and birds in a 2D strip
Huimin Liang, Peixuan Weng and Yanling Tian
A time-delayed reaction-diffusion system of mistletoes and birds with nonlocal effect in a two-dimensional strip is considered in this paper. Based on the background of the model derivation, the birds diffuse with a Neumann boundary condition, while the mistletoes do not diffuse and thus carry no boundary condition. Making use of the theory of monotone semiflows and the Kuratowski measure of non-compactness, we discuss the existence of the spreading speed $c^\ast$. The value of $c^*$ is evaluated by using two auxiliary linear systems together with an approximation process.
Huimin Liang, Peixuan Weng, Yanling Tian. Threshold asymptotic behaviors for a delayed nonlocal reaction-diffusion model of mistletoes and birds in a 2D strip. Communications on Pure & Applied Analysis, 2016, 15(4): 1471-1495. doi: 10.3934/cpaa.2016.15.1471.
Classification of bifurcation curves of positive solutions for a nonpositone problem with a quartic polynomial
Kuan-Ju Huang, Yi-Jung Lee and Tzung-Shin Yeh
We study exact multiplicity and bifurcation curves of positive solutions of the boundary value problem \begin{eqnarray} &u"(x)+\lambda (-u^4+\sigma u^3-\tau u^2+\rho u)=0, -1 < x < 1, \\ &u(-1)=u(1)=0, \end{eqnarray} where $\sigma, \tau \in \mathbb{R}$, $\rho \geq 0,$ and $\lambda >0$ is a bifurcation parameter. Then on the $(\lambda, \|u\|_\infty)$-plane, we give a classification of four qualitatively different bifurcation curves: an S-shaped curve, a broken S-shaped curve, a $\subset$-shaped curve and a monotone increasing curve.
Kuan-Ju Huang, Yi-Jung Lee, Tzung-Shin Yeh. Classification of bifurcation curves of positive solutions for a nonpositone problem with a quartic polynomial. Communications on Pure & Applied Analysis, 2016, 15(4): 1497-1514. doi: 10.3934/cpaa.2016.15.1497.
March 2019, 18(2): 869-886. doi: 10.3934/cpaa.2019042
A general approach to weighted $L^{p}$ Rellich type inequalities related to Greiner operator
Ismail Kombe and Abdullah Yener
Department of Mathematics, Faculty of Humanities and Social Sciences, Istanbul Ticaret University, Beyoglu, 34445, Istanbul, Turkey
Received April 2018 Revised July 2018 Published October 2018
In this paper we exhibit some sufficient conditions that imply a general weighted $L^{p}$ Rellich type inequality related to the Greiner operator, without assuming a priori symmetric hypotheses on the weights. More precisely, we prove that given two nonnegative functions $a$ and $b$, if there exists a positive supersolution $\vartheta$ of the Greiner operator $\Delta_{k}$ such that
$\Delta_{k}\left( a|\Delta_{k}\vartheta|^{p-2}\Delta_{k}\vartheta \right) \ge b\,\vartheta^{p-1}$
almost everywhere in $\mathbb{R}^{2n+1}$, then $a$ and $b$ satisfy a weighted $L^{p}$ Rellich type inequality. Here, $p>1$ and $\Delta_{k} = \sum_{j=1}^{n}\left(X_{j}^{2}+Y_{j}^{2}\right)$ is the sub-elliptic operator generated by the Greiner vector fields
$X_{j} = \frac{\partial}{\partial x_{j}} + 2k y_{j}|z|^{2k-2}\frac{\partial}{\partial l}, \quad Y_{j} = \frac{\partial}{\partial y_{j}} - 2k x_{j}|z|^{2k-2}\frac{\partial}{\partial l}, \quad j = 1,\ldots,n,$
where $(z, l) = (x, y, l) \in \mathbb{R}^{2n+1} = \mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}$, $|z| = \sqrt{\sum_{j=1}^{n}\left(x_{j}^{2} + y_{j}^{2}\right)}$ and $k\ge 1$. The method we use is quite practical and constructive to obtain both known and new weighted Rellich type inequalities. On the other hand, we also establish a sharp weighted $L^{p}$ Rellich type inequality that connects first to second order derivatives, and several improved versions of two-weight $L^{p}$ Rellich type inequalities associated to the Greiner operator $\Delta_{k}$ on smooth bounded domains $\Omega$ in $\mathbb{R}^{2n+1}$.
Keywords: Greiner operator, weighted Rellich inequality, remainder term.
Mathematics Subject Classification: Primary: 26D10, 22E30; Secondary: 43A80.
Citation: Ismail Kombe, Abdullah Yener. A general approach to weighted $L^{p}$ Rellich type inequalities related to Greiner operator. Communications on Pure & Applied Analysis, 2019, 18 (2) : 869-886. doi: 10.3934/cpaa.2019042
Streaming algorithms for identification of pathogens and antibiotic resistance potential from real-time MinION™ sequencing
Minh Duc Cao1,
Devika Ganesamoorthy1,
Alysha G. Elliott1,
Huihui Zhang1,
Matthew A. Cooper1 &
Lachlan J.M. Coin ORCID: orcid.org/0000-0002-4300-455X1,2
GigaScience volume 5, Article number: 32 (2016) Cite this article
The recently introduced Oxford Nanopore MinION platform generates DNA sequence data in real-time. This has great potential to shorten the sample-to-results time and is likely to have benefits such as rapid diagnosis of bacterial infection and identification of drug resistance. However, there are few tools available for streaming analysis of real-time sequencing data. Here, we present a framework for streaming analysis of MinION real-time sequence data, together with probabilistic streaming algorithms for species typing, strain typing and antibiotic resistance profile identification. Using four culture isolate samples, as well as a mixed-species sample, we demonstrate that bacterial species and strain information can be obtained within 30 min of sequencing and using about 500 reads, initial drug-resistance profiles within two hours, and complete resistance profiles within 10 h. While strain identification with multi-locus sequence typing required more than 15x coverage to generate confident assignments, our novel gene-presence typing could detect the presence of a known strain with 0.5x coverage. We also show that our pipeline can process over 100 times more data than the current throughput of the MinION on a desktop computer.
Massively parallel, short-read sequencing has profoundly transformed genomics research [1, 2] and has become the dominant technology for sequencing DNA. However, one inherent limitation of most current technologies is that the sequencing run must finish before data analysis can begin. As a result, sequence analysis algorithms have been designed to make inference on a complete sequencing data set. In contrast, streaming algorithms are applied to a sequence of data events and typically maintain an internal summary of the data, as well as an approximation of the full inference, without needing to store all of the observations [3]. Streaming algorithms have applications in particle and solar physics, computer network analysis and finance [4].
Oxford Nanopore Technologies has recently released a portable MinION sequencing device, which utilises nanopore sequencing technology originally proposed in the 1990s [5]. The key innovation of this device is that it measures changes in electrical current as single-stranded DNA passes through the nanopore and uses the signal to determine the nucleotide sequence of the DNA strand [6, 7]. These sequence data can be retrieved and analysed as they are generated, providing the opportunity to obtain answers in the shortest possible time. Real-time sequencing has many potential applications, especially in time-critical areas such as rapid clinical diagnosis.
In order to realise this potential there is a need to develop streaming bioinformatics algorithms that continually update and report results as each sequence read is generated. To be of practical use – for example to know when to make a diagnosis in the clinic – these algorithms must continuously update not only a point estimate (e.g., which species are present and their proportions), but also confidence intervals in that estimate. Several systems incorporating real-time analysis of MinION data have been developed recently, such as the cloud-based platform Metrichor (Oxford Nanopore), work by Quick et al. [8] and MetaPORE [9], which place the sample on a phylogenetic tree but without estimating the confidence in this assignment.
Here we present a flexible framework for the real-time analysis of MinION sequence data directly as it is sequenced and base-called. The framework can incorporate multiple real-time analyses to suit the problems at hand and can be deployed on a single computer or on a high-performance computing facility and computing cloud. We also present four streaming algorithms for the identification and characterisation of pathogen samples. These algorithms, which are seamlessly integrated into the pipeline, report analysis results along with their confidence levels so that users can decide when to stop a sequencing run.
By sequencing four bacterial isolate samples and a mixture sample on the MinION sequencer, we demonstrate that we can reliably determine the species and strain of a sequenced sample with only 500 reads. This was achieved in less than half an hour of sequencing with the current throughput of the MinION. Furthermore, we show that we can identify most of the drug resistance genes present in a sample within 2 h of sequencing, and the full drug resistance profile within 10 h. The pipeline can perform all these analyses on a single computer at a throughput of over 100 times higher than our best runs. As the throughput of nanopore sequencing is expected to increase, the time to obtain these results will be significantly shortened. Our findings support the potential use of MinION sequencing for the real-time analysis of clinical samples for species detection and analysis of antibiotic resistance.
Real-time analysis framework
Our real-time analysis framework consists of several streaming programs communicating with each other via network sockets or the inter-process communication pipes provided by Unix-like operating systems. These programs typically take a sequence of items as input and process them every time a given small number of items arrive. They either retain only the relevant statistics of the data, or, upon processing any data items, immediately forward only the necessary information to the downstream programs for further processing. This data processing methodology requires only a small memory footprint and is hence well suited to processing large amounts of data, especially real-time data from MinION sequencing.
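As a minimal illustration of this processing style, the sketch below (in Python; the batch size and all names are illustrative and not part of the published pipeline) consumes FASTQ records from standard input, retains only summary statistics and reports them periodically to a downstream consumer:

```python
import sys
import time

BATCH = 50          # process and report after every 50 reads (illustrative choice)
reads = 0
bases = 0

record = []
for line in sys.stdin:                 # items arrive one by one from the upstream program
    record.append(line.rstrip("\n"))
    if len(record) == 4:               # one FASTQ record = 4 lines
        reads += 1
        bases += len(record[1])        # retain only summary statistics, not the reads
        record = []
        if reads % BATCH == 0:
            # forward/report the current summary to downstream consumers
            print(f"{time.strftime('%H:%M:%S')}\treads={reads}\tbases={bases}",
                  file=sys.stderr, flush=True)

print(f"final\treads={reads}\tbases={bases}", file=sys.stderr)
```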
We developed a number of auxiliary programs to facilitate setting up a real-time pipeline to analyse MinION sequencing data. These include scripts for setting up communication channels in a pipeline, thereby allowing the pipeline to be deployed on a high-performance computing cluster to scale with massive amounts of data. Programs for simple analyses of MinION sequencing data such as initial analysis (npReader [10]) and read-filtering on the basis of read length and read quality are also provided.
We also developed streaming algorithms for a handful of identification problems, namely species typing, strain typing and identification of antibiotic resistance profiles (see Methods). We integrated the implementations of these algorithms into the analysis pipeline (see Fig. 1). In this pipeline, npReader [10] continuously scans the folder containing sequencing data in parallel with MinION sequencing. It picks up sequenced reads as soon as they are generated (from Metrichor), and simultaneously streams them through the pipeline for identification analyses. The pipeline also makes use of off-the-shelf bioinformatics tools such as BWA-MEM [11], as described later. In each step of this pipeline, data are piped from one process to the next without being written to disk, with the exception of base-calling via Metrichor in which each read is written to disk once it has been base-called, and is then picked up almost immediately by npReader.
Schematic of the real-time analysis pipeline. Once the MinION starts sequencing, DNA fragments are sequenced (on the MinION) and base-called (by Metrichor cloud) instantaneously, and are simultaneously streamed through the pipeline where they are aligned by BWA-MEM [11]. Arrows show the data flow
We evaluated our real-time analysis pipeline and the accuracy of our algorithms using five MinION sequencing data sets. Four of these data sets were collected before the pipeline was developed, and hence we emulated the timing of the sequencing when evaluating them. Specifically, we extracted the time at which each read was sequenced, and streamed the sequence reads into the pipeline in the exact order and timing. With this emulation, we were able to stream the sequencing data at a hypothetical throughput 120 times higher than the one we obtained with the MinION. This allowed us to test the scalability of the pipeline against projected future throughput, such as that of the PromethION platform. The fifth data set was passed through our pipeline as it was base-called by Metrichor, and thus represents a true demonstration of the real-time capability of the pipeline. Finally, we validated the analysis results by sequencing these samples on the Illumina MiSeq platform, which has well-established bioinformatics analysis methods.
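A sketch of such a timing emulation is given below, assuming a hypothetical tab-separated index with one read per line (seconds since the start of the run, path to the read file); the published pipeline derives this timing information with npReader, so the input layout here is only illustrative:

```python
import csv
import shutil
import sys
import time

def replay(index_tsv: str, out_dir: str, speedup: float = 120.0) -> None:
    """Re-emit reads in their original order with a compressed timeline.

    index_tsv is assumed to hold one read per line: <seconds_since_run_start>\t<path>.
    """
    with open(index_tsv) as fh:
        rows = sorted(((float(t), path) for t, path in csv.reader(fh, delimiter="\t")),
                      key=lambda r: r[0])
    start = time.monotonic()
    for seconds, path in rows:
        target = seconds / speedup                      # scale the original timing
        delay = target - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        shutil.copy(path, out_dir)                      # a downstream watcher picks it up
        print(f"emitted {path} at scaled t={target:.1f}s", file=sys.stderr)

if __name__ == "__main__":
    replay(sys.argv[1], sys.argv[2])
```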
We prepared samples from cultured isolates of two Klebsiella pneumoniae strains ATCC BAA-2146, ATCC 13883; one Klebsiella quasipneumoniae strain ATCC 700603 and a library mixture sample. This mixture sample contained two different sequencing libraries prepared from the Escherichia coli strain ATCC 25922 and the Staphylococcus aureus strain ATCC 25923, pooled at different levels prior to sequencing (Table 1). We sequenced sample ATCC BAA-2146 and ATCC 700603 with the MinION using chemistry R7 and the others using the improved chemistry R7.3 (see Methods).
Table 1 Details of the four samples
To validate the analysis results from MinION sequencing, we sequenced all aforementioned isolates with the established Illumina platform MiSeq to a coverage exceeding 100-fold. Isolates in the mixture sample were sequenced separately. We assembled the MiSeq sequencing reads to obtain high quality assemblies of the five strains. With the assemblies, we were able to identify the sequence types and the antibiotic resistance profiles of these strains (see Methods). These results were used as the benchmarks to validate the analysis of MinION sequencing data.
Sequencing yields and quality of MinION sequencing
Sequence reads from the MinION were classified into three types: template, complement and higher-quality 2D reads (i.e., reads resulting from computationally merging a template and a complement read). The average Phred quality of template and complement reads across the four runs was in the region of 5, while 2D reads were of higher quality, with an average Phred quality of about 9 (see Table 2 and Additional file 1: Figure S1). The median read lengths of the three K. pneumoniae samples were approximately 5 Kb, while that of the mixture sample was less than 1 Kb. We observed variation in sequence yields across the four runs. While we obtained about 36 000 reads (185 Mb) for sample K. pneumoniae ATCC BAA-2146 after 60 h of sequencing, the run for sample K. quasipneumoniae ATCC 700603 yielded only 7092 reads (39 Mb) in the same running time (Fig. 2). We sequenced sample K. pneumoniae ATCC 13883 and the mixture sample for 36 and 20 h respectively, both with chemistry R7.3, but the yields were markedly different. The read length and accuracy of our runs were consistent with other user reports [12–15].
Sequencing yields over time for the four samples. Yields are shown in terms of read count (left) and base count (right)
Table 2 Details of the four MinION sequencing runs
Species detection
For real-time bacterial species detection, we built a database from 2785 complete genomes of 1489 bacterial species available in GenBank (accessed Nov 2014), augmented with two K. quasipneumoniae genomes (neither of which was the strain we sequenced) as none were present in the database. The database contained several K. pneumoniae, E. coli and S. aureus strains (10, 63 and 49, respectively), but none of the five strains in our samples were present. The pipeline aligns sequence reads to this database as they are generated by the sequencer. The species typing algorithm periodically computes the simultaneous proportions of the species present in the sample and reports the 95 % confidence intervals of these proportions (see Methods).
In both K. pneumoniae samples as well as the K. quasipneumoniae sample, we successfully detected the major species present in the isolate. This was achieved with as few as 120 sequence reads, requiring only 5 min of sequencing time (Fig. 3a, b and c). For K. pneumoniae strains ATCC BAA-2146 and ATCC 13883, fewer than 500 reads (10 and 15 min of sequencing, respectively) were required to reach a 95 % confidence interval of less than 0.05. For strain ATCC 700603, only 300 reads were required to correctly identify K. quasipneumoniae as the species.
Real-time identification of bacterial species from MinION sequencing data for four different bacterial samples: a) K. pneumoniae ATCC BAA-2146, b) K. quasipneumoniae ATCC 700603, c) K. pneumoniae ATCC 13883 and d) Mixture of 75 % E. coli ATCC 25922 and 25 % S. aureus ATCC 25923. The bars represent confidence intervals at the 95 % level
The pipeline accurately identified the two species in the mixture sample as E. coli and S. aureus after obtaining around 100 reads (5 min of sequencing). The reported proportions became stable after around 1200 reads (35 min of sequencing). E. coli was the predominant species in the mixture sample, as was evident from the high proportion of sequencing reads assigned to it.
Multi-locus sequence typing
Most bacteria are conventionally strain-typed using a multi-locus sequence typing (MLST) system that requires accurate genotyping to distinguish the alleles of seven house-keeping genes [16]. Our analysis of MinION raw read quality (Additional file 1: Figure S1), together with other user reports [12–15], indicated high error rates in MinION sequencing in comparison to Illumina MiSeq sequencing. This suggested that MLST analysis would be challenging with MinION sequence data, especially in a real-time fashion.
We developed a method to carry out MLST using MinION sequence data. Our method selected reads spanning each of the house-keeping genes. It then used multiple reads aligned to the same gene to correct errors in the raw sequence reads, and subsequently combined information across multiple alleles in a likelihood-based framework (see Methods). Table 3 presents the five highest-scoring sequence types (in log-likelihood) for the K. pneumoniae and K. quasipneumoniae strains using MinION sequencing. In all three strains, the correct sequence type had the highest score out of the 1678 sequence types available in the MLST database. We noticed that the typing system also outputted several other sequence types with the same likelihood (e.g., ST-751 and ST-864 for strain ATCC BAA-2146 and ST-851 for strain ATCC 700603). We examined the profiles of these sequence types and found them to be highly similar. For example, sequence types ST-751 and ST-864 (reported for strain ATCC BAA-2146) differed from the correct sequence type ST-11 by only one single nucleotide polymorphism (SNP) out of the total of 3012 bases in the seven genes. Similarly, sequence type ST-851 (co-highest score reported for strain ATCC 700603) differed from the correct sequence type ST-489 by two alleles (genes phoE and tonB). Because the run had a poor yield, only one read was aligned to these two genes by the end of the run, which may also have contributed to the inability to differentiate these two sequence types. While the results were encouraging, they also suggested that traditional MLST with nanopore sequencing requires high coverage to report the sequence type with absolute certainty. A more accurate strain-typing methodology would need to consider all of the sequenced reads, rather than just the seven house-keeping genes. We therefore further devised a strain-typing method based on the presence or absence of genes.
Table 3 MLST results for three K. pneumoniae strains
Strain typing by presence or absence of genes
We developed a novel strain typing method to identify a known bacterial strain from MinION sequence reads based on patterns of gene presence and absence. This approach is intended to rapidly identify the presence of a sequence type that has already been characterised, for example in an outbreak scenario, with subsequent confirmation using MLST once more data have been collected. We downloaded the genome assemblies of all strains of K. pneumoniae, E. coli and S. aureus from the RefSeq repository and identified their sequence types using the relevant MLST schemes. This resulted in sets of 125 sequence types for K. pneumoniae, 353 for E. coli and 107 for S. aureus. For each sequence type, we picked the highest-quality assembly (in terms of N50 statistics) and extracted gene sequences from its RefSeq gene annotation. We then grouped the genes of a species based on 90 % sequence identity, and thereby obtained a gene profile for each sequence type.
Our pipeline identified genes present in the sample from sequence reads as they were generated by the MinION device. It then used this information to infer the posterior probability of each of the sequence types, as well as the 95 % confidence intervals in this estimate (see Methods). For our K. pneumoniae and K. quasipneumoniae samples, we successfully identified the corresponding sequence types from the sequence data with 95 % confidence within 10 min of sequencing time and with as few as 200 sequence reads (Fig. 4a, b and c). We streamed sequence reads from the mixture sample through the strain typing systems for E. coli and S. aureus, and in both cases, the correct sequence types of two species in the sample were also recovered. The correct sequence type for E. coli strain in the 75 %/ 25 % E. coli, S. aureus mixture was recovered after 25 min of sequencing with about 1000 total reads (or approximately 750 E. coli derived reads) (Fig. 4d). The pipeline was able to correctly predict the S. aureus strain (which is known to have much less gene content variation) in this mixture sample after 2 h of sequencing with about 2800 total reads (or approximately 700 S. aureus derived reads).
Real-time strain identification from MinION sequencing data on three different K. pneumoniae strains (a, b and c) and a E. coli strain (d) and a S. aureus strain (e) from the mixture sample. The bars represent confidence intervals at the 95 % level
Antibiotic resistance detection
The antibiotic resistance gene profiles of the samples were also characterised with MinION sequencing data. We obtained antibiotic drug resistance genes from the ResFinder database [17] (accessed July 2015). This set contained 2132 gene sequences, including variants of the same genes. We grouped these gene sequences based on 90 % sequence identity into 609 groups. In this grouping, we found that sequences in a group were variants of the same gene.
Our antibiotic resistance profile identification pipeline aligned sequence reads to this antibiotic gene database. The algorithms retained reads that aligned to these genes, and periodically performed multiple alignment of reads that were aligned to the same gene. It then generated a consensus sequence from these reads, and used a probabilistic Finite State Machine [18] to re-align the consensus sequence to the gene sequence (see Methods). The pipeline reported the presence of a resistance gene as soon as the alignment score reached a threshold.
Table 4 shows the time-line of antibiotic resistance gene detection from MinION sequencing of the three K. pneumoniae strains. For the NDM-1-producing strain ATCC BAA-2146, we identified the presence of 26 antibiotic resistance genes in the MiSeq assembly of the strain. Our real-time pipeline identified all 26 of these genes and an additional gene, blaSHV, from 10 h of MinION sequencing. No further genes were detected thereafter. As gene blaSHV was reported with high confidence by the real-time analysis, we further investigated the alignment of the MiSeq assembly with this gene and found that the gene was aligned to two contigs in the assembly, suggesting the MiSeq assembly was fragmented in the middle of the gene. We sourced a high-quality assembly of the strain's genome obtained with PacBio sequencing [19] and found that the assembly contained the gene. In other words, our pipeline detected precisely the antibiotic resistance gene profile of this strain from 10 h of MinION sequencing. We observed that the majority of these genes were identified in the early stage of sequencing, i.e., three quarters were reported within 1.5 h of sequencing, with fewer than 4000 reads (making up only 3-fold coverage of the genome). We observed similar performance for K. pneumoniae strain ATCC 13883, where 5 out of 6 genes were detected after 2 h of sequencing. The last gene (oqxB) was detected after 9.5 h of sequencing, again recovering the full resistance profile without any false positives. For the multi-drug resistant K. quasipneumoniae strain ATCC 700603, the pipeline detected only 8 out of 11 genes. The reduced sensitivity for this sample was most likely due to the low sequence yield (33 Mb of data in total, or only 7-fold coverage of the genome).
Table 4 Time-line of resistance gene detection from the K. pneumoniae samples
To date, only a few pipelines exist to identify species/subspecies from nanopore sequencing data, namely Metrichor [8, 20] and MetaPORE [9]. These methods commonly place the sample in question on a taxonomic phylogeny based on the number of reads that either are aligned to, or have a similar k-mer profile to, the taxon's reference genome. Our species typing method is somewhat similar to this approach, although it additionally estimates confidence intervals in the species assignment. While we found that this approach can successfully identify species within 500 reads, the signal-to-noise ratio of nanopore sequencing is too low to use a similar approach to correctly discriminate at the strain level, unless a large amount of data is available. Our strain typing uses a novel approach based on the presence and absence of genes and hence is able to make inferences from a smaller number of reads.
Among the mentioned methods, only Metrichor [20] and MetaPORE [9] support genuine real-time analysis. As MetaPORE only focuses on viral species identification, we could only directly compare the performance of our method to Metrichor. We uploaded the first 1000 reads from our single samples and the first 3000 reads from our mixture sample to the Metrichor What's In My Pot Bacteria k24 for SQK-MAP005 v1.27 (WIMP) workflow. Along with the species/subspecies and strains reported, WIMP provides a classification score filter where users can set the permissiveness of reporting. Table 5 presents the bacterial taxa reported by the WIMP workflow for our data with the default classification score. For sample K. pneumoniae ATCC BAA-2146, WIMP only returned the taxon K. pneumoniae at the species level. On the other hand, for the second and third samples (K. quasipneumoniae ATCC 700603 and K. pneumoniae ATCC 13883), WIMP reported several K. pneumoniae strains, but not the correct sequence types of these samples (ST489 and ST3). For the mixture sample, two E. coli and three S. aureus strains were reported, but these were also the incorrect sequence types (E. coli ST73 and S. aureus ST243). While it was unclear whether the sequence types of these samples were included in WIMP's database, ST11 clearly was as it was reported in sample K. pneumoniae ATCC 700603. However, WIMP was unable to identify sample K. pneumoniae ATCC BAA-2146 to the strain level with 1000 reads, while our pipeline could do so in less than 400 reads (Fig. 4).
Table 5 Report of Metrichor What's In My Pot Bacteria k24 for SQK-MAP005 v1.27 (WIMP) from the first 1000 reads of three single samples and the first 3000 reads of the mixture sample
Our species typing module has some similarities to the approach used by MetaPhlAn [21], which was designed for metagenomics inference using millions of short-reads. Like MetaPhlAn, we used the proportion of reads that map to different taxonomic groupings to estimate the proportion of different species in a sample. MetaPhlAn optimises computational speed by aligning to a precomputed database of sequences that are pervasive within a single taxonomic grouping but not seen outside that grouping. This allows it to blast against a database that is 20 times smaller than a full bacterial genomic database. Our species typing approach, on the other hand, is designed to make a similar inference using only hundreds of reads, and moreover, also continuously updates confidence intervals so the user knows when they can stop sequencing and make a diagnosis.
Antibiotic resistance gene detection from MinION sequencing was also explored in Judge et al. [22]. Their approach was broadly similar to ours in that it initially aligned sequence reads to a resistance gene database, and then constructed a consensus sequence from the multiple alignment of matched reads. Both pipelines reported close to perfect resistance gene identification when compared with Illumina MiSeq sequencing. However, our pipeline uses a novel alignment parameter estimation using probabilistic Finite State Machines (see Methods). It is hence able to confidently report the presence of a resistance gene as soon as sufficient supporting data are available. This is the essence of real-time analysis presented here.
Computational time
In our analyses, sequence reads were streamed through the pipeline in the exact order and timing that they were generated. Analysis results were generated periodically (every minute for species typing and strain typing and every five minutes for resistance gene identification). We examined the scalability of the pipeline to higher throughput by running the pipeline on a single computer equipped with 16 CPUs and streaming all sequence reads from the highest yield run (185 Mb from sample K. pneumoniae ATCC BAA-2146) through the pipeline at 120 times higher speed than they were generated (e.g., data sequenced in 2 min were streamed within 1 s). Analysis results were generated every 5 s for typing and every one minute for gene resistance analysis. With this hypothetical throughput, our pipeline correctly identified the species and strain of the sample in less than 20 s; thereupon we could terminate the typing analyses. The pipeline then reported all the resistance genes in five minutes, which corresponded to the data generated in the first 10 h of actual sequencing. This demonstrates the scalability of our pipeline to higher throughput sequencing platforms in the future.
Real-time analysis of a clinical isolate
With the pipeline in place, we analysed a clinical K. pneumoniae isolate collected in Greece that was found to be resistant to an extensive range of antibiotics. We sequenced the sample on the MinION with Chemistry R7.3 and ran the Metrichor service, which performed basecalling and sample identification during the first three hours of the run. We also ran our pipeline in real-time on the base-called data returned from the Metrichor service.
We observed a delay from the base-calling of the data; the first read was sequenced on the MinION within one minute of starting the run, but the base-called data were received after 6 min. The delay tended to increase as more data were generated. We found that the base-called data returned during the three-hour run of the Metrichor service were actually sequenced within 45 min on the MinION. This highlights the need for a local base-calling step to improve real-time analysis. Figure 5a and 5b show the timing (from the start of the MinION run) of sample identification using our pipeline. The pipeline reported K. pneumoniae as the only species in the sample within 10 min, and reached a confidence interval of less than 0.1 in 40 min, when approximately 200 reads had been analysed. We noticed that these 200 reads were actually sequenced in 7 min by the MinION. For strain identification, our pipeline initially reported ST1199 but, after 2.5 h, reported ST258 as the sequence type for this isolate. It is worth noting that the two strains are highly similar; their MLST profiles differ by only one SNP in the seven house-keeping genes. By sequencing the isolate on the Illumina MiSeq as described above, we confirmed that the sequence type of the strain is ST258. The sample identification from Metrichor, on the other hand, initially reported K. pneumoniae 1084 (ST23), but finally reported two strains, namely K. pneumoniae JM45 (ST11) and K. pneumoniae HS11286 (ST11), after 3 h (Additional file 2: Figure S2). During the three-hour run, with fewer than 4000 reads (16 Mb of data), our pipeline reported two antibiotic resistance genes, namely sul2 (sulphonamide) and tetA (tetracycline). Our analysis of the Illumina data for this strain confirmed the presence of these two genes. Clinical susceptibility testing also showed the resistance of this isolate to tetracycline and sulfamethoxazole-trimethoprim (MIC ≥ 16 μg/mL and ≥ 320 μg/mL, respectively; analysed by VITEK® 2, bioMérieux Inc.). Finally, we re-analysed the data from this run using the emulation described previously and obtained the same results as from the real-time analysis.
Real-time species typing (a) and strain typing (b) of a clinical isolate directly from the MinION using our pipeline and the Metrichor service. The time includes basecalling timing
In recent years high-throughput sequencing has become an integrative tool for infectious disease research [23, 24], predominantly using massively parallel short-read sequencing technologies. These technologies achieve a very high base calling accuracy, making them ideally suited to applications requiring accurate calling of SNPs. However, these technologies attain their high yield by sequencing a single base per cycle for millions of sequence fragments in parallel, where each cycle takes at least 5 min.
The Oxford Nanopore MinION device, on the other hand, generated as many as 500 reads in the first 10 min of sequencing in our hands (which is 3 times lower than the theoretical maximum). The error rate of these reads was substantially higher than that of the corresponding Illumina short-read data. Many existing bioinformatics algorithms rely on accurate base and SNP calling, which makes their application to MinION data challenging. As an example, most existing strain typing approaches use an MLST system, based either on a pre-defined set of house-keeping genes [25] or on a core gene set [26]. These approaches are highly standardised, reproducible and portable, and hence are routinely used in laboratories around the world. Rapid genomic diagnosis tools using MLST from high-throughput sequencing, such as SRST2 [27], have also been developed. While we showed in this article that MLST can be adapted to identify bacterial strains from nanopore sequencing, this requires high-coverage sequencing of the gene set to overcome the high error rates.
The main contribution of this article is to demonstrate that despite the higher error rate, it is possible to return clinical actionable information, including species and strain identification from as few as 500 reads. We achieved this by developing novel approaches that are less sensitive to base-calling errors and which use whatever subset of genome-wide information is observed up to a point in time, rather than a panel of pre-defined markers or genes. For example, the strain typing presence/absence approach relies only on being able to identify homology to genes and also allows for a level of incorrect gene annotation.
Our strain typing module has the advantage of being able to rapidly type a known strain with a small number of low quality (i.e., mostly 1D) reads. Competing approaches using k-mers appear to require substantially more high quality data. The drawback of our approach is that if a large number of genes are lost or gained in a single event, such as the gain or loss of a plasmid, the most likely strain may be incorrect. Thus it would be ideally suited for rapidly typing a known strain in an outbreak scenario.
Our antibiotic resistance module is able to identify the drug resistance potential of an isolate within a few hours of sequencing with very high specificity. In particular, with the most recent chemistry utilised in this paper (R7.3), we were able to identify the complete resistance potential of a K. pneumoniae isolate without any false positives in 9.5 h and with approximately 8000 reads (80 % of the resistance genes were identified with 3000 reads in 2 h). In order to achieve high specificity, we designed a probabilistic Finite State Machine for error correction.
One of the major advantages of a whole-genome sequencing approach to drug resistance profiling is that it is not necessary to restrict the analysis to a limited panel of drug-resistance tests; instead, it is possible to discover the complete drug resistance profile of a sample. With a complete picture of the drug-resistance profile within a few hours, a clinician may be able to design an antibiotic treatment regimen that is both more likely to succeed and less likely to induce further antibiotic resistance. However, even completely accurate identification of resistance genes is only a first step towards accurately predicting the resistance profile, as mutations may affect the rate at which these genes are transcribed as well as their antibiotic resistance activity. Prediction of antibiotic resistance from genotype is an area that warrants substantial further research.
In summary, we have developed an open-source, flexible pipeline for real-time analysis of MinION sequencing data. The pipeline includes various streaming algorithms to identify pathogens and their antibiotic resistance, and others can be seamlessly integrated into it [28]. The only step in our pipeline at which data are written to, and then re-read from, disk is the base-calling step using Metrichor. npReader immediately identifies new reads as they are generated by Metrichor; however, some delay can occur while waiting for base-called data to be returned from Metrichor. Oxford Nanopore Technologies has recently opened up the Application Programming Interface to extract raw data directly from the MinION. This, together with the recent development of open-source base-calling algorithms [29, 30] that run on the local machine, will allow future development of a completely streaming pipeline, in the sense of never saving data to disk. Our pipeline can be deployed on a single 16-core computer, capable of analysing MinION data streaming at up to 120 times the current rate of sequencing, or on a high-performance computing cluster to scale with the potentially even higher throughput of forthcoming nanopore sequencing platforms.
DNA extraction
Bacterial strains K. pneumoniae ATCC BAA-2146, ATCC 13883, K. quasipneumoniae ATCC 700603, E. coli ATCC 25922 and S. aureus ATCC 25923 were obtained from the American Type Culture Collection (ATCC, USA). The K. pneumoniae clinical isolate was acquired from Hygeia General Hospital, Athens, Greece, from a patient stool sample in 2014 (Lab ID 100575214, isolate 1). Clinical susceptibility profiling by VITEK® 2 (bioMérieux Inc.) identified the isolate as carbapenemase-producing (KPC), giving rise to extended-spectrum β-lactam resistance. It was also deemed resistant to aminoglycoside, phenicol, quinolone, sulphonamide, tetracycline and trimethoprim antibiotics, rendering it an extensively drug-resistant bacterial isolate. Bacterial cultures were grown overnight from a single colony at 37 °C with shaking (180 rpm). Whole-cell DNA was extracted from the cultures using the DNeasy Blood and Tissue Kit (QIAGEN, Cat. #69504) according to the bacterial DNA extraction protocol with enzymatic lysis pre-treatment.
MinION sequencing
Library preparation was performed using the Genomic DNA Sequencing kit (Oxford Nanopore) according to the manufacturer's instructions. For R7 MinION Flow Cells, the SQK-MAP-002 sequencing kit was used; for R7.3 MinION Flow Cells, the SQK-MAP-003 or SQK-MAP-006 Genomic Sequencing kits were used, again according to the manufacturer's instructions.
For the library mixture sample, the DNA concentration of each library was measured using Qubit Fluorimeter (Thermo Fisher Scientific). Based on the concentration, 75 % of E. coli (ATCC 25922) library and 25 % of S. aureus (ATCC 25923) library were mixed prior to sequencing.
A new MinION Flow Cell (R7 or R7.3) was used for sequencing each sample. The library mix was loaded onto the MinION Flow Cell and the Genomic DNA 48 h sequencing protocol was initiated on the MinKNOW software.
MinION data analysis
The sequence read data were base-called with the Metrichor Agent. We used npReader [10] to convert the base-called sequence data from fast5 format to fastq format. The npReader program also extracted the time at which each read was sequenced and used this information to sort the read sequences in the order in which they were produced. For the real-time analyses, we wrote a program to emulate the sequencing process, in that it streamed each read in the exact order and timing in which it was sequenced. The program also allowed us to scale up the sequencing emulation by a factor of choice. Our pipeline allows for filtering out 1D reads at multiple stages (including via npReader). All subsequent analyses in this paper used both 1D and 2D reads.
MiSeq sequencing and data analysis
Library preparation was performed using the NexteraXT DNA Sample preparation kit (Illumina), as recommended by the manufacturer. Libraries were sequenced on the MiSeq instrument (Illumina) with 300 bp paired end sequencing, to a coverage of over 100-fold. Read data were trimmed with trimmomatic [31] (V0.32) and subsequently assembled using SPAdes [32] (V3.5), resulting in assemblies with N50 exceeding 200 Kb. Their sequence types were identified by submitting the assembled genomes to the MLST servers [33] for K. pneumoniae, E. coli (set #1) and S. aureus.
We identified the antibiotic resistance profiles of these strains from their MiSeq assemblies. We used blastn (V2.29) to align these assemblies to the database of resistance genes obtained from ResFinder [17]. Genes covered over at least 85 % of their length by the alignments, and with greater than 85 % sequence identity, were considered to be present in the sample. These gene profiles were used as a benchmark to validate the MinION sequencing analysis.
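The following sketch illustrates this filtering step, assuming blastn was run with the tabular output format '6 std slen' so that the gene (subject) length is available for computing coverage; the exact invocation and field layout used in the study may differ:

```python
import csv
import sys

MIN_IDENTITY = 85.0   # percent identity threshold used in the paper
MIN_COVERAGE = 85.0   # percent of the resistance gene covered by the alignment

def genes_present(blast_tab: str) -> set:
    """Collect resistance genes passing the coverage/identity thresholds.

    Assumes -outfmt '6 std slen': column 3 is percent identity, column 4 the
    alignment length and column 13 the gene (subject) length.
    """
    present = set()
    with open(blast_tab) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            gene, pident, aln_len, slen = row[1], float(row[2]), int(row[3]), int(row[12])
            coverage = 100.0 * aln_len / slen
            if pident >= MIN_IDENTITY and coverage >= MIN_COVERAGE:
                present.add(gene)
    return present

if __name__ == "__main__":
    for g in sorted(genes_present(sys.argv[1])):
        print(g)
```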
Species typing
We downloaded the bacterial genome database from GenBank (accessed 19 Nov 2014), which contained high-quality complete genomes of 2785 bacterial strains from 1487 bacterial species. We expanded this database to include two K. quasipneumoniae genomes. Our species typing pipeline streamed read data from npReader directly to BWA-MEM [11] (V0.7.10-r858), which aligned the reads to the database. Output from BWA in SAM format was streamed directly into our species typing pipeline, which calculated the proportion of reads aligned to each of these species. Our species typing method considers the proportions $\{p_1, p_2, \ldots, p_k\}$ of $k$ species in the mixture as the parameters of a $k$-category multinomial distribution, and the read counts $\{c_1, c_2, \ldots, c_k\}$ for the species as an observation drawn from $c_1 + c_2 + \ldots + c_k$ independent trials of the distribution. It then uses the MultinomialCI package in R [34] to calculate the 95 % confidence intervals of these proportions from this observation.
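The published pipeline uses the Sison-Glaz simultaneous intervals from the R package MultinomialCI [34]; the sketch below approximates the same idea in Python by resampling read counts from the estimated multinomial, and is intended purely as an illustration rather than the published implementation:

```python
import numpy as np

def species_proportions(counts: dict, n_boot: int = 1000, alpha: float = 0.05):
    """Point estimates and approximate simultaneous CIs for species proportions."""
    names = list(counts)
    c = np.array([counts[n] for n in names], dtype=float)
    total = int(c.sum())
    p_hat = c / total
    rng = np.random.default_rng(0)
    boot = rng.multinomial(total, p_hat, size=n_boot) / total   # resampled proportions
    lo = np.percentile(boot, 100 * alpha / 2, axis=0)
    hi = np.percentile(boot, 100 * (1 - alpha / 2), axis=0)
    return {n: (p_hat[i], lo[i], hi[i]) for i, n in enumerate(names)}

if __name__ == "__main__":
    print(species_proportions({"E. coli": 900, "S. aureus": 290, "other": 10}))
```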
Multi-locus sequence typing
MinION sequence reads from the K. pneumoniae strains were aligned to the seven house-keeping genes specified by the MLST system using BWA-MEM [11]. We then collected the reads aligned to each gene and performed a multiple alignment on them using kalign2 [35]. The consensus sequence created from the multiple alignment was then globally aligned to all alleles of the gene using a probabilistic Finite State Machine (see below). The score of a sequence type was determined by the sum of the scores of the seven alleles making up the type.
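A sketch of this scoring step is shown below, assuming the per-allele alignment scores have already been computed; the input structures, names and toy values are illustrative only:

```python
def rank_sequence_types(allele_scores: dict, st_profiles: dict) -> list:
    """Rank MLST sequence types by the summed alignment scores of their alleles.

    allele_scores: {gene: {allele_id: score}} from aligning the consensus of the reads
                   covering that gene against every known allele (illustrative input).
    st_profiles:   {st_name: {gene: allele_id}} from the MLST scheme.
    Returns (st_name, total_score) pairs, best first.
    """
    ranked = []
    for st, profile in st_profiles.items():
        total = sum(allele_scores[g][a] for g, a in profile.items())
        ranked.append((st, total))
    return sorted(ranked, key=lambda x: x[1], reverse=True)

if __name__ == "__main__":
    # Toy example with two genes; the real scheme sums over the seven house-keeping genes.
    scores = {"gapA": {"1": -10.0, "2": -45.0}, "infB": {"1": -12.0, "3": -40.0}}
    profiles = {"ST-11": {"gapA": "1", "infB": "1"}, "ST-99": {"gapA": "2", "infB": "3"}}
    print(rank_sequence_types(scores, profiles))
```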
Strain typing
We built gene profile databases for K. pneumoniae, E. coli and S. aureus from the RefSeq annotation. Specifically, we obtained the publicly available assemblies of these species listed in RefSeq (accessed 17 July 2015). We used the relevant MLST schemes obtained from [33] to identify the sequence type of each assembly. For each sequence type, we selected the assembly with the highest N50 statistic and used the RefSeq gene annotation of that assembly to determine the gene content of the sequence type.
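A sketch of this assembly selection step is given below, with a standard N50 computation; the input layout (tuples of sequence type, assembly identifier and contig lengths) is hypothetical:

```python
def n50(contig_lengths):
    """N50: the contig length at which half of the total assembly size is reached."""
    lengths = sorted(contig_lengths, reverse=True)
    half, running = sum(lengths) / 2.0, 0
    for length in lengths:
        running += length
        if running >= half:
            return length
    return 0

def best_assembly_per_st(assemblies):
    """assemblies: iterable of (sequence_type, assembly_id, contig_lengths) tuples."""
    best = {}
    for st, asm, contigs in assemblies:
        score = n50(contigs)
        if st not in best or score > best[st][1]:
            best[st] = (asm, score)
    return best

if __name__ == "__main__":
    demo = [("ST-11", "asm_A", [500000, 200000, 50000]),
            ("ST-11", "asm_B", [900000, 100000]),
            ("ST-258", "asm_C", [300000, 300000, 100000])]
    print(best_assembly_per_st(demo))
```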
In order to develop a simple probabilistic presence/absence strain typing model, we considered the genome of each strain simply as a collection of genes. Denote by $St_j$, $j = 1, \ldots, J$, all the strains in our database (for a fixed species). Denote by $g_{j,k}$ the $k$th gene in the database for strain $j$, where the genes are listed in no particular order. Denote by $N_j$ the total number of genes in $St_j$.
We aligned each sequence read $r_i$ from the MinION device to the gene database using BWA-MEM [11]. We counted the number of genes of each strain that aligned to read $r_i$, denoted by $N_j(r_i)$.
We describe below how to calculate the likelihood, $P(r_i | St_j)$, of each strain generating each read, from which we can calculate the posterior probability of each strain $St_j$ conditional on observing the reads $r_1, \ldots, r_m$:
$$ P({St}_{j}|r_{1}..r_{m}) = \frac{\prod_{i=1..m}P(r_{i}|{St}_{j})}{\sum_{j'}\prod_{i=1..m}P(r_{i}|{St}_{j'})} $$
The probability $P(r_i | St_j)$ could be calculated using a simple model as:
$$ P_{\text{simple}}(r_{i} | {St}_{j}) = \frac{N_{j}(r_{i})}{N_{j}}, $$
However, this model suffers from the problem that if we observe any read that overlaps a gene not in the reference genome for $St_j$, then the posterior probability of that strain will become zero. Thus, this model is very unstable. In order to make this estimate more stable, we used a mixture model that allows the read to have been generated by a background model:
$$ \begin{aligned} P(r_{i} | {St}_{j}) = (1-c)*\frac{N_{j}(r_{i})}{N_{j}} + (c)*P\left(r_{i} | \bigcup_{j'} {St}_{j'}\right). \end{aligned} $$
The background model considers the probability that the read was generated from any of the strains:
$$ P\left(r_{i} | \bigcup_{j'} {St}_{j'}\right) = \frac{\sum_{j'} N_{j'}(r_{i})}{\sum_{j'} N_{j'}}. $$
This makes the posterior probability estimates more stable. It also makes the model robust to incorrect annotation of reads from the MinION sequencer and to incorrect annotation of the reference genome. We investigated using $c=0.2$, $c=0.1$ and $c=0.05$ and found that the choice had little impact on the results, with slightly smaller confidence intervals (data not shown). We chose $c=0.2$ in order to estimate confidence intervals conservatively.
Finally, in order to calculate confidence intervals we employed a bootstrap resampling approach in which we resampled m reads from r1,…r m with replacement. This is repeated 1000 times, and the posterior probabilities are recalculated every iteration. We calculated the 95 % confidence intervals from the empirical distribution of these posterior probabilities.
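The sketch below implements this mixture model and the bootstrap procedure on a small toy input; it is a simplified re-implementation for illustration, not the code released with the pipeline:

```python
import numpy as np

def strain_posteriors(gene_hits, gene_totals, c=0.2, n_boot=1000, alpha=0.05, seed=0):
    """Posterior strain probabilities from per-read gene hits, with bootstrap CIs.

    gene_hits:   array of shape (reads, strains); entry (i, j) is N_j(r_i).
    gene_totals: array of length strains; entry j is N_j.
    c:           weight of the background model (0.2 in the paper).
    """
    rng = np.random.default_rng(seed)
    hits = np.asarray(gene_hits, dtype=float)
    totals = np.asarray(gene_totals, dtype=float)
    background = hits.sum(axis=1, keepdims=True) / totals.sum()   # P(r_i | union of strains)
    per_read = (1 - c) * hits / totals + c * background           # P(r_i | St_j)

    def posterior(rows):
        log_lik = np.log(per_read[rows]).sum(axis=0)
        log_lik -= log_lik.max()                                  # avoid numerical underflow
        p = np.exp(log_lik)
        return p / p.sum()

    m = hits.shape[0]
    point = posterior(np.arange(m))
    boot = np.array([posterior(rng.integers(0, m, size=m)) for _ in range(n_boot)])
    lo = np.percentile(boot, 100 * alpha / 2, axis=0)
    hi = np.percentile(boot, 100 * (1 - alpha / 2), axis=0)
    return point, lo, hi

if __name__ == "__main__":
    hits = [[3, 1], [2, 0], [4, 2], [1, 1]]     # 4 reads, 2 candidate strains (toy numbers)
    print(strain_posteriors(hits, [5000, 5200], n_boot=200))
```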
To gain some insight into how this model works in response to gene presence, consider a gene $g$ that is present in a fraction $f$ of strains, including $St_j$ but not $St_k$. For simplicity, assume that each strain has $N$ genes. The difference in log-likelihood between $St_j$ and $St_k$ conditional on $g$ can be approximated by $\log(1/c) + \log(1/f)$, showing that a more specific gene has a stronger effect in distinguishing strains in our model than a common gene.
To gain insight into the effect of gene absence in contrast to gene presence, assume instead that the only difference between $St_j$ and $St_k$ is the deletion of a single gene $g$ in $St_j$, and denote $N = N_j = N_k - 1$. If we sequence $N\ln(2)$ genes from $St_j$ without seeing gene $g$, the difference in log-likelihood becomes $N\ln(2) \cdot (\log(N) - \log(N-1)) \approx 1$ bit, corresponding to the likelihood of $St_j$ being twice as large as that of $St_k$. For example, if a strain has 1000 genes, then we would need to observe 693 genes without observing $g$ to be able to conclude that the observed data were twice as likely to have been generated from the strain with the single-gene deletion. For comparison, we would only need to sequence 100 genes from $St_k$ to get an expected log-likelihood difference of 1 bit versus $St_j$, demonstrating the extra information in gene 'presence' versus 'absence' typing.
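These numbers can be checked directly; the snippet below reproduces the 1000-gene example and, for comparison, the gene-presence contribution assuming $c = 0.2$ and a gene present in 10 % of strains (values chosen purely for illustration):

```python
import math

N = 1000                                   # genes in the smaller strain St_j (example from the text)
reads_needed = N * math.log(2)             # ~693 gene observations without seeing gene g
per_gene_bits = math.log2(N) - math.log2(N - 1)
print(round(reads_needed), round(reads_needed * per_gene_bits, 3))   # -> 693, ~1.0 bit

# Conversely, a single read hitting a gene present only in one strain contributes roughly
# log2(1/c) + log2(1/f) bits; with c = 0.2 and a gene present in 10 % of strains:
print(round(math.log2(1 / 0.2) + math.log2(1 / 0.1), 2))             # -> ~5.64 bits
```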
Antibiotic resistance gene classes detection
We downloaded the resistance gene database from ResFinder [17] (accessed July 2015). We aligned each gene to the collection of bacterial genomes in RefSeq using blastn [36], and used the best alignment of the gene to extract 100 bp sequences flanking the antibiotic resistance genes. We found that the inclusion of these flanking sequences improved the sensitivity of mapping MinION reads to the gene database.
We then grouped these genes based on 90 % sequence identity into 609 groups. We manually checked and found that genes within a group were variants of the same gene. We selected the longest gene in each group to make up a reduced resistance gene database. To create a benchmark of resistance genes for a sample, we used blastn to compare the Illumina assembly of the sample against this reduced gene database, and reported genes with greater than 85 % coverage and identity.
Our analysis pipeline aligned MinION sequencing data to this reduced resistance gene database using BWA-MEM [11] in a streaming fashion, and examined genes with reads mapping over the whole gene (not including flanking sequences). Because of the high error rates of MinION sequence data, we noticed a high rate of false-positive genes. To reduce false positives, we used kalign2 [35] to perform a multiple alignment of the reads that were aligned to the same gene. The consensus sequence resulting from the multiple alignment was then compared with the gene sequence using a probabilistic Finite State Machine (see below). The pipeline then reported resistance gene classes based on the genes detected.
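A skeleton of this report-as-soon-as-confident logic is sketched below; the consensus and scoring functions are toy stand-ins for the kalign2 and pFSM steps described above, and the threshold, names and demo sequence are illustrative:

```python
import collections

def build_consensus(reads):
    """Toy column-wise majority vote; the real pipeline runs kalign2 on the reads."""
    length = max(len(r) for r in reads)
    cols = (collections.Counter(r[i] for r in reads if i < len(r)) for i in range(length))
    return "".join(c.most_common(1)[0][0] for c in cols)

def score_against_gene(consensus, gene):
    """Toy identity score; the real pipeline re-aligns with a probabilistic FSM."""
    matches = sum(a == b for a, b in zip(consensus, gene))
    return matches / len(gene)

class ResistanceGeneDetector:
    def __init__(self, gene_db, threshold=0.9):
        self.gene_db = gene_db                      # {gene_name: gene_sequence}
        self.threshold = threshold
        self.reads = collections.defaultdict(list)  # gene_name -> reads aligned to it
        self.reported = set()

    def add_read(self, gene_name, read_seq):
        """Called for every read whose alignment spans the whole gene."""
        self.reads[gene_name].append(read_seq)

    def evaluate(self):
        """Run periodically (every 5 min in the paper); yield genes passing the threshold."""
        for gene, reads in self.reads.items():
            if gene in self.reported or not reads:
                continue
            score = score_against_gene(build_consensus(reads), self.gene_db[gene])
            if score >= self.threshold:
                self.reported.add(gene)
                yield gene, score

if __name__ == "__main__":
    det = ResistanceGeneDetector({"blaNDM-1": "ATGGAATTGCCCAATATTATG"})  # toy sequence
    for r in ["ATGGAATTGCCCAATATTATG", "ATGGAATTGACCAATATTATG", "ATGGAATTGCCCAATATTATG"]:
        det.add_read("blaNDM-1", r)
    print(list(det.evaluate()))
```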
Sensitive alignment of noisy sequences with probabilistic Finite State Machines
Our methods for MLST strain typing and antibiotic resistance gene identification require the alignment of a consensus sequence to a gene or a gene allele. Such an alignment generally assumes a model and a set of parameters for the differences between the sequences. It is widely recognised that the accuracy of the alignment is sensitive to these parameters [37–39]. However, in the context of real-time analysis of MinION sequencing, it is not possible to select a sensible set of parameters in advance. On the one hand, the quality of sequence reads varies considerably; as shown in Additional file 1: Figure S1 and Table 2, the majority (95 %) of reads across our four runs have a Phred score between 3 and 7 for template and complement reads (corresponding to 50–80 % accuracy) and between 6 and 12 for 2D reads (75–95 % accuracy). On the other hand, a consensus sequence is computationally constructed from a set of reads; its quality is hence contingent not only on the quality of the reads but also on the number of reads in the set.
We use a probabilistic Finite State Machine (pFSM) [40] to model the differences between the sequences, and hence the error profile of the consensus sequence. Briefly, a pFSM is a probabilistic model of genomic alignment that takes into account different types of variation, including SNPs, insertions and deletions. A pFSM is equivalent to a hidden Markov model. The pFSM consists of a set of states and transitions between states. Each transition corresponds to an action and is associated with a cost for that action. An action can be one of copy (C), substitute (S), delete (D) or insert (I). Figure 6 depicts a three-state pFSM, which is equivalent to an affine gap penalty alignment model. In order to assess an alignment of two sequences A and B under a hypothesis specified by the parameters, the pFSM computes the cost to generate one sequence (say A) given the other (B). For example, while in state Copy, the machine consumes the next base in B and generates the next base in A; it is said to take action C if the two bases are the same, or action S otherwise, and in either case follows the corresponding transition to state Copy. Alternatively, the machine can take either action D (consume the next base in B without generating any base in A and move to state Delete), or action I (generate the next base in A without consuming a base in B and move to state Insert). These actions are repeated until the whole of sequence A is generated.
Schematic of a three-state probabilistic Finite State Machine
We used an information-theoretic measure whereby the cost of a transition is that of encoding the generated base, or in other words, the negative logarithm of the probability of the associated action ($c = -\log_2(P(a))$). The foundation of this approach goes back to the 1960s, when it was proposed as a basis for inductive inference [41, 42]. It has since been used in several bioinformatics applications, such as calculating the BLOSUM matrix [43] and modelling DNA sequences [44, 45]. More importantly, this information-theoretic framework allows one to estimate a sensible set of parameters for any two related sequences. This is done via an Expectation-Maximisation process, which starts with an initial set of probabilities at each state. In the E-step, the best (lowest-cost) alignment is calculated by a dynamic programming algorithm. The frequencies of actions at each state are then used to re-estimate the probabilities in the M-step. A detailed discussion of this process is provided in Allison et al. [40] and Cao et al. [46]. The process is guaranteed to converge to an optimum, and in our experience it does so in only a few iterations.
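For illustration, the sketch below implements a single-state simplification of this idea: a dynamic program that computes the encoding cost (in bits) of generating one sequence from another under action probabilities for copy, substitute, delete and insert, followed by a crude EM loop that re-estimates those probabilities from the action counts on the optimal path. It deliberately omits the three-state (affine gap) structure of the machine used in the study, and the starting probabilities and demo sequences are arbitrary:

```python
import math
from collections import Counter

def align_cost(a, b, p):
    """Minimum encoding cost (bits) of generating sequence a from sequence b, plus the
    action counts on the optimal path. p maps C/S/D/I to probabilities."""
    cost = {act: -math.log2(p[act]) for act in "CSDI"}
    n, m = len(a), len(b)
    dp = [[None] * (m + 1) for _ in range(n + 1)]   # dp[i][j]: (bits, counts) for a[:i] vs b[:j]
    dp[0][0] = (0.0, Counter())
    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0 and j == 0:
                continue
            best = None
            if i > 0 and j > 0:                     # copy/substitute: consume a base of each
                act = "C" if a[i - 1] == b[j - 1] else "S"
                prev = dp[i - 1][j - 1]
                best = (prev[0] + cost[act], prev[1] + Counter(act))
            if i > 0:                               # insert: emit a base of a, consume none of b
                prev = dp[i - 1][j]
                cand = (prev[0] + cost["I"], prev[1] + Counter("I"))
                if best is None or cand[0] < best[0]:
                    best = cand
            if j > 0:                               # delete: consume a base of b, emit nothing
                prev = dp[i][j - 1]
                cand = (prev[0] + cost["D"], prev[1] + Counter("D"))
                if best is None or cand[0] < best[0]:
                    best = cand
            dp[i][j] = best
    return dp[n][m]

def estimate_parameters(a, b, iterations=5):
    """Crude EM: align under the current probabilities (E-step), then re-estimate the
    action probabilities from the optimal path's action counts with pseudocounts (M-step)."""
    p = {"C": 0.7, "S": 0.1, "D": 0.1, "I": 0.1}
    bits = float("inf")
    for _ in range(iterations):
        bits, counts = align_cost(a, b, p)
        total = sum(counts.values())
        p = {act: (counts[act] + 0.5) / (total + 2.0) for act in "CSDI"}  # smoothed M-step
    return bits, p

if __name__ == "__main__":
    consensus = "ACGTTAGACCGT"
    gene = "ACGTAAGACGGTT"
    print(estimate_parameters(consensus, gene))
```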
MLST, multi-locus sequence typing; pFSM, probabilistic Finite State Machine
Boyd SD. Diagnostic applications of high-throughput DNA sequencing. Ann Rev Pathol. 2013; 8:381–410. doi:10.1146/annurev-pathol-020712-164026.
Koboldt DC, Steinberg KM, Larson DE, Wilson RK, Mardis ER. The next-generation sequencing revolution and its impact on genomics. Cell. 2013; 155(1):27–38. doi:10.1016/j.cell.2013.09.006.
Gaber MM, Zaslavsky A, Krishnaswamy S. Mining data streams. ACM SIGMOD Record. 2005; 34(2):18. doi:10.1145/1083784.1083789.
Muthukrishnan S. Data Streams: Algorithms and Applications. Foundations Trends Theor Comput Sci. 2005; 1(2):117–236.
Kasianowicz JJ, Brandin E, Branton D, Deamer DW. Characterization of individual polynucleotide molecules using a membrane channel. Proc Nat Acad Sci. 1996; 93(24):13770–3. doi:10.1073/pnas.93.24.13770.
Branton D, Deamer DW, Marziali A, Bayley H, Benner SA, Butler T, Di Ventra M, Garaj S, Hibbs A, Huang X, Jovanovich SB, Krstic PS, Lindsay S, Ling XS, Mastrangelo CH, Meller A, Oliver JS, Pershin YV, Ramsey JM, Riehn R, Soni GV, Tabard-Cossa V, Wanunu M, Wiggin M, Schloss JA. The potential and challenges of nanopore sequencing. Nat Biotechnol. 2008; 26(10):1146–53. doi:10.1038/nbt.1495.
Stoddart D, Heron AJ, Mikhailova E, Maglia G, Bayley H. Single-nucleotide discrimination in immobilized DNA oligonucleotides with a biological nanopore. Proc Nat Acad Sci USA. 2009; 106(19):7702–7. doi:10.1073/pnas.0901054106.
Quick J, Ashton P, Calus S, Chatt C, Gossain S, Hawker J, Nair S, Neal K, Nye K, Peters T, De Pinna E, Robinson E, Struthers K, Webber M, Catto A, Dallman TJ, Hawkey P, Loman NJ. Rapid draft sequencing and real-time nanopore sequencing in a hospital outbreak of Salmonella. Genome Biol. 2015; 16(1):114. doi:10.1186/s13059-015-0677-2.
Greninger AL, Naccache SN, Federman S, Yu G, Mbala P, Bres V, Stryke D, Bouquet J, Somasekar S, Linnen JM, Dodd R, Mulembakani P, Schneider BS, Muyembe-Tamfum JJ, Stramer SL, Chiu CY. Rapid metagenomic identification of viral pathogens in clinical samples by real-time nanopore sequencing analysis. Genome Med. 2015; 7(1):99. doi:10.1186/s13073-015-0220-9.
Cao MD, Ganesamoorthy D, Cooper MA, Coin LJM. Realtime analysis and visualization of MinION sequencing data with npReader. Bioinformatics. 2016; 32(5):764–6. doi:10.1093/bioinformatics/btv658.
Li H. Aligning sequence reads, clone sequences and assembly contigs with BWA-MEM. 2013. arXiv:1303.3997.
Quick J, Quinlan AR, Loman NJ. A Reference Bacterial Genome Dataset Generated on the MinION Portable Single-molecule Nanopore Sequencer. GigaScience. 2014; 3(1):22. doi:10.1186/2047-217x-3-22.
Ashton PM, Nair S, Dallman T, Rubino S, Rabsch W, Mwaigwisya S, Wain J, O'Grady J. MinION nanopore sequencing identifies the position and structure of a bacterial antibiotic resistance island. Nat Biotechnol. 2015; 33(3):296–300. doi:10.1038/nbt.3103.
Kilianski A, Haas JL, Corriveau EJ, Liem AT, Willis KL, Kadavy DR, Rosenzweig CN, Minot SS. Bacterial and viral identification and differentiation by amplicon sequencing on the MinION nanopore sequencer. GigaScience. 2015;4(1). doi:10.1186/s13742-015-0051-z.
Jain M, Fiddes IT, Miga KH, Olsen HE, Paten B, Akeson M. Improved data analysis for the MinION nanopore sequencer. Nat Methods. 2015; 12(4):351–6. doi:10.1038/nmeth.3290.
Diancourt L, Passet V, Verhoef J, Grimont PAD, Brisse S. Multilocus Sequence Typing of Klebsiella pneumoniae Nosocomial Isolates. J Clin Microbiol. 2005; 43(8):4178–82. doi:10.1128/JCM.43.8.4178-4182.2005.
Zankari E, Hasman H, Cosentino S, Vestergaard M, Rasmussen S, Lund O, Aarestrup FM, Larsen MV. Identification of Acquired Antimicrobial Resistance Genes. J Antimicrobial Chemother. 2012; 67(11):2640–4. doi:10.1093/jac/dks261.
Allison L, Wallace CS, Yee CN. When is a string like a string? In: Artificial Intelligence and Mathematics. Ft. Lauderdale, FL; 1990.
Poznik DG, Henn BM, Yee MC, Sliwerska E, Euskirchen GM, Lin AA, Snyder M, Quintana-Murci L, Kidd JM, Underhill PA, Bustamante CD. Sequencing Y Chromosomes Resolves Discrepancy in Time to Common Ancestor of Males Versus Females. Science. 2013; 341(6145):562–5. doi:10.1126/science.1237619.
Juul S, Izquierdo F, Hurst A, Dai X, Wright A, Kulesha E, Pettett R, Turner DJ. What's in my pot? Real-time species identification on the MinION. bioRxiv. 2015. doi:10.1101/030742.
Segata N, Waldron L, Ballarini A, Narasimhan V, Jousson O, Huttenhower C. Metagenomic microbial community profiling using unique clade-specific marker genes. Nat Methods. 2012; 9(8):811–4. doi:10.1038/nmeth.2066.
Judge K, Harris SR, Reuter S, Parkhill J, Peacock SJ. Early insights into the potential of the Oxford Nanopore MinION for the detection of antimicrobial resistance genes. J Antimicrobial Chemother. 2015; 70(10):2775–778. doi:10.1093/jac/dkv206.
Dunne WM, Westblade LF, Ford B. Next-generation and whole-genome sequencing in the diagnostic clinical microbiology laboratory. Eur J Clin Microbiol Infect Dis Off Publ Eur Soc Clin Microbiol. 2012; 31(8):1719–26. doi:10.1007/s10096-012-1641-7.
Fricke WF, Rasko DA. Bacterial genome sequencing in the clinic: bioinformatic challenges and solutions. Nat Rev Genet. 2014; 15(1):49–55. doi:10.1038/nrg3624.
Maiden MC, Bygraves JA, Feil E, Morelli G, Russell JE, Urwin R, Zhang Q, Zhou J, Zurth K, Caugant DA, Feavers IM, Achtman M, Spratt BG. Multilocus sequence typing: a portable approach to the identification of clones within populations of pathogenic microorganisms. Proc Nat Acad Sci USA. 1998; 95(6):3140–145. doi:10.1073/pnas.95.6.3140.
Cody AJ, McCarthy ND, Jansen van Rensburg M, Isinkaye T, Bentley SD, Parkhill J, Dingle KE, Bowler ICJW, Jolley KA, Maiden MCJ. Real-Time Genomic Epidemiological Evaluation of Human Campylobacter Isolates by Use of Whole-Genome Multilocus Sequence Typing. J Clin Microbiol. 2013; 51(8):2526–34. doi:10.1128/JCM.00066-13.
Inouye M, Dashnow H, Raven LA, Schultz MB, Pope BJ, Tomita T, Zobel J, Holt KE. SRST2: Rapid genomic surveillance for public health and hospital microbiology labs. Genome Med. 2014; 6(11):90. doi:10.1186/s13073-014-0090-6.
Cao MD, Nguyen SH, Ganesamoorthy D, Elliott A, Cooper M, Coin LJM. Scaffolding and Completing Genome Assemblies in Real-time with Nanopore Sequencing. BioRxiv. 2016. 054783. doi:10.1101/054783.
David M, Dursi LJ, Yao D, Boutros PC, Simpson JT. Nanocall: An Open Source Basecaller for Oxford Nanopore Sequencing Data. BioRxiv. 2016. 046086. doi:10.1101/046086.
Boža V, Brejová B, Vinar T. DeepNano: Deep Recurrent Neural Networks for Base Calling in MinION Nanopore Reads. 2016. arXiv:1603.09195.
Bolger AM, Lohse M, Usadel B. Trimmomatic: a flexible trimmer for Illumina sequence data. Bioinformatics. 2014; 30(15):2114–120. doi:10.1093/bioinformatics/btu170.
Bankevich A, Nurk S, Antipov D, Gurevich AA, Dvorkin M, Kulikov AS, Lesin VM, Nikolenko SI, Pham S, Prjibelski AD, Pyshkin AV, Sirotkin AV, Vyahhi N, Tesler G, Alekseyev MA, Pevzner PA. SPAdes: A New Genome Assembly Algorithm and Its Applications to Single-Cell Sequencing. J Comput Biol. 2012; 19(5):455–77. doi:10.1089/cmb.2012.0021.
Larsen MV, Cosentino S, Rasmussen S, Friis C, Hasman H, Marvig RL, Jelsbak L, Sicheritz-Pontén T, Ussery DW, Aarestrup FM, Lund O. Multilocus Sequence Typing of Total-Genome-Sequenced Bacteria. J Clin Microbiol. 2012; 50(4):1355–61. doi:10.1128/JCM.06094-11.
Sison CP, Glaz J. Simultaneous Confidence Intervals and Sample Size Determination for Multinomial Proportions. J Am Stat Assoc. 1995; 90(429):366. doi:10.2307/2291162.
Lassmann T, Frings O, Sonnhammer ELL. Kalign2: High-performance Multiple Alignment of Protein and Nucleotide Sequences Allowing External Features. Nucleic Acids Res. 2009; 37(3):858–65. doi:10.1093/nar/gkn1006.
Altschul SF, Gish W, Miller W, Myers EW, Lipman DJ. Basic Local Alignment Search Tool. J Mol Biol. 1990; 215(3):403–10. doi:10.1016/S0022-2836(05)80360-2.
Gusfield D, Balasubramanian K, Naor D. Parametric Optimization of Sequence Alignment. Algorithmica. 1994; 12(4):312–26. doi:10.1007/bf01185430.
Frith M, Hamada M, Horton P. Parameters for Accurate Genome Alignment. BMC Bioinformatics. 2010; 11(1):80. doi:10.1186/1471-2105-11-80.
Cao MD, Dix TI, Allison L. A genome alignment algorithm based on compression. BMC Bioinformatics. 2010; 11(1):599. doi:10.1186/1471-2105-11-599.
Allison L, Wallace CS, Yee CN. Finite-state models in the alignment of macromolecules. J Mol Evol. 1992; 35(1):77–89. doi:10.1007/BF00160262.
Solomonoff R. A Formal Theory of Inductive Inference. Inform Control. 1964; 7(1):1–22 and 7(2):224–54.
Wallace CS, Boulton DM. An Information Measure for Classification. Comput J. 1968; 11(2):185–94.
Henikoff S, Henikoff JG. Amino acid substitution matrices from protein blocks. Proc Nat Acad Sci. 1992; 89(22):10915–9.
Cao MD, Dix TI, Allison L, Mears C. A simple statistical algorithm for biological sequence compression. In: Data Compression Conference. Utah: IEEE: 2007. p. 43–52, doi:10.1109/DCC.2007.7.
Cao MD, Dix TI, Allison L. A biological compression model and its applications In: Arabnia HRR, Tran Q-N, editors. Software Tools and Algorithms for Biological Systems. Advances in Experimental Medicine and Biology. New York: Springer: 2011. p. 657–66, doi:10.1007/978-1-4419-7046-6_67.
Cao MD, Dix TI, Allison L. Computing substitution matrices for genomic comparative analysis In: Theeramunkong T, Kijsirikul B, Cercone N, Ho T-B, editors. Advances in Knowledge Discovery and Data Mining. Lecture Notes in Computer Science. Berlin Heidelberg: Springer: 2009. p. 647–55, doi:10.1007/978-3-642-01307-2_64.
Cao MD. Java package for sequence analysis. 2015. https://github.com/mdcao/japsa.
Cao MD, Ganesamoorthy D, Elliott A, Zhang H, Cooper M, Coin L. Support data for "Streaming algorithms for identification of pathogens and antibiotic resistance potential from real-time MinION sequencing". GigaScience Database. 2016. doi:10.5524/100206.
Elliott AG, Ganesamoorthy D, Coin L, Cooper MA, Cao MD. Complete genome sequence of klebsiella quasipneumoniae subsp. similipneumoniae Strain ATCC 700603. Genome Announcements. 2016; 4(3):00438–16. doi:10.1128/genomeA.00438-16.
We thank Ilias Karaiskos and Helen Giamarellou (6th Dept. of Internal Medicine, Hygeia General Hospital, Athens, Greece) for providing the clinical K. pneumoniae isolate. MAC is an NHMRC Principal Research Fellow (APP1059354). LC is an ARC Future Fellow (FT110100972). The research is supported by funding from the Institute for Molecular Bioscience Centre for Superbug Solutions (610246).
Availability and requirements
Project name: Streaming algorithms to identify pathogens and antibiotic resistance from real-time MinION.
Project home page: https://github.com/mdcao/npAnalysis
Operating system(s): Platform independent
Programming language: Java and R.
License: FreeBSD.
The source code of the software is publicly available in the Japsa GitHub repository [47]. All scripts for the presented analyses are provided on the project home page. The sequencing data for the experiments presented are available in the European Nucleotide Archive under accession PRJEB14532. Supporting data and snapshots of the code are available in the GigaDB repository [48].
MDC, DG, MC and LC conceived the study, performed the analysis and wrote the first draft of the manuscript. AE performed the bacterial cultures and DNA extractions. DG performed the MinION sequencing. MDC and LC designed and developed the algorithms and the analysis framework. MDC, HZ, and LC performed the bioinformatics analyses. All authors contributed to editing the final manuscript. All authors read and approved the final manuscript.
MC is a participant of Oxford Nanopore's MinION Access Programme (MAP) and received the MinION device, MinION Flow Cells and Oxford Nanopore Sequencing Kits in return for an early access fee deposit. None of the authors have any commercial or financial interest in Oxford Nanopore Technologies Ltd. The authors declare that they have no competing interests.
Institute for Molecular Bioscience, The University of Queensland, 306 Carmody Road, St Lucia, Brisbane, QLD 4072, Australia
Minh Duc Cao, Devika Ganesamoorthy, Alysha G. Elliott, Huihui Zhang, Matthew A. Cooper & Lachlan J.M. Coin
Department of Genomics of Common Disease, Imperial College London, London, W12 0NN, UK
Lachlan J.M. Coin
Correspondence to Lachlan J.M. Coin.
Additional file 1
Figure S1. Histograms of read quality (in Phred score) and read lengths of four MinION sequencing runs. (PDF 112 kb)
Figure S2. Screen shot of What's In My Pot (WIMP) analysis of the clinical sample after three hours of sequencing. (PNG 58 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Cao, M.D., Ganesamoorthy, D., Elliott, A.G. et al. Streaming algorithms for identification of pathogens and antibiotic resistance potential from real-time MinION™ sequencing. GigaSci 5, 32 (2016). https://doi.org/10.1186/s13742-016-0137-2
Nanopore sequencing
Real-time analysis
Pathogen identification
|
CommonCrawl
|
Eric Jang
Technology, A.I., Careers
Uncertainty: a Tutorial
A PDF version of this post can be found here.
Chinese translation by Xiaoyi Yin
Notions of uncertainty are tossed around in conversations around AI safety, risk management, portfolio optimization, scientific measurement, and insurance. Here are a few examples of colloquial use:
"We want machine learning models to know what they don't know.''
"An AI responsible for diagnosing patients and prescribing treatments should tell us how confident it is about its recommendations.''
"Significant figures in scientific calculations represent uncertainty in measurements.''
"We want autonomous agents to explore areas where they are uncertain (about rewards or predictions) so that they may discover sparse rewards.''
"In portfolio optimization, we want to maximize returns while limiting risk.''
"US equity markets finished disappointingly in 2018 due to increased geopolitical uncertainty.''
What exactly then, is uncertainty?
Uncertainty measures reflect the amount of dispersion of a random variable. In other words, it is a scalar measure of how "random" a random variable is. In finance, it is often referred to as risk.
There is no single formula for uncertainty because there are many different ways to measure dispersion: standard deviation, variance, value-at-risk (VaR), and entropy are all appropriate measures. However, it's important to keep in mind that a single scalar number cannot paint a full picture of "randomness'', as that would require communicating the entire random variable itself!
Nonetheless, it is helpful to collapse randomness down to a single number for the purposes of optimization and comparison. The important thing to remember is that "more uncertainty'' is usually regarded as "less good'' (except in simulated RL experiments).
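To make the idea of collapsing randomness into a single number concrete, here is a minimal sketch (assuming NumPy/SciPy and a hypothetical array of return samples; the 5% VaR level and the histogram binning are arbitrary illustration choices) that computes several of the dispersion measures mentioned above from the same samples:

```python
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.02, scale=0.1, size=10_000)  # hypothetical daily returns

std_dev = samples.std()                        # standard deviation
variance = samples.var()                       # variance
value_at_risk = -np.percentile(samples, 5)     # 5% VaR: loss exceeded only 5% of the time
counts, _ = np.histogram(samples, bins=50)
shannon_entropy = entropy(counts / counts.sum())  # entropy of a discretized histogram

print(f"std={std_dev:.4f}  var={variance:.4f}  "
      f"VaR(5%)={value_at_risk:.4f}  H={shannon_entropy:.3f}")
```

Each of these numbers summarizes the same samples differently, which is exactly why no single scalar tells the whole story.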
Types of Uncertainty
Statistical machine learning concerns itself with the estimation of models $p(\theta|\mathcal{D})$, which in turn estimate unknown random variables $p(y|x)$. Multiple forms of uncertainty come into play here. Some notions of uncertainty describe inherent randomness that we should expect (e.g. outcome of a coin flip) while others describe our lack of confidence about our best guess of the model parameters.
To make things more concrete, let's consider a recurrent neural network (RNN) that predicts the amount of rainfall today from a sequence of daily barometer readings. A barometer measures atmospheric pressure, which often drops when it's about to rain. Here's a diagram summarizing the rainfall prediction model along with different kinds of uncertainty.
Uncertainty can be understood from a simple machine learning model that attempts to predict daily rainfall from a sequence of barometer readings. Aleatoric uncertainty is irreducible randomness that arises from the data collection process. Epistemic uncertainty reflects confidence that our model is making the correct predictions. Finally, out-of-distribution errors arise when the model sees an input that differs from its training data (e.g. temperature of the sun, other anomalies).
Aleatoric Uncertainty
Aleatoric Uncertainty draws its name from the Latin root aleatorius, which means the incorporation of chance into the process of creation. It describes randomness arising from the data generating process itself; noise that cannot be eliminated by simply drawing more data. It is the coin flip whose outcome you cannot know.
In our rainfall prediction analogy, aleatoric noise arises from imprecision of the barometer. There are also important variables that the data collection setup does not observe: How much rainfall was there yesterday? Are we measuring barometric pressure in the present day, or the last ice age? These unknowns are inherent to our data collection setup, so collecting more data from that system does not absolve us of this uncertainty.
Aleatoric uncertainty propagates from the inputs to the model predictions. Consider a simple model $y = 5x$, which takes in normally-distributed input $x \sim \mathcal{N}(0,1)$. In this case, $y \sim \mathcal{N}(0, 5^2)$, so the aleatoric uncertainty of the predictive distribution can be described by $\sigma=5$. Of course, predictive aleatoric uncertainty is more challenging to estimate when the random structure of the input data $x$ is not known.
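A quick Monte Carlo check of this example (a sketch; the sample size is an arbitrary choice) pushes draws of $x$ through the deterministic model and inspects the spread of the resulting predictions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=100_000)  # aleatoric noise in the input
y = 5.0 * x                             # deterministic model y = 5x

# All of the predictive spread comes from the input noise.
print(y.mean(), y.std())  # approximately 0 and 5
```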
One might think that because aleatoric uncertainty is irreducible, one cannot do anything about it and so we should just ignore it. No! One thing to watch out for when training models is to choose an output representation capable of representing aleatoric uncertainty correctly. A standard LSTM does not emit probability distributions, so attempting to learn the outcome of a coin flip would just converge to the mean. In contrast, models for language generation emit a sequence of categorical distributions (words or characters), which can capture the inherent ambiguity in sentence completion tasks.
Epistemic Uncertainty
"Good models are all alike; every bad model is wrong in its own way."
Epistemic Uncertainty is derived from the Greek root epistēmē, which pertains to knowledge about knowledge. It measures our ignorance of the correct prediction arising from our ignorance of the correct model parameters.
Below is a plot of a Gaussian Process Regression model on some toy 1-dimensional dataset. The confidence intervals reflect epistemic uncertainty; the uncertainty is zero for training data (red points), and as we get farther away from training points, the model ought to assign higher standard deviations to the predictive distribution. Unlike aleatoric uncertainty, epistemic uncertainty can be reduced by gathering more data and "ironing out" the regions of inputs where the model lacks knowledge.
1-D Gaussian Process Regression Model showcasing epistemic uncertainty for inputs outside its training set.
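A plot like the one above can be reproduced in spirit with scikit-learn's GaussianProcessRegressor; the kernel and toy data below are illustrative assumptions, not the ones behind the figure:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(8, 1))
y_train = np.sin(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
gp.fit(X_train, y_train)

X_test = np.linspace(-6, 6, 200).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
# std is near zero at the training points and grows as X_test moves away
# from them; that growth is the epistemic uncertainty shown in the plot.
```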
There is a rich line of inquiry connecting Deep Learning to Gaussian Processes. The hope is that we can extend the uncertainty-awareness properties of GPs with the representational power of neural networks. Unfortunately, GPs are challenging to scale to the uniform stochastic minibatch setting for large datasets, and they have fallen out of favor among those working on large models and datasets.
If one wants maximum flexibility in choosing their model family, a good alternative to estimating uncertainty is to use ensembles, which is just a fancy way of saying "multiple independently learned models''. While GP models analytically define the predictive distribution, ensembles can be used to compute the empirical distribution of predictions.
Any individual model will make some errors due to randomized biases that occur during the training process. Ensembling is powerful because other models in the ensembles tend to expose the idiosyncratic failures of a single model while agreeing with the correctly inferred predictions.
How do we sample models randomly to construct an ensemble? In Ensembling with bootstrap aggregation, we start with a training dataset of size $N$ and sample $M$ datasets of size $N$ from the original training set (with replacement, so each dataset does not span the entire dataset). The $M$ models are trained on their respective datasets and their resulting predictions collectively form an empirical predictive distribution.
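Here is a minimal sketch of bootstrap aggregation, assuming some off-the-shelf base learner (the decision tree is an arbitrary stand-in for whatever model family one actually cares about):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def bagged_ensemble(X, y, n_models=10, seed=0):
    rng = np.random.default_rng(seed)
    models, n = [], len(X)
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)                 # N draws with replacement
        models.append(DecisionTreeRegressor().fit(X[idx], y[idx]))
    return models

def predictive_distribution(models, X):
    preds = np.stack([m.predict(X) for m in models])     # shape (n_models, n_points)
    return preds.mean(axis=0), preds.std(axis=0)         # empirical mean and spread
```

The standard deviation across ensemble members is one convenient scalar summary of the empirical predictive distribution.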
If training multiple models is too expensive, it is also possible to use Dropout training to approximate a model ensemble. However, introducing dropout involves an extra hyperparameter and can compromise single model performance (often unacceptable for real world applications where calibrated uncertainty estimation is secondary to accuracy).
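For completeness, the dropout variant can be sketched in PyTorch by keeping dropout stochastic at inference time and averaging several forward passes; the architecture, dropout rate and number of passes below are placeholders, and the spread across passes serves as a rough stand-in for epistemic uncertainty:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(),
                      nn.Dropout(p=0.1), nn.Linear(64, 1))

def mc_dropout_predict(model, x, n_passes=30):
    model.train()  # keep dropout layers stochastic at test time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_passes)])
    return preds.mean(dim=0), preds.std(dim=0)  # predictive mean and spread
```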
Therefore, if one has access to plentiful computing resources (as one does at Google), it is often easier to just re-train multiple copies of a model. This also yields the benefits of ensembling without hurting performance. This is the approach taken by the Deep Ensembles paper. The authors of this paper also mention that the random training dynamics induced by differing weight initializations was sufficient to introduce a diverse set of models without having to resort to reducing the training set diversity via bootstrap aggregation. From a practical engineering standpoint, it's smart to bet on risk estimation methods that do not get in the way of the model's performance or whatever other ideas the researcher wants to try.
Out-of-Distribution Uncertainty
For our rainfall predictor, what if instead of feeding in the sequence of barometer readings, we fed in the temperature of the sun? Or a sequence of all zeros? Or barometer readings from a sensor that reports in different units? The RNN will happily compute away and give us a prediction, but the result will likely be meaningless.
The model is totally unqualified to make predictions on data generated via a different procedure than the one used to create the training set. This is a failure mode that is often overlooked in benchmark-driven ML research, because we typically assume that the training, validation, and test sets consist entirely of clean i.i.d data.
Determining whether inputs are "valid'' is a serious problem for deploying ML in the wild, and is known as the Out of Distribution (OoD) problem. OoD is also synonymous with model misspecification error and anomaly detection.
Besides its obvious importance for hardening ML systems, anomaly detection models are an intrinsically useful technology. For instance, we might want to build a system that monitors a healthy patient's vitals and alerts us when something goes wrong without necessarily having seen that pattern of pathology before. Or we might be managing the "health" of a datacenter and want to know whenever unusual activity occurs (disks filling up, security breaches, hardware failures, etc.)
Since OoD inputs only occur at test-time, we should not presume to know the distribution of anomalies the model encounters. This is what makes OoD detection tricky - we have to harden a model against inputs it never sees during training! This is exactly the standard attack scenario described in Adversarial Machine Learning.
There are two ways to handle OoD inputs for a machine learning model: 1) catch the bad inputs before we even put them through the model, or 2) let the "weirdness'' of model predictions imply to us that the input was probably malformed.
In the first approach, we assume nothing about the downstream ML task, and simply consider the problem of whether an input is in the training distribution or not. This is exactly what discriminators in Generative Adversarial Networks (GANs) are supposed to do. However, a single discriminator is not completely robust because it is only good for discriminating between the true data distribution and whatever the generator's distribution is; it can give arbitrary predictions for an input that lies in neither distribution.
Instead of a discriminator, we could build a density model of the in-distribution data, such as a kernel density estimator or fitting a Normalizing Flow to the data. Hyunsun Choi and I investigated this in our recent paper on using modern generative models to do OoD detection.
The second approach to OoD detection involves using the predictive (epistemic) uncertainty of the task model to tell us when inputs are OoD. Ideally, malformed inputs to a model ought to generate "weird'' predictive distributions $p(y|x)$. For instance, Hendrycks and Gimpel showed that the maximum softmax probability (the probability assigned to the predicted class) for OoD inputs tends to be lower than that of in-distribution inputs. Here, uncertainty is inversely proportional to the "confidence'' as modeled by the max softmax probability. Models like Gaussian Processes give us these uncertainty estimates by construction, or we could compute epistemic uncertainty via Deep Ensembles.
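The max-softmax baseline reduces to a few lines; this is a sketch, assuming `logits` are the task model's outputs for a batch of inputs and that the threshold would be tuned on held-out data:

```python
import torch
import torch.nn.functional as F

def max_softmax_score(logits):
    # Higher maximum softmax probability = more "in-distribution"-looking input.
    return F.softmax(logits, dim=-1).max(dim=-1).values

logits = torch.randn(4, 10)              # placeholder model outputs
threshold = 0.5                          # hypothetical, tuned on validation data
is_ood = max_softmax_score(logits) < threshold
```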
In reinforcement learning, OoD inputs are actually assumed to be a good thing, because they represent inputs from the world that the agent does not know how to handle yet. Encouraging the policy to find its own OoD inputs implements "intrinsic curiosity'' to explore regions the model predicts poorly in. This is all well and good, but I do wonder what would happen if such curiosity-driven agents are deployed in real world settings where sensors break easily and other experimental anomalies happen. How does a robot distinguish between "unseen states" (good) and "sensors breaking" (bad)? Might that result in agents that learn to interfere with their sensory mechanisms to generate maximum novelty?
Who Will Watch the Watchdogs?
As mentioned in the previous section, one way to defend ourselves against OoD inputs is to set up a likelihood model that "watchdogs" the inputs to a model. I prefer this approach because it de-couples the problem of OoD inputs from epistemic and aleatoric uncertainty in the task model. It makes things easy to analyze from an engineering standpoint.
But we should not forget that the likelihood model is also a function approximator, possibly with its own OoD errors! We show in our recent work on Generative Ensembles (and also showed in concurrent work by DeepMind), that under a CIFAR likelihood model, natural images from SVHN can actually be more likely than the in-distribution CIFAR images themselves!
Likelihood estimation involves a function approximator that can itself be susceptible to OoD inputs. A likelihood model of CIFAR assigns higher probabilities to SVHN images than CIFAR test images!
However, all is not lost! It turns out that the epistemic uncertainty of likelihood models is an excellent OoD detector for the likelihood model itself. By bridging epistemic uncertainty estimation with density estimation, we can use ensembles of likelihood models to protect machine learning models against OoD inputs in a model-agnostic way.
Calibration: the Next Big Thing?
A word of warning: just because a model is able to spit out a confidence interval for a prediction doesn't mean that the confidence interval actually reflects the actual probabilities of outcomes in reality!
Confidence intervals (e.g. $2\sigma$) implicitly assume that your predictive distribution is Gaussian-distributed, but if the distribution you're trying to predict is multi-modal or heavy-tailed, then your model will not be well calibrated!
Suppose our rainfall RNN tells us that there will be $\mathcal{N}(4, 1)$ inches of rain today. If our model is calibrated, then if we were to repeat this experiment over and over again under identical conditions (possibly re-training the model each time), we really would observe empirical rainfall to be distributed exactly $\mathcal{N}(4, 1)$.
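That notion of calibration can be checked empirically; here is a sketch, assuming arrays of predicted means and standard deviations alongside the observed outcomes of repeated experiments:

```python
import numpy as np
from scipy.stats import norm

def interval_coverage(mu, sigma, y_observed, nominal=0.9):
    z = norm.ppf(0.5 + nominal / 2)              # half-width in standard deviations
    inside = np.abs(y_observed - mu) <= z * sigma
    return inside.mean()                         # ≈ nominal if the model is calibrated
```

If a model claims 90% intervals but `interval_coverage` comes back near 0.6, its predictive distributions are overconfident.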
Machine Learning models developed by academia today mostly optimize for test accuracy or some fitness function. Researchers are not performing model selection by deploying the model in repeated identical experiments and measuring calibration error, so unsurprisingly, our models tend to be poorly calibrated.
Going forward, if we are to trust ML systems deployed in the real world (robotics, healthcare, etc.), I think a much more powerful way to "prove our models understand the world correctly'' is to test them for statistical calibration. Good calibration also implies good accuracy, so it would be a strictly higher bar to optimize against.
Should Uncertainty be Scalar?
As useful as they are, scalar uncertainty measures will never be as informative as the random variables they describe. I find methods like particle filtering and Distributional Reinforcement Learning very cool because they are algorithms that operate on entire distributions, freeing us from resorting to simple normal distributions to keep track of uncertainty. Instead of shaping ML-based decision making with a single scalar of "uncertainty", we can now query the full structure of distributions when deciding what to do.
The Implicit Quantile Networks paper (Dabney et al.) has a very nice discussion on how to construct "risk-sensitive agents'' from a return distribution. In some environments, one might favor an opportunistic policy that prefers to explore the unknown, while in other environments unknown things may be unsafe and should be avoided. The choice of risk measure essentially determines how to map the distribution of returns to a scalar quantity that can be optimized against. All risk measures can be computed from the distribution, so predicting full distributions enables us to combine multiple definitions of risk easily. Furthermore, supporting flexible predictive distributions seems like a good way to improve model calibration.
Performance of various risk measures on Atari games as reported by the IQN paper.
Risk measures are a deeply important research topic to financial asset managers. The vanilla Markowitz portfolio objective minimizes a weighted variance of portfolio returns $\frac{1}{2}\lambda w^T \Sigma w$. However, variance is an unintuitive choice of "risk'' in financial contexts: most investors don't mind returns exceeding expectations, but rather wish to minimize the probability of small or negative returns. For this reason, risk measures like Value-at-Risk, Shortfall Probability, and Target Semivariance, which only pay attention to the likelihood of "bad'' outcomes, are more useful objectives to optimize.
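These alternative risk measures are easy to state on a sample of portfolio returns; here is a sketch with hypothetical simulated returns (the 5% level and the 0% target are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.05, 0.2, size=100_000)    # hypothetical annual portfolio returns

variance = returns.var()                          # Markowitz-style risk
value_at_risk = -np.percentile(returns, 5)        # loss exceeded with 5% probability
shortfall_prob = (returns < 0.0).mean()           # probability of falling below the target
downside = np.minimum(returns - 0.0, 0.0)
target_semivariance = (downside ** 2).mean()      # penalizes only below-target outcomes
```

Variance penalizes pleasant surprises and unpleasant ones equally; the last three measures only look at the left tail.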
Unfortunately, they are also more difficult to work with analytically. My hope is that research into distributional RL, Monte Carlo methods, and flexible generative models will allow us to build differentiable relaxations of risk measures that can play nicely with portfolio optimizers. If you work in finance, I highly recommend reading the IQN paper's "Risks in Reinforcement Learning" section.
Here's a recap of the main points of this post:
Uncertainty/risk measures are scalar measures of "randomness''. Collapsing a random variable to a single number is done for optimization and mathematical convenience.
Predictive uncertainty can be decomposed into aleatoric (irreducible noise arising from data collection process), epistemic (ignorance about true model), and out-of-distribution (at test time, inputs may be malformed).
Epistemic uncertainty can be mitigated by softmax prediction thresholding or ensembling.
Instead of propagating OoD uncertainty to predictions, we can use a task-agnostic filtering mechanism that safeguards against "malformed inputs''.
Density models are a good choice for filtering inputs at test time. However, it's important to recognize that density models are merely approximations of the true density function, and are themselves susceptible to out-of-distribution inputs.
Self-plug: Generative Ensembles reduce epistemic uncertainty of likelihood models so they can be used to detect OoD inputs.
Calibration is important and underappreciated in research models.
Some algorithms (Distributional RL) extend ML algorithms to models that emit flexible distributions, which provides more information than a single risk measure.
I especially recommend Chapter 3 ("Risk Measurement") of Modern Investment Management by Litterman et al. to learn about risk in a concrete way.
http://uqpm2017.usacm.org/sites/default/files/DStarcuzzi_UQConf.pdf
http://mlg.eng.cam.ac.uk/yarin/blog_2248.html
Posted by Eric at 10:14 AM
Labels: AI, Finance, Machine Learning, Statistics
|
CommonCrawl
|
Effects of yellow natural dyes on handmade Daqian paper
Yanbing Luo ORCID: orcid.org/0000-0003-2859-45281 &
Xiujuan Zhang2
Natural yellow plant dyes and traditional medicines were used widely on historical papers in ancient China for religious reasons and conservation considerations. This study aims to evaluate some traditional yellow botanical sources of dyes that contain different chemical colorant compositions in order to understand their effects on the properties of traditional handmade paper. The physical and chemical changes in paper specimens treated with plant dyes were studied by examining properties such as the color, pH, thermogravimetric (TG) characteristics, tensile strength, folding endurance and microstructure by scanning electron microscopy (SEM). The results indicated that different colorants had different toning effects and that the main components, including carboxyl and ketone groups, could affect the paper stability at high temperatures. The results also revealed that the mechanical properties of paper specimens were improved after treatment with plant dyes. The significant improvements in the tensile strength and folding endurance and the slightly higher decomposition temperature of Amur cork tree-dyed paper could be ascribed to the strong interaction between the colorants' main components and the fibers. The scientific evaluation of the property changes is therefore valuable information for weighing the advantages and disadvantages of the various yellow toning materials for paper conservation treatment.
Throughout history, botanically-sourced dyes extracted from locally available plants have been used to color papers, textiles and other objects. According to the remaining colored ancient Chinese books, yellow dyes have been widely used for thousands of years. The most famous of these books are the large collections of ancient Buddhist manuscripts at Dunhuang, some of which were dyed yellow [1, 2]. There are many reasons for the use of plants rich in yellow colorants in China: religious purposes (yellow represents solemnity in Buddhism); in hierarchical symbols because yellow was regarded as an imperial color—the emperors and imperial family of China were the only ones allowed to wear yellow robes; the high tinctorial strength of these plant dyes; the vividness of the colors obtained from them (even though some are not very colorfast); their ease of use; and last, conservation considerations, since yellow alkaloid dyes have insecticidal properties; that is, berberine, curcumin, crocetin and rutin dyes can be used as antioxidants and antibacterial materials. Even the Chinese government decree of AD 675 stated that yellow paper should be used by various government offices when issuing decrees and orders since white paper was often damaged by insects [3].
The yellow or golden colors of autumn leaves, roots and tree bark are important sources of yellow dyes and paints. Since the earliest times, dyers worldwide have discovered a wealth of yellow dyes in many botanical sources. The pagoda bud (called huaimi in Chinese, the pagoda flower and bud of Sophora japonica L.), gardenia (called zhizi in Chinese, the fruit of Gardenia jasminoides Ellis), turmeric (called qianhuang in Chinese, the rhizome of Curcuma longa L.), and the Amur cork tree (called huangbo in Chinese, the inner part of the bark of Phellodendron amurense Rupr.), were known and used for thousands of years in China. The dyes can be fixed onto paper fibers by a brushing or pulling process without the need for a mordant, a quality that makes them quite easy to use and that, along with their fair anti-insect and/or colorfastness properties, has ensured their long-lasting popularity with dyers. A description of how to use the pagoda bud for dyeing was included in the eighth century Bencao shiyi [4]. The cultivation and use of gardenia for dyeing have been recorded since the Northern and Southern Dynasties (420–589 AD) [5]. Turmeric was recorded as a yellow dye in historical documents such as Bencao gangmu (本草纲目, Compendium of Materia Medica, 1578) [6]. A detailed description of the preparation of Amur cork tree dye for dyeing paper was found in Qimin yaoshu (齐民要术, Main techniques for the welfare of the people, ca. 533–544 AD), written during the Northern Dynasty (386–534 AD) [7]. Multidisciplinary research involving botanists, historians and chemists has already yielded interesting results, showing that these yellow dye plants contain different color sources [8]. Gardenia contains crocin and crocetin colorants and produces colors ranging from yellow to orange-red. Turmeric contains curcumin and provides a bright yellow color. Amur cork tree contains the colorant berberine, a kind of bright yellow material. The main colorant in the pagoda buds from opened/unopened pagoda trees has been identified as rutin. In China, all of these dyes could be used to dye papers directly without a mordant. Usually, dried botanical sources are thoroughly soaked, pounded, boiled and pressed in a cloth sack. The prepared liquid is saved and heated to a certain temperature to dye paper by a brushing or pulling process.
As an increasing number of historical papers have been discovered that have been dyed with different botanical yellow dyes, learning how these plant dyes affect the properties of traditional handmade papers is of interest. Paper conservators and museum professionals will be able to make better decisions when selecting colorants for conservation treatment. There have been some reports about natural yellow dyes used in ancient objects [9,10,11,12,13]; unfortunately, very few reports on ancient paper properties have been published prior to now. Considering the many works on natural yellow dyes, a strong project might result in several papers. In the present study, we limited our research to comparing the properties of handmade paper dyed with four different natural yellow dyes. To avoid contamination by other materials, no mordant was used during the dyeing process.
Handmade Daqian paper (25.46 g/m2 in grammage and 94 μm in thickness), in which the paper pulp was not bleached, was obtained from the Daqian Paper Shop in Sichuan Province.
The gardenia, pagoda bud, turmeric, and Amur cork tree materials were purchased from a local Chinese medicine store in Chengdu. All materials were purchased commercially and used as received. Photographic images of these botanical sources are presented in Fig. 1.
Photographic images of botanical sources of yellow natural dyes
Preparation of colorant dyes
The dyeing process for botanical sources usually includes sourcing botanical plants, extracting dyeing components and dyeing while controlling parameters such as temperature and time duration in dyeing experiments. In our experiments, the dyestuff extraction procedure was based on traditional Chinese recipes adopted for laboratory procedures, and a liquor ratio of 1 g of plant material to 20 mL deionized water was maintained [14,15,16]. Commercially available plant materials (50 g) were chopped into small pieces and soaked at room temperature in 500 mL of deionized water for a period of 12 h, followed by gradually heating the mixtures to boiling and simmering them at 80 ℃ with regular stirring for 30 min. Then, the liquor was obtained by sedimentation and mixed with pure juice pressed from the boiled dregs in a piece of cloth sack. This procedure was repeated twice to allow as many of the colorants as possible to come out of the plants. Both times, as much liquid as possible was mixed and saved for later use. Photographic images of the prepared colorant dyes are presented in Fig. 2.
Photographic images of prepared colorant dyes
Preparation of the dyed paper samples
The dyed paper samples were prepared by pulling papers through colorant dyes held at different temperatures, chosen according to traditional Chinese recipes and the stability and solubility of the different colored compounds: the colorant dyes were kept at 40–50 ℃ when dyeing with the gardenia and pagoda bud dyes, at 50–60 ℃ with the turmeric dye, and at 80–90 ℃ with the Amur cork tree dye. The treated papers were hung on glass rods at 20–25 ℃ for 48 h. These dyed paper samples were labeled YP-A, YP-P, YP-G, or YP-T, where A, P, G, and T indicated the plant dyes of the Amur cork tree (A), pagoda bud (P), gardenia (G) and turmeric (T). The untreated paper samples were labeled YP-0. Photographic images of the colorant dyed papers are presented in Fig. 3.
Photographic images of dyed papers
pH tests of colorant dyes
A portable pH meter (Horiba, LAQUA twin-pH-22) was used to measure the pH of the prepared colorant dyes. The light shield cover was opened, and 3–4 drops of the liquid colorant dye were placed so as to cover the entire flat sensor. The dye was allowed to rest on the sensor until the measured value was displayed.
pH tests of papers
The pH tests of the papers were carried out under laboratory conditions using a cold extraction measurement, which is more accurate than a surface method but also more destructive. For this method, 1 g of paper was cut into small pieces and left in 100 mL of cold, distilled water for 1 h. Then, the pH of the cold extract was analyzed without filtration under stirring using a Mettler Toledo Inlab Power pH meter (Mettler Toledo, Switzerland) according to the standard ISO 6588–1:2012 [17]. The accuracy of the pH measurements was within ± 0.02 pH units on average (n = 5).
Weight change measurements of paper
The percentage of weight change of the paper was calculated from the following equations. All the values presented are the average of three specimens.
$$\text{Weight change}\ (\%) = 100 \times \frac{W_{d} - W_{0}}{W_{0}}$$
where Wd is the weight of dyed paper and W0 is the weight of undyed paper.
Color change measurements of paper
The change in the color of the paper was determined using a solid reflection spectrophotometer (CM-700D from Minolta Co., Japan) according to the standard ISO 11475:2004 [18]. The conditions used in the experiment were the standard illuminant D65 and the 10° observer. The CIEL*a*b* color space was used, and the color variations were evaluated using the parameter total color difference (ΔE*). The samples were always measured at five identical places. The average color difference was expressed as \(\Delta E = \sqrt{(\Delta L^*)^2 + (\Delta a^*)^2 + (\Delta b^*)^2}\).
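For reference, the color-difference computation is straightforward to script; this is a sketch assuming two measured CIELAB triplets (the numbers are placeholders, not values from Table 1):

```python
import math

def delta_e(lab_reference, lab_sample):
    """CIE76 total color difference between two (L*, a*, b*) triplets."""
    dL, da, db = (s - r for r, s in zip(lab_reference, lab_sample))
    return math.sqrt(dL ** 2 + da ** 2 + db ** 2)

# e.g. delta_e((92.0, 0.5, 8.0), (85.0, 2.0, 25.0)) -> total color change after dyeing
```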
ATR-FTIR analyses
Infrared spectra were obtained with a Thermo Nicolet iZ10 FTIR (Thermo Scientific Instrument) spectrophotometer combined with a Smart Orbit single reflection diamond attenuated total reflectance (ATR) accessory from 4000 to 650 cm−1 for 128 scans with 4 cm−1 spectral resolution. The FTIR microscope was equipped with an internal pressure sensor, a very precise and accurate motorized X–Y stage, and an MCT detector and was cooled with liquid nitrogen.
Thermogravimetric (TG) analysis was carried out with a TGA 550 (Thermal Analysis Instrument). The samples were heated from room temperature to 900 °C at a rate of 10 °C/min in the TGA instrument under ultrahigh-purified nitrogen at a flow rate of 25 mL/min.
Mechanical property measurements
All the paper samples were conditioned according to ISO 187–1990 [19] before mechanical measurements at a temperature of 23 ± 1℃ and an RH of 50% ± 2% for 24 h.
The mechanical properties were determined according to the TAPPI and ISO standards. Tensile strength test specimens were prepared by cutting samples 15 mm wide with sides within 0.1 mm and 250 mm long in the horizontal and vertical directions. Tensile strength tests on dyed and undyed paper sheets were performed on a computer-controlled TMI 84–56 tensile tester (horizontal) (Testing Machines, Inc., Holland) at room temperature using an extensometer gauge of 180 mm × 15 mm, load cells of 100 N and a test speed of 25 mm/min. All tests were carried out according to the TAPPI T-494 and ISO 1924 standards [20, 21]. The reported values were calculated as the averages of at least ten specimens.
The folding endurance experiments were performed on a TMI 31–23 folding endurance tester (Testing Machines, Inc., USA) according to TAPPI/ANSI T511 [22] and ISO 5626:1993 [23] with standard 14 cm long by 15 mm wide paper samples. The applied force was 0.5 kg, and the folding rate was 175 double folds per minute. The reported values were calculated as the averages of at least ten specimens.
Microscope examination
Scanning electron microscopy (SEM) images were recorded with a Philips FEI INSPECT F instrument operated at 5 or 10 kV working voltage after specimen coating with a very thin gold layer deposited by sputtering under vacuum. The surface of the paper specimens was examined.
Figures 1 and 2 show images of different dye plants and obtained liquid colorant dyes, respectively, under experimental conditions. In general, plant dyes caused highly visually perceptible color changes in the papers relative to their untreated counterparts [16]. The T dyes, G dyes, P dyes, and A dyes showed orange-red colors, yellow–brown colors, tea colors, and yellow–brown colors, respectively.
Colorimetric information was obtained by employing the International Commission on Illumination CIELAB color space, which mathematically simulates the perception of color by providing a standard process for measuring and quantifying color changes according to the standard test method (ISO 11475:2004). The variables L*, a*, and b* of the International Commission on Illumination (CIE) color space are used to designate lightness-darkness, redness-greenness and yellowness-blueness, respectively. The variation in the colorimetric coordinate L*, a* and b* values is shown in Table 1. All dyed paper samples showed a reduction in lightness after dyeing with plant dyes, but the effects on redness and yellowness were different. The papers treated with A colorants experienced a slight reduction in redness and a strong increase in yellowness, which suggested that A had the greatest yellow effect on the paper samples. The color changes recorded for the G paper samples revealed an increase in both redness and yellowness but less yellowness than samples treated with A dyes, which suggested that the G dyed paper samples showed a significant red-yellow color. The paper samples treated with P and T dyes exhibited only a reduction in redness and a slight increase in yellowness, representing the smallest color change. The different color results demonstrated that the different dyes had different toning effects.
Table 1 The colour changes of paper samples after dyeing with different botanical sources
The pH results of the colorant dyes and paper samples are shown in Tables 2 and 3. According to Table 2, the undyed paper samples demonstrated weak alkalinity (pH = 7.8), which might be ascribed to the presence of alkaline materials during the papermaking process. After treatment with different plant dyes, the paper samples had a slightly lower pH than the undyed paper. By comparing Tables 2 and 3, it was found that the decreased pH of different dyed papers was induced by adding plant dyes that reacted with the weakly alkaline paper. Additionally, the different pH results and color changes demonstrated that the main components of these plant dyes were different. These results corresponded with reports that most plant dyes could decrease the pH of paper [24]. In the experiments, the G dyes exhibited the lowest pH. The main dye constituents of G include crocin and crocetin (shown in Fig. 4), both of which are yellow dyes with high tinctorial strength and produce a golden-yellow hue on almost all materials. Crocin, with the molecular formula C44H64O24 and a molecular weight of about 977, contains natural glucoses on both sides of the molecule. Crocetin, with a molecular formula of C20H24O4 and molecular weight of 328.4, contains carboxyl groups attached to both sides; these groups cause the G solution to have the lowest pH. It was reported that the carotenoid pigments extracted from G could be larger than those extracted from saffron [25]. However, G is quite unstable under light and alkaline conditions. For A, its main colored compound is berberine (shown in Fig. 5), with a strong yellow color and the molecular formula C20H18NO4, which can be positively charged with protons to form a cationic alkaloid. The weak acidity might be due to the presence of obacunonic acid, ester, ketone and polysaccharide in solution. The main colored compound in T is curcumin (shown in Fig. 6). The phenol group in curcumin makes the T solution weakly acidic. Curcumin, which exhibits a yellow color and has a diferuloyl methane structure and a molecular weight of 368.4, was isolated in the nineteenth century [26]. Its colorant consists of a mixture of compounds known as curcuminoids. The main colorant in P was rutin (shown in Fig. 7). Rutin, with the molecular formula C27H30O16 and a molecular weight of 610.5, is a yellow crystalline flavonol glycoside that was isolated and identified by Stein in 1853 [27]. The four phenolic hydroxyl groups in rutin might be the reason why the solution is weakly acidic. Due to the different main components of the dyes, the dyed paper samples yielded different pH results, although all of the dyed papers had a slightly lower pH than the undyed papers. The presence of carotenoid pigments in G affected the pH of the dyed paper. To determine whether the pH change accelerates the instability of the paper during aging, we are conducting aging tests under artificial thermal and UV-light accelerated aging conditions. We will share our results in other reports in the near future.
Table 2 pH results and weight changes of dyed papers
Table 3 pH results of colorant dyes
The characteristic components of gardenia
Berberine, the characteristic components of Amur cork tree
Curcumin, the characteristic component of turmeric
Rutin, the characteristic components of pagoda bud
Table 2 shows that the weight change of the paper samples was different after dyeing. A higher weight increase could indicate that more material was absorbed on the paper surfaces. The P-treated paper specimens exhibited the highest weight change. According to the main components of the different plant dyes, all of the rutin, curcumin, crocin and crocetin molecules include hydroxyl structures, which can form hydrogen bonds with paper cellulose and hemicellulose and may contribute to the weight increase. Other materials, such as solvent water, could be absorbed by these colorant dyes. The molecular weight and the number of hydroxyl structures in the dyes affected the weight change of the paper. Of these different dyes, berberine has no hydroxyl structure; however, the presence of nitrogen means it could be positively charged with the ability to form a salt. Rutin has the largest number of hydroxyl groups and the highest molecular weight of the colorant materials, which might be why the P-dyed paper exhibited the highest weight increase. These absorbed materials might act as barriers to resist attack from the paper's environment.
To further identify the effects of different colorant components on paper, TG analysis was used to obtain the mass loss as a function of temperature for different samples. Figure 8 shows that the main weight loss for all samples occurred between 280 and 400 ℃, which corresponds with paper fiber decomposition, resulting in volatiles and low molecular weight materials such as CO2, CO, ketones and aldehydes. For the nondyed paper, the removal of free water began at ca. 55 ℃. The peak decomposition temperature was ca. 356 ℃. After that, the weight loss of the samples was small, and a carbonization process and rearrangement of aromatic rings began. Figure 8 shows that the A dyed paper specimens had higher decomposition temperatures than the undyed paper. However, the G-, T- and P-dyed papers displayed lower onset and end decomposition temperatures. The different TG results demonstrated that there were chemical bonds between the colorants and paper fibers. These chemical bonds could affect the thermal stability of the paper samples. According to the characteristic components of the different dyes, the decomposition of rutin, crocin and curcumin could produce weakly acidic materials due to the existence of carboxyl and ketone groups, which could accelerate the decomposition of paper fibers under high temperature conditions. The experiments demonstrated that these dyed papers had lower thermal stability. The G-dyed paper had a higher weight increase and decomposition temperature than the P- and T-dyed papers. This result meant that the hydrogen bonds between the paper and colorant dyes could affect the decomposition temperature. The A dyed paper had a higher decomposition temperature than the nondyed paper, which could be ascribed to the fact that berberine interacted more strongly with the paper fibers. The FTIR results (Fig. 9) showed an absorption band at ca. 1337 cm−1, which was ascribed to the aromatic amine of the A dyed paper, while the other yellow dyed papers had no such characteristic band. The aromatic amine of A could form a chemical bond with the hydroxyl groups of the paper fibers through nitrogen. The TG results further indicated that the positively charged nitrogen and hydroxyl groups exhibited a stronger interaction than the hydrogen bonds of the oxygen and hydroxyl groups.
TG results of dyed papers
FTIR spectra of dyed papers
To identify how the toning materials affected the mechanical properties of paper, tensile strength and folding endurance experiments were conducted. The mechanical strength of handmade paper is determined by the intrinsic strength of the fibers and the bonding strength between fibers. Additionally, the tensile strength of handmade paper is different in different directions. The tensile strength of traditional handmade paper usually depends on the orientation along the longitudinal direction (LD) and transverse direction (TD). The average values of the tensile strength and breaking length are shown in Figs. 10 and 11. The undyed paper had tensile strengths of ca. 21.8 N along the transverse direction and 19.6 N along the longitudinal direction. Slightly better tensile strengths and folding endurance were observed for the P-, T- and G-dyed papers. However, the A-treated paper specimens led to a distinctive improvement in the tensile strength and breaking length. These results could be explained by the different characteristic components of these colorant dyes, which affected the intrinsic resistance of the paper fibers within the experimental accuracy. The tensile strength is related to the binding force between fibers. The increases in the tensile strength and folding endurance were attributed to the formation of more electrostatic interactions due to the chemical reaction between the colorants and paper fibers. Furthermore, the A dye contained sugar materials that covered the fiber surface and led to an increase in the mechanical strength. To explain the mechanical properties, the surface paper specimens were further observed by SEM.
Tensile strength values of dyed papers
Folding endurance values of dyed papers
According to our Herzberg staining tests [28, 29], the main plants in the handmade paper specimens were Sinocalamus bamboo (Sinocalamus affinis (Rendle) McClure) and mulberry bark (Morus alba L.). SEM observations (in Fig. 12) indicated that the main plant material included round tubular mulberry bark; the surface of the sample was uneven, and there were a few holes in the longitudinal direction and transverse stripes in the horizontal direction [30]. Joints, ravines and grooves were seen in the bamboo fibers [31]. After dyeing, toning particles were deposited on the fibers, some of which filled the cracks, holes and ravines of the fibers, leading to a reduction in the porosity and pore size distribution. The deposited colorants could bond to the fibers through hydrogen bonding. This might be the reason for the improved mechanical properties of all the dyed papers. The best mechanical properties, found for the A-dyed paper, were ascribed to the stronger interfacial interaction due to the network and viscous substances coating the paper surface and connecting the paper fibers. The mechanical test results confirmed that the electrovalent chemical bonds of the nitrogen and hydroxyl groups between the fibers and A were stronger than the interactions of the hydroxyl bonds induced only by the oxygen and hydroxyl groups. We will discuss how A affects the properties of the paper further in future work.
SEM images of different papers. A undyed paper, B Amur cork tree dyed paper, C pagoda bud dyed paper, D gardenia dyed paper, E. turmeric dyed paper
This experiment confirmed that handmade Daqian paper dyed with traditional botanical dye sources, such as Amur cork tree, turmeric, pagoda bud and gardenia, could have different properties. The different botanical sources had different effects on the properties, including the color, thermal stability and mechanical properties, due to their main characteristic components. The improved tensile strengths and folding endurances of these toning materials after dyeing are beneficial for paper. The chemical interaction between the Amur cork tree dyes and paper fibers had a stronger effect on the mechanical and thermal stability than those of the gardenia-, turmeric- and pagoda bud-treated paper due to the stronger hydroxyl-nitrogen bond and hydroxyl groups and viscous substances. This stability may be one of the reasons why most of the ancient books that currently still exist were dyed with yellow Amur cork tree dyes. All of these chosen yellow plant dyes slightly decreased the pH of handmade paper. We should be cautious when choosing how to conserve paper artifacts because some of them have a low pH, which might affect the conservation of the paper artifacts.
Thermogravimetric
LD:
Longitudinal direction
Transverse direction
ATR-FTIR:
Attenuated total reflection flourier transformed infrared spectroscopy
CIE:
International Commission on Illumination
YP-0:
The undyed paper samples; YP-A, YP-P, YP-G, YP-T, the Amur cork tree (A), pagoda bud (P), gardenia (G) and turmeric (T) dyed paper samples, respectively
Gibbs PJ, Seddon KR, Brovenko NM, Petrosyan YA, Barnard M. Analysis of ancient dyed chinese papers by high-performance liquid chromatography. Anal Chem. 1997;69(10):1965–9.
Richardin P, Cuisance F, Buisson N, Asensi-Amoros V, Lavier C. AMS radiocarbon dating and scientific examination of high historical value manuscripts: application to two Chinese manuscripts from Dunhuang. J Cult Herit. 2010;11(4):398–403.
Tsien TH. Paper and Printing. Science and Civilization in China. Vol. 5, part 1: Chemistry and Chemical Technology. Cambridge: Cambridge University Press. 1985. pp 74–76.
Su S, Shang ZJ. Illustrated Classic of the Materia Medica (Bencao Tujing). Hefei: Anhui Scientific & Technology Publishing House. 1994: 372 (in Chinese)
Shu YL. Ancient gardenia and its cultivation and utilization. Agric Hist China. 1992;12(3):78–84 (in Chinese).
Li SZ. Compendium of Materia Medica, rev. and punctuated (Bencao Gangmu: Jiaodianben). Beijing: People's Medical Publishing House. 1975. pp 968 (in Chinese).
Zhao KH, Zhou JH. A history of dyeing chemistry in ancient China (Zhongguo Gudai Ranse Huasueshi). In: History of Chinese science and technology, chemistry volume (Zhongguo Kexue Jishushi, Huaxuejuan). Beijing: Beijing Science Press; 1998. p. 640–1 (in Chinese).
Han J. The historical and chemical investigation of dyes in high status chinese costume and textiles of the ming and qing dynasties. (1368–1911) PhD Thesis, University of Glasgow, 2016: 86–158.
Gibbs PJ, Seddon KR. Berberine and huangbo: ancient colorants and dyes. London: British Library; 1998.
Zhang X, Mouri R, Mikage R, Laursen R. Preliminary studies toward identification of sources of protoberberine alkaloids used as yellow dyes in asian objects of historical interest. Stud Conserv. 2010;55(3):177–85.
Zhang X, Corrigan K, MacLaren B, Leveque M, Laursen R. Characterization of yellow dyes in nineteenth-century chinese textiles. Stud Conserv. 2007;52(3):211–20.
Liu J, Ji LF, Chen L, Pei KM, Zhao P, Zhou Y, Zhao F. Identification of yellow dyes in two wall coverings from the Palace Museum: evidence for reconstitution of artifacts. Dyes Pigm. 2018;153:137–43.
Bell SEJ, Bourguignon ESO, Dennis AC, et al. Identification of dyes on ancient Chinese paper samples using the subtracted shifted Raman. Anal Chem. 2000;72(1):234–9.
Song YX, Pan JX. Exploitation of products from the nature by combination of artificial skills and natural power (Tiangong Kaiwu). Shanghai: Shanghai Chinese Classics Publishing House; 2007. p. 118–26.
Cardon D. Natural dyes: sources, tradition, technology and science. London: Archetype Publication Ltd.; 2007. p. 322–34.
Soleymani S, Ireland T, Mcnevin D. Effects of plant dyes, watercolors and acrylic paints on the colorfastness of japanese tissue papers. J Am Inst Conserv. 2016;55(1):56–70.
ISO 6588–1: 2012, Paper, board and pulps-Determination of pH of aqueous extracts- part 1: Cold extraction. 2012.
ISO 11475: 2004, Paper and board-Determination of CIE whiteness, D65/10°(outdoor daylight). 2004.
ISO 187: 1990, Paper board and pulps-Standard atmosphere for conditioning and testing and procedure for monitoring the atmosphere and conditions of samples. 1990.
TAPPI T494 om-13, Tensile properties of paper and paperboard (using constant rate of elongation apparatus). 20013.
ISO 1924–2: 2008, Paper and board-Determination of tensile properties-Part 2: Constant rate of elongation method (20mm/min). 2008.
TAPPI T511 om-13, Folding endurance of paper (MIT tester). 2013.
ISO 5626: 1993, Paper –Determination of folding endurance. 1993.
Soleymani S. The effects of plant dyes, watercolours and acrylic paints on the physical, chemical and biological stability of Japanese tissue paper used in paper conservation. PhD Thesis, University of Canberra, 2015.
Cardon D. Natural dyes: Sources, Tradition, Technology and Science. London: Archetype Publication Ltd.; 2007. p. 308.
Govindarajan VS, Stahl WH. Turmeric-chemistry, technology, and quality. CRC Crit Rev Food Sci Nutr. 1980;12(3):199–301.
Cardon D. Natural dyes: sources, tradition, technology and science. London: Archetype publications Ltd.; 2007. p. 211.
TAPPI 401OS-74, Fiber analysis of paper and paperboard. 1975.
ISO 9184–3, Paper, board and pulps-Fiber furnish analysis-Part 3: Herzberg test. 1990.
Chen GX, Chen HW. Structure and performances of mulberry bark. J Nant Text Vocat Technol Coll. 2009;9(1):10–3 (in Chinese).
Wang LM, Sheng Y, Zhang HF, et al. Characteristic bamboo fibers and study its products. J Text Dye Finish. 2011;33(8):17–21 (in Chinese).
The authors gratefully acknowledge the financial support from the National key research and development project (Grant No.: 2019YFC1520404), the Joint Funds of the National Natural Science Foundation of China (Grant No.: U19A2045) and the Science and Technology Support Programme of Sichuan Province (Grant No.: 2019YFS0494). We thank Lingzhu Yu for her help with the SEM measurements and Ren Wang for her assistance with the mechanical tests.
The research was supported by the National key research and development project (Grant No.: 2019YFC1520404), the Joint Funds of the National Natural Science Foundation of China (Grant No.: U19A2045) and Science and Technology Support Programme of Sichuan Province (Grant No.: 2019YFS0494).
School of History and Culture, National Center for Experimental Archaeology Education, Sichuan University, Chengdu, 610064, China
Yanbing Luo
Chongqing Hongyan Revolution Museum, Chongqing, 400043, China
Xiujuan Zhang
Data were collected by YBL, XJZ. YBL and XJZ prepared and revised the manuscript. All authors read and approved the final manuscript.
Correspondence to Yanbing Luo.
Luo, Y., Zhang, X. Effects of yellow natural dyes on handmade Daqian paper. Herit Sci 9, 85 (2021). https://doi.org/10.1186/s40494-021-00560-x
Yellow natural dyes
Historical paper
Energy is of no use without work.
A spring can be compressed to store potential energy, but it's of little use to us if we can't do something with that stored energy. The spring is a storehouse of energy – almost like a battery. We have to do work to compress it, and at a later time it can unleash its stored energy to do the work of pushing or pulling something.
Work transfers energy from one place or time to another, and possibly from one form to another.
Work is caused by a force. Without any force, there is no work. When a force works over a distance, work has been done, and its value, in units of energy, can be calculated.
Work (w) is force (F, see Newton's laws) multiplied by the distance (d) over which the force was exerted:
$$w = F \cdot d$$
The units of work are the units of energy, Joules (J). Joules are the SI unit of energy. Here's a unit analysis of F·d :
$$\left( \frac{Kg \cdot m}{s^2} \right) \cdot m = \frac{Kg\cdot m^2}{s^2} = \bf J$$
Notice also that 1J = 1 N·m, or 1 Newton-meter (N·m). Once in a while you'll see N·m used instead of Joules.
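As a quick check of the formula and its units, here is a minimal Python sketch of w = F·d; the function name and the example numbers are illustrative and not taken from this page.

```python
def work(force_newtons, distance_meters):
    """Work (in joules) done by a constant force acting along a distance: w = F * d."""
    return force_newtons * distance_meters

# Example: a 50 N force applied over 3 m does 150 J of work.
print(work(50, 3))  # 150
```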
Definition of work
Work is force exerted over a distance.
The SI unit of work is the unit of energy, the Joule (J). $1 \, J = 1 \, Kg \cdot m^2 \cdot s^{-2}.$
Work can be interchanged with potential energy (PE) and kinetic energy (KE).
Equivalence of work and energy
Work is equivalent to (or can be "traded for") either kinetic energy (KE) or potential energy (PE). Two examples might help.
Gravitational PE
We know that the potential energy of an object of mass m lifted to a height h is $PE = mgh,$ where m is the mass (in Kg), g is the acceleration of gravity ($g = 9.81 m/s^2$ near the surface of Earth), and h is the height in meters. The work that it takes to lift an object to such a height, giving it that PE, is exactly equal to the PE gained. In this case we express the equivalence of work and PE as
$$w = -\Delta (PE)$$
The negative is because the work done to "buy" PE is always in the opposite direction of the motion that would result if the PE were translated into KE.
The work of brakes in a car
A car moving in a straight line at a velocity v has kinetic energy, $KE = \frac{1}{2}mv^2.$ If the brakes in that car are applied to cause it to stop, the amount of work done by the brakes in counteracting the KE is exactly equal to the amount of KE that the car initially had. In most vehicles, that energy goes out into the environment in the form of heat, the heating of the brakes, tires and road surface. In electric vehicles, much of that energy can be re-converted into stored electricity by using the motor "backward" as a generator.
$$w = -\Delta (KE)$$
The sign of the work in this case depends on what we get an object to do by doing work on it. We can, for example, speed up or slow down a moving object by doing work – exerting a force over a distance. Each results in a change in KE equal in size to the work done.
There are other kinds of work, such as the pressure-volume (PV) work we encounter in chemistry, and we will discuss some others below.
Work against gravity
Equivalence of work and gravitational PE
The work done in lifting an object a vertical distance is equivalent to the potential energy gained in doing so. The gravitational PE is
$$PE = mgh$$
where m is the mass, g is the acceleration of gravity near the surface of Earth, and h is the height.
A nice example of this is a roller coaster. Consider the roller coaster diagram below:
The first step in any roller coaster's journey is to be lifted to the top of the tallest hill (A). The amount of work done is w = mgh, which is exactly equal to the amount of potential energy gained: PE = mgh.
As the coaster begins to descend hill A, it loses potential energy (PE) and gains kinetic energy (KE), the energy of motion. In a sense, the coaster is "trading in" PE for KE.
At point B, which is a little higher than the origin of the coaster, most of the PE has been converted to KE, though some remains.
As the coaster passes B and begins to climb hill C, it loses KE and gains PE. All of this is due to the original work done in lifting the coaster to the top of hill A.
Upon descending hill C, the coaster returns to its initial height, thus converting all of its PE (therefore all of the work that was done) to KE.
A roller coaster could keep going like that forever if it wasn't for friction (between the wheels and the track and because of air resistance). The friction force does work on the coaster, much as the brakes on a car do work to stop it, reducing its KE. Because of friction, such a coaster could never climb another hill as high as hill A after the initial lift.
The pendulum
A pendulum is another great example of how work, potential and kinetic energy can be interconverted. The only thing that makes a pendulum swing back-and-forth is gravity.
We get a pendulum started by doing some work on it to swing the bob to one side, which not only moves it to the side, but elevates it as well. The swing of a pendulum is then an ongoing exchange between kinetic and potential energy of the bob (the weight). Here's the basic idea.
In order to swing the pendulum to the left or right, the bob must be raised up against the force of gravity by doing work equal to the PE gained, $w = mgh.$
This work is equivalent to the amount of PE stored in the pendulum at this point in its arc. When the pendulum is let go, the force of gravity converts that potential energy into kinetic energy,
$$KE = \frac{1}{2}mv^2,$$
where v is the velocity of the weight. The velocity at each of the turning points is zero, thus KE = 0 there, and KE has its maximum value at the equilibrium (lowest) point of the pendulum.
The figure below illustrates how the motion of the pendulum interconverts between kinetic and potential energy.
The graph below the pendulum diagram, a plot of energy vs. position, aligns with points in its arc.
At any point in the path of the bob, the sum of PE and KE is constant, and equal to the work done in raising it:
$$mgh + \frac{1}{2}mv^2 = k,$$
where k is a constant. At the equilibrium position, the KE is equal to the PE at the turning points, so we have
$$\frac{1}{2} mv^2 = mgh$$
We can solve this equation for the maximum velocity, $v_{max}:$
$$v_{max} = \sqrt{2 gh}$$
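Here is a small Python sketch of that result; the height used is an arbitrary example, not a value from the text, and g is taken as 9.81 m/s² as defined earlier.

```python
import math

g = 9.81  # m/s^2, acceleration of gravity near Earth's surface

def pendulum_max_speed(height_m):
    """Maximum speed at the lowest point, from mgh = (1/2)mv^2, i.e. v_max = sqrt(2*g*h)."""
    return math.sqrt(2 * g * height_m)

# A bob raised 0.10 m above its lowest point reaches roughly 1.4 m/s at the bottom.
print(round(pendulum_max_speed(0.10), 2))  # 1.4
```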
A 50 Kg box is moved 15 m across a floor with a constant force of 27 N. How much work was done?
Solution: Work is force × distance, so we have
$$ \begin{align} w &= F d \\[5pt] &= 27 \frac{Kg m}{s^2} \cdot 15 \, m \\[5pt] &= 405 \, J \end{align}$$
Notice that we didn't use the 50 Kg mass of the box. Don't be distracted by red herrings like this. Know what you're looking for and calculate it. In this case, work depends on force and distance, so those are the only things we need.
I've expanded all of the units here, but once you're confident that $1 \, N \cdot m = 1 \, J,$ there's no need to.
A SpaceX Dragon rocket can lift 6000 Kg of payload into space. The stratosphere of Earth's atmosphere begins at about 10 Km where the Dragon is launched. How much work does the Dragon do in lifting a 6000 Kg payload to the edge of the stratosphere?
Solution: The work of lifting to a height is just the potential energy (PE) gained. That is
$$ \begin{align} w &= mgh \\[5pt] &= (6000 \, Kg)\left( 9.81 \, \frac{m}{s^2} \right)(1 \times 10^4 \, m) \\[5pt] &= 5.89 \times 10^8 \, J \\[5pt] &= 589 \, MJ \end{align}$$
Notice that I've converted 10 Km to meters (1 Km = 1,000 m).
In reporting numbers like this, it's nice to convert to a larger unit like mega-Joules (MJ), which makes the base number a little easier to comprehend and remember. Most humans have a better sense of what 100 is compared to 1 × 10⁸. That's my theory, anyway.
120 mJ of energy is used to push an object a distance of 1 cm. How much force was required to move the object?
Solution: The force can be found by rearranging the work formula with a little algebra:
$$w = F \, d \phantom{000} \color{#E90F89}{\longrightarrow} \phantom{000} F = \frac{w}{d}$$
So the force is
$$ \begin{align} F &= \frac{w}{d} = \frac{120 \times 10^{-3} \, J}{1 \times 10^{-2} \, m} \\[5pt] &= 12 \, N \end{align}$$
Don't forget your basic algebraic rearrangements of these simple formulas. Algebra is everywhere!
$$ \begin{align} \require{cancel} w &= F \, d \\[5pt] \frac{w}{d} &= \frac{F \cancel{d}}{\cancel{d}} \: \color{#E90F89}{\leftarrow \: \text{divide by d}} \\[5pt] \frac{w}{d} &= F \end{align}$$
Work in chemistry: PV work
The kind of work we encounter most often in chemistry is pressure-volume (PV) work.
The product of pressure and volume has units of energy. Here's how it works: The units of pressure are force divided by area, which we can write as Newtons (N) divided by m²,
$$P = \frac{N}{m^2}$$
Multiplying by matching units of volume, m³, gives
$$PV = \frac{N}{m^2} \cdot m^3 = N\cdot m = J$$
Recall that 1 N = 1 Kg·m·s⁻², so multiplying by meters gives 1 J = 1 Kg·m²·s⁻².
PV work is apparent when we heat a container of gas under constant pressure. This can be done using a cylinder fitted with a movable piston, atop which sits a weight. The weight produces a constant force on the piston (which has a fixed area), and thus a constant pressure.
If we then heat the container, the volume must increase (recall from the ideal gas law that V = nRT/P).
When the volume expands, the system does work on the surroundings, and we call this negative work (-w). When the volume is compressed, the surroundings do work on the system, and we call that positive work (+w).
The signs of heat and work (+ / -)
When the surroundings do work on the system (e.g. to compress the system), positive work has been done from the point of view of the system.
When the system does work on the surroundings (e.g. to expand its volume), negative work has been done.
Think of the sign conventions for heat and work as system-centric. Heat added to or work done on a system is positive. Otherwise, both are negative.
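To see this sign convention in numbers, here is a minimal Python sketch assuming a constant external pressure, using the common form w = -PΔV; the formula choice and the example values are mine, not from this page.

```python
def pv_work(pressure_pa, delta_v_m3):
    """System-centric PV work in joules, assuming constant external pressure: w = -P * dV.
    Expansion (dV > 0) gives negative w (the system does work on the surroundings);
    compression (dV < 0) gives positive w (the surroundings do work on the system)."""
    return -pressure_pa * delta_v_m3

# A gas expanding by 1.0 L (1.0e-3 m^3) against 101,325 Pa: w is about -101 J.
print(round(pv_work(101_325, 1.0e-3), 1))   # -101.3
# Compressing the same gas by 1.0 L: w is about +101 J.
print(round(pv_work(101_325, -1.0e-3), 1))  # 101.3
```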
Equivalence of heat and work
Doing work on a chemical system, such as a liquid, is completely equivalent to adding an equal amount of heat (remember, the units are the same). The total energy input into a system is the heat it receives plus the work done on it; this is part of the principle of conservation of energy.
James Prescott Joule, one of the early developers of thermodynamics, performed an experiment that showed that mechanical work done on a container of water produces a temperature rise in the water equivalent to the amount of heat needed to cause the same temperature change.
In Joule's experiment, a weight of known mass was dropped a known distance, thus producing a known amount of kinetic energy which produced mechanical stirring of a quantity of water. The friction of the stirring process produced a temperature rise equivalent to that kinetic energy.
Joule's experiment
Heat and work are equivalent in chemical systems.
Suppose that the weight in Joule's experiment is 1 Kg and drops a distance of 10 m (which could be accomplished by ten 1 m drops if the weight was cranked back to the top of its 1 m range slowly), and suppose our jar contains 1 L of water (1 Kg, because the density of water is 1 g/mL).
Now PE = mgh = (1 Kg)(9.8 m·s⁻²)(10 m) = 98 J. So that 98 J is "released" into the water through friction between the turning paddle blades and the water. The KE of motion of the paddles is translated to the motion of the water molecules, which is what heat is.
Now to calculate the temperature rise, we need a formula with which you might not be familiar: The heat (q) required to change the temperature of m grams of a substance by ΔT ˚C is
$$q = mC\Delta T$$
where C is a property (which Joule measured) called the specific heat capacity. The specific heat capacity of water is C = 4.184 J/g˚C, so the temperature rise would be
$$ \begin{align} \Delta T &= \frac{q}{mC} \\ &= \frac{98 \, J}{(1000 \, g)(4.184 \, J/g˚C)} \\ \\ &= 0.023 \, ˚C \end{align}$$
which is a small, but measurable temperature rise.
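The whole calculation fits in a few lines of Python; this is just a sketch of the arithmetic above, with the function name chosen for illustration.

```python
def joule_temperature_rise(weight_kg, drop_m, water_g, c=4.184, g=9.8):
    """Temperature rise (deg C) of water when a falling weight's PE becomes heat:
    q = m*g*h, then dT = q / (mass_of_water * C)."""
    q = weight_kg * g * drop_m   # energy released into the water, in joules
    return q / (water_g * c)     # q = m*C*dT, rearranged for dT

# 1 Kg dropped 10 m, stirring 1 L (1000 g) of water: about 0.023 deg C, as above.
print(round(joule_temperature_rise(1, 10, 1000), 3))  # 0.023
```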
How much work (in Joules) is required to lift a 330 Kg piano to a window located 9.5 m from the ground?
The work of lifting against gravity is $w = mgh,$ where m is mass, h is height, and g is the acceleration of gravity on Earth.
$$ \begin{align} w &= mgh \\[5pt] &= (330 \, Kg)\left( 9.8 \frac{m}{s^2} \right)(9.5 \, m) \\[5pt] &= 30,723 \, J = 30.7 \, KJ \end{align}$$
A force of 90 N is applied to a crate, moving it 20 m along the direction of the applied force. How much work is done on the crate?
Work = force × distance, so we have
$$ \begin{align} w &= F \cdot d \\[5pt] &= 90 N \cdot 20 m \\[5pt] &= 1800 \, N\cdot m \\[5pt] &= 1800 \, J \\[5pt] &= 1.8 \, KJ \end{align}$$
A box rests on a horizontal surface (and we'll ignore friction). One person pushes on the box with a force of 15N to the right, and another pushes with a force of 12N to the left. The box moves 3.0m to the right. Calculate the work done by (a) the first person, (b) the second person, and (c) the net force.
It's always helpful to sketch a diagram. Let's let force in the rightward direction be a positive force, then the 12 N force will be a negative force.
$$w_{15} = F \cdot d = 15 \, N \cdot 3.0 \, m = 45 \, J$$
$$w_{12} = F \cdot d = -12 \, N \cdot 3.0 \, m = -36 \, J$$
Now the net work done is the difference, + 9.0 J. We could also first find the net force, $F_{net} = 15 - 12 = +3 \, N.$ Then the work done by that net force would be
$$w = F_{net} \cdot d = 3 \, N \cdot 3.0 \, m = 9.0 \, J.$$
A climber climbs 15.2 m up a wall, expending 6250 J of energy (work) to do so. Calculate the mass of the climber.
Work = force × distance, so the work of lifting is $w = mgh,$ where the force is $F = mg$ and the distance is the height. Rearranging to solve for mass gives
$$w = mgh \: \color{#E90F89}{\longrightarrow} \: m = \frac{w}{gh}$$
$$ \require{cancel} \begin{align} m &= \frac{6250 \, J}{9.8 \frac{m}{s^2} \cdot 15.2 \, m} \\[5pt] &= \frac{6250 \frac{Kg \cancel{m^2}}{\cancel{s^2}}}{9.8 \frac{\cancel{m}}{\cancel{s^2}} \cdot 15.2 \cancel{m}} \\[5pt] &= 42 \, Kg \end{align}$$
In the second step above, the units were expanded so you can see that the result has the units we want, units of mass (Kg).
SI units
SI stands for Système international (of units). In 1960, the SI system of units was published as a guide to the preferred units to use for a variety of quantities. Here are some common SI units
length: meter (m)
mass: Kilogram (Kg)
time: second (s)
force: Newton (N)
energy: Joule (J)
Red herring is a metaphor for a kind of logical fallacy. A school of herring can contain millions of silvery 20-30 cm fish swimming in a synchronized way. The idea is that if you notice that one is red, it will capture your attention, and you might miss the main point: millions of silver fish swimming as one. Sometimes people also say "Don't miss the forest for (looking at) the trees."
14 Ratio and Proportion
There are two types of ratios. Let's call them Ratio #1 and Ratio #2. Both describe the relationship between two or more quantities but with two main differences. The 1st difference is that Ratio #1 can have more parts than just a numerator and denominator. For this reason, we often use a colon (:) to separate its parts. Remember that back in "08 Fundamental Math Review" we discussed proper fractions. In a proper fraction, the numerator says how many parts of something there are and the denominator says how many individual parts make up 1 whole unit of that something. In Ratio #1 the denominator DOES NOT. Instead, THE SUM OF ALL PARTS makes up 1 whole unit.
As an example, let's say we're mixing together 3 different ingredients, A, B & C to form a paste. If we're directed to mix these ingredients together in a ratio of 4:3:2 this means that our mixture will contain a total of (4 + 3 + 2 = 9) parts. The parts themselves will be 4 parts A, 3 parts B and 2 parts C where each individual part represents 1/9th of the total mixture (see the illustration below). In this example I randomly selected an ingredient as 1 of our 3 available parts but in a real life situation, it's important to follow the directions precisely as directed and select only the proper ingredient for each part.
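A short Python sketch makes the bookkeeping explicit; the 4:3:2 example matches the one above, and the function name is just for illustration.

```python
from fractions import Fraction

def part_fractions(*parts):
    """Fraction of the whole mixture contributed by each part of a ratio such as 4:3:2."""
    total = sum(parts)  # the sum of all parts makes up 1 whole unit
    return [Fraction(p, total) for p in parts]

# Ingredients A, B and C mixed 4:3:2 make up 4/9, 3/9 (= 1/3) and 2/9 of the paste.
print(part_fractions(4, 3, 2))  # [Fraction(4, 9), Fraction(1, 3), Fraction(2, 9)]
```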
Ratio #2 is used when we're talking about proportions and is a fraction just like the unit factors we discussed in "13 Converting Measurements by Dimensional Analysis (Unit Factor Method)". Sometimes you'll see Ratio #2 written with a colon in between the numerator and the denominator like 5:1,000 (instead of 5/1,000) just like Ratio #1. With the difference being that this time the ratio describes 5 parts out of a total of 1,000 parts and not a total of 1,005 parts. Get it?! With Ratio #2, it's common to simplify the ratio down to a 1 in the front. For example, 5:1,000 becomes 1:200 and 4:5 becomes 1:1.25
Now let's move on to proportions. A proportion describes the relationship between two or more pairs of ratios. This is the main difference between a proportion and a unit factor, how they're set up to solve a problem. Previously, we converted 30 ounces to pounds by multiplying it by a ratio (or unit factor):
$$\frac{\text{30 ounces}}{1} \times \frac{\text{1 pound}}{\text{16 ounces}}$$
$$= \text{1.875 pounds}$$
When working with a proportion however, the ratios are made equal to one another. This is done by inverting any ratio in the problem as needed so that all the numerators & denominators have the same units. Then we solve for the unknown value, typically referred to as "x":
$$\frac{\text{30 ounces}}{\text{(x)}} = \frac{\text{16 ounces}}{\text{1 pound}}$$
Since they are now equal, "x" will represent the value we're attempting to convert to, which in this example is pounds:
$$\frac{\text{30 ounces}}{\text{(x) pounds}} = \frac{\text{16 ounces}}{\text{1 pound}}$$
Now to solve for "x" we cross multiply. Cross multiplication lets you find 1 of the 4 values in the proportion when you know 3 of them.
First, identify the numerator and denominator that sit diagonally opposite each other and are both known. These are the first two values you will work with.
Multiply them together.
Finally, divide this result by the remaining value (the 3rd one) to obtain the 4th and final value.
Thus, to find "x" (or convert 30 ounces to pounds), we multiply 30 ounces by 1 pound and divide this result by 16 ounces. The ounces cancel each other out and we're left with pounds.
$$\frac{\text{30 ounces} \times \text{1 pound}}{\text{16 ounces}}$$
Which balances our original proportion:
$$\frac{\text{30 ounces}}{\text{1.875 pounds}} = \frac{\text{16 ounces}}{\text{1 pound}}$$
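If you like to check this kind of problem with a few lines of code, here is a minimal Python sketch of cross multiplication; the function name is illustrative and the numbers repeat the ounces-to-pounds example above.

```python
def solve_proportion(a, b, c):
    """Solve a / x = c / b for x by cross multiplication: x = (a * b) / c."""
    return a * b / c

# 30 ounces / x pounds = 16 ounces / 1 pound  ->  x = (30 * 1) / 16 = 1.875 pounds
print(solve_proportion(30, 1, 16))  # 1.875
```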
Congratulations, you have learned a lot! Now that you have a firm understanding of unit factors and proportions, let's reinforce their connection by doing some more examples!
If a patient is taking 325 mg Tylenol BID, let's figure out how many Tylenol pills they will be taking during a 15 day hospital stay. Without really thinking, you already know that the answer is 30 tablets because 2 tablets/day times 15 days = 30 tablets, right? Well, that's just how easy it is to write the problem down on paper as well:
$$\frac{\text{2 tablets}}{\text{1 day}} \times \text{15 days} = \text{30 tablets}$$
Now challenge yourself & quickly convert this to a proportion. Ready?
$$\frac{\text{1 day}}{\text{2 tablets}} = \frac{\text{15 days}}{\text{30 tablets}}$$
You can invert both ratios if it makes it easier for you to read:
$$\frac{\text{2 tablets}}{\text{1 day}} = \frac{\text{30 tablets}}{\text{15 days}}$$
Solve the following problems using proportions:
1) The ratio, "epinephrine 5:1,000" indicates there's 5 parts epinephrine solute to 1000 parts solution. You have simplified this ratio down to "epinephrine 1:200" How many parts solvent does the solution contain? (Clue: parts solvent = parts solution – parts solute)
2) If a container of Amoxicillin suspension has a dosage strength of 125 mg / 5 mL how many mg of Amoxicillin are in 2 teaspoons of suspension? (Remember that 1 tsp = approximately 5 mL)
3) If a patient is prescribed 200mg ibuprofen tablets QID PRN, up to what amount of ibuprofen can they take per day?
4) At a private facility, you're required to take a very wealthy patient to the park 5x per week (who happens to be your favorite celeb 🙂 ). How many times will you take them to the park per year, assuming 1 year has 52 weeks & you don't mind taking them because you get paid very well!
5) Your pharmacy has decided to start selling potatoes. (Don't ask, it's a decision from the people at corporate). If 2 bags of potatoes contain 40 potatoes then how many potatoes are in 57 bags of potatoes?
why silicon carbide conduct electricity usage
Electric Vehicle Research Project: Eskom/Nissan …
Electricity from waste heat or fuel made the existing industry more efficient & reduce electricity cost Large industry with furnaces and kilns: ferro-chrome, ferro-manganese, silicon, carbide, platinum, cement, lime, steel, carbon black (ca 2000 MW potential) Sugar Industry (1000 MW …
Industrial: Powder Metallurgy - Characteristics and …
2020-1-1 · Generally, the major portion of the matrix is copper with about 5-15% low melting metal such as tin; 5-25% lubricant which may be lead, litharge, graphite, or galena; up to 20% friction material such as silica, alumina, magnetite, silicon carbide or aluminum silicide; and up to 10% wear-resistant materials such as cast iron grit or shot.
How Wire EDM Machining Works
2017-3-30 · Wire EDM Machining is an electro thermal production process where a thin single-strand metal wire in conjunction with de-ionized water (used to conduct electricity) allows the wire to cut through metal by the use of heat from electrical sparks. Wire EDM is commonly used on hard metals which are often difficult to machine.
Percepio Tracealyzer - Reveal the runtime world, …
Percepio Tracealyzer - Reveal the runtime world, power up your software development., PP-PERC-TRACE, STMicroelectronics
Wide Bandgap Semiconductors Go Beyond Silicon | …
In power electronics, silicon carbide (SiC) and gallium nitride (GaN), both wide bandgap (WBG) semiconductors, have emerged as the front-running solution to the slow-down in silicon in the high power, high temperature segments.
Chemistry of ceramics, glass, adhesives and sealants
Other advanced usage includes uranium dioxide (UO2) ceramics used in nuclear power plant elements, laser materials, ceramic capacitors, piezoelectric materials. Ceramics making process. Ceramics is made up of clay, talc, silica, feldspar, organometallic compounds, silicon carbide, alumina, and …
Marsel KADYROV | C.E.O. | Consulting
Obtaining of optimal electricity usage in a multi chiller results in saving higher power in houses or industrial consumers. As well, this optimal problem has high importance in such plants.
Aluminum - Advantages and Properties of Aluminum
Aluminum-magnesium-manganese alloys are an optimum mix of formability with strength, while aluminum-magnesium-silicon alloys are ideal for automobile body sheets, which show good age-hardening when subjected to the bake-on painting process. Aluminum is an excellent heat and electricity conductor and in relation to its weight is almost twice
Lightning, Surge Protection and Earthing of Electrical
2020-8-20 · The violent updraughts and downdraughts within the cloud system can generate static electricity charges running to several kV in magnitude. Though the exact mechanism of charge separation is not clear, observations indicate that the ice particles in the top portion of the cloud are positively charged whereas the heavier water particles in the
Selenium - Element information, properties and uses
Selenium has both a photovoltaic action (converts light to electricity) and a photoconductive action (electrical resistance decreases with increased illumination). It is therefore useful in photocells, solar cells and photocopiers. It can also convert AC electricity to DC electricity, so is extensively used in rectifiers.
Guide to Understanding Modern Electric Fencing - …
Guide to Understanding Modern Electric Fencing. An electric fence system consists of an electric fence energiser, a fence wire or combination of wires supported on insulators, fixed on posts. These together with the EARTH system, form a pulsed high voltage OPEN LOOP with the animal being the completing link. The effectiveness of the fence is the SHOCK, in both the meanings of the word.
Water - STMicroelectronics
Water management approach: Evaluation of water use patterns. Each manufacturing site conducts regular, detailed analyses of how every drop of water is used in the production processes, which enables us to identify and focus on the most water-consuming or polluting production phases.
Thermal Conductivity - an overview | ScienceDirect …
In another study, Zhou et al. [81] utilized synergetic effect of MWCNTs and micro-silicon carbide (SiC) as hybrid filler to upgrade the thermal conductivity of epoxy. Hybrid filler consisting of 5 wt% of MWCNTs and 55 wt% of micro-SiC generated about 23-fold greater thermal conductivity than that of pure epoxy [81].In another study, Yang et al. [82] obtained a higher thermal conductivity of
What is a Silicon Diode? - wiseGEEK
2020-7-16 · Heather Phillips Last Modified Date: July 16, 2020 . A silicon diode is a semiconductor that has positive and negative polarity, and can allow electrical current to flow in one direction while restricting it in another. The element silicon, in its pure form, acts as an electrical insulator.To enable it to conduct electricity, minute amounts of other elements — in a process known as doping
Silicon Dioxide: What It Is, Side Effects, and Health
2020-8-19 · Silicon is a mineral. And we know minerals are generally considered healthy. In fact, some of the healthiest foods are those rich in minerals. But when it comes to minerals, iron, calcium, potassium, zinc, and similar are the first that come to mind. Silicon dioxide, also known as silica, is a chemical compound that is
Electronics Basics: What Is a Semiconductor? - dummies
2020-8-21 · Semiconductors are used extensively in electronic circuits. As its name implies, a semiconductor is a material that conducts current, but only partly. The conductivity of a semiconductor is somewhere between that of an insulator, which has almost no conductivity, and a conductor, which has almost full conductivity. Most semiconductors are crystals made of certain materials, […]
Molybdenum | Plansee
MHC is a particle-reinforced molybdenum-based alloy which contains both hafnium and carbon. Thanks to the uniformly distributed, extremely fine carbides, the material benefits from outstanding heat and creep resistance and, at 1,550 °C, the maximum recommended …
Boron - Periodic table
Boron is a poor room temperature conductor of electricity but its conductivity improves markedly at higher temperatures. Uses of Boron. Boron is used to dope silicon and germanium semiconductors, modifying their electrical properties. Boron oxide (B 2 O 3) is used in glassmaking and ceramics.
Graphene - What Is It? | Graphenea
The usage of graphene in energy storage is most notably researched through the use of graphene in advanced electrodes. Combining graphene and silicon nanoparticles resulted in anodes that maintain 92% of their energy capacity over 300 charge-discharge cycles, with a high maximum capacity of 1500 mAh per gram of silicon.
Scotlight Direct Blog | How Light Emitting Diodes …
As silicon does not readily conduct electricity, and neither does the junction that has been created, a barrier is created between the n-type silicon and the p-type silicon. This is known as a depletion zone, due to the fact that it contains neither free electrons or holes.
Nanotechnology | What is Nanotechnology - …
2020-8-4 · Nanotechnology is the study and use of structures between 1 nanometer and 100 nanometers in size. Website discussing the latest uses of nanotechnology in electronics, medicine, energy, consumer products and all other fields.
Why is stainless steel a poor conductor of electricity?
2020-7-12 · You are correct, stainless steel is a really poor conductor compared to most metals. This source lists it as $7.496 \times 10^{-7}\: \mathrm{\Omega \cdot m}$ which is more than 40 times worse than copper.. The reason is that conductivity in metals is high is that metals form a crystal lattice where the outer shell electrons are shared and easily move through the lattice.
materials - Why does diamond conduct heat better …
2020-6-12 · The reason why diamond, in particular, is an especially good thermal conductor even compared to other well-ordered crystals boils down to two factors: the mass of the carbon atoms and the strength of the bonds connecting them.
Market Research Reports® Inc. | Better Reports, Better
2020-8-17 · At Market Research Reports, Inc. we aim to make it easier for decision makers to find relevant information and locate the right market research reports which can save their time and assist in what they do best, i.e. take time-critical decisions.
Why Not Graphene? - OILMAN Magazine
The ability to conduct 200 times more electricity than silicon, offers us a much thinner power cable than commonly used, which is also more reliable in terms of power. On the other hand, we have its resistance to the different pollutants that are exposed and a power cable designed to resist much more than the one factories tend to use.
Is Silicon Dioxide Safe? - Healthline
Silicon dioxide (SiO 2), also known as silica, is a natural compound made of two of the earth's most abundant materials: silicon (Si) and oxygen (O 2).. Silicon dioxide is most often recognized
Anticonvulsant effects of Antiaris toxicaria aqueous extract: investigation using animal models of temporal lobe epilepsy
Priscilla Kolibea Mante1,
Donatus Wewura Adongo2 &
Eric Woode1
BMC Research Notes volume 10, Article number: 167 (2017) Cite this article
Antiaris toxicaria has previously shown anticonvulsant activity in acute animal models of epilepsy. The aqueous extract (AAE) was further investigated for activity in kindling with pentylenetetrazole and administration of pilocarpine and kainic acid which mimic temporal lobe epilepsy in various animal species.
ICR mice and Sprague–Dawley rats were pre-treated with AAE (200–800 mg kg−1) and convulsive episodes induced using pentylenetetrazole, pilocarpine and kainic acid. The potential of AAE to prevent or delay onset and alter duration of seizures was measured. In addition, damage to hippocampal cells was assessed in the kainic acid-induced status epilepticus test. 800 mg kg−1 of the extract suppressed the kindled seizure significantly (P < 0.05) as did diazepam. AAE also produced a significant effect (P < 0.01) on latency to first myoclonic jerks and on total duration of seizures. The latency to onset of wet dog shakes was increased significantly (P < 0.05) by AAE on kainic acid administration. Carbamazepine and nifedipine (30 mg kg−1) also delayed the onset. Histopathological examination of brain sections showed no protective effect on hippocampal cells by AAE and nifedipine. Carbamazepine offered better preservation of hippocampal cells in the CA1, CA2 and CA3 regions.
Antiaris toxicaria may be effective in controlling temporal lobe seizures in rodents.
Epilepsy is a common neurological disorder which may be due to an imbalance between excitatory and inhibitory arms of the central nervous system—produced by a decrease in GABAergic and/or an increase in glutamatergic transmission [1].
Kindling and status epilepticus are the two most commonly used animal models of Temporal Lobe Epilepsy (TLE). Both models provide a dependable induction of a persistent, epileptic-like condition, despite their unique characteristics. Kindling is a simple phenomenon in which repeated induction of focal seizure discharge produces a progressive, highly reliable increase in epileptic response to the inducing agent, usually electrical stimulation [2]. However, the use of chemical inducing agents, such as pentylenetetrazole, has been shown to be equally effective [3, 4]. Acute administration of a high dose of pilocarpine in rodents is widely used to study the pathophysiology of seizures. It was first described by Turski et al. in 1983 [5, 6]. Pilocarpine-induced seizures reveal behavioural and electroencephalographic features that are similar to those of human temporal lobe epilepsy. Kainic acid, like pilocarpine, can also be used to induce a similar TLE or status epilepticus state in a variety of species using either systemic, intrahippocampal or intra-amygdaloid administrations [7].
Temporal lobe epilepsy is the most common form of complex partial seizures accounting for approximately 60% of all patients with epilepsy. Medial temporal lobe epilepsy which is the commonest temporal lobe epilepsy is also frequently resistant to medications and associated with hippocampal sclerosis. Management is challenging and often surgery has to be resorted to [8].
The plant Antiaris toxicaria (family Moraceae) is common in Ghanaian forests. It has been employed traditionally as an analgesic and anticonvulsant [9]. Previous studies have shown that Antiaris possesses anticonvulsant activity in various acute murine models [10]. In the present study, Antiaris toxicaria was evaluated in kindling and post-status models of temporal lobe epilepsy. This investigation sought to determine whether the extract possesses potential as an antiepileptogenic agent as well as efficacy in the management of temporal lobe epilepsy.
Stem bark of A. toxicaria was harvested from the KNUST campus, Kumasi and identified by a staff member of the Pharmacognosy Department where a voucher specimen (KNUST/HM1/011/S007) has been retained in the herbarium.
Preparation of Antiaris toxicaria aqueous extract
The dry stem bark was powdered using a commercial grinder. The coarse powder (431 g) was extracted by cold maceration with distilled water as solvent at room temperature for 5 days. The resultant filtrate was oven-dried to obtain a 23.40% w/w yield of A. toxicaria aqueous extract (AAE).
Naïve male ICR mice (20–25 g) and Sprague–Dawley rats (120–145 g) were obtained from the Noguchi Memorial Institute for Medical Research, Accra, Ghana and kept in the departmental Animal House. Animals were maintained under laboratory conditions (room temperature; 12-h light–12-h dark cycle) in stainless steel cages (34 × 47 × 18 cm³) with wood shavings as bedding and allowed free access to water and food ad libitum. They were fed with a normal commercial diet (GAFCO Ltd). Animals were tested in groups of eight. Groups were assigned randomly. Sample size was calculated by power analysis using the G-power software version 3.0.5. Experiments were carried out during the day. All animals were handled in accordance with the Guide for the Care and Use of Laboratory Animals [11] and experiments were approved by the Faculty of Pharmacy and Pharmaceutical Sciences Ethics Committee, KNUST.
Drugs and chemicals
Diazepam (DZP), pentylenetetrazole (PTZ) pilocarpine (PILO) and kainic acid (KA) were purchased from Sigma-Aldrich Inc., St. Louis, MO, USA.
Kindling induction
PTZ kindling was initiated using a subconvulsive dose of PTZ 40 mg kg−1 body weight injected into the soft skin fold of the neck on every 2nd day (i.e. Day 1, Day 3, Day 5…). The PTZ injections were stopped when the control animals showed adequate kindling, i.e. Racine score of 5. After each PTZ injection, the convulsive behaviour of the rodent was observed for 30 min in an observation chamber. The resultant seizures were scored as follows: Stage 0 (no response); Stage 1 (hyperactivity, restlessness and vibrissae twitching); Stage 2 (head nodding, head clonus and myoclonic jerks); Stage 3 (unilateral or bilateral limb clonus); Stage 4 (forelimb clonic seizures); Stage 5 (generalized clonic seizures with loss of postural control). AAE was tested at doses of 200, 400 and 800 mg kg−1 body weight orally and diazepam (0.1, 0.3 and 1 mg kg−1, i.p). PTZ was injected 30 min after administration of test drugs. Control animals received 3 ml kg−1 of distilled water. Seven groups of eight animals each were used. Group 1 = distilled water-treated control group; groups 2–4 = AAE-treated groups and groups 5–7 = diazepam-treated group.
Pilocarpine-induced Status epilepticus
Seizures were induced by an i.p. injection of pilocarpine (PILO) (300 mg kg−1, i.p.) into drug or vehicle-treated male rats. Rats were pre-treated with AAE (100–1000 mg kg−1, p.o.) or diazepam (0.3–3.0 mg kg−1, i.p.) for 30 or 15 min, respectively, before PILO injection. To reduce peripheral autonomic effects produced by PILO, the animals were pre-treated with n-butyl-bromide hyoscine (1 mg kg−1, i.p.) 30 min before PILO administration. Animals were placed in observation cages and observed via video recordings. Latency to and duration of seizures were scored.
Rat kainate model
Animals were pre-treated with the plant extract as above, 30 min before administration of kainic acid (10 mg kg−1, i.p.). Other animals were treated with carbamazepine (30 mg kg−1, p.o.) and nifedipine (30 mg kg−1, p.o.) 30 min before induction of convulsions. Animals were observed for wet dog shakes over a 1 h period [12]. Brains were harvested for histopathological examination after an hour. Tissues were fixed in 10% buffered formalin (pH 7.2), dehydrated in a series of ethanolic solutions, embedded in paraffin wax and processed for histological analysis. Coronal sections (2 µm thick) were cut and stained with haematoxylin-eosin for examination. The stained tissues were observed through an Olympus microscope (BX-51) and photographed by a charge-coupled device (CCD) camera.
Data were presented as mean ± S.E.M and significant differences between means determined by one-way analysis of variance (ANOVA) followed by Newman–Keuls' post hoc test. Statistical analyses were carried out with Graph Pad Prism® Version 5.0 (GraphPad Software, San Diego, CA, USA) and SigmaPlot® Version 11.0 (Systat Software, Inc.). Data from 5 to 8 animals in each group were included in the analyses. P < 0.05 was considered significant in all cases. None were excluded.
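For readers who want to reproduce this kind of analysis, here is a hedged Python sketch of a one-way ANOVA with SciPy; the group names and scores below are invented placeholders rather than data from this study, and the Newman–Keuls post hoc comparisons would be run separately.

```python
from scipy import stats

# Hypothetical seizure scores for three treatment groups (placeholders only).
vehicle  = [4.8, 5.0, 4.6, 5.0, 4.9]
aae_800  = [2.1, 2.5, 3.0, 2.2, 2.8]
diazepam = [1.8, 2.0, 2.4, 1.9, 2.2]

f_stat, p_value = stats.f_oneway(vehicle, aae_800, diazepam)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 would indicate a group difference
```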
Effects in kindling
In PTZ + vehicle-treated group, repeated administration of subconvulsive dose of PTZ (40 mg kg−1) on every alternate day for 20 days resulted in increasing convulsive activity leading to generalized clonic seizures (Racine score of 5). Administration of AAE in the dose of 200 and 400 mg kg−1 did not modify the course of kindling induced by PTZ significantly. However, a higher dose of 800 mg kg−1 suppressed the kindled seizure significantly (P < 0.05; Fig. 1a, b) as the group could not achieve a mean score of 5. The standard anticonvulsant diazepam significantly (P < 0.01; Fig. 1c, d) modified the course of kindling at all three dose levels compared to the control. ED50 obtained for the extract was 276.70 mg kg−1 compared to 0.05 mg kg−1 for diazepam. The extract was however more efficacious than diazepam achieving an Emax of 88.83% compared to 60.36% for diazepam (Fig. 2).
Effects of AAE (200, 400 and 800 mg kg−1, p.o.; a and b) and diazepam (0.1, 0.3 and 1 mg kg−1, i.p.; c and d) on the stages of convulsion attained in PTZ-induced kindling. Data are presented as group mean ± SEM (n = 8). *P < 0.05, **P < 0.01, ***P < 0.001 compared with vehicle-treated group (One-way analysis of variance followed by Newman–Keuls post hoc test)
Dose-response curves of AAE and diazepam on the % decrease in stages of convulsions in PTZ-induced kindling. Each point represents mean ± S.E.M (n = 8)
Pilocarpine induced behavioural changes including hypoactivity, tremor and myoclonic movements of the limbs progressing to recurrent myoclonic convulsions with rearing, falling, and status epilepticus. AAE produced significant effect (P < 0.01, Fig. 3a) on the latency to first myoclonic jerks as compared to control at the highest dose only. It had a similar effect on the total duration of seizures (Fig. 3c). Diazepam was used as the reference drug and it also significantly reduced the total duration of seizures (P < 0.01, Fig. 3d) and latency (P < 0.001, Fig. 3b) at 1 and 3 mg kg−1. Diazepam was more potent than the extract in increasing the % latency with an ED50 of 0.66 mg kg−1 as against 424.50 mg kg−1 for the extract (Fig. 4a). Diazepam was also more efficacious achieving an Emax of 108.90% compared to 100% for the extract. Likewise, for the % duration AAE produced ED50 = 80.06 mg kg−1 and Emax = 100% while the standard diazepam achieved ED50 = 1.67 mg kg−1 and Emax = 100% (Fig. 4b).
Effect of AAE (100–1000 mg kg−1, p.o.) and diazepam (0.3–3 mg kg−1, i.p.) on the latency to (a and b) and total duration of seizures (c and d) induced by PILO. Each column represents the mean ± SEM (n = 8). **P < 0.01, ***P < 0.001 compared to vehicle-treated group (One-way ANOVA followed by Newman–Keuls post hoc test)
Dose-response curves of AAE and diazepam on the % increase in latency (a) and % decrease in durations (b) of status epilepticus induced with pilocarpine. Each point represents mean ± S.E.M (n = 8)
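ED50 and Emax values such as those quoted above are typically obtained by fitting a sigmoidal (Hill-type) dose-response curve. The sketch below shows one way such a fit can be done in Python with SciPy; the doses and responses are invented placeholders, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, emax, ed50, n):
    """Hill-type dose-response curve."""
    return emax * dose**n / (ed50**n + dose**n)

doses     = np.array([30.0, 100.0, 300.0, 1000.0])  # mg/kg, hypothetical
responses = np.array([5.0, 20.0, 55.0, 95.0])       # % effect, hypothetical

params, _ = curve_fit(hill, doses, responses, p0=[100.0, 300.0, 1.0], maxfev=10000)
emax, ed50, n = params
print(f"Emax = {emax:.1f} %, ED50 = {ed50:.1f} mg/kg")
```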
Effects in rat kainate model
Kainic acid (10 mg kg−1, i. p) produced wet dog shakes in all animals. AAE (400 mg kg−1) produced a significant (P < 0.05) increase in time taken to the onset of wet dog shakes (Fig. 5a). Carbamazepine (30 mg kg−1) and Nifedipine (30 mg kg−1) also delayed the onset. Histopathological examination of the coronal section of the brain showed no protective effect on hippocampal cells by AAE and nifedipine. Carbamazepine offered better preservation of hippocampal cells in the CA1, CA2 and CA3 regions (Fig. 6). The brain to body ratio decreased significantly (P < 0.001; Fig. 5b) with all three treatments.
Effects of AAE (400 mg kg−1, p.o.), carbamazepine (30 mg kg−1, p.o.) and nifedipine (30 mg kg−1, p.o.) on the latency to wet dog shakes (a) and % brain to body ratio (b) in the rat kainate model. Data are presented as group mean ± SEM (n = 8). *P < 0.05, ***P < 0.001 compared to vehicle-treated group (One-way analysis of variance followed by Newman–Keuls' post hoc test)
Photomicrographs of coronal sections of the brains of rats treated with AAE (400 mg kg−1), carbamazepine (30 mg kg−1) or nifedipine (30 mg kg−1), showing effects on kainate-induced hippocampal damage (H & E, ×100)
Kindling is a chronic model of epilepsy and epileptogenesis. Repeated administration of a subconvulsive dose of PTZ (a blocker of the GABAA receptor) results in the progressive intensification of convulsant activity, culminating in a generalized seizure [3, 4]. The highest dose of AAE (800 mg kg−1) significantly delayed progression of convulsion similarly to diazepam.
Many substances interacting with GABA receptors have been shown to produce potent anticonvulsant effects on seizures in previously kindled animals [2, 13]. It has been shown that AAE produces anticonvulsant effects by interacting with the GABAA receptor. The fact that it acts via GABAergic mechanisms may be a possible explanation for anticonvulsant effects being exhibited in the kindling model.
There is some evidence that free radicals are actively involved in physiological processes during oxidative stress induced by administration of convulsants [14]. Of all the free oxygen radicals that occur in vivo, the hydroxyl free radicals (OH−) are considered to be the most hazardous [15,16,17]. Different mechanisms may lead to the increase of free radicals in PTZ-induced convulsions. It may be assumed that further reasons exist for the increased formation of OH− in kindled animals during PTZ seizures, such as reduced activity of superoxide dismutase (SOD), a major defence system for counteracting the toxic effects of reactive oxygen species such as O2−. However, the antioxidant activity of AAE has not been firmly established.
AAE exhibited anticonvulsant effects against pilocarpine-induced seizures. Pilocarpine is a cholinergic agonist, widely used experimentally to induce limbic seizures in structures containing a high concentration of muscarinic receptors such as the cerebrum [18,19,20]. Status epilepticus produces significant decreases in M1, M2 and GABAergic receptor densities [21] and hence in neurotransmission. Freitas et al. also reported in 2004 increased levels of superoxide dismutase and catalase and reductions in acetylcholinesterase enzymatic activity in the rat frontal cortex and hippocampus. During pilocarpine-induced seizures and SE in adult rats, lipid peroxidation processes are increased [21, 22], suggesting free radical involvement in the pilocarpine-induced brain damage. Certain antioxidants, such as ascorbic acid, have therefore been shown to possess anticonvulsant activity against pilocarpine-induced SE [22, 23]. Muscarinic receptor stimulation is thought to be responsible for the onset of pilocarpine-induced seizures, while glutamate acting on NMDA receptors sustains seizure activity [18]. Analysis of the brain morphology after pilocarpine administration demonstrates that the CA1 hippocampal neurones and the hilus of the dentate gyrus are predominantly susceptible to neuronal cell loss [5]. Neuronal cell death during SE occurs largely by excitotoxic injury caused by the activation of glutamatergic pathways [6, 24]. Thus, the ability of AAE to attenuate seizures induced by pilocarpine could be attributed to cholinergic antagonism at the M1 or M2 receptors, an increase in GABA and/or its receptor densities, a decrease in glutamate levels or antioxidant pathways. Activation of potassium ion conductance can also contribute as it results in inhibition of the release of glutamate [25, 26]. AAE may therefore have potential in the management of status epilepticus.
Kainic acid is a neuro excitotoxic analogue of glutamate used in studies of epilepsy to model experimentally induced limbic seizures [27, 28]. Kainate-treated rats may respond differently. Some may produce wet dog shakes (equivalent to a class III seizure on the Racine scale) or more severe seizures [29]. Previous studies have shown pattern of neurodegeneration in the hippocampus with high concentration of high affinity KA binding sites (CA3 pyramidal cells of the hippocampus) [7, 30]. The dentate gyrus from kainate-treated rats has shown the presence of mossy fibre sprouting in the inner molecular layer [7, 31]. Examination of the hippocampus after seizures revealed hippocampal damage, especially in the CA3 and CA2 regions as shown in the photomicrographs. The extract showed no significant protection against such damage even though it significantly delayed the latency to wet dog shakes. This implies that the extract possesses general anticonvulsant properties but offers no protection against morphological changes. The kainate-treated rat model is used to study temporal lobe epilepsy. However, similarity of seizure occurrence to human temporal lobe epilepsy has not been studied comprehensively. But there are several characteristics of the seizures that resemble temporal lobe epilepsy in humans. For instance, some of the animals produce a few observed motor seizures (even with several months of observation) after a latent period, while other animals have seizures at a frequency as high as 1–2 hz which is reminiscent of epilepsy in the human population [8]. The rats often demonstrate confusion (e.g. hyperactive exploration of their cage) after a seizure, resembling the post-ictal confusion in most humans with temporal lobe epilepsy [32, 33]. Rats treated with the extract exhibited fewer seizures as compared to the control implying a possibility that it might be effective in the treatment of temporal lobe epilepsy. The CA1 and CA3 regions of the hippocampus are also known to possess one of the highest densities of the dihydropyridine receptors in the rat brain [34, 35]. The results of many experimental studies have shown that calcium channel blockers are effective against several different types of seizures [35,36,37]. Hence, nifedipine proved its effect in this model. As much as these experiments model human temporal lobe epilepsy, caution has to be exercised in translating the results directly to man without the needed clinical trials. The results however further lend scientific credence to the traditional use of A. toxicaria as an antiepileptic.
Antiaris toxicaria possesses anticonvulsant properties in kindling and status epilepticus murine models and may be antiepileptogenic and a candidate for the management of temporal lobe epilepsy (Additional file 1).
Meldrum BS. Antiepileptic drugs potentiating GABA. Electroencephalogr Clin Neurophysiol Suppl. 1999;50:450–7.
Morimoto K, Fahnestock M, Racine RJ. Kindling and status epilepticus models of epilepsy: rewiring the brain. Prog Neurobiol. 2004;73(1):1–60.
Corda MG, Giorgi O, Orlandi M, Longoni B, Biggio G. Chronic administration of negative modulators produces chemical kindling and GABAA receptor down-regulation. Adv Biochem Psychopharmacol. 1990;46:153–66.
Dhir A. Pentylenetetrazol (PTZ) kindling model of epilepsy. Curr Protoc Neurosci. 2012. doi:10.1002/0471142301.ns0937s58.
Turski WA, Cavalheiro EA, Schwarz M, Czuczwar SJ, Kleinrok Z, Turski L. Limbic seizures produced by pilocarpine in rats: behavioural, electroencephalographic and neuropathological study. Behav Brain Res. 1983;9(3):315–35.
Lopes MW, Lopes SC, Costa AP, Gonçalves FM, Rieger DK, Peres TV, et al. Region-specific alterations of AMPA receptor phosphorylation and signaling pathways in the pilocarpine model of epilepsy. Neurochem Int. 2015;87:22–33.
Levesque M, Avoli M. The kainic acid model of temporal lobe epilepsy. Neurosci Biobehav Rev. 2013;37(10 Pt 2):2887–99.
French J, Williamson P, Thadani V, Darcey T, Mattson R, Spencer S, et al. Characteristics of medial temporal lobe epilepsy: i. results of history and physical examination. Ann Neurol. 1993;34(6):774–80.
Mshana RN, Abbiw DK, Addae-Mensah I, Adjanouhoun E, Ahyi MRA, Ekpere JA, et al. Traditional medicine and pharmacopoeia; Contribution to the revision of ethnobotanical and floristic studies in Ghana. Organization of African Unity/Scientific, Technical & Research Commission; 2000.
Mante PK, Adongo DW, Woode E, Kukuia KK, Ameyaw EO. Anticonvulsant effect of Antiaris toxicaria (Pers.) Lesch. (Moraceae) aqueous extract in rodents. ISRN Pharmacol. 2013;2013:9.
National Research Council. Guide for the care and use of laboratory animals. Washington D.C.: The National Academies Press; 1996.
Cilio M, Bolanos A, Liu Z, Schmid R, Yang Y, Stafstrom C, et al. Anticonvulsant action and long-term effects of gabapentin in the immature brain. Neuropharmacology. 2001;40(1):139–47.
Bittencourt S, Dubiela FP, Queiroz C, Covolan L, Andrade D, Lozano A, et al. Microinjection of GABAergic agents into the anterior nucleus of the thalamus modulates pilocarpine-induced seizures and status epilepticus. Seizure. 2010;19(4):242–6.
Coyle JT, Puttfarcken P. Oxidative stress, glutamate, and neurodegenerative disorders. Science. 1993;262(5134):689–95.
Halliwell B. Reactive oxygen species and the central nervous system. J Neurochem. 1992;59(5):1609–23.
Halliwell B. Oxidative stress and neurodegeneration: where are we now? J Neurochem. 2006;97(6):1634–58.
Nowak JZ. Oxidative stress, polyunsaturated fatty acidsderived oxidation products and bisretinoids as potential inducers of CNS diseases: focus on age-related macular degeneration. Pharmacol Rep. 2013;65(2):288–304.
Turski L, Ikonomidou C, Turski WA, Bortolotto ZA, Cavalheiro EA. Review: cholinergic mechanisms and epileptogenesis. The seizures induced by pilocarpine: a novel experimental model of intractable epilepsy. Synapse. 1989;3(2):154–71.
Clifford DB, Olney JW, Maniotis A, Collins RC, Zorumski CF. The functional anatomy and pathology of lithium-pilocarpine and high-dose pilocarpine seizures. Neuroscience. 1987;23(3):953–68.
Gao F, Liu Y, Li X, Wang Y, Wei D, Jiang W. Fingolimod (FTY720) inhibits neuroinflammation and attenuates spontaneous convulsions in lithium-pilocarpine induced status epilepticus in rat model. Pharmacol Biochem Behav. 2012;103(2):187–96.
Freitas RM, Sousa FC, Vasconcelos SM, Viana GS, Fonteles MM. Pilocarpine-induced status epilepticus in rats: lipid peroxidation level, nitrite formation, GABAergic and glutamatergic receptor alterations in the hippocampus, striatum and frontal cortex. Pharmacol Biochem Behav. 2004;78(2):327–32.
Xavier SM, Barbosa CO, Barros DO, Silva RF, Oliveira AA, Freitas RM. Vitamin C antioxidant effects in hippocampus of adult Wistar rats after seizures and status epilepticus induced by pilocarpine. Neurosci Lett. 2007;420(1):76–9.
Tejada S, Sureda A, Roca C, Gamundi A, Esteban S. Antioxidant response and oxidative damage in brain cortex after high dose of pilocarpine. Brain Res Bull. 2007;71(4):372–5.
Cavalheiro EA, Naffah-Mazzacoratti MG, Mello LE, Leite JP. The pilocarpine model of seizures. Models of seizures and epilepsy. New York: Elsevier; 2006. p. 433-448.
Morales-Villagrán A, Tapia R. Preferential stimulation of glutamate release by 4-aminopyridine in rat striatum in vivo. Neurochem Int. 1996;28(1):35–40.
Maljevic S, Lerche H. Potassium channels: a review of broadening therapeutic possibilities for neurological diseases. J Neurol. 2013;260(9):2201–11.
Ben-Ari Y, Cossart R. Kainate, a double agent that generates seizures: two decades of progress. Trends Neurosci. 2000;23(11):580–7.
Biziere K, Slevin J, Zaczek R, Collins J. Kainic acid neurotoxicity and receptor in CNS pharmacology neuropeptides. In: Proceedings of the 8th international congress of pharmacology, Tokyo, 1981. Elsevier; 2013.
Hellier JL, Dudek FE. Chemoconvulsant model of chronic spontaneous seizures. Curr Protoc Neurosci. Chapter 9: Unit 9.19.
Fisher RS. Animal models of the epilepsies. Brain Res Rev. 1989;14(3):245–78.
Buckmaster PS, Dudek FE. Neuron loss, granule cell axon reorganization, and functional changes in the dentate gyrus of epileptic kainate-treated rats. J Comp Neurol. 1997;385(3):385–404.
Fenwick P. Psychiatric disorders and epilepsy. In: Hopkins A, Shorvon S, Cascino G, editors. Epilepsy. London: Chapman and Hall Medical; 1995. p. 453–502.
Liu A, Bryant A, Jefferson A, Friedman D, Minhas P, Barnard S, et al. Exploring the efficacy of a 5-day course of transcranial direct current stimulation (TDCS) on depression and memory function in patients with well-controlled temporal lobe epilepsy. Epilepsy Behav. 2016;55:11–20.
Meyer JH, Gruol DL. Dehydroepiandrosterone sulfate alters synaptic potentials in area CA1 of the hippocampal slice. Brain Res. 1994;633(1):253–61.
Koskimäki J, Matsui N, Umemori J, Rantamäki T, Castrén E. Nimodipine activates TrkB neurotrophin receptors and induces neuroplastic and neuroprotective signaling events in the mouse hippocampus and prefrontal cortex. Cell Mol Neurobiol. 2015;35(2):189–96.
van Luijtelaar G, Wiaderna D, Elants C, Scheenen W. Opposite effects of T- and L-type Ca2+ channel blockers in generalized absence epilepsy. Eur J Pharmacol. 2000;406(3):381–9.
Kriz J, Župan G, Simonić A. Differential effects of dihydropyridine calcium channel blockers in kainic acid-induced experimental seizures in rats. Epilepsy Res. 2003;52(3):215–25.
PKM: Was involved in the conception and design, acquisition of all data as well as the analysis and interpretation of data. She was also involved in drafting and revising the manuscript. DWA: Was involved in the acquisition of data, analysis and interpretation of data in addition to drafting and revising the manuscript. EW: Was involved in the conception and design, analysis and interpretation of data as well as drafting and revising the manuscript. All the authors read and approved the final manuscript.
All data generated or analysed during this study are included in this published article as supplementary information files.
All animals were handled in accordance with the Guide for the Care and Use of Laboratory Animals [11] and experiments were approved by the Faculty of Pharmacy and Pharmaceutical Sciences Ethics Committee, KNUST.
The study was funded solely by the authors of this publication.
Department of Pharmacology, Faculty of Pharmacy and Pharmaceutical Sciences, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
Priscilla Kolibea Mante & Eric Woode
University of Health and Allied Sciences, Ho, Ghana
Donatus Wewura Adongo
Correspondence to Priscilla Kolibea Mante.
Mante, P.K., Adongo, D.W. & Woode, E. Anticonvulsant effects of antiaris toxicaria aqueous extract: investigation using animal models of temporal lobe epilepsy. BMC Res Notes 10, 167 (2017). https://doi.org/10.1186/s13104-017-2488-x
Kainic acid
Pentylenetetrazole
Pilocarpine
Characteristics of Employment, Australia methodology Reference Period August 2021
The Characteristics of Employment (COE) survey was conducted throughout Australia in August 2022 as a supplement to the monthly Labour Force Survey (LFS). Respondents to the LFS who fell within the scope of the supplementary survey were asked further questions.
Additional information about survey design, scope, coverage and population benchmarks relevant to the monthly LFS, which also applies to supplementary surveys, can be found in Labour Force, Australia, Methodology.
Descriptions of the underlying concepts and structure of Australia's labour force statistics, and the sources and methods used in compiling the estimates, are presented in Labour Statistics: Concepts, Sources and Methods.
Scope and coverage
The scope of the LFS is the civilian population aged 15 years and over, excluding
Members of the permanent defence forces
Certain diplomatic personnel of overseas governments
Overseas residents in Australia
Members of non-Australian defence forces (and their dependants) stationed in Australia.
Students at boarding schools, patients in hospitals, residents of homes (e.g. retirement homes, homes for people with disabilities), and inmates of prisons are excluded from all supplementary surveys.
This supplementary survey was conducted in both urban and rural areas in all states and territories, but excluded people living in Aboriginal and Torres Strait Islander communities.
In addition to those already excluded from the LFS, contributing family workers, people not in the labour force and unemployed people were also excluded.
In the LFS, coverage rules are applied, which aim to ensure that each person is associated with only one dwelling, and hence has only one chance of selection in the survey. See Labour Force, Australia methodology for more details.
Supplementary surveys are not conducted on the full LFS sample. Since August 1994, the sample for supplementary surveys has been restricted to no more than seven-eighths of the LFS sample.
This survey is based on the new sample introduced into LFS in July 2018. The new sample design has adopted the use of the Address Register as the sampling frame for unit selection, and the sampling fractions for selection probabilities within each state have been updated to reflect the most recent population distribution based on results from the 2016 Census of Population and Housing. As with each regular sample design, the impacts on the data are expected to be minimal. For more information, see the Information Paper: Labour Force Survey Sample Design, Jul 2018.
Information is obtained either by trained interviewers or through self-completion online. The interviews are generally conducted during the two weeks beginning on the Sunday between the 5th and 11th of August. The information obtained relates to the week before the interview (i.e. the reference week). Occasionally, circumstances that present significant operational difficulties for survey collection can result in a change to the normal pattern for the start of interviewing.
COE questionnaire
Weighting and estimation
Population benchmarks
The Labour Force Survey estimates and estimates from the supplementary surveys such as Characteristics of Employment are calculated in such a way as to sum to the independent estimates of the civilian population aged 15 years and over (population benchmarks). These population benchmarks are updated quarterly based on Estimated Resident Population (ERP) data. See Labour Force, Australia methodology for more information.
From August 2015, Labour Force Estimates have been compiled using population benchmarks based on the most recently available release of ERP data, continually revised on a quarterly basis.
To reduce the impact of seasonality on total employment, the estimates have been adjusted by factors based on trend LFS estimates. These factors were applied at the state and territory, sex, full-time and total employment levels, based on the trend LFS series as published in the September 2022 issue of Labour Force, Australia (published 20/10/22). This adjustment accounts for August seasonality and irregular effects, resulting in an increase to the typically lower original employed estimates for August.
Where information relating to earnings in main job and/or second job was not provided by the respondent, values are imputed. Where this was the only information missing from the respondent record, the value was imputed based on answers provided by another respondent with similar characteristics (referred to as the "donor"). Depending on which values were imputed, donors were chosen from the pool of individual records with complete information for the block of questions where the information was missing.
Donor records were selected for imputation of earnings in main job by matching information on sex, age, state or territory of usual residence and selected labour force characteristics (full-time or part-time in main job, industry, occupation (and skill level), hours worked in main job, hourly rates, owner manager status) of the person with missing information.
Donor records were selected for imputation of earnings in second job by matching information on age, state or territory of usual residence, area of usual residence, owner manager status, hours worked in second job and frequency of pay in second job.
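The donor-based ("hot-deck") logic described above can be illustrated with a minimal sketch in R. This is not the ABS production system; the data frame `records` and its columns (earnings_main_job, sex, age_group, state, full_time) are hypothetical stand-ins for the matching characteristics listed above.

```r
# Minimal hot-deck style donor imputation sketch (illustrative only; not the
# ABS production method). 'records' and its columns are hypothetical.
impute_main_job_earnings <- function(records) {
  missing  <- which(is.na(records$earnings_main_job))
  complete <- which(!is.na(records$earnings_main_job))
  for (i in missing) {
    # candidate donors share a few matching characteristics with the recipient
    donors <- complete[
      records$sex[complete]       == records$sex[i] &
      records$age_group[complete] == records$age_group[i] &
      records$state[complete]     == records$state[i] &
      records$full_time[complete] == records$full_time[i]
    ]
    if (length(donors) > 0) {
      donor <- donors[sample.int(length(donors), 1)]   # pick one donor at random
      records$earnings_main_job[i] <- records$earnings_main_job[donor]
    }
  }
  records
}
```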
Prior to 2004, imputation was not used. Employees whose weekly earnings could not be determined were excluded from estimates of mean or median weekly earnings. Care should be taken when comparing earnings data from 2004 onwards with earnings data prior to 2004. To compare the change in methodology from 2003 to 2004 see paragraph 28 of the August 2004 Employee Earnings, Benefits and Trade Union Membership (EEBTUM).
Comparability with LFS
Due to differences in the scope and sample size of this supplementary survey and that of the monthly LFS, the estimation procedure may lead to some small variations between labour force estimates from this survey and those from the LFS.
Comparability with other earnings sources
Caution should be exercised when comparing estimates of earnings in this release with estimates of earnings in the biannual Average Weekly Earnings and two-yearly Employee Earnings and Hours, which are compiled from employer based surveys. There are important differences in the scope, coverage and methodology of these surveys which can result in different estimates of earnings from each survey.
The survey of Average Weekly Earnings (AWE) collects information from employers who provide details of their employees' total gross earnings and their total number of employees (excluding amounts salary sacrificed). The survey of Employee Earnings and Hours (EEH) collects information about weekly earnings and hours paid for, and the individual characteristics of a sample of employees within each selected employer unit. Both AWE and EEH are completed by employers with information from their payroll. However, for COE, respondents are either the employed person or another adult member of their household who responds on their behalf. Where earnings are not known exactly an estimate is reported. There are also scoping differences between both household and employer surveys. For example, AWE and EEH exclude employees in the Agriculture, forestry and fishing industry, and also employees of Private households, whereas these employees are included in the COE and EEBTUM surveys.
For further information on a number of earning series available from ABS sources, please refer to the Earnings guide in our Guide to labour statistics.
Survey output
Release strategy
Statistics from the Characteristics of Employment survey are published in the following topic-based releases.
Employee earnings
Working arrangements
Trade union membership
Characteristics of Employment data for 2014 to 2022 will be available in TableBuilder and DataLab from 16 December 2022. TableBuilder enables the creation of customised tables and graphs. For more information, refer to Microdata and TableBuilder: Characteristics of Employment.
Survey content
The Characteristics of Employment survey (COE) collects data from people aged 15 years and older in the following conceptual groups. Some of the concepts are only collected every two years, on an alternating basis.
Away from work
Characteristics of employment (all jobs)
Characteristics of main job
Characteristics of second job
Earnings in main job (median, mean and distribution of weekly and hourly earnings)
Fixed-term contracts
Leave entitlements
Underemployment
Even years only
Casual work and Job security
Characteristics of independent contractors
Odd years only
Overemployment and Overtime
Working arrangements and Working patterns
Job Flexibility and Working from home
For more details, refer to the Data item list
COE Data Item List
Earnings and benefits
Similar surveys on weekly earnings have been conducted annually in August since 1975, except in 1991 when the survey was conducted in July, and in 1996 when the survey was not conducted. Prior to the commencement of Characteristics of Employment in 2014, weekly earnings and employment benefits were published in Employee Earnings, Benefits and Trade Union Membership (cat. no. 6310.0, known as Weekly Earnings of Employees (Distribution) prior to 1999).
Prior to 1997, information on employment benefits (such as paid leave entitlements) have been published in
Employment Benefits, Aug 1994 (cat. no. 6334.0.40.001)
Employment Benefits, 1979-1992 (cat. no. 6334.0).
Information on the use of leave entitlements was previously published in Annual and Long Service Leave Taken, 1974-1989 (cat. 6317.0). Information on the use of paid sick leave was last published in Working Arrangements, Nov 2003 (cat. no. 6342.0).
Information on trade union membership was first collected in a supplementary survey in 1976, again in 1982, then biennially in its current format from 1986 to 1990. Between 1992 and 2013, it was conducted annually (with only limited data available every second year). Prior to Characteristics of Employment, results of previous surveys were published in Employee Earnings, Benefits and Trade Union Membership. and before that in Trade Union Members. (cat. no. 6325.0)
Limited data on trade union membership have also been published in
Employment Arrangements, Retirement and Superannuation, Apr to Jul 2007 (cat. no. 6361.0)
Weekly Earnings of Employees (Distribution), August 1997 (cat. no. 6310.0)
Working Arrangements, November 2003 (cat. no. 6342.0).
Information on trade union membership provided from an annual census of trade unions is available in the following reports between 1891 and 1996
Labour and Industrial Branch Report, 1891-1912 (cat. no. 6101.0)
Labour Report, 1922-1973 (cat. no. 6101.0)
Labour Statistics, 1975-1997 (cat. no. 6101.0)
Trade Union Statistics, 1969-1996 (cat. no. 6323.0).
Information on working arrangement and forms of employment was originally collected every 3 years between 1998 and 2004, followed by surveys in 2006 and 2007. In 2008, the survey was redeveloped to better capture information of independent contractors, and was collected annually on this basis until 2013. Results of previous surveys were published in Forms of Employment (cat. no. 6359.0).
Information on Working Arrangements has been collected in a variety of surveys since 1976, as follows
Work Patterns of Employees, Nov 1976 (cat. no. 6328.0)
Evening and Night Work, Nov 1976 (cat. no. 6329.0)
Working Conditions, Feb-May 1979 (cat. no. 6329.0)
Working Hours Arrangements, Feb-May 1981 (cat. no. 6338.0)
Working Hours Arrangements - Supplementary Tables, Feb-May 1981 (cat. no. 6339.0)
Alternative Working Arrangements, 1982-1986 (cat. no. 6341.0)
Working Arrangements, 1993-2003 (cat. no. 6342.0)
Working Time Arrangements, 2006-2012 (cat. no. 6342.0).
Information on Working from home has been collected irregularly between 1989 and 2008 in Locations of Work (cat. no. 6275.0, known as Persons Employed at Home, Australia prior to 2000).
Information on employment through a labour hire firm or employment agency was first collected in the 2000 Survey of Employment Arrangements and Superannuation and again in the 2007 Survey of Employment Arrangements, Retirement and Superannuation (cat. no. 6361.0).
Information on labour hire workers was also collected in the 2001, 2008 and 2011 Forms of Employment surveys, with information on employment through an employment agency also collected in 1998.
Multiple job holders
Information on multiple job holders was published in Multiple Jobholding (cat. no. 6216.0) for the years 1965 to 1967, every second year between 1971 and 1987, 1991, 1994 and 1997.
Accuracy and quality
Reliability of estimates
As the estimates are based on information obtained from occupants of a sample of households, they are subject to sampling variability. That is, they may differ from those estimates that would have been produced if all households had been included in the survey or a different sample was selected. Two types of error are possible in an estimate based on a sample survey - sampling error and non-sampling error.
sampling error is the difference between the published estimate and the value that would have been produced if all dwellings had been included in the survey.
non-sampling errors are inaccuracies that occur because of imperfections in reporting by respondents and interviewers, and errors made in coding and processing data. These inaccuracies may occur in any enumeration, whether it be a full count or a sample. Every effort is made to reduce the non-sampling error to a minimum by careful design of questionnaires, intensive training and effective processing procedures.
Some of the estimates contained in the tables have a relative standard error (RSE) of 50 per cent or greater. These estimates are marked as unreliable for general use. Estimates with an RSE of between 25 and 50 per cent are also marked and should be used with caution.
More on reliability of estimates
Non-sampling error
Non-sampling error may occur in any collection, whether it is based on a sample or a full count such as a census. Sources of non-sampling error include non-response, errors in reporting by respondents or recording of answers by interviewers and errors in coding and processing data. Every effort is made to reduce non-sampling error by careful design and testing of questionnaires, training and supervision of interviewers, and extensive editing and quality control procedures at all stages of data processing.
Sampling error
Sampling error is the difference between the published estimates, derived from a sample of persons, and the value that would have been produced if the total population (as defined by the scope of the survey) had been included in the survey. One measure of the sampling error is given by the standard error (SE), which indicates the extent to which an estimate might have varied by chance because only a sample of persons was included. There are about two chances in three (67%) that a sample estimate will differ by less than one SE from the number that would have been obtained if all households had been surveyed, and about 19 chances in 20 (95%) that the difference will be less than two SEs.
Another measure of the likely difference is the relative standard error (RSE), which is obtained by expressing the SE as a percentage of the estimate.
\(\large{RSE\%=(\frac{SE}{estimate})\times100}\)
RSEs for estimates have been calculated using the Jackknife method of variance estimation. This involves the calculation of 30 'replicate' estimates based on 30 different sub-samples of the obtained sample. The variability of estimates obtained from these subsamples is used to estimate the sample variability surrounding the main estimate. RSEs for median estimates have been calculated using the Woodruff method.
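As an illustration only, the general form of a delete-a-group jackknife variance calculation can be sketched in a few lines of R. The ABS production system uses 30 replicate groups together with survey weights and design details not reproduced here; this sketch only shows the usual form of the replicate-based variance formula.

```r
# Generic delete-a-group jackknife sketch (illustrative only).
jackknife_rse <- function(full_estimate, replicate_estimates) {
  G        <- length(replicate_estimates)                     # e.g. 30 replicates
  variance <- (G - 1) / G * sum((replicate_estimates - full_estimate)^2)
  se       <- sqrt(variance)
  100 * se / full_estimate                                    # RSE as a percentage
}

# hypothetical usage with 30 replicate estimates of an employment count
# jackknife_rse(full_estimate = 10.2e6, replicate_estimates = rnorm(30, 10.2e6, 5e4))
```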
The Excel spreadsheets in the Data downloads section contain all the tables produced for this release and the calculated RSEs for each of the estimates.
Only estimates (numbers or percentages) with RSEs less than 25% are considered sufficiently reliable for most analytical purposes. However, estimates with larger RSEs have been included. Estimates with an RSE in the range 25% to 50% should be used with caution while estimates with RSEs greater than 50% are considered too unreliable for general use. All cells in the Excel spreadsheets with RSEs greater than 25% contain a comment indicating the size of the RSE. These cells can be identified by a red indicator in the corner of the cell. The comment appears when the mouse pointer hovers over the cell.
Another measure is the Margin of Error (MOE), which shows, for a given level of confidence, the largest difference that sampling error is likely to produce between the published estimate and the value that would have been obtained had all persons been included in the survey. It is useful for understanding and comparing the accuracy of proportion estimates.
Where provided, MOEs for estimates are calculated at the 95% confidence level. At this level, there are 19 chances in 20 that the estimate will differ from the population value by less than the provided MOE. The 95% MOE is obtained by multiplying the SE by 1.96.
\(\large{MOE=SE\times1.96}\)
Calculation of standard error
Standard errors can be calculated using the estimates (counts or percentages) and the corresponding RSEs. Since the RSE is the standard error expressed as a percentage of the estimate, the standard error can be recovered by multiplying the estimate by the RSE and dividing by 100.
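For illustration, the relationship between estimate, RSE, SE and MOE described above can be expressed in a few lines of R (not ABS software); the numeric values are hypothetical.

```r
# Recover the SE from a published estimate and its RSE (%), and derive the
# 95% MOE, following the formulas given in this section.
se_from_rse <- function(estimate, rse_percent) estimate * rse_percent / 100
moe_95      <- function(se) 1.96 * se

se_from_rse(1000, 5)   # SE  = 50
moe_95(50)             # MOE = 98
```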
Proportions and percentages
Proportions and percentages formed from the ratio of two estimates are also subject to sampling errors. The size of the error depends on the accuracy of both the numerator and the denominator. A formula to approximate the RSE of a proportion is given below. This formula is only valid when x is a subset of y
\(\large{RSE(\frac{x}{y})\approx\sqrt{[RSE(x)]^2-[RSE(y)]^2}}\)
The difference between two survey estimates (counts or percentages) can also be calculated from published estimates. Such an estimate is also subject to sampling error. The sampling error of the difference between two estimates depends on their SEs and the relationship (correlation) between them. An approximate SE of the difference between two estimates (x-y) may be calculated by the following formula
\(\large {SE(x-y)\approx\sqrt{[SE(x)]^2+[SE(y)]^2}}\)
While this formula will only be exact for differences between separate and uncorrelated characteristics or sub populations, it provides a good approximation for the differences likely to be of interest in this publication.
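The two approximations above (the RSE of a proportion and the SE of a difference) translate directly into simple helper functions; this is an illustrative sketch with hypothetical inputs, not ABS software.

```r
# Approximate RSE of a proportion x/y (valid only when x is a subset of y)
rse_of_proportion <- function(rse_x, rse_y) sqrt(rse_x^2 - rse_y^2)
# Approximate SE of the difference between two estimates x and y
se_of_difference  <- function(se_x, se_y)   sqrt(se_x^2 + se_y^2)

rse_of_proportion(rse_x = 12, rse_y = 5)   # approx. RSE (%) of x/y
se_of_difference(se_x = 15, se_y = 18)     # approx. SE of (x - y)
```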
Significance testing
A statistical significance test for a comparison between estimates can be performed to determine whether it is likely that there is a difference between the corresponding population characteristics. The SE of the difference between two corresponding estimates (x and y) can be calculated using the formula shown above in the Differences section. This SE is then used to calculate the following test statistic
\(\LARGE{(\frac{x-y}{SE(x-y)})}\)
If the value of this test statistic is greater than 1.96 then there is evidence, with a 95% level of confidence, of a statistically significant difference in the two populations with respect to that characteristic. Otherwise, it cannot be stated with confidence that there is a real difference between the populations with respect to that characteristic.
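A minimal sketch of this test in R, using the approximate SE of the difference defined above; the inputs are hypothetical.

```r
# Significance test for the difference between two estimates at the 95% level.
significant_difference <- function(x, y, se_x, se_y, z = 1.96) {
  test_stat <- (x - y) / sqrt(se_x^2 + se_y^2)
  abs(test_stat) > z          # TRUE indicates a statistically significant difference
}

significant_difference(x = 520, y = 470, se_x = 15, se_y = 18)
```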
As estimates have been rounded, discrepancies may occur between sums of the component items and totals.
Standards and classifications
Country of birth data are classified according to the Standard Australian Classification of Countries (SACC), 2011
Occupation data, including skill level of main job, are classified according to ANZSCO - Australian and New Zealand Standard Classification of Occupations, 2013, Version 1.2
Industry data are classified according to the Australian and New Zealand Standard Industrial Classification (ANZSIC), 2006 (Revision 2.0)
Education data are classified according to the Australian Standard Classification of Education (ASCED), 2001
Geography data are classified according to the Australian Statistical Geography Standard (ASGS), 2016
Agreement to work flexible hours
An agreement that may be either written or unwritten. A written agreement can be in the form of, but is not limited to, an individual written agreement between an employer and employee, or a Collective Agreement or Certified Agreement (CA) made directly between an employer and a group of employees.
Born in Australia
Includes people born in Australia, Norfolk Island and Australian External Territories.
Did not draw a wage or salary
Consists of people who worked in their own incorporated enterprise only i.e. Owner managers of incorporated enterprises (OMIEs).
Duration of employment in main job
The length of the current period of employment people had with their employer or in their own business. The length of time includes periods of paid leave, unpaid leave or strike.
Employed people
People aged 15 years and over who, during the reference week
worked for one hour or more for pay, profit, commission or payment in kind, in a job or business or on a farm (comprising employees, employers and own account workers), or
worked for one hour or more without pay in a family business or on a farm (i.e. contributing family workers), or
were employees who had a job but were not at work and were
away from work for less than four weeks up to the end of the reference week
away from work for more than four weeks up to the end of the reference week and received pay for some or all of the four week period to the end of the reference week
away from work as a standard work or shift arrangement
on strike or locked out
on workers' compensation and expected to return to their job, or
were employers or own account workers who had a job, business or farm, but were not at work.
Contributing family workers in their main job were excluded from the Characteristics of Employment Survey.
Employees are people who
worked for a public or private employer, and
received remuneration in wages or salary; or are paid a retainer fee by their employer or worked on a commission basis, for tips, piece-rates or payment in kind.
An employment agency is an organisation which is engaged in personnel search, or selection and placement of people for an employing organisation. The agency or firm may also be engaged in supply of their own employees to other employers, usually on a short-term basis. (See also labour hire firm).
Fixed-term contract
A contract of employment which specifies that the employment will be terminated on a particular date or event (e.g. completion of a project).
Full-time workers in main job
People who were employees and usually work 35 hours or more a week in their main job, or usually work fewer than 35 hours but worked 35 hours or more in their main job during the reference week.
Full-time workers
Employed people who usually worked 35 hours or more a week (in all jobs) and others who, although usually worked less than 35 hours a week, worked 35 hours or more during the reference week. These people were classified as full-time workers.
Holiday leave
The entitlement of an employee to paid holiday, paid vacation or paid recreation leave in their main job.
Hours paid for in main job
The number of hours for which employees and OMIEs were paid in their main job for the reference week, not necessarily the number of hours actually worked during the reference week (e.g. a person on paid leave for the week was asked to report the number of hours for which they were paid).
Hours usually worked
The number of hours usually worked in a week.
Hours actually worked
The number of hours actually worked during the reference week.
Independent contractors are people who operate their own business and who are contracted to perform services for others without having the legal status of an employee, i.e. people who are engaged by a client, rather than an employer to undertake the work. Independent contractors are engaged under a contract for services (a commercial contract), whereas employees are engaged under a contract of service (an employment contract).
Independent contractors' employment may take a variety of forms, for example, they may have a direct relationship with a client or work through an intermediary. Independent contractors may have employees, however they spend most of their time directly engaged with clients or on client tasks, rather than managing their staff.
An industry is a group of businesses or organisations that undertake similar economic activities to produce goods and/or services. In this publication, industry refers to ANZSIC Division as classified according to the Australian and New Zealand Standard Industrial Classification (ANZSIC), 2006 (Revision 2.0).
Labour hire firm
A labour hire firm is an organisation which is engaged in personnel search, or selection and placement of people for an employing organisation. The agency or firm may also be engaged in supply of their own employees to other employers, usually on a short-term basis. (See also employment agency).
Labour hire workers are people who found their job through a labour hire firm/employment agency and are paid by the labour hire firm/employment agency.
Level of highest educational attainment
Level of highest educational attainment identifies the highest achievement a person has attained in any area of study. It is not a measurement of the relative importance of different fields of study but a ranking of qualifications and other educational attainments regardless of the particular area of study or the type of institution in which the study was undertaken. It is categorised according to the Australian Standard Classification of Education, 2001.
Level of highest non-school qualification
A person's level of highest non-school qualification is the highest qualification a person has attained in any area of formal study other than school study. It is categorised according to the Australian Standard Classification of Education, 2001.
Main job
The job in which the most hours were usually worked.
Maternity, paternity or parental leave
The provision by an employer of paid maternity, paternity or parental leave.
Mean weekly earnings
The amount obtained by dividing the total earnings of a group by the number of people in that group.
Median weekly earnings
The amount which divides the distribution into two groups of equal size, one having earnings above and the other below that amount.
Multiple jobholder
Employed people who, during the reference week, worked in more than one job. Multiple jobholders exclude those who changed employer during the reference week. People who were unpaid voluntary workers or on unpaid trainee or work placement in their second job were excluded from the Multiple jobholder population.
Information on earnings in main job is collected from all multiple jobholders. Information on earnings in second job is only collected from multiple jobholders who were employees or OMIEs in their second job and were an employee or OMIE in their main job.
An occupation is a collection of jobs that are sufficiently similar in their title and tasks, skill level and skill specialisation which are grouped together for the purposes of classification. In this publication, occupation refers to Major Group and Skill Level as defined by ANZSCO - Australian and New Zealand Standard Classification of Occupations, 2013, Version 1.2.
On call
A shift arrangement whereby an employee, when not at work, is available to be contacted to resume work. An allowance may be paid to the employee for being on call.
Overtime
Work undertaken which is outside, or in addition to, ordinary working hours in main job, whether paid or unpaid.
Owner managers of incorporated enterprises (OMIEs)
People who work in their own incorporated enterprise, that is, a business entity which is registered as a separate legal entity to its members or owners (may also be known as a limited liability company). An owner manager of an incorporated enterprise may or may not hire one or more employees in addition to themselves and/or other owners of that business. See Status of Employment for more information.
Owner managers of unincorporated enterprises (OMUEs)
A person who operates his or her own unincorporated enterprise or engages independently in a profession or trade. An owner manager of an unincorporated enterprise may or may not hire one or more employees in addition to themselves and/or other owners of that business. See Status of Employment for more information.
Paid leave entitlements
The entitlement of employees to paid holiday leave or paid sick leave (or both) in their main job.
Part-time workers in main job
People who were employees and usually work fewer than 35 hours a week in their main job, and either did so during the reference week, or were not at work in the reference week.
Part-time workers
Employed people who usually worked fewer than 35 hours a week (in all jobs) and either did so during the reference week, or were not at work in the reference week.
Reference week
The week preceding the week in which the interview was conducted.
Second job
A job, other than the main job.
Sector of main job
Sector of main job is used to classify a respondent's employer as a public or private enterprise. The public sector includes all government units, such as government departments, non-market non-profit institutions that are controlled and mainly financed by government, and corporations and quasi-corporations that are controlled by government.
Shift work
A system of working whereby the daily hours of operation at the place of employment are split into at least two set work periods (shifts) for different groups of workers. Types of shifts include
Irregular shifts - Describes shifts that do not follow a set pattern
Regular shifts - Shifts worked to a set pattern of times. Regular shift times are presented as follows
morning shifts - between 6.00am and 12.00pm
afternoon shifts - between 12.00pm and 5.00pm, and
evening, night or graveyard shift - between 5.00pm and 6.00am.
Rotating shift - A shift arrangement, in which the shift worked changes periodically from one time period to another, for example from mornings or afternoons to evenings or nights.
Split shift - Occurs when the worked period is broken by an extended unpaid 'free' period, thereby constituting an extended working day consisting of two (or more) shifts.
Sick leave
The entitlement of an employee to paid sick leave in their main job.
People who are usually waiting to restart work or people who have had to restart work after being recalled, without additional pay and allowances.
Status of employment
Status of employment is determined by an employed person's position in relation to their job, and is in respect of a person's main job if they hold more than one job. Employed people are classified according to the reported relationship between the person and the enterprise for which they work, together with the legal status of the enterprise where this can be established. The groups include
Employee with paid leave entitlements
Employee without paid leave entitlements
Owner manager with employees (employer)
Owner manager of incorporated enterprise with employees
Owner manager of unincorporated enterprise with employees
Owner manager without employees (own account work)
Owner manager of incorporated enterprise without employees
Owner manager of unincorporated enterprise without employees, and
Contributing family worker.
Trade union
An organisation consisting predominantly of employees, the principal activities of which include the negotiation of rates of pay and conditions of employment for its members.
Trade union member
Employed people with membership in a trade union, usually in connection with their main job.
Weekly earnings
Amount of 'last total pay' for wage and salary earners prior to the interview, before taxation, salary sacrifice and other deductions had been made. For people paid other than weekly, earnings were converted to a weekly equivalent. No adjustment was made for any back payment of wage increases, prepayment of leave or bonuses, etc.
With paid leave entitlements
Employees who were entitled to paid holiday leave or paid sick leave (or both) in their main job.
Without paid leave entitlements
Employees who were not entitled to paid holiday leave and paid sick leave, or did not know whether they were entitled to paid holiday leave or paid sick leave in their main job.
The ABS has been conducting the Characteristics of Employment Survey, and its predecessor surveys, since 1975. While seeking to provide a high degree of consistency and comparability over time by minimising changes to the survey, sound survey practice requires careful and continuing maintenance and development to maintain the integrity of the data and the efficiency of the collection.
The changes which have been made to Characteristics of Employment, its predecessors and the monthly LFS have included changes in sampling methods, estimation methods, concepts, data item definitions, classifications, and time series analysis techniques.
Microdata published in DataLab for the first time
New topic page - Labour hire workers
Time series data for 1975 to 2003 added to Table 1 of Employee earnings. Most of the data has been revised to match the current series as closely as possible (e.g. excluding owner managers of incorporated enterprises). Some of the data prior to 1989 could not be revised due to the loss of the original survey data, but it was shown that in most cases the differences between revised and unrevised data were negligible (about $1 or less).
New Working arrangements Table - Employees on fixed-term contracts (Table 6)
New Employee earnings Table - Median earnings for multiple job holders (Table 7)
New items in TableBuilder to make it easier to identify employees on a fixed-term contract and labour hire workers.
Trend factor adjustment for estimation has been re-instated after being suspended in 2020
Improvements were made to the COE TableBuilder to simplify the way the data items were presented, increase the usability and increase the range of data items. Data items are now grouped under 18 conceptual groups.
Characteristics of Employment split into three releases: Employee earnings, Working arrangements, and Trade union membership.
Suspension of trend estimates and change to the use of forward factors for seasonally adjusted estimates as a result of COVID-19.
Table 14 added which consolidates all the Trade Union Membership data into one table.
A review of the imputation methodology used for earnings data highlighted quality gains from making further improvements in the quality checking of reported data prior to imputation. These improvements have been implemented and applied to the 2014-2018 period, resulting in revisions. The refinements resulted in negligible revisions to headline median time series, while the revisions to mean time series data have noticeably improved their coherence with other ABS earnings measures, particularly for male earnings.
The headline figures changed from a focus on mean earnings to a focus on median earnings.
Hourly earnings were introduced as a derived measure based on weekly earnings and weekly hours paid for.
Regular rebenchmarking was introduced to reflect the latest revisions to ERP data.
Trend factor adjustment was introduced to reduce the impact of seasonal and irregular effects on total employment, based on trend Labour Force Survey (LFS) estimates.
Estimates based on Skill level of main job was introduced. Under ANZSCO, every occupation is assigned a skill level from 1 (high-skilled) to 5 (low-skilled) based on the range and complexity of the particular set of tasks performed in that job.
Improvements were made to the imputation and outlier process for earnings data, relating to the addition of skill level of main job and hourly earnings information into the process. These improvements have been applied to the period 2014-2017 resulting in revisions over this period.
Data linking of characteristics between EEBTUM in August with FOES in November or SEW in May was introduced for revised estimates prior to 2014. These were subject to different seasonal impacts, which may result in an observable break in series between the historical data and data collected in COE. Trend factors have also been applied to these historical estimates to reduce the impact of seasonality on total employment estimates.
Estimates for periods 2004 to 2014 were revised to reflect new definition of employee. See Appendix: Status of employment and population concordance for more information. Estimates were also revised to match the latest ASGS Geography (Capital City and Balance of State), ANZSIC Industry and ANZSCO Occupation classifications.
Characteristics of Employment survey combines and replaces the Employee Earnings, Benefits and Trade Union Membership, Forms of Employment and Working Time Arrangements surveys.
From August 2014 onwards, employees exclude Owner Managers of Incorporated Enterprises (OMIEs). Prior to July 2014 (including in the Labour Force Survey and other household surveys) employees included OMIEs.
From August 2014 collection of earnings in second job was changed to match the collection of earnings in main job. Previously, earnings in second job was collected from respondents who were employees in their second job who actually worked some hours in their second job in the reference week. Earnings were reported for those hours actually worked in that job. From 2014, earnings in second job were collected from employees in their second job regardless of whether they worked in that job in the reference week. Earnings data and frequency of pay in that second job were subsequently collected. This change resulted in a break in series of earnings in all jobs and earnings in second job. Caution should be exercised when comparing second and all job earnings data from COE with previous years.
From August 2014 onwards, information about trade union membership is collected from all employed people. This was previously collected only from employees.
From 2007, earnings specifically include amounts salary sacrificed. In previous years, there was no explicit reference to the treatment of salary sacrifice; however, it is probable that some employees were already including amounts of salary sacrifice in their estimates of earnings, depending upon how their pay was reported. This change has resulted in a break in series. See Information Paper: Changes to ABS Measure of Employee Remuneration, Australia 2006.
Earnings for employees whose weekly earnings could not be determined are now imputed. Prior to 2004 these were excluded from estimates of mean or median weekly earnings. Care should be taken when comparing earnings data from 2004 onwards with earnings data prior to 2004. To compare the change in methodology from 2003 to 2004 see the August 2004 Employee Earnings, Benefits and Trade Union Membership.
Selecting an optimal number of degrees of freedom
Arnaud Wolfer
santaR is a Functional Data Analysis (FDA) approach where each individual's observations are condensed to a continuous smooth function of time, which is then employed as a new analytical unit in subsequent data analysis.
Fitting a smooth function to a time trajectory is equivalent to a denoising or signal extraction problem; a balance between the fitting of the raw data and the smoothing of the measurement error (or biological variability) must be found (see santaR theoretical background).
santaR parametrises the smoothness of individual smoothing-spline fits by fixing the number of effective degrees of freedom allowed (df). df can be described as the single meta-parameter that a user must select. Although they would be the 'ideal' solution, automated approaches for the selection of an optimal level of smoothing are still an area of active research; and while no definitive answer on the most suitable methodology can be provided, the following vignette presents the current state of these approaches and proposes a strategy to help a user make an informed decision.
As the true underlying function of time is inaccessible and the optimal df unknown, strategies for the estimation of smoothness and model selection must be devised. The fundamental question that must guide the tuning of the smoothing parameter is to know which observed curve features are "real" and which are spurious artefacts of the fitting procedure (over-fitted). Most FDA algorithms rely on an automated selection of the smoothness parameter; yet as smoothness is central to the denoising and signal extraction procedure, the quality of the results will be highly dependent on the tuning1. It can be noted that this challenge is shared with multiple other data analysis methods imposing a form of smoothness. The problem of selecting the window width or bandwidth in a kernel density estimator (KDE) is notably similar, resulting in shared literature and strategies2 3 4; even so, most reported algorithms require around 20 time-points, while most short trajectories hardly reach half this number.
The following vignette will focus on:
Intuitive parametrisation of smoothing
Automated model assessment and selection approaches
Latent time-trajectories for df selection
Smoothness and experimental design
Based on simulated data and diverse datasets, some intuitive rules for the selection of df can be established:
df controls the "complexity" of the model employed. A substantial difference can be found when going from 2 to 10, but very little change will take place when going from 10 to 50 (the model only gets more complex, but the general shape won't change).
More time points do not automatically require a higher df. More inflexions (more complex shape) could require a higher df if the number of points is sufficient (and the sampling frequency high).
A lower df value is often more suited and generalisable (less over-fitted).
If the df is, for example, 10, all individual trajectories with fewer than 10 time-points cannot be fitted and will be rejected.
On simulated data, the results (p-values) are resilient to most values of df, however the plots can look dramatically different.
Try multiple values of df on a subset of variables (using the GUI) and then select the fit that approximates the time evolution best without over-fitting (see the sketch after this list):
df=5 is a good starting point in most cases (even more so if there is less than 10 time-points)
If the number of time-points is large and the curves seem very under-fitted, df can be increased to 6, 7 or more. Values higher than 10 should rarely be required and will provide diminishing returns. df = number of time-points will result in a curve passing through all points (over-fitted).
If the number of points is lower or the trajectories seem over-fitted, df can be decreased to 4 or 3. (3 will be similar to a second degree polynomial, while 2 will be a linear model)
If the plots "look right" and don't seem to "invent" information between measured data-points, the df is close to optimal.
As will be demonstrated in the following sections (Automated model assessment and selection approaches), it does not seem possible to select the degrees of freedom automatically. A choice based on visualisation of the splines, while guarding against over-fitting and keeping in mind the "expected" evolution of the underlying process, seems the most reasonable approach.
In practice, the results are resilient to the df value selected across all variables, which is a function of the study design, such as the number of time-points, sampling rate and, most importantly, the complexity of the underlying function of time. While an automated approach cannot infer the study design from a limited set of observations, an informed user will intuitively achieve a more consistent fit of the data (see Smoothness and Experimental Design).
In classical parametric statistics, a reduction of the residual sum of squares (RSS) ensures the "closest" fit of the observations. However, such a least-squares fitting approach cannot be employed here, as the most over-fitting model would always be selected, resulting in an interpolation of the observations.
As the present measurements are noisy and the models considered flexible, no "truth" or certainty can be found in the data. This sets smoothing apart from classical parametric approaches and underlies the difficulty of automated smoothness selection.
The most common methodology to tune the smoothing parameter consists in evaluating a metric (a measure of the quality of the model) at different smoothing parameter values, and selecting the model that minimises/maximises the target metric. These metrics, like the fitting of a smoothing-spline itself (see santaR theoretical background Equation 1), must impose a trade-off between the goodness-of-fit and a penalty on the model complexity. Each metric will therefore potentially present a (different) bias depending on how the balance is established. This bias could ultimately define the metric's behaviour when a high or low number of samples is available. To evaluate a smoothing parameter for under- or over-fitting, the main model selection procedures employ the cross-validation score (CV), generalised cross-validation score (GCV), Akaike information criterion (AIC), Bayesian information criterion (BIC), or the corrected AIC (AICc).
Ordinary CV consists in dividing the N data-points into K groups or folds. Each fold is successively used as a test set, while a model is trained on the remaining data. After a model is fitted (for a given smoothing parameter) with the kth fold removed, the prediction error can be evaluated on the (unseen) kth fold (test set). When this procedure has been repeated for all folds, each data point has been employed once as a test point, and the prediction error can be averaged across all folds. This average prediction error provides an estimate of the curve's test error for a given smoothness parameter; the parameter that minimises the test error is subsequently selected5.
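A sketch of this ordinary k-fold cross-validation over candidate df values is shown below, again using base R smooth.spline() as a stand-in rather than the santaR implementation; the candidate values and fold number are illustrative.

```r
# Ordinary k-fold cross-validation over candidate df values (illustrative sketch).
cv_error_for_df <- function(time, value, df, k = 3) {
  folds <- sample(rep(seq_len(k), length.out = length(time)))
  fold_errors <- sapply(seq_len(k), function(fold) {
    train <- folds != fold
    fit   <- smooth.spline(time[train], value[train], df = df)
    pred  <- predict(fit, time[!train])$y
    mean((value[!train] - pred)^2)              # prediction error on the unseen fold
  })
  mean(fold_errors)                             # estimated test error for this df
}

# The df minimising the averaged test error would be retained, e.g.:
# sapply(3:6, function(df) cv_error_for_df(time, value, df))
# Note: with very short series the training folds may contain too few points to fit.
```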
GCV6 is a faster approximation of CV relying on the fact that, in some cases, the trace of the roughness matrix can be calculated more easily than the individual elements of its diagonal. GCV has been reported to possibly reduce the tendency of CV to under-smooth in some conditions7. GCV and CV scores (i.e. test errors) have been described as changing slowly as \(\lambda\) approaches the minimising value (corresponding to a wide range of \(\lambda\) resulting in close test error values), making optimal smoothness hard to define8.
The remaining three metrics commonly employed in model selection (i.e. AIC, BIC, AICc) are less computationally intensive as they rely on the log-likelihood of the model9 10. These metrics penalise models based on the number of parameters, to balance the goodness of fit versus model complexity. The Akaike information criterion11 12, defined as AIC = \(2k - 2\mathcal{L}\) , where \(\mathcal{L}\) is the log-likelihood and \(k\) the number of parameters, rewards goodness of fit (estimated by \(\mathcal{L}\)) while increasingly penalising each added parameter. Across candidate models (e.g. varying smoothing parameter employed), the model which minimises AIC is preferred.
The Bayesian information criterion13, defined as BIC = \(\log(n)k - 2\mathcal{L}\), where \(n\) is the sample size14 15 16, is an information criterion established in a Bayesian framework, taking the sample size into account when penalising a model fit. Like AIC, the model minimising BIC is preferred. AIC and BIC behave differently as the number of time-points changes. When \(N\) is small, BIC will select models that are too simple due to the heavy penalty on complexity. Conversely, as \(N \longrightarrow \infty\), AIC presents a tendency to select models that are too complex, while BIC presents an increasing probability of selecting the true model as the number of samples increases17.
In order to address possible limitations of AIC for small sample sizes, Hurvich and Tsai18 proposed a corrected AIC (AICc) defined as AICc = \(-2\mathcal{L} + 2k + \frac{2k(k+1)}{n-k-1}\), when the models are univariate, linear and have normally-distributed residuals. As the sample size increases, AICc tends towards AIC.
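The three criteria follow directly from their definitions above; a minimal R sketch, assuming the log-likelihood L, number of parameters k and sample size n are already available for a fitted model:

```r
# Information criteria computed from a model's log-likelihood L, its number of
# parameters k and the sample size n, following the definitions above.
aic  <- function(L, k)    2 * k - 2 * L
bic  <- function(L, k, n) log(n) * k - 2 * L
aicc <- function(L, k, n) aic(L, k) + (2 * k * (k + 1)) / (n - k - 1)

# With few samples AICc penalises extra parameters more heavily than AIC;
# as n grows, aicc() converges towards aic().
```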
No clear choice between CV, GCV, AIC, AICc or BIC exists.
Bar-Joseph et al.19 20 parametrise the smoothing as part of the optimisation problem. EDGE21 employs GCV, SME22 optimises AICc but offers the possibility of AIC and BIC, while FDA23 relies on CV and Déjean et al.24 on GCV.
A comparative study between these metrics showed that none uniformly outperformed the others25. This simulation, like most successful applications of automated selection, employs a high number of data-points per trajectory. If more than 20 observations are available, CV and GCV can provide a reliable fit of the data; however, most metabolic and gene expression trajectories seldom reach half this value. In practice, most FDA-derived approaches tailored for these datasets have reported critical limitations in the automated fitting procedure, with Ramsay et al. (employing GCV) describing it as "by no mean foolproof"26 27, highlighting the fact that automated smoothing selection is still an area of research. More critically, while it could be expected that different metabolites or genes should be modelled with different complexity (as different underlying processes could govern the value of different variables), in practice the automated selection of a different degree of smoothness for each variable can lead to an "inflation of significance due to overfitting"28 in EDGE, confirmed by the work of Déjean et al.29 where "tuning the smoothing parameter is a core problem that could not be achieved by the usual cross-validation method because of the poor quality of clustering results". To counter this challenge, statistical approaches have elected to parametrise a single smoothing value shared by all variables. As a result, the modelling approach assumes that all functional curves arise from the same underlying function of time.
Experimentation on metabolic data during the development of santaR confirmed these observations; automated approaches can be suitable when more than 20 data-points are considered, but fail for shorter time-series. This lack of reliability results in over-fitting when each variable is fitted separately, leading to erratic analysis results. It does not seem that automated approaches are currently able to reliably solve the fitting of short time-series, as enough information describing the underlying function's complexity might not be present in such a limited number of measurements. Ramsay et al.30 acknowledge this limitation and indicate that "if the data are not telling us all that much about \(\lambda\), then it is surely reasonable to use your judgment in working with values which seem to provide more useful results than the minimizing value does", highlighting the work of Chaudhuri and Marron 31 32.
Chaudhuri and Marron draw a parallel between the bandwidth controlling the smoothness in KDE and the "resolution" of the problem studied. As the model complexity increases (decreased smoothness, increased df), so does the resolution (in the time domain) at which we try to characterise the underlying function. A higher resolution means observing grainier features, but increases the risk of generating spurious sampling artefacts. A visual exploration of the possible model fits should help detect spurious fits based on a user's intuition of the resolution provided by the current sampling and knowledge of the system being studied. Automated methods were deemed essential due to possible user inexperience, the need for standardisation and the impracticality of manual annotation of large datasets33. However, the current failure of automation compromises efforts in standardisation and the use by totally inexperienced users. In turn, if the data is to be presented for visual assessment by a user (knowledgeable on the process under study, and who can appreciate the time resolution provided by the experimental design), a smaller representation of the dataset must be generated.
To reduce the scale of manual annotation, the present strategy proposes to extract and fit the latent time-trajectories across all variables. Using PCA, the data matrix can be decomposed and the most representative temporal profiles (given by the eigenvectors that explain the majority of variance in the data) identified. By inspecting the fitting of these eigen-projections over a range of df, the most suitable and informative smoothing parameter to apply across all variables can then be selected. The underlying assumption is that, if a satisfactory fit can be obtained on the latent time-trajectories, the selected smoothness should suit all variables.
Originally devised by Alter et al. [34] and subsequently employed by Storey et al. in EDGE [35], the so-called "eigen-genes" (eigen-vectors in the gene space) are extracted after singular value decomposition (SVD) of the data. The top eigen-genes represent the directions in the gene space that explain the most variance. These applications subsequently employed an automated fit of the top eigen-genes [36, 37, 38], while we propose to visually select the optimal fit.
While conceptually described, none of these approaches clearly explains how eigen-genes are computed. Alter et al. [39] perform the SVD on a matrix with genes (variables) as rows and arrays as columns. As arrays correspond to successive measurements, they are ordered by time, and each row therefore represents a variable's time-trajectory. This approach does not address the case where replicates (multiple subjects) have been investigated; when two different experiments were conducted, the second experimental design was added as additional columns. As trajectories are stored as rows, the second experiment is thereby treated as additional time-points of the original experimental condition. The top eigen-genes then extract the "interaction" pattern between the two experimental designs instead of the top variance in the first or the second experiment, greatly hampering the interpretation of the results (see Supplementary Information of Alter et al. [40]). EDGE employs a similar methodology, without providing more information on how the case of multiple measurements and multiple individuals is addressed.
Taking inspiration from these approaches, santaR proposes to generate eigen-splines, representing the most representative time-trajectories (i.e. explaining the most variance) in the variable space (Fig. 1).
Fig. 1: Schematic representation of the procedure for the extraction of latent time-trajectories (or eigen-trajectories) explaining the most variance in the variable space. The underlying hypothesis is that a satisfactory parametrisation of the fit of these latent time-trajectories should suit the fitting of most trajectories in the dataset. PCA: Principal component analysis, PC: Principal component, Ind: Individual.
First, as the interest resides in the functional shape across subjects and variables, all observations for a given variable (irrespective of time or subject) are auto-scaled. Auto-scaling corresponds to mean centering and unit variance (UV) scaling of the data. Mean centering ensures the mean value of a variable is 0, adjusting for "baseline" differences across variables. UV-scaling adjusts for differences in variance across variables, which could otherwise bias the subsequent PCA. After auto-scaling, each variable presents a standard deviation of one and a mean of zero.
For each variable, a matrix of measured values for each individual (rows) and time-point (columns) is generated. The time employed comprises all the unique time-points across all variables; if no measurement is available for a given time and individual, the cell is left empty. Depending on the regularity of sampling and study design, these matrices can potentially present some sparsity.
All the variables' data matrices are concatenated vertically. The resulting data matrix contains all the measured values for each variable in each individual (rows) at every observed time-point (columns). Each row now represents the measured values of a variable for a given subject, corresponding to a time-trajectory. As the scale and baseline differences across variables have been removed, the shape information of these trajectories can be compared. Principal component analysis of the transposed data matrix is then executed. The maximal number of principal components is the minimum of the number of time-points (columns) and the number of individual variable measurements (rows), minus one. In practice, the number of unique time-points is most often the smallest dimension, resulting in a low number of principal components. Each principal component reflects a mixture of individual variable measurements (loadings), while the scores give a projection of each time-point onto each principal component. For each principal component (representing a direction of variance in the variable space), the score of each time-point can be extracted, resulting in an eigen-trajectory that can be fitted.
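The whole extraction procedure can be sketched in a few lines of base R. This is only a minimal illustration of the idea, not the santaR implementation; the long-format data.frame d, with columns var (variable), ind (individual), time and value, is a hypothetical input, and sparse trajectories are simply dropped here rather than handled.
# Step 1: auto-scale each variable (mean 0, unit variance) across all of its observations
d$value <- ave(d$value, d$var, FUN = function(x) (x - mean(x, na.rm = TRUE)) / sd(x, na.rm = TRUE))
# Steps 2-3: one row per (variable, individual) trajectory, one column per unique time-point
timePoints <- sort(unique(d$time))
trajList   <- split(d, interaction(d$var, d$ind, drop = TRUE))
trajMat    <- t(sapply(trajList, function(r) r$value[match(timePoints, r$time)]))
trajMat    <- trajMat[complete.cases(trajMat), ]   # sketch only: drop sparse trajectories instead of imputing
# Step 4: PCA of the transposed matrix; the scores project each time-point onto each principal component
pca          <- prcomp(t(trajMat), center = FALSE, scale. = FALSE)
eigenTraj    <- pca$x                                  # rows = time-points, columns = eigen-trajectories
varExplained <- pca$sdev^2 / sum(pca$sdev^2)
# each eigen-trajectory can then be inspected over a range of df, e.g.
fit <- smooth.spline(timePoints, eigenTraj[, 1], df = 5)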
Obvious limitations do exist. First, as all variables are considered, the obtained trajectories depend on the set of variables provided. Secondly, as the data matrix can potentially be sparse, the PCA algorithm might have to handle missing values, while no measure of success (or warning when assumptions are breached) is available. This could potentially undermine the generation of eigen-trajectories; however, we must bear in mind that relying on a latent description of the underlying data might ultimately be a flawed process. One could even ask whether, historically, such approaches have not been adopted precisely to prevent an automated algorithm from over-fitting a set of time-trajectories.
With a limited set of potential eigen-trajectories to fit (usually the number of unique time-points minus one), santaR can provide multiple measures to help select the most adequate degree of smoothness. The value of df minimising the CV score, GCV score, AIC, BIC or AICc can be found for each eigen-trajectory. The resulting df values can be averaged across all eigen-trajectories or weighted by the variance explained by each principal component. In practice, the result of automated fitting can vary greatly from one eigen-trajectory to the next (corresponding to under- and over-fitting), with no consensus emerging across all components. Additionally, no single metric outperforms the others in quality of fit and reliability. A sketch of this per-eigen-trajectory selection is given below.
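For illustration, the per-eigen-trajectory automated selection described above could be computed with base R's smooth.spline, which minimises GCV by default and ordinary leave-one-out CV when cv = TRUE; eigenTraj, timePoints and varExplained are the hypothetical objects from the previous sketch.
# df selected by GCV (cv = FALSE) and leave-one-out CV (cv = TRUE) for each eigen-trajectory
nPC <- ncol(eigenTraj)
dfGCV <- dfCV <- numeric(nPC)
for (i in seq_len(nPC)) {
  dfGCV[i] <- smooth.spline(timePoints, eigenTraj[, i], cv = FALSE)$df
  dfCV[i]  <- smooth.spline(timePoints, eigenTraj[, i], cv = TRUE)$df
}
# plain and variance-weighted averages across eigen-trajectories
mean(dfGCV)
weighted.mean(dfGCV, w = varExplained)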
Trajectories can also be visually inspected for varying df, while taking into account a final constraint: the number of time-points N must be greater than or equal to df in order to fit a smoothing spline. In other words, with a high number of degrees of freedom and missing data, a substantial number of trajectories might have to be rejected. In practice, most model selection algorithms tend to over-fit the data and would require discarding most metabolic trajectories.
The smoothness of a time-trajectory should be determined from the noisy realisation of the underlying function of time. Yet such data-driven approaches often fail to balance the goodness-of-fit of the data against the model's complexity. With too few informative measurements available, the limitations of automated smoothness selection undermine the algorithm, resulting in unreliable fits of the data and over-fitted results. A first solution to this challenge is to assume that a single trend underlies all variables' observations, and that therefore a single degree of smoothness should be applied across all time-trajectories. Based on this assumption, and on the expectation that an informed user will achieve a more consistent fit of the data than an automated algorithm on short noisy time-series, santaR considers df as the single meta-parameter a user must choose prior to analysis, balancing the fitting of raw observations against the smoothing of biological variability and measurement errors. In turn, this explicit selection of the model complexity increases the reproducibility of results and reduces the execution time.
To help visualise the time-trajectories present in a dataset and make an informed judgment, we presented an approach based on PCA to extract and fit latent time-trajectories across all variables. While multiple goodness-of-fit measures are proposed to help guide df selection, some useful pointers can be highlighted. First, df should not be too high, so as not to reject too many trajectories (df > N), but also so as not to generate interpolation splines (df = N). Two data points can always define a line, three points a parabola and four a cubic function; however, for a meaningful fit, the number of measurements should exceed the minimum number of points required to define the shape. Secondly, based on the work of Chaudhuri and Marron [41, 42], smoothing can be thought of as a question of resolution: the question is not how many time-points are available, or even the range of time covered, but which functional trend can be properly characterised with the current sampling frequency.
Take the example of a patient followed over 5 days: it can reasonably be expected that the metabolic profile will reflect a degradation of the condition, and possibly an amelioration. One measurement per day, equalling 6 time-points, could reflect such a function of time, containing one (or at most two) inflections. If the question were to characterise the circadian evolution of a metabolite on top of the "long term" evolution of the patient (a "high resolution" question), the present sampling would not be sufficient. A more adequate study design to resolve this question would require a measurement roughly every 4 to 6 hours, resulting in 20 to 30 observations (conversely, for a "low resolution" event like disease progression, such a high sampling frequency is not necessary, although it could help the fitting) (Fig. 2).
## ----------------------------------------------------
## 2 levels of resolution
library(ggplot2)
library(gridExtra)
library(grid)
# make curve (cos) with added circadian variation (sin)
x1 = seq(-0.5, 5.5, 0.01)
x11 = (x1*(2/5)*pi)+pi
y1 = 2.5*cos(x11+((pi)/10)) + sin(x1*2*pi) + 3.5
# one measurement per day, sampled from the same underlying function
x2 = seq(0, 5, 1)
y2 = 2.5*cos((x2*(2/5)*pi)+pi+((pi)/10)) + sin(x2*2*pi) + 3.5
# one measurement every 4 hours, sampled from the same underlying function
x3 = seq(0, 5, 1/6)
y3 = 2.5*cos((x3*(2/5)*pi)+pi+((pi)/10)) + sin(x3*2*pi) + 3.5
## generate a spline at each resolution------------------------------
time = seq(0, 5, 0.01)
tmp_fit0 = smooth.spline( x1, y1, df=601) # original (dense, noise-free) curve
tmp_fit1 = smooth.spline( x2, y2, df=6)   # daily sampling
tmp_fit2 = smooth.spline( x3, y3, df=31)  # 4h sampling
# predict each fitted spline over the full time range
pred0 = predict( object=tmp_fit0, x=time )
pred1 = predict( object=tmp_fit1, x=time )
pred2 = predict( object=tmp_fit2, x=time )
tmpPred0 = data.frame( x=pred0$x, y=pred0$y)
tmpPred1 = data.frame( x=pred1$x, y=pred1$y)
tmpPred2 = data.frame( x=pred2$x, y=pred2$y)
# plot each time trajectory------------------------------------------
tmpDaily = data.frame( x=x2, y=y2)
tmp4h = data.frame( x=x3, y=y3)
# top panel: true curve with both sets of sampled points
p0 <- ggplot(NULL, aes(x), environment = environment()) + theme_bw() + xlim(0,5) + ylim(0,7.1) + theme(axis.title.x = element_blank(), axis.ticks = element_blank(), axis.text.x = element_blank(), axis.text.y = element_blank())
p0 <- p0 + geom_line( data=tmpPred0, aes(x=x, y=y), linetype=1, color='black' )
p0 <- p0 + geom_point(data=tmp4h, aes(x=x, y=y), size=2.5, color='grey60' )
p0 <- p0 + geom_point(data=tmpDaily, aes(x=x, y=y), size=2.5, color='grey25' )
p0 <- p0 + ylab('Variable response')
# bottom panel: splines fitted at each sampling resolution
p2 <- ggplot(NULL, aes(x), environment = environment()) + theme_bw() + xlim(0,5) + ylim(0,7.1) + theme(axis.ticks = element_blank(), axis.text.y = element_blank())
p2 <- p2 + geom_point(data=tmp4h, aes(x=x, y=y), size=2.5, color='springgreen3' )
p2 <- p2 + geom_line( data=tmpPred2, aes(x=x, y=y), linetype=1, color='springgreen3' )
p2 <- p2 + geom_point(data=tmpDaily, aes(x=x, y=y), size=2.5, color='mediumblue' )
p2 <- p2 + geom_line( data=tmpPred1, aes(x=x, y=y), linetype=1, color='mediumblue' )
p2 <- p2 + xlab('Time') + ylab('Variable response')
grid.arrange(p0, p2, ncol=1)
Fig. 2: Sampling of a biological perturbation at multiple resolutions. The true smooth evolution of a variable is observed over 5 days (top; black line: real evolution, black points: low-resolution measurements, grey points: high-resolution measurements), representing the perturbation of a metabolite on top of its usual circadian rhythm.
Daily sampling (6 measurements) enables a general approximation of the metabolite evolution, but not of the circadian variation (bottom, blue line).
A higher sampling frequency over the same timeframe (every 4 hours) additionally enables the description of the circadian variations of the metabolite, corresponding to a more complex (higher-resolution) time evolution (bottom, green line).
In other words, df is ultimately a function of the study design, relating to the number of time-points, the sampling rate and, most importantly, the complexity of the underlying function of time (Fig. 3). While a data-driven approach may seem appealing, an algorithm cannot currently infer this information; only the user can provide it.
Fig. 3: Visualisation of the adequate sampling rate (study design) in light of the complexity of the expected function of time.
Four simple and two complex functions of time are sampled (with error) at a maximum of five time-points (red dots).
While five time-points are sufficient to satisfactorily approximate a simple function, the number of measurements must be increased (grey dots) for more complex time-trajectories, such as the longitudinal sampling of a patient along a surgical journey (with possible recovery and further complications). Using only five time-points for such a complex trajectory would require perfectly timed, error-free measurements, and would lead to over-fitting of the observations.
Without going as far as Hodrick and Prescott [43, 44], who advocated that the smoothing parameter \(\lambda\) should always take a particular value (1600) regardless of the data [45], a similarity can be noted in the df employed by multiple methods when applied to short biological time-series. The applications of EDGE found the optimal dimensionality of the spline basis to be similar across experiments [46]. Likewise, for experiments containing 5-10 time-points and investigating functions of time presenting one or two inflections (the limit in resolution of an experiment of this size), 4 to 6 degrees of freedom seem to be a good starting point to suitably describe the time-trajectory without over-fitting, interpolating or excessively rejecting trajectories. Finally, an increase in sampling frequency does not automatically imply that a more complex model should be fitted; if the expected functional form over the defined time-frame is "simple", the model complexity should reflect it.
Déjean, S., Martin, P. G. P., Baccini, A. & Besse, P. Clustering time-series gene expression data using smoothing spline derivatives. Eurasip Journal on Bioinformatics and Systems Biology 2007, 10 (2007)↩
Ramsay, J. & Silverman, B. W. Functional Data Analysis (Springer, New York, 2005)↩
Chaudhuri, P. & Marron, J. S. SiZer for Exploration of Structures in Curves. Journal of the American Statistical Association 94, 807-823 (1999)↩
Chaudhuri, P. & Marron, J. S. Scale Space View of Curve Estimation. The Annals of Statistics 28, 408-428 (2000)↩
Hastie, T., Tibshirani, R. & Friedman, J. The Elements of Statistical Learning. (Springer, 2009)↩
Craven, P. & Wahba, G. Smoothing noisy data with spline functions - Estimating the correct degree of smoothing by the method of generalized cross-validation. Numerische Mathematik 31, 377-403 (1978)↩
Ramsay, J., Hooker, G. & Graves, S. Functional data analysis with R and MATLAB (Springer-Verlag, 2009)↩
Rice, J. A. & Wu, C. O. Nonparametric mixed effects models for unequally sampled noisy curves. Biometrics 57, 253-259 (2001)↩
James, G. M. & Hastie, T. J. Functional linear discriminant analysis for irregularly sampled curves. Journal of the Royal Statistical Society, Series B 63, 533-550 (2001)↩
Akaike, H. Information Theory And An Extension Of The Maximum Likelihood Principle By Hirotogu Akaike. Second International Symposium on Information Theory, 267-281 (1973)↩
Akaike, H. A New Look at the Statistical Model Identification. IEEE Transactions on Automatic Control 19, 716-723 (1974)↩
Schwarz, G. Estimating the Dimension of a Model. The Annals of Statistics 6, 461-464 (1978)↩
Wu, H. & Zang, J.-T. Nonparametric Regression Methods for Longitudinal Data Analysis: mixed-effects modeling approaches (Wiley-VCH Verlag GmbH & Co. KGaA, Hoboken, NJ, USA, 2006)↩
Hedeker, D. R. & Gibbons, R. D. Longitudinal Data Analysis (Wiley-Interscience, 2006)↩
Azari, R., Li, L. & Tsai, C.-L. Longitudinal data model selection. Computational Statistics & Data Analysis 50, 3053-3066 (2006)↩
Hurvich, C. M. & Tsai, C.-L. Regression and time series model selection in small samples. Biometrika 76, 297-307 (1989)↩
Bar-Joseph, Z., Gerber, G., Simon, I., Gifford, D. K. & Jaakkola, T. S. Comparing the continuous representation of time-series expression profiles to identify differentially expressed genes. Proceedings of the National Academy of Sciences of the United States of America 100, 10146-10151 (2003)↩
Ernst, J., Nau, G. J. & Bar-Joseph, Z. Clustering short time series gene expression data. Bioinformatics 21, 159-168 (2005)↩
Storey, J. D., Xiao, W., Leek, J. T., Tompkins, R. G. & Davis, R. W. Significance analysis of time course microarray experiments. Proceedings of the National Academy of Sciences of the United States of America 102, 12837-42 (2005)↩
Berk, M., Ebbels, T. M. D. & Montana, G. A statistical framework for biomarker discovery in metabolomic time course data. Bioinformatics (Oxford, England) 27, 1979-85 (2011)↩
Lee, T. C. M. Smoothing parameter selection for smoothing splines: A simulation study. Computational Statistics and Data Analysis 42, 139-148 (2003)↩
Ramsay, J., Hooker, G. & Graves, S. Functional data analysis with R and MATLAB (Springer-Verlag, 2009)↩
Ramsay, J. O. & Silverman, B. Applied functional data analysis: methods and case studies (eds Ramsay, J. O. & Silverman, B. W.) (Springer New York, New York, NY, 2002)↩
Silverman, B. Some Aspects of the Spline Smoothing Approach to Non-Parametric Regresion Curve Fitting. Journal of the Royal Statistical Society, Series B 47, 1-52 (1985)↩
Alter, O., Brown, P. O. & Botstein, D. Singular value decomposition for genome-wide expression data processing and modeling. Proceedings of the National Academy of Sciences of the United States of America 97, 10101-10106 (2000)↩
Hong, F. & Li, H. Functional hierarchical models for identifying genes with different time-course expression profiles. Biometrics 62, 534-544 (2006)↩
Hodrick, R. J. & Prescott, E. C. Postwar U.S. Business Cycles: An Empirical Investigation. Carnegie Mellon University discussion paper (1980)↩
Hodrick, R. J. & Prescott, E. C. Postwar US Business Cycles : An Empirical Investigation. Journal of Money, Credit and Banking 29, 1-16 (1997)↩
Paige, R. L. & Trindade, A. A. The Hodrick-Prescott Filter: A special case of penalized spline smoothing. Electronic Journal of Statistics 4, 856-874 (2010).↩
Storey, J. D., Xiao, W., Leek, J. T., Tompkins, R. G. & Davis, R. W. Supporting Appendix Significance analysis of time course microarray experiments. 1-27↩
Tandem repeats ubiquitously flank and contribute to translation initiation sites
Ali M. A. Maddi1,
Kaveh Kavousi1,
Masoud Arabfard2,
Hamid Ohadi3 &
Mina Ohadi4
BMC Genomic Data volume 23, Article number: 59 (2022) Cite this article
While the evolutionary divergence of cis-regulatory sequences impacts translation initiation sites (TISs), the implication of tandem repeats (TRs) in TIS selection remains largely elusive. Here, we employed the TIS homology concept to study a possible link between TRs of all core lengths and repeats with TISs.
Human, as reference sequence, and 83 other species were selected, and data was extracted on the entire protein-coding genes (n = 1,611,368) and transcripts (n = 2,730,515) annotated for those species from Ensembl 102. Following TIS identification, two different weighing vectors were employed to assign TIS homology, and the co-occurrence pattern of TISs with the upstream flanking TRs was studied in the selected species. The results were assessed in 10-fold cross-validation.
On average, every TIS was flanked by 1.19 TRs of various categories within its 120 bp upstream sequence, per species. We detected statistically significant enrichment of non-homologous human TISs co-occurring with human-specific TRs. On the contrary, homologous human TISs co-occurred significantly with non-human-specific TRs. 2991 human genes had at least one transcript, TIS of which was flanked by a human-specific TR. Text mining of a number of the identified genes, such as CACNA1A, EIF5AL1, FOXK1, GABRB2, MYH2, SLC6A8, and TTN, yielded predominant expression and functions in the human brain and/or skeletal muscle.
We conclude that TRs ubiquitously flank and contribute to TIS selection at the trans-species level. Future functional analyses, such as a combination of genome editing strategies and in vitro protein synthesis may be employed to further investigate the impact of TRs on TIS selection.
Translational regulation can be global or gene-specific, and most instances of translational regulation affect the rate-limiting initiation step [1, 2]. While mechanisms that result in the selection of translation initiation sites (TISs) are largely unknown, conservation of the alternative TIS positions and the associated open reading frames (ORFs) between human and mouse cells [3] implies physiological significance of alternative translation. A vast number of human protein-coding genes consist of alternative TISs, which are selected based on complex and yet not fully understood scanning mechanisms [3,4,5,6]. The alternative TISs can result in various protein structures and functions [7, 8].
While recent findings indicate that TISs are predominantly a result of molecular error [9], the probability of using a particular TIS differs among mRNA molecules, and can be dynamically regulated over time [10]. Selection of TISs and the level of translation and protein synthesis depend on the cis regulatory elements in the mRNA sequence and its secondary structure such as the formation of hair-pins, stem loops, and thermal stability [11,12,13,14,15,16]. In fact, the ribosomal machinery has the potential to scan and use several ORFs at a particular mRNA species [17].
A tandem repeat (TR) is a sequence of one or more DNA base pairs (bp) that is repeated on a DNA stretch. While TRs have profound biological effects in evolutionary, biological, and pathological terms [18,19,20,21,22,23,24], the effect of these intriguing elements on protein translation remains largely (if not totally) unknown. There are limited publications indicating that when located at the 5′ or 3′ untranslated region (UTR), short tandem repeats (STRs) (core units of 1–6 bp) can modulate translation, the effect of which has biological and pathological implications [25,26,27,28,29]. For example, eukaryotic initiation factors are clamped onto polypurine and polypyrimidine motifs in the 5′ UTRs of target RNAs, and influence translation [30]. Abnormal STR expansions impact TIS selection in a number of neurological disorders [31, 32].
Based on a TIS homology approach, we previously reported a link between STRs and TIS selection [33]. Here, we extend our study to TRs of all core lengths and repeats, an additional weighing vector (vector W2), several additional species, improved sequence retrieval methods, and a newly developed software and database for data collection and storage.
TRs are ubiquitous cis elements flanking TISs
A total of 1,611,368 protein-coding genes, 2,730,515 transcripts and 3,283,771 TRs were investigated across the 84 selected species, of which 22,791 genes, 93,706 transcripts, and 99,818 TRs belonged to the human species (Additional Table 1). On average, there were 1.64 transcripts and 1.97 TRs per gene, and 1.19 TRs per transcript, per species (Fig. 1). The highest ratios of transcripts and TRs per gene (4.11 and 4.38, respectively) belonged to human. Human ranked 59th among the 84 species with respect to the TR/transcript ratio (Fig. 2) (Additional Table 1).
Relative abundance of genes, transcripts, and TRs with respect to each other. In this chart, we compared the numbers of genes, transcripts, and TRs in different species relative to each other. The vertical axis shows what percentage of the total number of genes plus transcripts plus TRs in each species belongs to genes, transcripts, or TRs
Ratios of genes, transcripts, and TR counts for each species. The horizontal axis shows the percentage of each entity, and the vertical axis shows each species. Species can be cross-referenced in Additional Table 1
Across the 93,706 identified protein-coding transcripts in the human genome, there were 50,169 transcripts in which the TIS was flanked by at least one TR (53.54% of protein-coding transcripts). At a similarly high rate, of the 22,791 identified protein-coding genes in the human genome, 15,256 genes had at least one transcript in which the TIS was flanked by a TR (66.94% of human protein-coding genes). 2850 different types of TRs were identified in the human genome, of which 1504 types (52.77%) were human-specific; across TR categories 1-4, we detected 660, 101, 339 and 404 types of human-specific TRs, respectively, the most abundant of which are represented in Table 1.
Table 1 The top most abundant human-specific TRs flanking TISs. It should be noted that human-specificity applied in the context of the relevant TISs
TRs differentially co-occur with TISs
We employed two weighing settings (vectors) for designating homologous vs. non-homologous TISs in human vs. other species. One of those settings was the same as in our previous approach (vector W1) [33]. In both settings, there was significant co-occurrence of human-specific TRs with non-homologous human TISs, and non-human-specific TRs with homologous human TISs (Fisher's exact p < 0.01) (Fig. 3). The results were replicated in 10-fold cross-validation (Fig. 4) (Additional Table 2).
Average of 10 experiments to examine co-occurrence patterns between TRs and TISs in each of the four TR categories. Each histogram shows the number of homologous vs. non-homologous TISs, based on two different weighing methods (vectors). HS-TR = human-specific tandem repeat, NHS-TR = non-human-specific tandem repeat, TIS = translation initiation site
10-fold cross-validation of co-occurrence patterns between TRs and TISs in TR Categories 1–4. Each histogram shows the number of homologous vs. non-homologous TISs, based on two different weighing methods (vectors), as follows: category 1 (a), category 2 (b), category 3 (c), and category 4 (d) (Please see text for the description of TR categories 1 to 4). HS-TR = human-specific tandem repeat, NHS-TR = non-human-specific tandem repeat, TIS = translation initiation site
Biological and evolutionary implications
In 15,256 human genes, at least one TIS was flanked by a TR, of which in 2991 genes those TRs were human-specific (Additional Tables 3 & 4). A sample of those genes is listed in Table 2, text mining [34] of a number of which yielded predominant expression and functions in the human brain and/or skeletal muscle, such as CACNA1A, EIF5AL1, FOXK1, GABRB2, MYH2, SLC6A8, and TTN. These are examples of expression enrichment in tissues that are frequently subject to human-specific evolutionary processes. However, the nervous system and skeletal muscle may not be the only tissues, gene functions in which are associated with human-specific characteristics.
Table 2 Example of human genes (represented by gene symbol), which contain human-specific TRs
We employed the Needleman-Wunsch algorithm [35] to further examine the relevance of our findings. To that end, comparison of proteins between human and three other species, consisting of chimpanzee, macaque, and mouse (RESTful API at https://www.ebi.ac.uk/Tools/psa/emboss_needle [36]), revealed significantly lower homology for the human proteins whose TISs were flanked by human-specific TRs (Fig. 5).
Protein homology check of TISs flanked by human-specific and non-specific TRs. Every chart shows the distribution of similarity abundance between human proteins and three species, mouse, macaque, and chimpanzee, in the same gene. For each panel, the first row shows the distribution that was constructed by BLASTing human proteins, TISs of which were flanked by human-specific TRs. Similarly, the second row of each panel shows the distribution that was constructed by BLASTing human proteins, TISs of which were flanked by non-human-specific TRs. The Needleman Wunsch algorithm (upper panel) was used as a complementary measure to our two weighing methods (methods 1 and 2). In each method, we detected a significant difference in the distribution. TIS = translation initiation site, TR = tandem repeat
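For readers who prefer a local alternative to the EMBOSS needle web service used here, the percentage identity of a global (Needleman-Wunsch) alignment can also be obtained in R with the Bioconductor Biostrings package; this is only an illustrative stand-in, and the two peptide sequences below are arbitrary placeholders rather than study data.
library(Biostrings)
data(BLOSUM62, package = "Biostrings")
# placeholder peptide sequences (not study data)
humanProt <- AAString("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
mouseProt <- AAString("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEAQ")
aln <- pairwiseAlignment(humanProt, mouseProt,
                         type = "global",              # Needleman-Wunsch global alignment
                         substitutionMatrix = BLOSUM62,
                         gapOpening = 10, gapExtension = 0.5)
pid(aln)  # percent sequence identity of the global alignment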
Our findings provide prime evidence of a link between TRs of all core lengths and repeats and TIS selection, the mechanisms of which are currently virtually unknown. Our approach was based on homology search, which reliably identifies "homologous" TISs by detecting excess similarity [37]. By searching identical gene names across the selected species, our approach encompassed orthologous and paralogous genes.
While the scope of our previous publication [33] was limited to the STRs, in the current study, we investigated TRs of all core lengths (ranging from 1 to 60 nucleotides) and repeats. Another advantage was employment of an improved method for retrieving the upstream flanking sequences. Moreover, whereas BLAST of CDS and cDNA sequences were used to extract the TISs and upstream flanking sequences in the previous study, here we used script programming on the Biomart web application, which is more reliable and accurate. In this method, we specified the gene name, transcript, and length of the upstream flanking sequence for the Biomart web application [38], by using an automated script. In comparison with our previously implemented methods, the result of the automated script is more accurate and comprehensive. An additional weighing method was also implemented in the current study to further examine the relevance of our homology assignment approach.
It is possible that asymmetric and stem-loop structures, which are inherent properties of repeat sequences result in genetic marks that enhance TIS selection. Asymmetric structures have recently been reported to be linked to various biological functions, such as replication and initiation of transcription start sites [39]. Recent studies implicate that the local folding and co-folding energy of the ribosomal RNA (rRNA) and the mRNA correlates with codon usage estimators of expression levels in model organisms such as chloroplast [40]. It may be speculated that RNA structures formed as a result of folding in the TR regions function as marks for TISs.
Among a number of options for future studies, genome editing strategies such as CRISPR/Cas9 [41] in combination with in vitro translation engineering, using cell-free protein synthesis (also known as in vitro protein synthesis or CFPS) and/or PURE system (i.e. protein synthesis using purified recombinant elements) [42, 43] may be useful to investigate the impact of TRs on TIS selection and protein synthesis.
We conclude that TRs ubiquitously flank TIS sequences and contribute to TIS selection at the trans-species level. Future functional analyses, such as a combination of genome editing strategies and in vitro protein synthesis are warranted to investigate the impact of TRs on TIS selection.
All sequences, species, and gene datasets collected in this study were based on Ensembl 102 (http://nov2020.archive.ensembl.org/index.html); the scheme of the study is depicted in Fig. 6.
Scheme representing the steps taken for data collection and analysis
84 species were selected, which encompassed orders of vertebrates and one non-vertebrate species (D. melanogaster) (Fig. 2). Throughout the study, all species were compared with the human sequence, as reference. The list of species was extracted via RESTful API, in Java language. In parallel, a list of available gene datasets of the selected species was collected by using the "biomaRt" package [44, 45] in R language. In the next step, in each selected species, all protein-coding transcripts of protein-coding genes were extracted. To that end, identical gene names were used across the selected species to group orthologous/paralogous genes in those species.
Subsequently, the 120 bp upstream flanking sequence of all annotated protein coding TISs were retrieved and analyzed. All steps of data collection were performed by querying on the Biomart Ensembl tool via RESTful API, which was implemented in the Java language, except fetching the primary list of available species and gene datasets. For each species, its name, common name and display name were retrieved. For each gene in each species, its gene name, Ensembl ID and the annotated transcript IDs were retrieved, and finally, for each transcript, the coding sequence, the TIS, the upstream flanking sequence of the TIS, and the protein sequence were retrieved.
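A query of this kind can be sketched with the biomaRt R package; note that this is not the Java/RESTful pipeline used in the study, the gene ID below is only an example, and the exact attribute set of the study is not reproduced.
library(biomaRt)
mart <- useEnsembl(biomart = "genes", dataset = "hsapiens_gene_ensembl")
# 120 bp upstream of the coding start (i.e. flanking the TIS) for an example human Ensembl gene ID
flank <- getSequence(id = "ENSG00000141837", type = "ensembl_gene_id",
                     seqType = "coding_gene_flank", upstream = 120, mart = mart)
# per-transcript flanks would instead use type = "ensembl_transcript_id" and seqType = "coding_transcript_flank"
prot <- getSequence(id = "ENSG00000141837", type = "ensembl_gene_id",
                    seqType = "peptide", mart = mart)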
All collected data were stored in a MySQL database, which is accessible at https://figshare.com/search?q=10.6084%2Fm9.figshare.15405267.
A candidate sequence was considered a TR if it complied with the following four rules: (1) for mononucleotide cores, the number of repeats should be ≥6. (2) for 2–9 bp cores, the number of repeats should be ≥3. (3) for other core lengths, the number of repeats should be ≥2. (4) TRs of the same core sequence should not overlap if they were in the same upstream flanking sequence.
We categorized the TRs based on the core lengths as follows: Category 1: 1–6 bp, Category 2: 7–9 bp, Category 3: 10–15 bp, and Category 4: ≥16 bp. This was an arbitrary classification to allow for possible differential effect of various core length ranges in evolutionary and biological terms.
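As an illustration of these rules, candidate TRs can be searched with backreference regular expressions; the following R sketch is not the detection code used in the study, and rule 4 (non-overlap of identical cores) is not enforced here.
# find candidate TRs in an upstream sequence according to rules 1-3
findTRs <- function(dna) {
  patterns <- c("([ACGT])\\1{5,}",       # rule 1: 1 bp core repeated >= 6 times
                "([ACGT]{2,9})\\1{2,}",  # rule 2: 2-9 bp core repeated >= 3 times
                "([ACGT]{10,60})\\1+")   # rule 3: 10-60 bp core repeated >= 2 times
  hits <- lapply(patterns, function(p) regmatches(dna, gregexpr(p, dna, perl = TRUE))[[1]])
  unique(unlist(hits))
}
# core-length categories 1-4, applied to the repeat core
trCategory <- function(core) {
  n <- nchar(core)
  if (n <= 6) 1 else if (n <= 9) 2 else if (n <= 15) 3 else 4
}
findTRs("AAAAAAAGTGTGTGTACGCACGCACGCTTT")
# candidate hits include "AAAAAAA", "GTGTGTGT" and "ACGCACGCACGC"; different rules can
# return overlapping candidates for the same stretch, which rule 4 would resolve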
Retrieval of data across species
Using the enhanced query form (Additional Table 5) on the Biomart Ensembl tool along with the RESTful API tools, a Java package was developed to retrieve, store, and analyze the data and information. The source code and the Java package are available at: https://github.com/Yasilis/STRsMiner-JavaPackage_PaperSubmission/tree/develop.
Identification of human-specific TRs
The 120 bp upstream flanking sequence of TISs of all annotated protein-coding transcripts of protein-coding genes were screened in 84 species for the presence of TRs in four categories based on the TR core length. The data obtained on the human TRs was compared to those of other species, and the TRs which were specific to human were identified.
To identify human-specific TRs, in the first step, the selected genes of all species were grouped based on gene name. Therefore, all homologous genes, consisting of orthologous and paralogous genes, were placed in one group. In each group, all the TRs located in the upstream flanking sequence of every transcript were extracted. In the next step, the extracted TRs were grouped and specified according to the species. All the TRs that were detected in more than one species were removed. The remaining TRs belonged to only one species and were specific to that species. Subsequently, we identified the human-specific TRs for a specific gene name by selecting the human species. This process was repeated for each group of genes and the results were aggregated together to identify all the TRs which were specific and non-specific in reference to human.
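The grouping logic can be sketched as follows, assuming a hypothetical data.frame trTable with one row per detected TR and columns gene, species and tr (the TR sequence); the species label "homo_sapiens" is likewise an assumption of the example.
# TRs observed in more than one species within a gene group are discarded;
# the remainder are specific to the single species in which they occur
specificTRs <- do.call(rbind, lapply(split(trTable, trTable$gene), function(grp) {
  nSpecies <- table(unique(grp[, c("species", "tr")])$tr)   # number of species carrying each TR
  grp[grp$tr %in% names(nSpecies)[nSpecies == 1], ]
}))
# human-specific TRs, per gene
humanSpecific <- specificTRs[specificTRs$species == "homo_sapiens", ]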
Evaluation of TIS homology
Identifying the degree of homology between two transcripts requires assigning a weight value to each position of the sequence. Weighted homology scoring was performed with two different weight settings, the weighing vectors W1 (originally used by our group to study a link between STRs and TIS selection) [33] and W2, distinguished by k = {1, 2}. These two weighing vectors are defined as follows (Eqs. 1 and 2):
$$W_1=\{0,\ 25,\ 25,\ 25,\ 12.5,\ 12.5\}$$
$$W_2=\{0,\ 20,\ 20,\ 20,\ 20,\ 20\}$$
If M is the first methionine of the two peptide sequences (position 0 in the two weighing vectors), then for the five following positions, represented by i in the formula (Eq. 9), we defined five weight coefficients \(w_{k,1}\) to \(w_{k,5}\), given in the \(W_k\) vector.
Homology of the first five amino acids (excluding the initial methionine), and, therefore the TIS, was inferred based on the value of pair-wise similarity scoring between human, as reference, and other species. A similarity of ≥50% was considered "homology". This threshold was achieved following BLASTing three thousand random pair-wise similarity checks of the initial five amino acids of randomly selected proteins as previously described [33].
Scoring human-specific and non-specific TR co-occurrences with homologous and non-homologous TISs
In both weighing methods, the initial five-amino-acid sequences (excluding the initial methionine) of the human TISs that were flanked by human-specific and non-specific TRs were BLASTed against all the initial five amino acids (excluding the initial methionine) of the orthologous/paralogous genes in the remaining 83 species. The above was aimed at comparing the number of events in which human-specific and non-specific TRs co-occurred with homologous and non-homologous TISs in reference to human. For computing the number of homologous and non-homologous TISs, we needed a number of definitions. We defined G as the set of all human protein-coding genes; g denotes a gene belonging to the set G (Eq. 3).
$$G=\left\{g|g\ is\ a\ human\ protein\ coding\ gene\right\}$$
We also defined TH(g) and \({T}_{\overline{H}}(g)\) as the set of all annotated transcripts in a gene g, which belonged to human and other species, respectively (Eqs. 4 and 5).
$$T_H(g)=\{t \mid t \text{ is a human protein-coding transcript belonging to gene } g\}$$
$$T_{\overline{H}}(g)=\{t \mid t \text{ is a protein-coding transcript of gene } g \text{ that does not exist in human}\}$$
Moreover, \(T^{*}\) denoted all filtered transcripts of T which had at least one human-specific TR in the 120 bp interval upstream of the TIS, while \(T^{+}\) denoted all filtered transcripts of T which had at least one TR in the 120 bp interval upstream of the TIS.
The following formula was developed to measure the degree of similarity of two peptides in the two weighing settings (Eq. 6).
$$H_k=\sum_{g\in G}\ \sum_{t_a\in T_H^{\ast}(g)}\ \sum_{t_b\in T_{\overline{H}}^{+}(g)}\Theta_k\left(t_a,t_b\right)$$
In this formula, Θ is a binary function that decides whether the transcripts are homologous or not, and k = {1, 2} refers to the weight setting. If the function S measures the similarity score, Θ can be defined as follows (Eq. 7):
$${\Theta}_k\left({t}_a,{t}_b\right)=\left\{\ \begin{array}{c}1, if\ {S}_k\left({t}_a,{t}_b\right)\ge 50\\ {}0,o.w.\end{array}\right.$$
For calculating the similarity score, we used another binary function, Φ, defined as follows (Eq. 8):
$$\Phi \left(x,y\right)=\left\{\begin{array}{c}1, if\ x=y\\ {}0,o.w.\end{array}\right.$$
This function takes two amino acids as arguments and returns 1 if they are the same and 0 otherwise. Therefore, \(S_k(t_a, t_b)\) is defined by the following formula (Eq. 9):
$${S}_k\left({t}_a,{t}_b\right)=\sum_{i=2}^6{w}_{k,i}\Phi \left({P}_i\left({t}_a\right),{P}_i\left({t}_b\right)\ \right)$$
In this function, the ith amino acid in the sequence of transcript t is denoted by \(P_i(t)\).
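A literal R transcription of Eqs. 7-9 with the two weighing vectors may make the scoring concrete; the example peptides are arbitrary, and treating the vector entries as weights for positions 1-6 of the protein (position 1 being the initial methionine with weight 0) is our reading of the equations.
W <- list(W1 = c(0, 25, 25, 25, 12.5, 12.5),
          W2 = c(0, 20, 20, 20, 20, 20))
# S_k: weighted similarity of the first six residues (the initial methionine carries weight 0)
similarityScore <- function(pa, pb, w) {
  a <- strsplit(substr(pa, 1, 6), "")[[1]]
  b <- strsplit(substr(pb, 1, 6), "")[[1]]
  sum(w[2:6] * (a[2:6] == b[2:6]))   # Phi is the element-wise equality
}
# Theta_k: a pair of TISs is called homologous when the score reaches 50
isHomologous <- function(pa, pb, w) similarityScore(pa, pb, w) >= 50
isHomologous("MKTAYIAK", "MKTVYIAK", W$W1)   # score 75 >= 50, hence TRUE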
We replicated the comparisons in 10-fold cross-validation. In each fold, genes with non-human-specific TRs were randomly selected to match the number of genes in the group with human-specific TRs. This process was repeated for the two methods (two different weight vectors) and for each of the four TR categories. For each category and weighing method, the mean of the results across folds was calculated as the final result. Finally, Fisher's exact test was run for each fold (Additional Table 2).
The datasets generated and analyzed during this study are available in the figshare repository, with the identifier https://doi.org/10.6084/m9.figshare.15405267. The source code and software are also available in the GitHub repository (https://github.com/Yasilis/STRsMiner-JavaPackage_PaperSubmission/tree/develop).
TIS:
Translation initiation site
TR:
Tandem repeat
STR:
Short tandem repeat
ORF:
Open reading frame
UTR:
untranslated region
HS-TR:
Human-specific TR
NHS-TR:
Non-human-specific TR
Sonenberg N, Hinnebusch AG. Regulation of translation initiation in eukaryotes: mechanisms and biological targets. Cell. 2009;136(4):731–45.
Gebauer F, Hentze MW. Molecular mechanisms of translational control. Nat Rev Mol Cell Biol. 2004;5(10):827–35.
Lee S, Liu B, Lee S, Huang S-X, Shen B, Qian S-B. Global mapping of translation initiation sites in mammalian cells at single-nucleotide resolution. Proc Natl Acad Sci. 2012;109(37):E2424–32.
Na CH, Barbhuiya MA, Kim M-S, Verbruggen S, Eacker SM, Pletnikova O, et al. Discovery of noncanonical translation initiation sites through mass spectrometric analysis of protein N termini. Genome Res. 2018;28(1):25–36.
Andreev DE, O'Connor PB, Loughran G, Dmitriev SE, Baranov PV, Shatsky IN. Insights into the mechanisms of eukaryotic translation gained with ribosome profiling. Nucleic Acids Res. 2017;45(2):513–26.
Studtmann K, Ölschläger-Schütt J, Buck F, Richter D, Sala C, Bockmann J, et al. A non-canonical initiation site is required for efficient translation of the dendritically localized Shank1 mRNA. PLoS One. 2014;9(2):e88518.
Fukushima M, Tomita T, Janoshazi A, Putney JW. Alternative translation initiation gives rise to two isoforms of Orai1 with distinct plasma membrane mobilities. J Cell Sci. 2012;125(Pt 18):4354–61.
Bazykin GA, Kochetov AV. Alternative translation start sites are conserved in eukaryotic genomes. Nucleic Acids Res. 2011;39(2):567–77.
Xu C, Zhang J. Mammalian alternative translation initiation is mostly nonadaptive. Mol Biol Evol. 2020;37(7):2015–28.
Boersma S, Khuperkar D, Verhagen BMP, Sonneveld S, Grimm JB, Lavis LD, et al. Multi-color single-molecule imaging uncovers extensive heterogeneity in mRNA decoding. Cell. 2019;178(2):458–472 e419.
Li JJ, Chew G-L, Biggin MD. Quantitative principles of cis-translational control by general mRNA sequence features in eukaryotes. Genome Biol. 2019;20(1):1–24.
Martinez-Salas E, Lozano G, Fernandez-Chamorro J, Francisco-Velilla R, Galan A, Diaz R. RNA-binding proteins impacting on internal initiation of translation. Int J Mol Sci. 2013;14(11):21705–26.
Cenik C, Cenik ES, Byeon GW, Grubert F, Candille SI, Spacek D, et al. Integrative analysis of RNA, translation, and protein levels reveals distinct regulatory variation across humans. Genome Res. 2015;25(11):1610–21.
Babendure JR, Babendure JL, Ding J-H, Tsien RY. Control of mammalian translation by mRNA structure near caps. Rna. 2006;12(5):851–61.
Master A, Wójcicka A, Giżewska K, Popławski P, Williams GR, Nauman A. A novel method for gene-specific enhancement of protein translation by targeting 5'UTRs of selected tumor suppressors. PLoS One. 2016;11(5):e0155359.
Jagodnik J, Chiaruttini C, Guillier M. Stem-loop structures within mRNA coding sequences activate translation initiation and mediate control by small regulatory RNAs. Mol Cell. 2017;68(1):158–170. e153.
Kochetov AV, Allmer J, Klimenko AI, Zuraev BS, Matushkin YG, Lashin SA. AltORFev facilitates the prediction of alternative open reading frames in eukaryotic mRNAs. Bioinformatics. 2017;33(6):923–5.
Hannan AJ. Tandem repeats mediating genetic plasticity in health and disease. Nat Rev Genet. 2018;19(5):286–98.
Afshar H, Adelirad F, Kowsari A, Kalhor N, Delbari A, Najafipour R, et al. Natural selection at the NHLH2 core promoter exceptionally long CA-repeat in human and disease-only genotypes in late-onset neurocognitive disorder. Gerontology. 2020;66(5):514–22.
Press MO, McCoy RC, Hall AN, Akey JM, Queitsch C. Massive variation of short tandem repeats with functional consequences across strains of Arabidopsis thaliana. Genome Res. 2018;28(8):1169–78.
Bagshaw ATM. Functional mechanisms of microsatellite DNA in eukaryotic genomes. Genome Biol Evol. 2017;9(9):2428–43.
Abe H, Gemmell NJ. Evolutionary footprints of short tandem repeats in avian promoters. Sci Rep. 2016;6:19421.
Ohadi M, Valipour E, Ghadimi-Haddadan S, Namdar-Aligoodarzi P, Bagheri A, Kowsari A, et al. Core promoter short tandem repeats as evolutionary switch codes for primate speciation. Am J Primatol. 2015;77(1):34–43.
Mohammadparast S, Bayat H, Biglarian A, Ohadi M. Exceptional expansion and conservation of a CT-repeat complex in the core promoter of PAXBP1 in primates. Am J Primatol. 2014;76(8):747–56.
Rovozzo R, Korza G, Baker MW, Li M, Bhattacharyya A, Barbarese E, et al. CGG repeats in the 5'UTR of FMR1 RNA regulate translation of other RNAs localized in the same RNA granules. PLoS One. 2016;11(12):e0168204.
Todur SP, Ashavaid TF. Association of Sp1 tandem repeat polymorphism of ALOX5 with coronary artery disease in Indian subjects. Clin Transl Sci. 2012;5(5):408–11.
Shirokikh NE, Spirin AS. Poly(a) leader of eukaryotic mRNA bypasses the dependence of translation on initiation factors. Proc Natl Acad Sci U S A. 2008;105(31):10738–43.
Usdin K. The biological effects of simple tandem repeats: lessons from the repeat expansion diseases. Genome Res. 2008;18(7):1011–9.
Kumari S, Bugaut A, Huppert JL, Balasubramanian S. An RNA G-quadruplex in the 5′ UTR of the NRAS proto-oncogene modulates translation. Nat Chem Biol. 2007;3(4):218–21.
Leppek K, Das R, Barna M. Functional 5′ UTR mRNA structures in eukaryotic translation regulation and how to find them. Nat Rev Mol Cell Biol. 2018;19(3):158–74.
Krauß S, Griesche N, Jastrzebska E, Chen C, Rutschow D, Achmüller C, et al. Translation of HTT mRNA with expanded CAG repeats is regulated by the MID1–PP2A protein complex. Nat Commun. 2013;4(1):1–9.
Glineburg MR, Todd PK, Charlet-Berguerand N, Sellier C. Repeat-associated non-AUG (RAN) translation and other molecular mechanisms in fragile X tremor Ataxia syndrome. Brain Res. 2018;1693:43–54.
Arabfard M, Kavousi K, Delbari A, Ohadi M. Link between short tandem repeats and translation initiation site selection. Human genomics. 2018;12(1):1–11.
Thierry-Mieg D, Thierry-Mieg J. AceView: a comprehensive cDNA-supported gene and transcripts annotation. Genome Biol. 2006;7(1):1–14.
Needleman SB, Wunsch CD. A general method applicable to the search for similarities in the amino acid sequence of two proteins. J Mol Biol. 1970;48(3):443–53.
Madeira F, Park YM, Lee J, Buso N, Gur T, Madhusoodanan N, et al. The EMBL-EBI search and sequence analysis tools APIs in 2019. Nucleic Acids Res. 2019;47(W1):W636–41.
Pearson WR. An introduction to sequence similarity ("homology") searching. Curr Protoc Bioinformatics. 2013; Chapter 3:Unit3 1.
Kinsella RJ, Kähäri A, Haider S, Zamora J, Proctor G, Spudich G, et al. Ensembl BioMarts: a hub for data retrieval across taxonomic space. Database. 2011;2011.
Georgakopoulos-Soares I, Mouratidis I, Parada GE, Matharu N, Hemberg M, Ahituv N. Asymmetron: a toolkit for the identification of strand asymmetry patterns in biological sequences. Nucleic Acids Res. 2021;49(1):e4.
Ezra SC, Tuller T. Modeling the effect of rRNA-mRNA interactions and mRNA folding on mRNA translation in chloroplasts. Computational and structural Biotechnol J. 2022.
Ran F, Hsu PD, Wright J, Agarwala V, Scott DA, Zhang F. Genome engineering using the CRISPR-Cas9 system. Nat Protoc. 2013;8(11):2281–308.
Gregorio NE, Levine MZ, Oza JP. A user's guide to cell-free protein synthesis. Methods and protocols. 2019;2(1):24.
Hammerling MJ, Krüger A, Jewett MC. Strategies for in vitro engineering of the translation machinery. Nucleic Acids Res. 2020;48(3):1068–83.
Durinck S, Spellman PT, Birney E, Huber W. Mapping identifiers for the integration of genomic datasets with the R/Bioconductor package biomaRt. Nat Protoc. 2009;4(8):1184.
Durinck S, Moreau Y, Kasprzyk A, Davis S, De Moor B, Brazma A, et al. BioMart and Bioconductor: a powerful link between biological databases and microarray data analysis. Bioinformatics. 2005;21(16):3439–40.
Laboratory of Complex Biological systems and Bioinformatics (CBB), Department of Bioinformatics, Institute of Biochemistry and Biophysics (IBB), University of Tehran, Tehran, Tehran, 1417614411, Iran
Ali M. A. Maddi & Kaveh Kavousi
Chemical Injuries Research Center, Systems Biology and Poisonings Institute, Baqiyatallah University of Medical Sciences, Tehran, Tehran, 1435916471, Iran
Masoud Arabfard
School of Physics and Astronomy, University of St. Andrews, St. Andrews, KY16 9SS, UK
Hamid Ohadi
Iranian Research Center on Aging, University of Social Welfare and Rehabilitation Sciences, Tehran, Tehran, 1985713871, Iran
Mina Ohadi
Ali M. A. Maddi
Kaveh Kavousi
A.M.A.M performed and analyzed the bioinformatics data. M.A. and H.O. contributed to data collection. K.K. and M.O. conceived, designed, and supervised the project. M.O. wrote the manuscript with input from all authors. The author(s) read and approved the final manuscript.
Correspondence to Kaveh Kavousi or Mina Ohadi.
The authors declare no competing interest.
Additional file 1 Additional Table 1.
The number of genes, transcripts and extracted TRs for each species. The rows of the table are sorted from large to small, based on the ratio of the number of TRs to the number of genes and transcripts in each species.
Additional Table 2. The number of events/co-occurrences of homologous and non-homologous TISs (in human as reference) with the two groups of human-specific and non-specific TRs and their p-values, calculated by Fisher's exact test in each method across TR categories 1, 2, 3 and 4.
Additional Table 3. The list of all human genes and their Ensembl gene IDs, which contained human-specific TRs in their TIS-flanking sequence for TR categories 1, 2, 3, and 4.
Additional Table 4. The list of all human-specific TRs and their abundance.
Additional Table 5. The list of queries that were used to communicate with the Ensembl data repositories.
Maddi, A.M.A., Kavousi, K., Arabfard, M. et al. Tandem repeats ubiquitously flank and contribute to translation initiation sites. BMC Genom Data 23, 59 (2022). https://doi.org/10.1186/s12863-022-01075-5
Received: 04 April 2022
Genome-scale
TIS selection
2017, 11: 43-56. doi: 10.3934/jmd.2017003
Positive metric entropy in nondegenerate nearly integrable systems
Dong Chen
Department of Mathematics, The Pennsylvania State University, University Park, PA 16802, USA
Received May 25, 2016 Revised October 06, 2016 Published December 2016
Fund Project: The author is supported by Dmitri Burago's department research fund 42844-1001
The celebrated KAM theory says that if one makes a small perturbation of a non-degenerate completely integrable system, we still see a huge measure of invariant tori with quasi-periodic dynamics in the perturbed system. These invariant tori are known as KAM tori. What happens outside KAM tori draws a lot of attention. In this paper we present a Lagrangian perturbation of the geodesic flow on a flat 3-torus. The perturbation is $C^\infty$ small but the flow has a positive measure of trajectories with positive Lyapunov exponent. The measure of this set is of course extremely small. Still, the flow has positive metric entropy. From this result we get positive metric entropy outside some KAM tori.
Keywords: KAM theory, Finsler metric, dual lens map, Hamiltonian flow, perturbation, metric entropy.
Mathematics Subject Classification: Primary: 37A35, 37J40; Secondary: 53C60.
Citation: Dong Chen. Positive metric entropy in nondegenerate nearly integrable systems. Journal of Modern Dynamics, 2017, 11: 43-56. doi: 10.3934/jmd.2017003
Figure 1. A non-ergodic DBG torus
Figure 2. Graphs of $u_S$, $u_C$ and $u$
Figure 3. Graph of $\rho$
Figure 4. Construction of $\phi_1$
|
CommonCrawl
|
What is metrology?
To measure is to compare
Defining references
Calculating uncertainties
Ensuring traceability
Universality of measurements and applications
The word "metrology" derives from the ancient Greek roots for "measure" and "treatise" (study). By extension, it denotes the science of measurement.
The question asked here is therefore "what is measurement?".
Let's first think about how to make a measurement.
Measurement is present in everyday life: in the kitchen to weigh ingredients, in a medical analysis laboratory to diagnose diabetes, in institutions that qualify the level of urban pollution, in research laboratories working on particle detection, and in industry for control and manufacturing. Each time, the process is the same: two things are compared.
For this, two prerequisites are necessary. The elements must be comparable (the method used must allow a reliable comparison) and the instrument used must be reliable and qualified.
This principle is well illustrated by a Roberval scale, an instrument that can measure masses (Fig. 1).
It compares the unknown mass of an object (the "objet à mesurer", i.e. the object to be measured) with that of a known reference; in metrology, the term "mass comparator" is used. Once the comparison is established, the instrument yields a numerical value of the sought mass, quantified in the appropriate unit.
Figure 1: Schematic representation of a Roberval scale
All measures are based on this principle:
For a dimensional measurement, the ruler or tape measure is both the reference and the measuring instrument;
For a mass measurement, a set of "weights" is the reference and the instrument is a scale;
For an electrical current measurement, a multimeter is both the reference and the measuring instrument.
The international system of units, the SI, in force since 1960, allows all physical or chemical phenomena to be described on the basis of its constituent units and quantities.
The measurement of a quantity Q comprises:
A numerical value {Q}
A unit [Q]
A measurement uncertainty
It is expressed as the product of a numerical value and a unit: $Q=\left\{Q\right\}\times\left[Q\right]$
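As a purely illustrative sketch (not part of the RNMF material), a measurement result can be represented in code as exactly this triplet of numerical value, unit and uncertainty; the class name and numbers below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    """Illustrative container: a result Q = {Q} x [Q] plus its standard uncertainty."""
    value: float        # numerical value {Q}
    unit: str           # unit [Q], e.g. "kg"
    uncertainty: float  # standard uncertainty, expressed in the same unit

    def __str__(self) -> str:
        return f"({self.value} ± {self.uncertainty}) {self.unit}"

# Example: a mass compared against a known reference on a mass comparator
m = Measurement(value=0.25013, unit="kg", uncertainty=0.00002)
print(m)  # (0.25013 ± 2e-05) kg
```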
The notion of reference is essential. Since these references are used for ongoing comparisons with other objects, they must offer guarantees on the following points:
Durability: their stability over time must be sufficient and adapted to the need for measurement;
Uniformity: sampling should not alter its accuracy;
Accessibility: the comparison must be easy to make, by everyone, and at the moment when the need is expressed.
All these elements must be delivered with the best possible accuracy.
The mission of the national laboratories is to deliver the first reference level, with the best uncertainty at national level, for the 7 basic quantities, called "national references".
The SI makes it possible to ensure the unification of measurements on a worldwide scale. The national references are the practical implementation of the definitions of each base unit and the high level references for derived units. As the BIPM is the depositary of these definitions, they are recognized by all Member States of the Metre Convention. The references can be of 3 different types:
A practical application of the definition of the unit (case of the second);
A reference material (e.g. prototype K for the kilogram until 2019);
A reference procedure or instrumentation (radiation therapy references).
In practice, each laboratory of the French national metrology network is the depositary of the national references in its field: it is in charge of developing these references (within the limits of the funding granted to it) and of guaranteeing their maintenance over the long term.
Part of the bench of the national reference for the candela at the LNE-LCM/CNAM
New generation strontium optical clock © Observatoire de Paris / LNE-SYRTE
In some cases, depending on public utility or political orientations, specific references, in addition to those corresponding to the seven basic quantities, are developed and maintained as necessary.
This is the case for certain derived units, such as hygrometry or flowmetry, required by air conditioning or refrigeration equipment manufacturers. It is also the case for the nuclear industry and radiotherapy, which require a multitude of references in terms of radioactivity measurement and dosimetry.
Having references is essential to guarantee the results provided by instrumentation.
The references with the best uncertainties at the French national level are those delivered by the French national metrology network.
Once the comparison with an available reference is established, the instrument yields the sought numerical value of the quantity under consideration. However, an uncertainty remains.
This uncertainty quantifies the doubt (or error) assessed on the measurement and its realization.
Without an uncertainty, any comparison of two results is meaningless, and it becomes perilous to guarantee conformity with specifications or requirements (sanitary, industrial, climatic...).
The uncertainty is related to the measurement operations and takes into account all the parameters that can induce an error on the final value: from the choice of the reference to the measurement range, via the experimental conditions. A whole series of parameters must be analysed precisely before establishing what is called an "uncertainty budget", which makes it possible to calculate the overall measurement uncertainty.
As an example, here is the typical uncertainty budget that can be drawn up when one wishes to measure the molar fraction of water vapour in a standard gas with a mass comparator. The quantities that enter the measurement procedure are listed in the second column of the table below (molar mass, nitrogen flow, molar volume...) and, since there is no correlation between them, the combined uncertainty is the square root of the sum of the squared standard uncertainties, each weighted by the square of its sensitivity coefficient.
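For uncorrelated input quantities, this rule — the law of propagation of uncertainty described in the GUM — can be written compactly as follows, with $u(X_i)$ the standard uncertainty of input $X_i$, $C(X_i)$ its sensitivity coefficient and $f$ the measurement model relating the inputs to the measurand:

$$ u_c = \sqrt{\sum_{i} \big(C(X_i)\,u(X_i)\big)^{2}}, \qquad C(X_i)=\frac{\partial f}{\partial X_i} $$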
Example of uncertainty budget for the measurement of the molar fraction of water vapour in a standard gas

Symbol  | Variable (Xi)             | Unit     | Standard deviation u(Xi) | Sensitivity C(Xi) | C(Xi)·u(Xi)
Tx      | Permeation rate           | g/min    | 1.40E-09                 | 2.40E+08          | 3.37E-01
Cstab   | Permeation rate stability | g/min    | 1.00E-09                 | 2.40E+08          | 2.40E-01
M       | Molar mass H2O            | g/mol    | 2.00E-03                 | -3.53E+00         | -7.05E-03
d       | Nitrogen flow rate        | l/min    | 1.29E-02                 | -1.23E+01         | -1.58E-01
Vn      | Molar volume              | l/mol    | 1.90E-04                 | 2.84E+00          | 5.39E-04
Fres    | Residual H2O              | nmol/mol | 9.17E-02                 | 1.00E+00          | 9.17E-02
Ffiltre | H2O filter                | nmol/mol | 5.77E-02                 | 1.00E+00          | 5.77E-02
Each calculation of uncertainty is specific to the measurement principle under consideration, and the method for establishing such a budget is described in the guide prepared by the Joint Committee for Guides in Metrology (JCGM) under the auspices of the BIPM: "Evaluation of measurement data - Guide to the expression of measurement uncertainty" (GUM). It is available in electronic form on the BIPM website.
In general, the uncertainty of the measurement result is expressed as a standard deviation, which takes into account all sources of measurement uncertainty. There are two types of uncertainties:
Type A evaluations of measurement uncertainty, which result from an evaluation by statistical analysis of series of observations;
Type B evaluations of measurement uncertainty, which result from the physical phenomena and properties involved, from calibration certificate values or from other specifications.
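As a minimal, purely illustrative sketch (the readings, the certificate value and the coverage factor below are invented), the two types of evaluation and their combination could look like this:

```python
import math
import statistics

# Type A: statistical analysis of a series of repeated observations (illustrative readings, in grams)
readings = [10.012, 10.015, 10.011, 10.014, 10.013, 10.012]
mean = statistics.mean(readings)
u_type_a = statistics.stdev(readings) / math.sqrt(len(readings))  # standard deviation of the mean

# Type B: taken from other knowledge, e.g. a calibration certificate quoting
# an expanded uncertainty U = 0.004 g with coverage factor k = 2
u_type_b = 0.004 / 2

# Combined standard uncertainty, assuming uncorrelated contributions and unit sensitivities
u_combined = math.sqrt(u_type_a**2 + u_type_b**2)
print(f"mean = {mean:.4f} g, u_A = {u_type_a:.5f} g, "
      f"u_B = {u_type_b:.5f} g, u_c = {u_combined:.5f} g")
```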
In order to allow a reliable comparison of two measurements in any location, they must be derived from a connection to an established reference. Traceability plays an essential role in monitoring and assessing product quality, and is important for safety and consumer protection. This traceability must therefore be guaranteed. For this, a traceability chain is defined.
Schematic of the metrological traceability chain
The traceability chain schematizes the dissemination of measurement standards: it is an uninterrupted chain of references and instrumentation that guarantees the connection to the unit with a given uncertainty. It is usually represented in the form of a pyramid.
At the top of the pyramid is the ultimate reference of the unit: its definition.
Next, the national metrology institute, which ensures that this definition is carried out and put into practice at national level, with the best possible level of uncertainty. This is true for base units but also for derived units.
Next in the chain come accredited laboratories, which operate instruments that guarantee an uninterrupted connection to the national laboratory's reference.
From top to bottom of the pyramid, the number of connections therefore increases (the width of the pyramid), reflecting the growing quantity of available reference standards, associated with increasingly large uncertainties.
This traceability chain can be applied to each SI unit.
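The growth of uncertainty down the pyramid can be sketched as follows; the chain levels and numbers are invented for illustration, and uncorrelated contributions are assumed to add in quadrature at each link:

```python
import math

# Illustrative uncertainty contributions (arbitrary relative units) added at each
# link of the chain, from the national reference down to the field instrument.
chain = [
    ("national reference",     1.0),
    ("accredited laboratory",  3.0),
    ("in-house reference",     8.0),
    ("field instrument",      20.0),
]

accumulated = 0.0
for level, contribution in chain:
    # uncorrelated contributions accumulate in quadrature along the chain
    accumulated = math.sqrt(accumulated**2 + contribution**2)
    print(f"{level:<25s} u = {accumulated:6.1f}")
```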
However, although high technology has invaded everyone's daily life, it is not always necessary to use sophisticated equipment to perform a measurement. Measuring time more precisely than $10^{-8}$ s, or measuring the length of a frame to the nanometre, may be pointless.
The uncertainty required for a given measurement is first that which meets the need.
It is therefore neither necessary nor feasible for every measurement to be directly traceable to the NMIs (a demand the NMIs could not meet given their resources); hence the need for a network of accredited laboratories to carry out these measurements in more restricted fields of application. To guarantee the measurements thus made, the notion of traceability is essential.
To be sure of making a good measurement, the instrumentation and its drifts must be followed through regular calibrations. This ensures the traceability of measurements to the highest-level standards (national, international).
The references with the best uncertainties on the French scale are those delivered by the French metrology network.
Each nation has its own chain of traceability, usually on an independent basis. In France, the first links in this chain are provided by LNE and the other member laboratories of the national metrology network. LNE, as pilot, represents France on an international scale. To ensure the validity of measurements, all national laboratories conduct reference comparisons with their foreign counterparts to guarantee consistency and universality of measurements worldwide.
Any measurement requires multiple comparisons to ensure its validity. In all fields of daily life, we therefore find the notions of metrology and traceability, in order to:
Guarantee the chemical composition of products, and therefore their toxicity level, before they are placed on the market
Ensure electricity production in accordance with the installations
Benefit from adequate care for pathologies, without risk to healthy organs
Guarantee manufacturing according to the required specifications
Ensure safe transport via accurate telecommunications
Pay the right price for a product
Monitor the environment and take appropriate measures to maintain its quality
Modeling the impact of school reopening on SARS-CoV-2 transmission using contact structure data from Shanghai
Benjamin Lee ORCID: orcid.org/0000-0001-7213-56311,2,
John P. Hanley1,2,
Sarah Nowak2,3,
Jason H. T. Bates2,4 &
Laurent Hébert-Dufresne2,4,5
BMC Public Health volume 20, Article number: 1713 (2020) Cite this article
Mathematical modeling studies have suggested that pre-emptive school closures alone have little overall impact on SARS-CoV-2 transmission, but reopening schools in the background of community contact reduction presents a unique scenario that has not been fully assessed.
We adapted a previously published model using contact information from Shanghai to model school reopening under various conditions. We investigated different strategies by combining the contact patterns observed between different age groups during both baseline and "lockdown" periods. We also tested the robustness of our strategy to the assumption of lower susceptibility to infection in children under age 15 years.
We find that reopening schools for all children would maintain a post-intervention R0 < 1 up to a baseline R0 of approximately 3.3 provided that daily contacts among children 10–19 years are reduced to 33% of baseline. This finding was robust to various estimates of susceptibility to infection in children relative to adults (up to 50%) and to estimates of various levels of concomitant reopening in the rest of the community (up to 40%). However, full school reopening without any degree of contact reduction in the school setting returned R0 virtually back to baseline, highlighting the importance of mitigation measures.
These results, based on contact structure data from Shanghai, suggest that schools can reopen with proper precautions during conditions of extreme contact reduction and during conditions of reasonable levels of reopening in the rest of the community.
The COVID-19 pandemic presents an unprecedented global public health challenge. A crucial issue that remains unresolved is the role of children in SARS-CoV-2 transmission and the impact of schools on epidemic spread. Available evidence suggests that children, particularly children < 10 years, are less susceptible to SARS-CoV-2 infection [1,2,3,4] and rarely transmit infection to adults or schoolmates [5]. However, guided chiefly by prior models of pandemic influenza, which appears to be much more transmissible among children, school closures have been a nearly universal component of pandemic response [6]. Some mathematical modeling studies suggest that school closures alone have limited effects on SARS-CoV-2 transmission [1, 7], which has been interpreted by some to suggest that little harm can follow from school reopening. Reopening schools in the setting of strict community-wide physical distancing, however, reintroduces a mode of disease transmission that is far less redundant than typical community social networks, and therefore possibly much more important. Therefore, we utilized a previously published dataset of contact structures from Shanghai pre- and post-pandemic "lockdown" to model disease transmission under various school reopening scenarios.
We consider an age-stratified model where individuals are distinguished by their age, binned in groups of 5 years (e.g. 0–4 years, 5–9 years, and so on up to 65+ years). Distinguishing different age classes allows us to model the age-specific contact structure due to schools, households, and other social structures. These contact structures will be informed by those collected in Shanghai before and after "lockdown," as reported by Zhang et al. [1]. Moreover, the age classes provide a simple way to account for heterogeneous susceptibility. While we later relax this assumption, we initially follow Ref. [1] and set the susceptibility of children 0–14 years to be 34% of adult susceptibility and the susceptibility of individuals 65+ years to be 144% that of adults 20–65 years.
We then focus our study on the basic reproduction number R0 of a disease model that incorporates this age and contact structure with a given transmission rate (set to the susceptibility of adults and modulated for other age classes) β, and a uniform recovery rate for all age classes γ. Using baseline contact patterns and a fixed set of epidemiological parameters, we can calculate what we call the baseline R0 of the epidemic. Then, using a modified set of contact patterns that reflect specific interventions both within and outside of schools, we get a post-intervention R0, not to be confused with the effective reproduction number (often described as RE or Rt). Importantly, post-intervention R0 will always be proportional to the baseline R0, meaning that any set of parameters that produce the same R0 will produce the same post-intervention R0 under a given intervention. There is therefore no need to sweep both recovery and transmission rates but only one of them in order to explore a wide range of baseline R0 values. In our code, available online, we choose to fix γ = 1/5.1 days as used in the original model [1, 8] and we vary β in order to vary the baseline R0. Importantly, note that if we consider short-term dynamics and therefore ignore demographics (e.g. birth and death rate of the population), the reproduction number of our disease model will be proportional to β/γ regardless of whether we implement susceptible-infectious-recovered (SIR), or susceptible-exposed-infectious-recovered (SEIR), or any other classic model [9]. Hence, we do not need to pick a particular disease model to calculate R0. We do, however, need to take the contact structure across age classes into account. Let us call $K$ the matrix whose elements are $K_{ij} = \sigma_i M_{ij}$, where $i$ and $j$ are age classes, $\sigma_i$ is the susceptibility of class $i$, and $M_{ij}$ is the frequency of contacts with class $j$ for an individual in class $i$. Following Ref. [10], R0 is given by
$$ R_0 = \frac{\beta}{\gamma}\,\lambda(K) $$
where $\lambda(K)$ denotes the largest eigenvalue of $K$.
Even more important is the fact that this definition of R0 is not only valid for SIR or SEIR models using the same age-structure and heterogeneous susceptibility, it is also valid for stochastic models based on branching processes set by the contact matrix and transmission rate. See, for example, Ref. [11] for a derivation of this equivalence. The previous definition of R0 is therefore applicable to a large range of epidemic models parameterized by the transmission rate β, the recovery rate γ, the heterogeneous susceptibility σ and the contact matrix M. We can then keep all parameters fixed and modify only elements of the contact matrix M that correspond to different school reopening scenarios. In so doing, and by focusing on R0, we are studying the impact of school reopening while relying on as few model-specific assumptions and mechanisms as possible.
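A minimal numerical sketch of this calculation is given below. It is not the authors' published code; the 3-class contact matrix and the transmission rate are invented for illustration, and only the recovery rate and the relative susceptibilities are taken from the text.

```python
import numpy as np

def basic_reproduction_number(beta, gamma, M, sigma):
    """R0 = (beta/gamma) * largest eigenvalue of K, with K_ij = sigma_i * M_ij."""
    K = sigma[:, None] * M                  # scale row i by the susceptibility of class i
    eigenvalues = np.linalg.eigvals(K)
    return (beta / gamma) * np.max(eigenvalues.real)  # Perron root of a nonnegative matrix

# Toy 3-class example (children, adults, elderly) with made-up contact frequencies
M = np.array([[8.0, 3.0, 1.0],
              [3.0, 6.0, 2.0],
              [1.0, 2.0, 3.0]])
sigma = np.array([0.34, 1.0, 1.44])   # relative susceptibilities as in the text
gamma = 1 / 5.1                       # recovery rate used in the original model
beta = 0.05                           # transmission rate, arbitrary for illustration

print(basic_reproduction_number(beta, gamma, M, sigma))
```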
From this basic model, we look at the impact of two key variables, the contact matrix M and the heterogeneous susceptibility σ. First, we combined the observed "lockdown" contact matrix with different weighted blocks of the baseline contact matrix to mimic different scenarios for school reopening and background interventions. For example, since the model stratifies the population in bins of 5 years, we can model school reopening for children < 10 years by using baseline values for the 2 × 2 block of the contact matrix corresponding to interactions between children 0–4 years with one another, between children 0–4 years with those 5–9 years, between children 5–9 years with those 0–4 years, and between children 5–9 years with one another. Other values can also be weighted to a fraction of the true baseline value to mimic partial reopening or intervention conditions. Second, we relaxed assumptions of heterogeneous susceptibility across age groups. Mainly, Zhang et al. estimated a relative susceptibility of roughly 34% for children < 15 years compared to adults [1]. We relaxed this assumption by increasing the relative susceptibility of children to different values (34, 40, 45, 50, 60%) while leaving older populations unchanged.
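To make the block-wise modification of the contact matrix concrete, here is a hedged sketch of the "mixed reopening" scenario described above. The bin indices assume 5-year age bins starting at 0–4 years, the matrices are invented, and cross-contacts between the two child groups are left at their lockdown values, which is a simplification of the published procedure.

```python
import numpy as np

def mixed_reopening(M_lockdown, M_baseline, older_fraction=0.33):
    """Mixed reopening scenario on a contact matrix with 5-year age bins:
    baseline contacts restored for children 0-9 years (bins 0-1), 33% of
    baseline contacts for children 10-19 years (bins 2-3), lockdown contacts
    kept everywhere else."""
    M = M_lockdown.copy()
    M[0:2, 0:2] = M_baseline[0:2, 0:2]                    # full reopening for 0-9 years
    M[2:4, 2:4] = older_fraction * M_baseline[2:4, 2:4]   # reduced contacts for 10-19 years
    return M

# Toy 4-bin example (0-4, 5-9, 10-14, 15-19 years) with made-up contact frequencies
M_baseline = np.array([[6.0, 4.0, 1.0, 0.5],
                       [4.0, 7.0, 2.0, 1.0],
                       [1.0, 2.0, 9.0, 5.0],
                       [0.5, 1.0, 5.0, 8.0]])
M_lockdown = 0.1 * M_baseline   # crude stand-in for the observed lockdown contacts
print(mixed_reopening(M_lockdown, M_baseline))
```

The resulting matrix can then be passed, together with the unchanged β, γ and σ, to a routine such as the one above to obtain the corresponding post-intervention R0.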
This model is available at https://github.com/LaurentHebert/school-reopening.
When no measures are taken to reduce R0, baseline R0 and post-intervention R0 are identical (Fig. 1, dashed black line). School closure alone has minimal effect (Fig. 1, orange line) because disease continues to spread via alternate social contacts in the community. Full "lockdown," in contrast, has a major effect (Fig. 1, solid green line) because it severs most social contacts. Therefore, to simulate the effect of school reopening against this background, we reincorporated baseline contact patterns for children (aged 0–19 years) into the full "lockdown" model, using the same underlying assumptions for contact patterns and reduced susceptibility to infection by age as reported for Shanghai during outbreak conditions [1]. This shows a dramatic effect (Fig. 1, solid blue line): reopening schools without measures to reduce daily contacts would return transmission levels virtually to baseline despite strict physical distancing in the rest of the community, and thus would be highly inadvisable. The fact that school closures alone have little impact does not imply that school reopening during a "lockdown" will similarly have little impact.
Effects of school reopening during community "lockdown." Post-intervention R0 as a function of baseline R0 under various conditions are shown. Dashed black line: Baseline, represents all contact patterns pre-pandemic. Solid orange line: School closure alone, represents community pre-pandemic contact patterns but with contacts among children 0–19 years removed to simulate full school closure. Solid green line: Full "lockdown," represents full contact suppression during pandemic conditions. Solid blue line: Full school reopening, represents full "lockdown" conditions but with re-incorporation of all contacts among children 0–19 years according to baseline contact patterns to simulate return to full school attendance. Interrupted blue line: Mixed reopening model, simulates the effect of re-incorporating full contact patterns for children 0–9 years with reduction in contacts in children 10–19 years to 33% of baseline. Dashed blue line: Reopen < 10 years only, simulates the effect of re-incorporating baseline contact patterns for children 0–9 years only
We then assessed various conditions for school reopening to estimate impacts on post-intervention R0, including implementation of measures to reduce contacts among children. We find that reopening schools for children < 10 years, even without reduction in daily contacts, is predicted to maintain post-intervention R0 < 1 (and suppress virus transmission) up to a baseline R0 of ~ 4.5 (Fig. 1, dashed blue line). The addition of school reopening with reduction in daily contacts among children aged 10–19 years to 33% of baseline is predicted to keep post-intervention R0 < 1 up to a baseline R0 of ~ 3.3 (Fig. 1, interrupted blue line). These results suggest that interventions to reduce the number of contacts at school, with an emphasis on children aged 10–19 years, is a potentially viable approach to school reopening even during periods of significant baseline community transmission of SARS-CoV-2 while strict contact suppression is maintained in the rest of the community. We find that reopening schools to children < 10 years would have the least impact on disease transmission, even when we assumed that these children would be unable to adhere to interventions to reduce their effective number of daily contacts.
The feasibility of these interventions rely in part on the limited contacts between children and older populations, but also on estimates of their lower susceptibility to SARS-CoV-2. Given that the model developed by Zhang et al. estimated a relative susceptibility of roughly 34% for children under 15 years compared to adults [1], we next looked at the robustness of our results to varying estimates of susceptibility (Fig. 2). We increased the relative susceptibility of children up to 60%, and found that our suggested reopening model remained quite robust to changes in virus susceptibility among children. In particular, the idea of full reopening for children under 10 years with contact reduction for children 10–19 years remained feasible up to a baseline R0 of ~ 3, even when relative susceptibility of children was estimated at 50% that of adults, itself a 50% increase compared to the original model estimates and consistent with other recent estimates [12].
Effects of school reopening based on differing rates of susceptibility to SARS-CoV-2 infection in children relative to adults. Post-intervention R0 as a function of baseline R0 under various estimates of susceptibility to SARS-CoV-2 infection in children < 15 years are shown. Dashed black line: Baseline, represents all contact patterns pre-pandemic. Solid black line: Mixed reopening model, simulates the effect of re-incorporating full contact patterns for children 0–9 years with reduction in contacts in children 10–19 years to 33% of baseline. Starting from this condition, blue lines represent a range of estimates of susceptibility to SARS-CoV-2 infection in children relative to adults: 40% (dotted blue line), 45% (dashed blue line), 50% (interrupted blue line), and 60% (solid blue line)
Recognizing, however, that school reopenings would generally occur alongside other relaxations of community restrictions, we then looked at the robustness of this model in the context of gradual increases in the frequency of contacts for the rest of the community (Fig. 3). We find that return of contact frequency to 20% (Fig. 3, dotted blue line) and 30% (Fig. 3, dashed blue line) of pre-pandemic baseline among all other community members has virtually no additional impact on transmission. At 40% of baseline, post-intervention R0 remains suppressed < 1 up to a baseline R0 of ~ 2.5, and at 60% of baseline, post-intervention R0 remains suppressed < 1 up to a baseline R0 of slightly less than 2. These results suggest that even with relaxations in contact reduction measures in the rest of the community, school reopening remains feasible with reasonable measures to reduce contact frequency in the school setting.
Effects of school reopening along with community reopening. Post-intervention R0 as a function of baseline R0 under various conditions are shown. Dashed black line: Baseline, represents all contact patterns pre-pandemic. Solid black line: Mixed reopening model, simulates the effect of re-incorporating full contact patterns for children 0–9 years with reduction in contacts in children 10–19 years to 33% of baseline. Starting from this condition, blue lines represent the effects of restoration of contact frequency in the rest of the community (i.e. community reopening) to 20% of baseline (dotted blue line), 30% of baseline (dashed blue line), 40% of baseline (interrupted blue line), or 60% of baseline (solid blue line)
In a model of SARS-CoV-2 transmission utilizing contact patterns obtained from Shanghai [1], we find that while school closure alone does not have a major impact on transmission, full school reopening during a "lockdown" without mitigation measures in the school setting can return transmission to its baseline value. That being said, we find that careful school reopening can proceed while maintaining post-intervention R0 < 1 under a wide range of both baseline R0 levels and estimates of susceptibility to infection in children, provided that appropriate measures are taken in the school and community settings to reduce the number of daily contacts among both children and school and community members. We find that younger children < 10 years have the least impact on disease transmission, and greatest priority for mitigation strategies in the school setting should therefore focus on children 10–19 years of age.
This model suggests that having open schools can and should be considered, along with mitigation strategies in both schools and the community, even during periods of SARS-CoV-2 community transmission. We recognize that the R0 value alone should not be used as the sole criterion for formulating a comprehensive public health strategy. Even at R0 < 1, various other factors (such as overall community prevalence) can profoundly impact both the rate and extent of disease transmission in the community, which also require careful consideration in school reopening decisions. Nevertheless, depending on local conditions, school closures need not be considered a necessary component of community-level SARS-CoV-2 public health response, particularly considering the profound adverse consequences of prolonged school closures on the educational, emotional, and psychosocial development of children [13, 14]. This is particularly applicable to school reopening for children < 10 years old (approximately grade 5 and lower), as has now been strongly endorsed in the United States by the American Academy of Pediatrics and the National Academies of Science, Engineering, and Medicine [15, 16]. In this age group, our model suggests that full reopening would have very minimal effect on R0, even without reduced contact frequencies among children in this age bracket. Any school reopening scenario may require a trade-off of maintaining more severe restrictions in other community arenas (e.g. limiting reopening of indoor spaces in bars and restaurants) in order to keep community contact frequency below the targets necessary to allow for school reopening suggested by this model. It is important to remember that our results only apply to overall community transmission and do not address individual health outcomes or the possibility and effects of transmission events within individual schools.
In this model, contact suppression was calculated as a percentage of baseline, pre-pandemic contact patterns. The definition of contact used was very broad, being either 1) two-way conversation involving three or more words in the physical presence of another person (conversational contact), or 2) a direct physical contact (e.g., a handshake, hug, kiss or performing contact sports) [1]. Sensitivity analysis indicated that for Shanghai, restricting contacts to those of at least five-minute duration (thus eliminating purely incidental, casual contact) resulted in similar results as when all contacts were considered [1]. The current definition of close contact used by the Centers for Disease Control and Prevention is even more restrictive: at least 15 min within 6 ft (approximately 2 m) of a person with confirmed infection [17].
Reducing the number of effective daily contacts could occur via complete removal of a specific proportion of typical contacts, which would be more likely during conditions of full "lockdown" or during more restrictive limitations on community movement. In Shanghai during "lockdown", for example, contacts among school-aged children were reduced to almost zero [1]. This may not be a reasonable expectation for other regions. A comprehensive model from the United Kingdom assessing full community social distancing, for example, estimated this to represent at best a 75% reduction in all contacts outside of school, workplaces, and the household, but would likely be associated with increased household contact frequencies [7].
However, it is also very likely that other non-pharmaceutical interventions, particularly cloth facial coverings and emphasis on physical distancing, would also reduce risk of transmission during any individual encounter and convert many "at-risk" contacts into lower-risk contacts [18]. These interventions were not included as discrete variables in this model, but it would be reasonable to assume that contacts that occur with both participants wearing a cloth facial covering, at increased physical distance, or both would contribute to the percent reduction in "at-risk" contacts that we modeled due to their functional effects in terms of reduced transmission risk rather than complete contact removal. Additional strategies for reducing the frequency of close contacts within school settings have been proposed by the World Health Organization and the Centers for Disease Control and Prevention, such as eliminating large group activities, reducing student movement, and allowing for a mixture of in-class and remote learning to reduce classroom size and density [19, 20]. Scheduled hand hygiene and frequent disinfection of common surfaces would also reduce potential transmission.
Another important consideration is that this model did not consider the potential for reduced transmissibility from children to other contacts. Multiple studies now suggest that children, particularly younger children, are far less likely to transmit SARS-CoV-2 to other contacts, even within households, where the intensity of contact is arguably highest [5, 21,22,23,24]. Additional work also suggests that transmission of infection from younger children within the school setting is rare [25,26,27,28,29]. Therefore, the potential impact of school reopening may in fact be overestimated, as we assumed equal likelihood for transmission of virus from infected children of any age as from adults.
The range of baseline R0 values we identified as capable of permitting various school reopening scenarios is within the range of estimated values observed at many locations. Because school reopening decisions should depend on local, rather than regional or country-wide trends, it is most useful to assess R0 in this context whenever possible at as local of a scale as can be reasonably estimated. In Shanghai, R0 has been estimated at 3.31 up to February 16, 2020, spanning both pre- and early post-"lockdown" conditions [30], and up to 3.63 during pre-"lockdown" conditions [31]. Recent estimates from the United States suggest that R0 in six major metropolitan cities (Boston, Chicago, Los Angeles, Miami, New Orleans, New York City) in March 2020, near the initial peak of the outbreak in these regions, ranged from 2.43 (95% confidence interval (CI), 2.05–2.82) in New Orleans to 3.18 (95% CI 2.57–3.79) in Boston, and fell significantly thereafter once mitigation strategies were enacted [32]. If so, this suggests our model would have been applicable to both Shanghai and these selected US regions given these R0 estimates, particularly once community mitigation strategies had been enacted. An important caveat is that this would only hold true if the community contact structures in these US cities were sufficiently similar to that of Shanghai, which may not necessarily be the case, underscoring the need for local data to provide the most informed model predictions.
There are several limitations to these findings. Notably, as discussed previously, the baseline and outbreak contact patterns utilized in this model, which used data from Shanghai, may not be generalizable to all settings due to underlying differences in social contact networks and the achievable magnitude of contact suppression during mandated physical distancing. Therefore, similar approaches using contact structure data from other locations require further investigation. This model would not apply to college or university settings (based on an upper age limit of 19), nor to boarding schools. Based on a preponderance of current evidence, this model assumes that children are less susceptible to infection; since school closures were typically implemented along with community physical distancing mandates [6], this observation could be an artifact of limiting child contacts to within households early in the pandemic rather than a true biological difference. If children prove to be equally susceptible to infection, this model may significantly underestimate the impact of school reopening, although this may be mitigated by the effect of universal masking and increased physical distancing within the school environment. Therefore, school reopening would require flexibility to rapidly adapt to changing local conditions, along with capacity for aggressive testing and contact tracing of infected children and their families; because infected children generally have mild symptoms [33], school-associated outbreaks might present with clusters of illness in parents or household contacts.
Another important caveat of our study is that we focus solely on the spread of SARS-CoV-2 at the community level and not on outcomes for either infected children or their contacts in whom secondary infections may result (such as teachers), including the potential for mortality or other severe outcomes that may be heavily associated with specific risk factors, including age. The effects of infection in some children may also be more severe than previously appreciated, due to development in a small minority of infected children of a novel and serious multisystem inflammatory syndrome associated with COVID-19 (MIS-C), even though this condition appears to be very rare, estimated at 2/100,000 children in New York State [34, 35]. Nevertheless, prolonged school closures also come with serious risk of harm to children and families. During the pandemic, which has been universally associated with prolonged school closures in most settings, numerous reports indicate increasing rates of mental health problems, food insecurity, loss of health care coverage, and concern for increases in physical, emotional, and sexual abuse as a result of home confinement, in addition to loss of educational attainment [14, 36,37,38,39,40]. The individual risks to children, teachers, and families as a result of potential COVID-19 illness associated with school exposure must be balanced against these profound adverse effects which are certain to continue in the setting of prolonged school closures.
We also assume homogeneous transmission and contact patterns within a given age-class, without attempting to account for pre-existing biological or behavioral heterogeneity that can exist among individuals of the same age. The model therefore averages over the risks of super-spreading events and variable levels of adherence to public health recommendations for individuals within the same age class. While there exist network models to account for individual differences [41] these are much harder to parameterize with available data on contact patterns. Despite these limitations, the use of a simple model such as this that focuses on community-level transmission rates can nevertheless be a powerful tool for examining the larger-scale effects of significant alterations to community structure (e.g. school reopening).
Similarly, we looked at the impact of school reopenings without accounting for possible secondary changes in behavior among parents and other contacts. It is possible that school reopenings could lead to behavioral changes that would increase transmission risks in the community outside the school setting (for example, by relaxing attitudes or concerns regarding physical distancing or maximum group sizes). This might have two major unintended consequences, both detrimental. First, it could lead to increased viral transmission overall and loss of epidemic control. Second, this increase in transmission might erroneously be attributed to school reopenings themselves, prompting re-closures (and their attendant educational, economic, and societal harms), which would then be minimally effective at curtailing further transmission. Therefore, school reopenings necessitate careful public health messaging to reinforce the need for ongoing community-wide measures and to place the potential impact of school reopenings into proper context, to limit viral transmission.
Schools can be reopened in the setting of ongoing SARS-CoV-2 community transmission provided appropriate and reasonable precautions are maintained to reduce the background rate of daily contacts in the community along with reductions in daily social contacts among children in the school setting. The impacts of prolonged school closure on child health, development, and education may be profound, and for most children and families, particularly younger children with working parents, remote learning has been an alarmingly poor substitute for the classroom [14, 40, 42]. We argue for a paradigm that prioritizes open schools, rather than viewing school closures as necessary adjuncts to other community-level interventions [6, 43], and that approaches based on influenza suppression may be ill-suited for the current pandemic given the clear differences between influenza and SARS-CoV-2, particularly regarding their effects on children. Strategies for reopening schools can be guided by mathematical modeling approaches, particularly wherever contact data are available to generate local estimates to inform public health policy.
All data for this model are publicly available at https://github.com/LaurentHebert/school-reopening. The original data are publicly available within the main body or supplementary files of the original work by Zhang and colleagues [1], which was published under a Creative Commons Attribution 4.0 (CC BY 4.0) license, and were also posted by the authors for public availability at https://zenodo.org/record/3775672.
SEIR:
Susceptible-exposed-infectious-recovered
SIR:
Susceptible-infectious-recovered
R0:
Basic reproduction number
Zhang J, Litvinova M, Liang Y, Wang Y, Wang W, Zhao S, Wu Q, Merler S, Viboud C, Vespignani A, et al. Changes in contact patterns shape the dynamics of the COVID-19 outbreak in China. Science. 2020;368(6498):1481–6 https://doi.org/10.1126/science.abb8001.
Gudbjartsson DF, Helgason A, Jonsson H, Magnusson OT, Melsted P, Norddahl GL, Saemundsdottir J, Sigurdsson A, Sulem P, Agustsdottir AB, et al. Spread of SARS-CoV-2 in the Icelandic population. N Engl J Med. 2020;382(24):2302–15 https://doi.org/10.1056/NEJMoa2006100.
CDC COVID-19 Response Team. Coronavirus Disease 2019 in Children - United States, February 12–April 2, 2020. MMWR. 2020;69(14):422–6. https://doi.org/10.15585/mmwr.mm6914e4.
Korean Society of Infectious Diseases, Korean Society of Pediatric Infectious Diseases, Korean Society of Epidemiology, Korean Society for Antimicrobial Therapy, Korean Society for Healthcare-associated Infection Control and Prevention, Korea Centers for Disease Control and Prevention. Report on the Epidemiological Features of Coronavirus Disease 2019 (COVID-19) Outbreak in the Republic of Korea from January 19 to March 2, 2020. J Korean Med Sci. 2020;35(10):e112 https://doi.org/10.3346/jkms.2020.35.e112.
Ludvigsson JF. Children are unlikely to be the main drivers of the COVID-19 pandemic - a systematic review. Acta Paediatr. 2020;109(8):1525–30 https://doi.org/10.1111/apa.15371.
Viner RM, Russell SJ, Croker H, Packer J, Ward J, Stansfield C, Mytton O, Bonell C, Booy R. School closure and management practices during coronavirus outbreaks including COVID-19: a rapid systematic review. Lancet Child Adolesc Health. 2020;4(5):397–404 https://doi.org/10.1016/S2352-4642(20)30095-X.
Ferguson NM, Laydon D, Nedjati-Gilani G, Imai N, Ainslie K, Baguelin M, Bhatia S, Boonyasiri A, Cucunuba Z, Cuomo-Dannenburg G, et al. Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand. Imperial College London. 2020; https://doi.org/10.25561/77482.
Zhang J, Litvinova M, Wang W, Wang Y, Deng X, Chen X, Li M, Zheng W, Yi L, Chen X, et al. Evolving epidemiology and transmission dynamics of coronavirus disease 2019 outside Hubei province, China: a descriptive and modelling study. Lancet Infect Dis. 2020;20(7):793–802 https://doi.org/10.1016/S1473-3099(20)30230-9.
van den Driessche P. Reproduction numbers of infectious disease models. Infect Dis Model. 2017;2(3):288–303 https://doi.org/10.1016/j.idm.2017.06.002.
Diekmann O, Heesterbeek JA, Metz JA. On the definition and the computation of the basic reproduction ratio R0 in models for infectious diseases in heterogeneous populations. J Mathematical Biol. 1990;28(4):365–82 https://doi.org/10.1007/BF00178324.
Allard A, Moore C, Scarpino SV, Althouse BM, Hébert-Dufresne L. The role of directionality, heterogeneity and correlations in epidemic risk and spread. arXiv Preprint. 2020;arXiv:2005.11283 https://arxiv.org/abs/2005.11283v2.
Davies NG, Klepac P, Liu Y, Prem K, Jit M, Group CC-w, Eggo RM. Age-dependent effects in the transmission and control of COVID-19 epidemics. Nat Med. 2020;26(8):1205–11 https://doi.org/10.1038/s41591-020-0962-9.
Sharfstein JM, Morphew CC. The urgency and challenge of opening K-12 schools in the fall of 2020. JAMA. 2020;324(2):133–4 https://doi.org/10.1001/jama.2020.10175.
Lee J. Mental health effects of school closures during COVID-19. Lancet Child Adolesc Health. 2020;4(6):421 https://doi.org/10.1016/S2352-4642(20)30109-7.
COVID-19 Planning Considerations: Guidance for School Re-entry. American Academy of Pediatrics. Last updated 08/29/2020. Available at: https://services.aap.org/en/pages/2019-novel-coronavirus-covid-19-infections/clinical-guidance/covid-19-planning-considerations-return-to-in-person-education-in-schools/. Accessed online 04 Sept 2020.
Reopening K-12 Schools During the COVID-19 Pandemic. Prioritizing Health, Equity, and Communities. Washington, DC: The National Academies Press; 2020. https://doi.org/10.17226/25858.
Contact Tracing for COVID-19. Centers for Disease Control and Prevention. Last updated 08/31/2020. Available at: https://www.cdc.gov/coronavirus/2019-ncov/php/contact-tracing/contact-tracing-plan/contact-tracing.html. Accessed online 04 Sept 2020.
Chu DK, Akl EA, Duda S, Solo K, Yaacoub S, Schunemann HJ, Authors C-SURGEs. Physical distancing, face masks, and eye protection to prevent person-to-person transmission of SARS-CoV-2 and COVID-19: a systematic review and meta-analysis. Lancet. 2020;395(10242):1973–87 https://doi.org/10.1016/S0140-6736(20)31142-9.
Considerations for school-related public health measures in the context of COVID-19: Annex to Considerations in adjusting public health and social measures in the context of COVID-19. World Health Organization; 2020. Available at: https://www.who.int/publications/i/item/considerations-for-school-related-public-health-measures-in-the-context-of-covid-19.
CDC Activities and Initiatives Supporting the COVID-19 Response and the President's Plan for Opening America Up Again. Centers for Disease Control and Prevention. 2020. Available at: https://www.cdc.gov/coronavirus/2019-ncov/downloads/php/CDC-Activities-Initiatives-for-COVID-19-Response.pdf.
Kim J, Choe YJ, Lee J, Park YJ, Park O, Han MS, Kim JH, Choi EH. Role of children in household transmission of COVID-19. Archives Dis Child. 2020; https://doi.org/10.1136/archdischild-2020-319910.
Park YJ, Choe YJ, Park O, Park SY, Kim YM, Kim J, Kweon S, Woo Y, Gwack J, Kim SS, et al. Contact Tracing during Coronavirus Disease Outbreak, South Korea, 2020. Emerg Infect Dis. 2020;26(10) https://doi.org/10.3201/eid2610.201315.
Posfay-Barbe KM, Wagner N, Gauthey M, Moussaoui D, Loevy N, Diana A, L'Huillier AG. COVID-19 in Children and the Dynamics of Infection in Families. Pediatrics. 2020;146(2) https://doi.org/10.1542/peds.2020-1576.
Viner RM, Mytton OT, Bonell C, Melendez-Torres GJ, Ward JL, Hudson L, Waddington C, Thomas J, Russell S, van der Klis F, et al. Susceptibility to and transmission of COVID-19 amongst children and adolescents compared with adults: a systematic review and meta-analysis. medRxiv. 2020; https://doi.org/10.1101/2020.05.20.20108126.
Danis K, Epaulard O, Benet T, Gaymard A, Campoy S, Botelho-Nevers E, Bouscambert-Duchamp M, Spaccaferri G, Ader F, Mailles A, et al. Cluster of coronavirus disease 2019 (COVID-19) in the French Alps, February 2020. Clin Infect Dis. 2020;71(15):825–32 https://doi.org/10.1093/cid/ciaa424.
Fontanet A, Grant R, Tondeur L, Madec Y, Grzelak L, Cailleau I, Ungeheuer M-N, Renaudat C, Fernandes Pellerin S, Kuhmel L, et al. SARS-CoV-2 infection in primary schools in northern France: A retrospective cohort study in an area of high transmission. medRxiv. 2020; https://doi.org/10.1101/2020.06.25.20140178.
Heavey L, Casey G, Kelly C, Kelly D, McDarby G. No evidence of secondary transmission of COVID-19 from children attending school in Ireland, 2020. Euro Surveill. 2020;25(21) https://doi.org/10.2807/1560-7917.ES.2020.25.21.2000903.
Macartney K, Quinn HE, Pillsbury AJ, Koirala A, Deng L, Winkler N, Katelaris AL, O'Sullivan MVN, Dalton C, Wood N, et al. Transmission of SARS-CoV-2 in Australian educational settings: a prospective cohort study. Lancet Child Adolesc Health. 2020; https://doi.org/10.1016/S2352-4642(20)30251-0.
Yung CF, Kam KQ, Nadua KD, Chong CY, Tan NWH, Li J, Lee KP, Chan YH, Thoon KC, Chong NK. Novel coronavirus 2019 transmission risk in educational settings. Clinical Infect Dis. 2020; https://doi.org/10.1093/cid/ciaa794.
Shao N, Cheng J, Chen W. The reproductive number R0 of COVID-19 based on estimate of a statistical time delay dynamical system. medRxiv. 2020; https://doi.org/10.1101/2020.02.17.20023747.
Zhao S, Chen H. Modeling the epidemic dynamics and control of COVID-19 outbreak in China. Quant Biol. 2020:1–9 https://doi.org/10.1007/s40484-020-0199-0.
Pei S, Kandula S, Shaman J. Differential Effects of Intervention Timing on COVID-19 Spread in the United States. medRxiv. 2020; 2020.2005.2015.20103655. https://doi.org/10.1101/2020.05.15.20103655.
Ludvigsson JF. Systematic review of COVID-19 in children shows milder cases and a better prognosis than adults. Acta Paediatr. 2020;109(6):1088–95 https://doi.org/10.1111/apa.15270.
Multisystem inflammatory syndrome in children (MIS-C) associated with coronavirus disease 2019 (COVID-19). Centers for Disease Control and Prevention. 2020. Available at: https://emergency.cdc.gov/han/2020/han00432.asp.
Dufort EM, Koumans EH, Chow EJ, Rosenthal EM, Muse A, Rowlands J, Barranco MA, Maxted AM, Rosenberg ES, Easton D, et al. Multisystem inflammatory syndrome in children in New York state. N Engl J Med. 2020;383(4):347–58 https://doi.org/10.1056/NEJMoa2021756.
Patrick SW, Henkhaus LE, Zickafoose JS, Lovell K, Halvorson A, Loch S, Letterie M, Davis MM. Well-being of parents and children during the COVID-19 pandemic: a National Survey. Pediatrics. 2020; https://doi.org/10.1542/peds.2020-016824.
Duan L, Shao X, Wang Y, Huang Y, Miao J, Yang X, Zhu G. An investigation of mental health status of children and adolescents in China during the outbreak of COVID-19. J Affect Disord. 2020;275:112–8 https://doi.org/10.1016/j.jad.2020.06.029.
Intimate Partner Violence and Child Abuse Considerations During COVID-19. Substance Abuse and Mental Health Services Administration, U.S. Department of Health and Human Services. 2020. Available at: https://www.samhsa.gov/sites/default/files/social-distancing-domestic-violence.pdf.
COVID Update: Hotline Continues to Hear from Children, Those Concerned for Their Safety. RAINN (Rape, Abuse & Incest Network). 2020. Available at: https://www.rainn.org/news/covid-update-hotline-continues-hear-children-those-concerned-their-safety. Accessed online 04 Sept 2020.
Kuhfeld M, Tarasawa B. The COVID-19 slide: what summer learning loss can tell us about the potential impact of school closures on student academic achievement. NWEA; 2020. Available at: https://www.nwea.org/content/uploads/2020/05/Collaborative-Brief_Covid19-Slide-APR20.pdf.
Bansal S, Grenfell BT, Meyers LA. When individual behaviour matters: homogeneous and network models in epidemiology. J R Soc Interface. 2007;4(16):879–91 https://doi.org/10.1098/rsif.2007.1100.
Van Lancker W, Parolin Z. COVID-19, school closures, and child poverty: a social crisis in the making. Lancet Public Health. 2020;5(5):e243–4 https://doi.org/10.1016/S2468-2667(20)30084-0.
Christakis DA. School reopening-the pandemic issue that is not getting its due. JAMA Pediatr. 2020; https://doi.org/10.1001/jamapediatrics.2020.2068.
Department of Pediatrics, Larner College of Medicine, University of Vermont, Burlington, VT, USA
Benjamin Lee & John P. Hanley
Translational Global Infectious Diseases Research Center, University of Vermont, Burlington, VT, USA
Benjamin Lee, John P. Hanley, Sarah Nowak, Jason H. T. Bates & Laurent Hébert-Dufresne
Department of Pathology and Laboratory Medicine, Larner College of Medicine, University of Vermont, Burlington, VT, USA
Sarah Nowak
Department of Computer Science, College of Engineering and Mathematical Sciences, University of Vermont, Burlington, VT, USA
Jason H. T. Bates & Laurent Hébert-Dufresne
Vermont Complex Systems Center, University of Vermont, Burlington, VT, USA
Laurent Hébert-Dufresne
Benjamin Lee
John P. Hanley
Jason H. T. Bates
BL: Conceptualization, Writing; JPH: Methodology; SN: Methodology; JHTB: Methodology, Review & editing; LHD: Formal analysis, Review & editing. All authors have read and approved the final manuscript.
Correspondence to Benjamin Lee.
Authors declare no competing interests.
Lee, B., Hanley, J.P., Nowak, S. et al. Modeling the impact of school reopening on SARS-CoV-2 transmission using contact structure data from Shanghai. BMC Public Health 20, 1713 (2020). https://doi.org/10.1186/s12889-020-09799-8
SEIR model
SIR model
The influence of Antarctic ice loss on polar motion: an assessment based on GRACE and multi-mission satellite altimetry
Franziska Göttl ORCID: orcid.org/0000-0003-0006-869X1,
Andreas Groh2,
Michael Schmidt1,
Ludwig Schröder3 &
Florian Seitz1
Earth, Planets and Space volume 73, Article number: 99 (2021) Cite this article
A Correction to this article was published on 24 June 2021
Increasing ice loss of the Antarctic Ice Sheet (AIS) due to global climate change affects the orientation of the Earth's spin axis with respect to an Earth-fixed reference system (polar motion). Here the contribution of the decreasing AIS to the excitation of polar motion is quantified from precise time variable gravity field observations of the Gravity Recovery and Climate Experiment (GRACE) and from measurements of the changing ice sheet elevation from altimeter satellites. While the GRACE gravity field models need to be reduced by noise and leakage effects from neighboring subsystems, the ice volume changes observed by satellite altimetry have to be converted into ice mass changes. In this study we investigate how much individual gravimetry and altimetry solutions differ from each other. We show that, due to the combination of individual solutions, systematic and random errors of the data processing can be reduced and the robustness of the geodetically derived AIS polar motion excitations can be increased. We investigate the interannual variability of the Antarctic polar motion excitation functions by means of piecewise linear trends. We find that the long-term behavior of the three ice sheet subregions, EAIS (East Antarctic Ice Sheet), WAIS (West Antarctic Ice Sheet) and APIS (Antarctic Peninsula Ice Sheet), is quite different. While APIS polar motion excitations show no significant interannual variations during the study period 2003–2015, the trend of the WAIS and EAIS polar motion excitations increased in 2006 and again in 2009, while it started to decline slightly in 2013. AIS mass changes explain about 45% of the observed magnitude of the polar motion vector (excluding glacial isostatic adjustment). They cause the pole position vector to drift along 59° East longitude with an amplitude of 2.7 mas/yr. Thus the contribution of the AIS has to be considered to close the budget of the geophysical excitation functions of polar motion.
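As a hedged reading aid (this relation is not spelled out in the abstract and assumes the common convention that the first equatorial excitation component points along the Greenwich meridian and the second along 90°E), linear trends $\dot{\chi}_1$ and $\dot{\chi}_2$ in the two excitation components translate into the quoted drift amplitude and direction via

$$ A=\sqrt{\dot{\chi}_1^{2}+\dot{\chi}_2^{2}}\approx 2.7\ \text{mas/yr}, \qquad \lambda=\arctan\!\left(\frac{\dot{\chi}_2}{\dot{\chi}_1}\right)\approx 59^{\circ}\,\text{E}. $$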
In recent years, climate change has led to increasing ice loss of the Antarctic Ice Sheet (AIS), which has a significant impact on polar motion. Due to redistribution and motion of masses within the Earth system the Earth's pole is continuously in motion. While the circular motion, mainly described by the Chandler and the annual oscillation, is caused by atmospheric and water variations, the non-linear drift results from changes in the solid Earth and ongoing ice melting as well as related sea level changes. The last two climate-related mass relocations (ice and water changes) are principally responsible for the abrupt eastward turn of the mean pole position around the year 2005 (Chen et al. 2013), whereas the reversal of polar motion towards the west after 2012 may result from regional differences in terrestrial water storage changes (Adhikari and Ivins 2016). There exists a large number of studies focusing on atmospheric, oceanic and hydrological polar motion excitation mechanisms derived from geophysical models and space geodetic observations (e.g., Chen et al. 2017; Göttl et al. 2015; Jin et al. 2010; Meyrath and van Dam 2016; Nastula et al. 2007; Seoane et al. 2011), while there are only a few studies regarding the contributions of the cryosphere (e.g., Chen et al. 2013; Adhikari and Ivins 2016). In contrast to the other subsystems of the Earth, no adequate geophysical fluid models for the cryosphere are available up to now. Therefore, in this study several gravimetric and altimetric data products are used to investigate the impact of AIS mass changes on polar motion in more detail.
Since 2002 ice mass changes in Antarctica can be observed by the Gravity Recovery and Climate Experiment (GRACE) in terms of time variable gravity field changes (e.g., Barletta et al. 2013; Luthcke et al. 2013; Velicogna and Wahr 2013; Groh and Horwath 2016; Su et al. 2018). GRACE is the only space-borne sensor which allows mass redistribution of the entire AIS to be observed directly, with a spatial resolution of 200 - 500 km and monthly temporal resolution. Besides the limited spatial resolution, the accuracy of GRACE ice mass change estimates is limited by noise (meridional error stripes), leakage effects and uncertainties of glacial isostatic adjustment (GIA) models (e.g., Horwath and Dietrich 2009; Barletta et al. 2013), used to correct for the solid Earth's still ongoing response to past ice mass changes. Moreover, significant differences may arise from the algorithm applied to estimate the mass changes (Groh et al. 2019).
Since 1992 the temporal evolution of ice mass changes on the AIS can be derived from surface elevation changes measured by satellite radar and laser altimeter missions such as ERS-1, ERS-2, Envisat, CryoSat-2 and ICESat (e.g., Wingham et al. 2006; McMillan et al. 2014; Zwally et al. 2015; Schröder et al. 2019; Shepherd et al. 2019). In contrast to GRACE, satellite altimetry observes elevation changes which need to be converted to mass changes, introducing other sources of error. The spatial resolution of the altimetry data is significantly higher (\(\sim 2\) km or less along the tracks according to the footprint size, \(\sim 20\) km or less according to the separation of neighboring satellite ground tracks) whereas the temporal resolution is comparable to GRACE (monthly repeat orbits for most of the missions). Due to a polar gap, reaching from \(81.5^{\circ }\)S for ERS and Envisat down to \(88^{\circ }\)S for CryoSat-2, 21% to 1% of the AIS near the South Pole cannot be observed by altimetry. Furthermore, the tracking of the satellite altimeters fails in sloping and rugged terrain near the coast, which amounts to less than 2% of the total area (Shepherd et al. 2019). However, these coastal areas are the location of the largest dynamic mass changes, which are therefore not observable by altimetry. The accuracy of satellite altimetry derived ice mass change estimates is limited by waveform retracking, slope related relocation errors (especially in rugged terrain) and the density assumption in the volume-to-mass conversion. Here uncertainties due to GIA models are almost negligible because GIA induced height changes are significantly smaller than satellite altimetry observed ice surface height changes.
Both space geodetic observation techniques, satellite gravimetry and altimetry, have different strengths and weaknesses with respect to ice mass change estimation. To assess the accuracy of the gravimetry and altimetry derived polar motion excitation functions we use different GRACE gravity field models and multi-mission satellite altimetry data. In order to reduce the systematic errors and to increase the robustness of the geodetically derived Antarctic polar motion excitation functions we combine the individual solutions. Based on these new improved time series we investigate not only the impact of the entire AIS on polar motion but also the contributions of the subregions: East Antarctic Ice Sheet (EAIS), West Antarctic Ice Sheet (WAIS) and Antarctic Peninsula Ice Sheet (APIS). Our focus lies in particular on the long-term behaviour and interannual variability.
Antarctic excitation of polar motion
In this section we explain briefly how Antarctic mass-related polar motion excitations can be determined from gravimetry and altimetry derived equivalent water height (EWH) anomalies of the AIS. Individual geophysical mass-related excitations of polar motion are mathematically described by equatorial angular momentum functions \(\chi _1^e\) and \(\chi _2^e\) (Barnes et al. 1983; Gross 2015; Wahr 2005), where the index e denotes which excitation mechanism is described by the equatorial angular momentum functions (e.g. \(AIS\): mass effect of the entire Antarctica, \(EAIS\): mass effect of the East Antarctic Ice Sheet, \(WAIS\): mass effect of the West Antarctic Ice Sheet, \(APIS\): mass effect of the Antarctic Peninsula Ice Sheet, see Fig. 1). These so-called excitation functions are directly related to the fully normalized subsystem-specific degree-2 potential coefficients \(\Delta {\bar{C}}_{21}^e\) and \(\Delta {\bar{S}}_{21}^e\) which are derived from equivalent water heights \(\Delta ewh\) using the global spherical harmonic analysis (GSHA)
$$\begin{aligned} \left. \begin{aligned} \Delta {{\bar{C}}}_{nm}^{e}\\ \Delta {{\bar{S}}}_{nm}^{e} \end{aligned}\right\} =\frac{(1+k'_n)}{(2n+1)}\cdot \frac{3\rho _w}{4\pi a{\bar{\rho }}_e}\cdot \iint _{\sigma }\Delta ewh(\theta ,\lambda ){{\bar{P}}}_{nm}(\cos \theta ) \left\{ \begin{aligned} \cos m\lambda \\ \sin m\lambda \end{aligned}\right\} d\sigma , \end{aligned}$$
where \(\theta\) and \(\lambda\) denote the spherical co-latitude and longitude of the computation point, n and m are the spherical harmonic degree and order, a is the Earth's equatorial radius, \({\bar{\rho }}_e=5517\) kg m\(^{-3}\) is the mean density of the Earth, \(\rho _w=1025\) kg m\(^{-3}\) is the density of sea water, \(k'_n\) are the degree n load Love numbers, \(\bar{P}_{nm}(\cos \theta )\) are the normalized associated Legendre functions and \(\sigma\) denotes the unit sphere (Wahr et al. 1998). Note that, due to the application of the load Love numbers, the elastic deformation of the solid Earth caused by the surface load is taken into account in the potential coefficients. Thus the real parts of the mass-related polar motion excitation functions can be written as
$$\begin{aligned} \left. \begin{aligned} \chi _1^e\\ \chi _2^e \end{aligned}\right\} =-\sqrt{\frac{5}{3}}\frac{GM}{G}a^2\cdot \frac{\alpha }{(C-A')} \left\{ \begin{aligned} \Delta \bar{C}_{21}^e\\ \Delta \bar{S}_{21}^e \end{aligned}\right. , \end{aligned}$$
where G is the gravitational constant, GM is the geocentric gravitational constant, C is the axial moment of inertia of the Earth and \(A'=(A+B)/2\) is the average of the equatorial principal moments of inertia. The constant \(\alpha\) takes into account the rotational deformation of the Earth. It is calculated via
$$\begin{aligned} \alpha ={(C-A')\Omega }/{(C-A_c+\epsilon _cA_c)\sigma _0}\approx 1.598, \end{aligned}$$
where \(\Omega\) is the Earth's mean angular velocity, \(A_c\) is the equatorial moment of inertia of the Earth's core, \(\epsilon _c\) is the ellipticity of the Earth's core and \(\sigma _0=2\pi /T\) is the real part of the Chandler frequency derived from the Chandler period T. The imaginary parts of the polar motion excitation functions can be neglected because they are smaller than \(1\%\) of the real parts. In this study we determine Antarctic polar motion excitations and therefore apply the constants of a tide-free Earth model listed in Table 1. Equation 2 corresponds to that of Gross (2015) and Dobslaw and Dill (2019) within \(1\%\); the differences are due to dissimilar values of the numerical constants. The dimensionless polar motion excitation functions are converted to milliarcseconds (mas) by the factor \(f=360\cdot 60\cdot 60\cdot 1000/(2\pi )\). They describe the position of the excitation axis with respect to the Earth-fixed terrestrial reference frame.
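The mapping from gridded EWH anomalies to polar motion excitation functions (Eqs. 1 and 2, including the conversion factor f) can be summarized in a short numerical sketch. The following Python/NumPy snippet is only an illustration: the grid handling is simplified, and the numerical constants (load Love number \(k'_2\), GM, G, \(C-A'\)) are common reference values inserted here for completeness, not necessarily the tide-free values of Table 1 used in this study.

```python
import numpy as np

# Illustrative constants (see Table 1 for the values actually used in the study)
A_E     = 6378136.6        # Earth's equatorial radius [m]
RHO_E   = 5517.0           # mean density of the Earth [kg m^-3]
RHO_W   = 1025.0           # density of sea water [kg m^-3]
K2_LOAD = -0.301           # degree-2 load Love number (assumed value)
GM      = 3.986004418e14   # geocentric gravitational constant [m^3 s^-2]
G       = 6.67430e-11      # gravitational constant [m^3 kg^-1 s^-2]
C_MIN_A = 2.61e35          # C - A' [kg m^2] (assumed value)
ALPHA   = 1.598            # rotational deformation factor (Eq. 3)
F_MAS   = 360 * 60 * 60 * 1000 / (2 * np.pi)   # radians -> milliarcseconds

def degree2_order1_coefficients(ewh, colat_deg, lon_deg):
    """Discretized GSHA (Eq. 1) for the coefficients Delta C21, Delta S21.

    ewh       : (n_lat, n_lon) EWH anomalies [m of water], zero outside the region
    colat_deg : (n_lat,) spherical co-latitudes of the cell centres [deg]
    lon_deg   : (n_lon,) longitudes of the cell centres [deg]
    """
    theta = np.radians(colat_deg)[:, None]
    lam = np.radians(lon_deg)[None, :]
    dtheta = np.radians(np.abs(np.diff(colat_deg)).mean())
    dlam = np.radians(np.abs(np.diff(lon_deg)).mean())
    # fully normalized associated Legendre function P_21 (4*pi normalization,
    # no Condon-Shortley phase): sqrt(5/3) * 3 * sin(theta) * cos(theta)
    p21 = np.sqrt(5.0 / 3.0) * 3.0 * np.sin(theta) * np.cos(theta)
    dsigma = np.sin(theta) * dtheta * dlam           # surface element on the unit sphere
    pref = (1.0 + K2_LOAD) / 5.0 * 3.0 * RHO_W / (4.0 * np.pi * A_E * RHO_E)
    dC21 = pref * np.sum(ewh * p21 * np.cos(lam) * dsigma)
    dS21 = pref * np.sum(ewh * p21 * np.sin(lam) * dsigma)
    return dC21, dS21

def excitation_functions(dC21, dS21):
    """Eq. 2 plus conversion to milliarcseconds via the factor f."""
    scale = -np.sqrt(5.0 / 3.0) * (GM / G) * A_E**2 * ALPHA / C_MIN_A
    return scale * dC21 * F_MAS, scale * dS21 * F_MAS
```

Applied to the masked gravimetry or altimetry grids described in the following sections, these two functions return the monthly excitation values \(\chi _1^e\) and \(\chi _2^e\) in mas.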
Table 1 Parameters of the Earth used for the determination of Antarctic polar motion excitation functions. Sources: (a) Petit and Luzum (2010), (b) Wilson and Vicente (1990), (c) Göttl (2013), (d) Seitz et al. (2012), (e) Mathews et al. (1991)
Data and data processing
This section provides an overview of the GRACE gravity field models and multi-mission satellite altimetry solutions which are used within this study to determine AIS mass changes and their impact on polar motion.
Gravimetry derived EWH anomalies of the AIS
In this study we use EWH anomalies of the AIS derived from four monthly GRACE gravity field solutions: CSR RL06M (Save et al. 2016; Save 2019), JPL RL06M (Watkins et al. 2015; Wiese et al. 2016, 2019), ITSG-Grace2018 (Mayer-Gürr et al. 2018) and LDCmgm90 (Chen et al. 2019, 2020). As investigated by Chao (2016), one should keep in mind that EWH and surface mascon solutions cannot represent internal processes in a physically meaningful way, but they are an appropriate representation of surficial gravitational processes such as ice mass changes. Important for Earth rotation studies is that these gravity field models are all based on the new linear mean pole model, because this has a significant impact on the potential coefficients \(C_{21}\) and \(S_{21}\) as shown in Göttl et al. (2018). The degree-1 coefficients of all gravity field solutions are replaced by estimates listed in the GRACE Technical Note 13 (Swenson et al. 2008; Sun et al. 2016) to take into account that a redistribution of masses is referred to a coordinate system attached to the Earth's crust, which moves relative to the Earth's center-of-mass frame applied within the GRACE data processing. The motion of the center-of-mass (CM) with respect to the center-of-figure (CF) of the solid Earth surface is defined as geocenter motion. Furthermore, the inaccurate \(C_{20}\) coefficient is replaced by an external satellite laser ranging (SLR) solution from Loomis et al. (2020) (GRACE Technical Note 14). GIA induced mass changes are removed from the total GRACE signal to identify ice mass changes in Antarctica. Martin-Español et al. (2016) compared eight forward and inverse GIA models for Antarctica. They found that GIA models induce uncertainties on GRACE derived present-day ice mass variations of about 60 Gt/yr. In terms of GRACE derived AIS polar motion excitation functions the uncertainties are about 0.2 mas/yr. For consistency reasons we apply the GIA model ICE6G_D (Peltier 2015) to all gravity field solutions because it is already used as the standard model for the GRACE mascon solutions (CSR RL06M, JPL RL06M) and the applied degree-1 coefficients. All gravimetry derived EWH anomalies are interpolated to a regular \(1^\circ \times 1^\circ\) grid and masks for the entire AIS as well as for the subregions EAIS, WAIS and APIS are applied. In general, EWH anomalies for a particular subsystem of the Earth do not conserve mass globally. To ensure mass conservation, the EWH anomalies may be complemented by an additional EWH layer over the ocean. This layer represents the ocean's gravitationally self-consistent response (Clarke et al. 2005) to the changing surface loads over the ice sheet region under investigation, including the rotational feedback of the additional ocean load (Rietbroek et al. 2012). Using Eqs. 1 and 2, the polar motion excitation functions for the specific regions are determined. The focus of this study is on the time period 2003 to 2015, i.e., excluding the "GRACE single ACC" period where the accelerometer (ACC) on board GRACE-B was turned off. The quality of the "GRACE single ACC" gravity field solutions is lower, and a much stronger decorrelation and smoothing is required to identify mass changes (Dahle et al. 2019). Therefore we do not include these GRACE gravity field models in our investigations.
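As noted above, global mass conservation requires complementing the land EWH anomalies with a compensating layer over the ocean. The sketch below uses the simplest possible choice, a spatially uniform ocean layer; the gravitationally self-consistent sea-level response and the rotational feedback described in the text (Clarke et al. 2005; Rietbroek et al. 2012) are deliberately omitted, so this is only an illustration of the bookkeeping, not the procedure actually applied.

```python
import numpy as np

def add_uniform_ocean_layer(ewh_land, area, ocean_mask):
    """Complement land EWH anomalies by a uniform ocean layer so that the
    global water mass integrates to zero (eustatic approximation).

    ewh_land   : (n_lat, n_lon) EWH anomalies over the ice sheet [m], zero elsewhere
    area       : (n_lat, n_lon) grid cell areas [m^2]
    ocean_mask : (n_lat, n_lon) boolean array, True over the ocean
    """
    land_volume = np.sum(ewh_land * area)        # water volume added on land [m^3]
    ocean_area = np.sum(area[ocean_mask])
    ewh = ewh_land.astype(float).copy()
    # spread the opposite volume uniformly over the ocean
    ewh[ocean_mask] -= land_volume / ocean_area
    return ewh
```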
GRACE mascon solutions
The CSR RL06M and JPL RL06M gravity field models are based on geolocated spherical cap mass concentration functions. One advantage of the mass concentration (mascon) parameters over spherical harmonics is a stronger reduction of the noise and errors in the gravity field solutions that arise from the limited longitudinal sampling and orbital configuration of the satellite mission GRACE. Thus no post-processing in terms of destriping or smoothing, such as filtering and forward modeling, is required to reduce the north-south stripes and to estimate meaningful mass anomalies (Andrews et al. 2015). Besides the improved signal-to-noise ratio, the GRACE mascon solutions have a higher spatial resolution and lower leakage errors from neighboring mass processes. The native resolution on an equal-area geodesic grid is \(1^{\circ }\) for the CSR RL06M solutions versus \(3^{\circ }\) for the JPL RL06M solutions, but the provided mass anomaly grid files are given on a regular \(0.25^{\circ }\times 0.25^\circ\) and \(0.5^{\circ }\times 0.5^\circ\) grid, respectively, to represent the mascon grids at coastlines properly. While at CSR (Center for Space Research, Austin) the hexagonal grid tiles along the coastlines are split into two tiles to minimize the leakage between land and ocean signals, at JPL (Jet Propulsion Laboratory, Pasadena) the Coastal Resolution Improvement (CRI) filter has been developed to reduce leakage errors across coastlines in a post-processing step. A further difference between the two mascon solutions is that CSR uses Tikhonov regularization along with the L-ribbon approach (Save et al. 2012) to derive time dependent regularization parameters from GRACE information only, whereas at JPL the correlated errors are removed by introducing realistic geophysical data or models during the solution inversion.
GRACE spherical harmonic solutions
The ITSG-Grace2018 gravity field solution is based on global spherical harmonic potential coefficients. As mentioned before, unconstrained spherical harmonic gravity field solutions suffer from erroneous meridional stripes. In this study we use two data sets of EWH anomalies derived from ITSG-Grace2018 by using two different post-processing methods: (1) the method of tailored sensitivity kernels developed at TU Dresden (TUD) in the frame of the Antarctic Ice Sheet project of the European Space Agency's (ESA) Climate Change Initiative (CCI) (Groh and Horwath 2016) and (2) the filter effect reduction approach on global grid point scale developed in the frame of the German Research Foundation (DFG) project CIEROT (Combination of geodetic space observations for estimating cryospheric mass changes and their impact on Earth rotation) at Deutsches Geodätisches Forschungsinstitut der TU München (DGFI-TUM) (Göttl et al. 2019). We therefore abbreviate these gravimetric EWH anomaly solutions as ITSG-Grace2018/TUD and ITSG-Grace2018/TUM, respectively, according to the applied post-processing approaches. The first method is a dedicated variant of the regional integration approach that implicitly applies tailored sensitivity kernels. The tailored sensitivity kernels are designed to minimize both GRACE error effects and signal leakage. Hence, they are based on information about the variances and covariances of GRACE monthly solution errors as well as on geophysical signals that induce leakage. The ITSG-Grace2018/TUD mass anomalies are derived on a polar stereographic grid with a resolution of \(50 \text {km}\times 50 \text {km}\). The second method has been developed especially for Earth rotation studies. It is independent from geophysical model information and works for several subsystems of the Earth. Here the filter effects (attenuation and leakage) are reduced by applying global grid point gain factors estimated from once and twice filtered GRACE gravity field solutions. In a second step, scaling factors are applied at the level of the polar motion excitation functions to counteract the damping of the gain factors caused by the filtering processes.
Multiple-data spherical harmonic solutions
In contrast to the gravity field solutions mentioned above, LDCmgm90 is a combined gravity field model based not only on GRACE observations but on multiple data sources. The potential coefficients \(C_{20}\), \(C_{21}\) and \(S_{21}\) are based on GRACE and SLR gravity field models as well as information from geophysical models for the atmosphere, oceans and continental hydrosphere, whereas the other potential coefficients are a combination of the CSR and JPL GRACE mascon solutions; for more details see Chen et al. (2017) and Yu et al. (2018). Like the mascon solutions, the multiple-data-based gravity field model does not suffer from meridional error stripes and the leakage effects are significantly reduced. EWH anomalies are obtained by using the global spherical harmonic synthesis (GSHS)
$$\begin{aligned} \Delta ewh(\theta ,\lambda )=\frac{a{\bar{\rho }}_e}{3\rho _w}\sum \limits _{n=0}^{N}\sum \limits _{m=0}^{n}\frac{(2n+1)}{(1+k'_n)}\bar{P}_{nm}(\cos \theta )\cdot \left[ \Delta {\bar{C}}_{nm}^e\cos m\lambda + \Delta {\bar{S}}_{nm}^e\sin m\lambda \right] . \end{aligned}$$
No post-processing in terms of destriping or smoothing is required.
Altimetry derived EWH anomalies of the AIS
In this study we use EWH anomalies of the AIS derived from two multi-mission satellite altimetry solutions: one for monthly gridded surface elevation changes (SEC) in Antarctica determined at TUD (Schröder et al. 2019) and one for monthly integrated mass changes in the AIS drainage basins provided by the University of Leeds (UL) (Shepherd et al. 2019). To ensure meaningful comparisons with the gravimetry derived EWH anomalies, GIA induced height and mass changes, respectively, are removed with the same GIA model ICE6G_D (Peltier 2015). Furthermore, the altimetry derived mass anomalies are interpolated to a \(1^\circ \times 1^\circ\) grid like the GRACE data and masks for the subregions EAIS, WAIS and APIS are applied. To guarantee mass conservation the EWH anomalies are complemented by an additional EWH layer over the ocean as for the GRACE data. Finally the polar motion excitation functions for the specific regions are determined via Eqs. 1 and 2.
Surface elevation changes on grid scale
The monthly TUD SEC data of the AIS are based on altimeter observations from the satellite missions ERS-1, ERS-2, Envisat, ICESat and CryoSat-2 covering the time period 1992 to 2017. In order to obtain comparable results, the focus of this study is on the time period 2003 to 2015 like the GRACE data, where mainly the altimeter missions Envisat, ICESat and CryoSat-2 were operating, covering the globe up to \(81.5^{\circ }\), \(86^{\circ }\) and \(88^{\circ }\) latitude, respectively. While Envisat carries a classical pulse-limited radar altimeter on board (2 km pulse-limited footprint size), CryoSat-2 is equipped with a Delay-Doppler/Synthetic Aperture Radar (SAR) altimeter with a significantly improved along-track footprint size of about 0.3 km. ICESat carries a laser altimeter with high spatial resolution (70 m footprint size) on board. For a consistent combination of the different satellite missions, TUD applied a refined reprocessing of the radar altimetry data, which also contributed to a significantly improved accuracy of the observation data; for details see Schröder et al. (2019). The SEC data are given on a 10 km \(\times 10\) km polar stereographic grid. They originate from changes in the ice flow, changes in the surface firn (e.g. a lack or excess of snowfall) or even elevation changes in the underlying bedrock. The latter are the combined effect of the Earth's elastic response to present-day ice mass changes and GIA induced by past ice mass changes. We use the volume-to-mass conversion according to Schröder et al. (2019)
$$\begin{aligned} \Delta ewh=(SEC-GIA)\cdot s_{ela} \cdot \rho \end{aligned}$$
to estimate EWH anomalies of the AIS. In a first step, the uplift rates due to GIA have to be removed. To maintain consistency, we use the GIA model ICE6G_D (Peltier 2015). In the next step, the reduced SEC data are multiplied by the scaling factor \(s_{ela}=1.0205\) to account for the elastic solid Earth rebound effect (Groh et al. 2012). The corrected SEC data represent the ice sheet thickness changes and are multiplied by a time-invariant density mask \(\rho\), adapted after McMillan et al. (2014), with varying firn density from Ligtenberg et al. (2012). We interpolate the retrieved EWH anomalies to a regular \(1^\circ \times 1^\circ\) grid like the GRACE data and fill the data gaps beyond the southern limit of the satellite orbits (\(1\) to \(21\%\)) and in rugged terrain (\(<2\%\)), where the satellite altimeters fail to track the elevation changes, with data from the ITSG-Grace2018/TUM gravity field solutions. In regions with large scale mass variations due to precipitation changes this method is appropriate, whereas in regions with small scale mass variations due to changes in ice dynamics one has to keep in mind that GRACE underestimates the effect because the signal is smeared over a larger region.
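A minimal sketch of the grid-cell-wise volume-to-mass conversion of Eq. 5 is given below. The final division by the density of sea water is an assumption made here so that the output is expressed in metres of equivalent water height and can be fed directly into the GSHA of Eq. 1; all variable names are illustrative.

```python
import numpy as np

S_ELA = 1.0205    # elastic solid Earth rebound factor (Groh et al. 2012)
RHO_W = 1025.0    # density of sea water [kg m^-3]

def sec_to_ewh(sec, gia_uplift, density_mask):
    """Volume-to-mass conversion of Eq. 5 on grid-cell level.

    sec          : surface elevation change [m]
    gia_uplift   : GIA-induced bedrock uplift [m] (e.g. from ICE6G_D)
    density_mask : time-invariant firn/ice density [kg m^-3]
    """
    surface_density = (sec - gia_uplift) * S_ELA * density_mask   # [kg m^-2]
    # expressed as equivalent water height [m] (assumption: division by rho_w)
    return surface_density / RHO_W
```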
Mass anomalies on basin scale
The monthly integrated mass change time series for the individual AIS drainage basins provided by the University of Leeds are based only on radar altimetry observations from the satellite missions Envisat and CryoSat-2 in the investigation period 2003 to 2015 of this study. This altimeter solution therefore relies on different input data sets (no ICESat data) as well as different data processing, outlier detection and merging strategies. However, the main difference between the two satellite altimetry data sets is the applied volume-to-mass conversion. While Schröder et al. (2019) use a time-invariant density mask, Shepherd et al. (2019) apply modelled information for the different components of the mass changes, i.e. dynamic changes and variations in the firn pack. Furthermore, the mass changes from Shepherd et al. (2019) come as aggregates over individual drainage basins only and not on a polar stereographic grid like the SEC data from Schröder et al. (2019). Thus the UL mass anomalies contain no detailed information about the exact location of the mass changes within a basin. Therefore we distribute the mass changes equally within each basin to obtain gridded mass anomalies, as sketched below. In order to quantify the consequences of this lack of spatial information in the UL mass data, we produced a comparable data set from the TUD data and compared the results. To this end, we formed cumulated basin time series from the gridded data, redistributed the signal equally over the whole basin and calculated the resulting polar motion excitation functions. Compared to the results of the spatially distributed data set (Fig. 2), we see that the temporal structure of the signals is very similar but the trend impact of equally distributed mass changes on polar motion is about \(4\%\) for \(\chi _1\) and \(7\%\) for \(\chi _2\) smaller. Hence, the impact of AIS mass changes on polar motion derived from the UL solutions is slightly underestimated. Equivalent comparisons for APIS, WAIS and EAIS show similar results. The relative standard deviations (RSD) due to equally distributed mass changes are about \(5\%\) for APIS, 2 or \(5\%\) for WAIS (\(\chi _1\), \(\chi _2\)) and 4 or \(6\%\) for EAIS (\(\chi _1\), \(\chi _2\)). We conclude that this approximation is adequate to study AIS polar motion excitations.
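The equal redistribution of the basin-integrated UL mass anomalies onto grid cells can be illustrated as follows; the basin coding and the 1 Gt \(= 10^{12}\) kg conversion are the only ingredients, and all variable names are illustrative.

```python
import numpy as np

RHO_W = 1025.0  # density of sea water [kg m^-3]

def distribute_basin_mass(basin_mass_gt, basin_id, area, basin_codes):
    """Spread basin-integrated mass anomalies equally over all cells of a basin.

    basin_mass_gt : dict mapping basin code -> mass anomaly [Gt]
    basin_id      : (n_lat, n_lon) integer basin code per cell (0 = outside AIS)
    area          : (n_lat, n_lon) grid cell areas [m^2]
    basin_codes   : iterable of basin codes to process
    """
    ewh = np.zeros_like(area, dtype=float)
    for code in basin_codes:
        cells = basin_id == code
        basin_area = np.sum(area[cells])
        # uniform EWH [m] that carries the basin's mass anomaly (1 Gt = 1e12 kg)
        ewh[cells] = basin_mass_gt[code] * 1e12 / (RHO_W * basin_area)
    return ewh
```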
Results, combination and validation
In this section we show Antarctic polar motion excitations derived from GRACE and satellite altimetry data. We combine the individual gravimetry and altimetry solutions to reduce systematic errors and to improve the reliability of the geodetically derived AIS polar motion excitations. To quantify the quality of the individual and combined results, comparisons with solutions from the multiple-data-based gravity field model LDCmgm90 are performed.
Gravimetry and altimetry results
Fig. 1 Antarctic drainage basins and the three ice sheet subregions APIS, WAIS and EAIS (Rignot et al. 2011). The basins Dronning Maud Land (DML) and Enderby Land (EL) are labeled because they are of special interest
Fig. 2 Comparison of the Antarctic polar motion excitation functions \(\chi _1^{AIS}\) and \(\chi _2^{AIS}\) derived from TUD mass changes given on a \(1^\circ \times 1^\circ\) grid (blue) and derived from TUD integrated mass changes in the AIS drainage basins (red). The corresponding correlation coefficients and relative standard deviations (RSD) are provided
Fig. 3 Gravimetry solutions for monthly polar motion excitation functions for AIS, APIS, WAIS and EAIS: ITSG-Grace2018/TUM (blue), ITSG-Grace2018/TUD (green), CSR RL06M (cyan), JPL RL06M (magenta), LDCmgm90 (black) and altimetry solutions: TUD (red), UL (brown). Note the different ranges of the axes due to the different magnitudes of the signals in the different regions
In Fig. 3 all gravimetry and altimetry solutions of this study are shown, not only for the entire AIS but also for the subregions APIS, WAIS and EAIS (according to Rignot et al. (2011), see Fig. 1). In this way the individual contributions of the subregions to Earth rotation variations can be investigated. The GRACE derived excitation functions show a higher agreement among themselves than the satellite altimetry derived excitation functions. This follows from the fact that the altimetry solutions are based on different approaches for the volume-to-mass conversion, which is one of the largest error sources. Sasgen et al. (2019) have shown that the uncertainties of altimetry derived ice mass changes for Antarctica due to the density assumption, retracking and adjustment method are significantly higher than the uncertainties of GRACE derived ice mass changes due to GIA corrections and uncertainties of the potential coefficients. In this study the GRACE solutions are homogenized by a uniform GIA model to eliminate one major source of discrepancy. The uncertainties of the GRACE derived Antarctic excitation functions due to different post-processing strategies are therefore 0.1 mas for \(\chi _1\) and 0.7 mas for \(\chi _2\), which is about 1 and \(6\%\) of the magnitude of the AIS excitation functions, respectively. Uncertainties due to the GIA model amount to 0.2 mas for \(\chi _1\) and 0.3 mas for \(\chi _2\) (RSD: \(4\%\) and \(3\%\)). The uncertainties of altimetry derived AIS excitation functions are significantly higher, about 0.8 mas for \(\chi _1\) and 1.1 mas for \(\chi _2\) (RSD: \(6\%\) and \(15\%\)). In Fig. 4 the mean correlation coefficients and relative standard deviations between the individual gravimetry solutions (10 combinations) and the two altimetry solutions are shown for AIS, APIS, WAIS and EAIS, respectively. The higher the correlation value, the better the agreement of the signal structures. The largest differences can be seen for the APIS. Reasons for this could be that the area of APIS (232,000 km\(^2\)) is significantly smaller than the area of WAIS (2,038,000 km\(^2\)) and EAIS (9,620,000 km\(^2\)) and that the portion of coastal mass variations of the narrow peninsula is very high. It holds that the smaller the catchment area, the larger the leakage effect and the lower the accuracy of the GRACE derived mass variations. Due to the fact that the signal strength is quite different for the four regions AIS (\(-13\) to 19 mas), APIS (\(-2\) to 3 mas), WAIS (\(-9\) to 15 mas), EAIS (\(-4\) to 6 mas), we need to look at the RSD instead of the root mean square (RMS) differences. The RSD is defined as the ratio of the RMS to the maximum value of the arithmetic mean of all gravimetry or altimetry solutions, respectively. Depending on the region, the accordance of the solutions is quite different (see Fig. 4). While polar motion excitations for the WAIS are well determined by both geodetic space observation techniques, the results for APIS and EAIS show significantly higher differences. The altimetry solutions for the EAIS show the largest differences. We assume that these discrepancies result from uncertainties of the volume-to-mass conversion caused by changes of the ice dynamics and in the firn pack.
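The agreement metrics used throughout this section can be reproduced with a few lines. The sketch below follows the RSD definition stated above (RMS difference divided by the maximum of the arithmetic mean series); how the mean series is formed, over all gravimetry or all altimetry solutions, is left to the caller.

```python
import numpy as np

def agreement_metrics(chi_a, chi_b, chi_mean_series):
    """Correlation and relative standard deviation (RSD) between two
    excitation time series.

    chi_a, chi_b    : (K,) monthly excitation values [mas] of the two solutions
    chi_mean_series : (K,) arithmetic mean of all solutions of the same type [mas]
    """
    corr = np.corrcoef(chi_a, chi_b)[0, 1]
    rms = np.sqrt(np.mean((chi_a - chi_b) ** 2))
    rsd = rms / np.max(np.abs(chi_mean_series))
    return corr, rsd
```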
Fig. 4 The mean relative standard deviations (RSD) are displayed along the horizontal axis and the mean correlation coefficients are displayed along the vertical axis. The higher the correlation and the lower the RSD (upper left corner), the higher is the agreement of the gravimetry solutions (blue) among themselves and the altimetry solutions (red), respectively. The agreement between the gravimetry and altimetry solutions is shown in green
Fig. 5 Comparison of polar motion excitation functions for the single EAIS basins derived from the combined GRACE gravity field model LDCmgm90 (black) and derived from satellite altimetry data provided by TUD (red) and UL (brown). The relative standard deviations (RSD) of the UL solutions due to equally distributed mass signals are provided
Comparing the GRACE and satellite altimetry solutions for the EAIS reveals that the trends of the altimeter time series are significantly lower than the trends of the GRACE solutions. Figure 5 shows that these differences result mainly from the basins Jpp-K, K-A, A-Ap, Ap-B and B-C, whereby the basins A-Ap (Dronning Maud Land) and Ap-B (Enderby Land) make the largest contributions. Here the trend of the GRACE-derived polar motion excitation functions is significantly larger than the trend of the satellite altimetry-derived polar motion excitation functions, especially after 2011. One reason for this could be that the TUD altimetry solutions do not take into account that parts of the mass changes in Dronning Maud Land and Enderby Land could have been caused by ice dynamics (e.g. dynamic thickening) (Schröder et al. 2019). The UL altimetry solutions consider ice dynamic effects but they cannot be fully compensated because of uncertainties in the surface mass balance (SMB) models. Furthermore, the UL solutions suffer from uncertainties due to equally distributed mass changes in the individual drainage basins (see RSD values in Fig. 5). Another reason could be that GRACE overestimates the mass signal in EAIS, for example due to uncertainties in GIA models. Martin-Español et al. (2016) show that for EAIS the uncertainties of the GIA models are generally lower than for WAIS and APIS, but due to the large area of the EAIS basins these uncertainties have a larger impact on GRACE-derived integrated mass changes than is the case in WAIS and APIS. Furthermore, they showed that within EAIS the uncertainties of the GIA models are higher for the basins Jpp-K, K-A, Ap-B, B-C, C-Cp and E-Ep. This coincides with our investigations on the basis of the polar motion excitation functions.
Combination of GRACE and satellite altimetry data
In this section gravimetry and altimetry solutions for Antarctic polar motion excitation functions are combined in order to cancel systematic errors of the different processing strategies and to exploit the strengths of the different space geodetic observation techniques. Advantages of the satellite mission GRACE are that ice mass changes can be observed directly for the entire AIS, whereas advantages of the altimeter satellite missions are the significantly higher spatial resolution and the absence of the smearing of spatially detailed information that affects GRACE observations.
The individual input solutions are consistent regarding the determination of the equivalent water heights (temporal and spatial resolution, GIA, elastic deformation) and the Antarctic polar motion excitation functions (Earth parameters), but the geophysical background models of the input solutions are partly different. This has the advantage that systematic errors of the background models can be adjusted within the combination. We determine four combined solutions for Antarctic polar motion excitations: (1) the arithmetic mean of the gravimetry solutions ITSG-Grace2018/TUM, ITSG-Grace2018/TUD, CSR RL06M and JPL RL06M, (2) the arithmetic mean of the altimetry solutions TUD and UL, (3) the arithmetic mean of the gravimetry and altimetry solutions and (4) a weighted combination of the gravimetry and altimetry solutions based on the least squares adjustment approach described in Göttl et al. (2012), to show that an improved excitation time series can be retrieved via combination. Hereinafter we refer to the results from the latter combination approach as "adjusted solutions". The least squares adjustment approach is only described briefly: the stochastic model takes into account the empirical variances of the observations as well as the time dependency of the noise of the unknown parameters. The empirical variances of the gravimetry and altimetry estimated excitation functions are calculated via
$$\begin{aligned} \left( \sigma ^e_{j,p}\right) ^2=\frac{\sum _{k=1}^{K}\left[ \chi _{j,p}^e(t_k)-{\bar{\chi }}_j ^e(t_k)\right] ^2}{K-1}, \end{aligned}$$
where \({\bar{\chi }}_j ^e(t_k)\) is the average of all Antarctic polar motion excitation functions \(\chi _{j,p}^e\) with \(j\in \{1,2\}\) derived from different processing centers and observation data with different processing strategies (\(p\in \{1,...,P\}\)) at the discrete time moment \(t_k\) with \(k=1,...,K\) (total number of months). In Fig. 6 the empirical standard deviations of the GRACE and satellite altimetry solutions are shown. The higher the empirical standard deviation, the lower the weight within the least squares adjustment. It becomes evident that, depending on the region and the polar motion excitation function, the weighting of the gravimetry and altimetry solutions differs significantly. Except for the subregion APIS, the altimetry solutions of UL have the lowest weights, whereas the weights of the TUD satellite altimetry solutions are of the same order as the weights of the gravimetry solutions. Via the empirical auto-covariances the temporal dependency of the noise of the unknown parameters can be considered in the stochastic model. As shown by Göttl (2013), in this way the reliability of the formal errors of the adjusted time series can be improved. No correlations between the time series based on observations from the same measurement technique are taken into account because they are not precisely known. Due to the application of different processing strategies it can be assumed that they are small. Simulations have shown that the estimated variances of the unknown parameters differ only by about \(2\) to \(16\%\) from the true variances. The formal errors of the adjusted GRACE and satellite altimetry solutions are given in Table 2. For EAIS they are larger than for APIS and WAIS.
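A strongly simplified sketch of the weighted combination is given below: the empirical variances of Eq. 6 are turned into inverse-variance weights and applied epoch-wise. The full adjustment of Göttl et al. (2012) additionally models the temporal correlation of the noise via empirical auto-covariances, which is omitted here.

```python
import numpy as np

def empirical_variance(chi_solutions):
    """Eq. 6: empirical variance of each solution with respect to the
    epoch-wise mean of all solutions.

    chi_solutions : (P, K) array, P individual solutions over K monthly epochs
    """
    mean = chi_solutions.mean(axis=0)                       # \bar{chi}(t_k)
    return np.sum((chi_solutions - mean) ** 2, axis=1) / (chi_solutions.shape[1] - 1)

def weighted_combination(chi_solutions):
    """Epoch-wise inverse-variance weighted mean of the individual solutions."""
    var = empirical_variance(chi_solutions)                 # (P,)
    weights = 1.0 / var
    weights /= weights.sum()
    combined = weights @ chi_solutions                      # (K,) adjusted time series
    formal_error = np.sqrt(1.0 / np.sum(1.0 / var))         # simplified formal error
    return combined, formal_error
```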
Fig. 6 Empirical standard deviations for the GRACE derived Antarctic polar motion excitation functions: ITSG-Grace2018/TUM (blue), ITSG-Grace2018/TUD (green), CSR RL06M (cyan) and JPL RL06M (magenta) as well as for the satellite altimetry solutions: TUD (red), UL (brown)
Table 2 Formal errors of the adjusted GRACE and satellite altimetry solutions for Antarctic polar motion excitation functions
Comparison with LDCmgm90
Fig. 7 Differences of the combined solutions for polar motion excitation functions for AIS, APIS, WAIS and EAIS: arithmetic mean of the gravimetry solutions (blue), arithmetic mean of the altimetry solutions (green), arithmetic mean of the gravimetry and altimetry solutions (cyan), weighted adjustment of all gravimetry and altimetry solutions (red) with respect to the combined gravimetry solution LDCmgm90. Note the different ranges of the axes due to the different magnitudes of the signals in the different regions
Due to the lack of accurate geophysical model results for Antarctic polar motion excitation functions, we compare our individual and combined satellite gravimetry and altimetry solutions with estimates from the multiple-data spherical harmonic solution LDCmgm90. As mentioned in the data description section, the potential coefficients \(C_{20}\), \(C_{21}\) and \(S_{21}\) of LDCmgm90 are based not only on several GRACE gravity field solutions but also on SLR gravity field solutions, geophysical model information and precise Earth orientation parameters, whereas the other potential coefficients are based only on the GRACE mascon solutions. Neither altimetry solutions nor spherical harmonic GRACE solutions for potential coefficients beyond degree and order 2 are considered in the LDCmgm90 solution. Therefore the gravimetry and altimetry solutions of this study are as far as possible independent from the LDCmgm90 solution. Figure 7 shows the differences between the combined DGFI-TUM solutions and LDCmgm90. The correlation coefficients and relative standard deviations for all individual and combined gravimetry and altimetry solutions are given in Table 3. We expect that the higher the correlation and the lower the RSD value with respect to our gravimetry, altimetry and combined solutions, the higher the reliability of the resulting time series. We found that, except for the polar motion excitations \(\chi _1^{APIS}\), \(\chi _2^{APIS}\) and \(\chi _2^{EAIS}\), the accordance with the GRACE solutions is very high (RSD: \(2\) to \(12\%\)). The best agreement is obtained with the ITSG-Grace2018/TUD and ITSG-Grace2018/TUM solutions. For APIS the accordance with the altimetry solutions is partly higher (RSD: \(15\) to \(27\%\)) than with the gravimetry solutions (RSD: \(16\) to \(43\%\)), while for EAIS the accordance with the altimetry solutions is lower (RSD: \(8\) to \(60\%\) versus \(2\) to \(12\%\)). It seems that the leakage errors in GRACE mass estimates for the small and narrow Antarctic peninsula are higher than the errors in satellite altimetry mass estimates due to rugged terrain and the density assumption. Except for APIS, the agreement of LDCmgm90 with the TUD altimeter solutions is higher than with the UL altimeter solutions. By determining the arithmetic mean of the individual time series we examine whether the reliability and consistency of the merged solutions can be improved. Considering all regions and both components of the polar motion excitations, the combination of all gravimetry time series reveals a larger overall agreement (RSD: \(4\) to \(21\%\)) compared to the individual gravimetry time series, whereas a combination of the altimetry solutions only slightly improves the agreement with the LDCmgm90 results (RSD: \(2\) to \(56\%\)). By merging the two different space geodetic observation techniques the accordance can be further improved, especially for the contributions of the APIS and WAIS. A reason might be that the uncertainties of the GRACE solutions for these regions are larger due to the greater leakage effects and can therefore be adjusted by the altimetry solutions. These improvements confirm that even a simple combination (arithmetic mean) of GRACE and satellite altimetry data reduces systematic and random errors and increases the robustness of the geodetically derived Antarctic polar motion excitation functions. By applying the weighted least squares adjustment approach described by Göttl et al. (2012), further improvements can be achieved (RSD: \(2\) to \(19\%\)).
The polar motion excitation function \(\chi _2^{EAIS}\) shows the highest uncertainties; this phenomenon requires further investigation. The best results for the entire AIS can be obtained through summation of the weighted adjusted GRACE and altimetry solutions for the subregions APIS, WAIS and EAIS (RSD: \(2\%\)). Thus the results of the two different combination approaches show a high concordance.
Table 3 Correlation coefficient/relative standard deviation (RSD) between GRACE and satellite altimetry solutions for Antarctic polar motion excitation functions and results from the combined gravity field solution LDCmgm90. The best results for the entire AIS are obtained through summation of the weighted adjusted GRACE and altimetry solutions for the subregions APIS, WAIS and EAIS. The best individual and combined results are highlighted in bold
Impact on polar motion
Figure 8 shows the adjusted AIS polar motion excitation functions. While \(\chi _1\) is dominated by mass changes close to the meridians \(0^\circ\) and \(180^\circ\), \(\chi _2\) is dominated by mass changes close to the meridians \(\pm 90^\circ\). Thus, ice mass changes in EAIS have the largest impact on \(\chi _1^{AIS}\) whereas ice mass changes in WAIS have the largest impact on \(\chi _2^{AIS}\). Due to the fact that most of the WAIS sits on rock below sea level, which is not the case for EAIS, it is more sensitive to global warming. Thus WAIS is dominated by a strong ice loss whereas EAIS is still dominated by a smaller ice gain mainly due to snow accumulation. Therefore the magnitude of \(\chi _2^{AIS}\) is about 5 mas higher than the magnitude of \(\chi _1^{AIS}\). Furthermore, ice loss in APIS slightly increases the trend of \(\chi _2^{AIS}\) and decreases the trend of \(\chi _1^{AIS}\).
Fig. 8 Polar motion excitation functions for the entire AIS (black) and for the subregions WAIS (blue), EAIS (green) and APIS (red) derived from gravimetry and altimetry solutions via least squares adjustment. The corresponding piecewise linear trends for the time spans 2003-2005, 2006-2008, 2009-2012 and 2013-2015 are provided
Fig. 9 Geodetically observed polar motion excitation functions after removing GIA (ICE6G_D) and seasonal signals (with periods of 1, 1/2 and 1/3 year). The corresponding trends for the time spans 2003-2005, 2006-2012 and 2013-2015 are provided
In order to study the interannual variability of the AIS polar motion excitation functions, Fig. 8 includes the piecewise linear trend estimations for the time spans \(2003-2005\), \(2006-2008\), \(2009-2012\) and \(2013-2015\). These time spans have been defined based on visual inspection of changes in the slope. We found that for \(\chi _1^{EAIS}\) the trend increased in 2006 from 0.1 mas/yr to 0.8 mas/yr and again in 2009 to 1.1 mas/yr. The accelerated polar motion in \(\chi _1^{EAIS}\) after 2009 coincides with strong accumulation events especially in Dronning Maud Land (DML) and Enderby Land (EL) during the austral winters of 2009 and 2011 (Boening et al. 2012). In 2013 the trend of \(\chi _1^{EAIS}\) declined to 0.6 mas/yr. For \(\chi _1^{WAIS}\) the trend began to rise slightly in 2009 from 0.4 mas/yr to 0.9 mas/yr. Here no significant decrease of the trend could be recognized in 2013. The trend behavior of \(\chi _1^{APIS}\) is almost constant during the study period and amounts to \(-0.1\) mas/yr. For \(\chi _2\) the trends feature different characteristics. For \(\chi _2^{WAIS}\) the trend began to rise already in 2006 from 0.6 mas/yr to 1.2 mas/yr and again in 2009 to 2.3 mas/yr. In 2013 the trend started to drop slightly to 1.9 mas/yr. One reason for this is that, due to global warming and shifts in wind patterns, warmer ocean water especially in the Amundsen Sea sector induces a stronger deglaciation (Holland et al. 2019). In contrast to all other time series, the trend of \(\chi _2^{EAIS}\) does not increase over time but changes direction in 2006 from 0.9 mas/yr to \(-0.6\) mas/yr and again in 2009 and 2013 to 0.4 mas/yr and \(-0.5\) mas/yr, respectively. The trend behavior of \(\chi _2^{APIS}\) is nearly constant during the study period and amounts to 0.3 mas/yr.
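The piecewise linear trends quoted above can be reproduced by fitting an ordinary least-squares line to each sub-interval; a minimal sketch with the breakpoints used in this study is given below.

```python
import numpy as np

def piecewise_trends(t_years, chi, breakpoints=(2003, 2006, 2009, 2013, 2016)):
    """Ordinary least-squares trend [mas/yr] in each sub-interval.

    t_years : (K,) decimal years of the monthly epochs
    chi     : (K,) excitation function values [mas]
    """
    trends = {}
    for start, stop in zip(breakpoints[:-1], breakpoints[1:]):
        mask = (t_years >= start) & (t_years < stop)
        slope, _ = np.polyfit(t_years[mask], chi[mask], 1)   # slope and intercept
        trends[(start, stop - 1)] = slope                    # e.g. key (2003, 2005)
    return trends
```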
In Fig. 9 the piecewise linear trends of the geodetically observed polar motion excitation functions are shown. They are derived from the precise pole coordinates x and y provided by the International Earth Rotation and Reference Systems Service (IERS) via the EOP (Earth Orientation Parameters) 14 C04 time series. To ensure consistency, we remove the GIA induced linear drift signal from the observed polar motion excitations with the same GIA model ICE6G_D (Peltier 2015) as used within the determination of the AIS polar motion excitation functions. The trend increased significantly in 2006 and started to decrease around 2013. We found that these turning points coincide well with the turning points of the AIS polar motion excitation functions.
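The removal of the seasonal signals (periods of 1, 1/2 and 1/3 year, see Fig. 9) from an excitation time series can be done by a simple harmonic least-squares fit; the following sketch co-estimates a bias and a linear term so that the trend itself is not absorbed by the seasonal model. The GIA drift is assumed to be subtracted beforehand, as described above.

```python
import numpy as np

def remove_seasonal(t_years, chi, periods=(1.0, 0.5, 1.0 / 3.0)):
    """Least-squares removal of seasonal harmonics (periods in years) from an
    excitation time series; bias and linear term are co-estimated but kept."""
    cols = [np.ones_like(t_years), t_years - t_years.mean()]
    for p in periods:
        omega = 2.0 * np.pi / p
        cols += [np.cos(omega * t_years), np.sin(omega * t_years)]
    A = np.column_stack(cols)
    x, *_ = np.linalg.lstsq(A, chi, rcond=None)
    seasonal = A[:, 2:] @ x[2:]          # harmonic part only
    return chi - seasonal
```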
It holds that the drift of the excitation pole equals the drift of the celestial intermediate pole (CIP) (Gross 2015). According to the adjusted AIS polar motion excitation functions, the pole vector drifts along \(59^{\circ }\) East longitude with an amplitude of 2.7 mas/yr due to AIS ice mass changes during the study period \(2003-2015\). These findings are close to the results in Adhikari and Ivins (2016). The observed pole position vector drifts along \(46^{\circ }\) East longitude at a rate of 6 mas/yr after removing the GIA signal. Thus AIS polar motion excitations alone explain about \(45\%\) of the observed magnitude of the polar motion vector (excluding GIA) and the AIS induced pole drift deviates only about \(13^{\circ }\) from the GIA reduced observed pole drift.
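The amplitude and direction of the implied mean pole drift follow directly from the linear rates of the two excitation components; component rates of roughly 1.4 mas/yr in \(\chi _1\) and 2.3 mas/yr in \(\chi _2\) (inferred here for illustration only) reproduce the 2.7 mas/yr along \(59^{\circ }\) East quoted above.

```python
import numpy as np

def pole_drift(chi1_rate, chi2_rate):
    """Amplitude [mas/yr] and East longitude [deg] of the mean pole drift
    implied by linear rates of the excitation functions."""
    amplitude = np.hypot(chi1_rate, chi2_rate)
    east_longitude = np.degrees(np.arctan2(chi2_rate, chi1_rate))
    return amplitude, east_longitude

# example: pole_drift(1.4, 2.3) -> (about 2.7 mas/yr, about 59 deg East)
```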
In this study we determined for the first time Antarctic polar motion excitation functions from two independent data sources, namely from GRACE time variable gravity fields and from multi-mission satellite altimetry surface elevation changes. To assess the accuracy of the gravimetry derived polar motion excitations we use different GRACE gravity field models based on mass concentration parameters (CSR RL06M, JPL RL06M) or on spherical harmonics (ITSG-Grace2018). The combined gravity field model LDCmgm90, which is based on multiple data such as GRACE, SLR, geophysical models and EOP, is used as reference time series. While the differences of the gravimetry solutions for WAIS are small (\(3\%\)), the differences for APIS are significantly higher (\(25\%\)). Reasons for this are that the area of the Antarctic peninsula is significantly smaller and the portion of coastal mass changes is very high. To assess the accuracy of the altimetry derived Antarctic polar motion excitations we use monthly gridded SEC data (TUD) and integrated mass change time series for the individual AIS basins (UL). For WAIS the accordance is high (\(RSD=6\%\)) whereas for EAIS and APIS the uncertainties are significantly higher (\(20\) to \(30\%\)). One reason for this is that the narrow peninsula is extremely rugged; another reason might be that parts of the mass changes in EAIS (Dronning Maud Land, Enderby Land) may have been caused by ice dynamic effects which are difficult to take into account. We show that due to the combination of the gravimetry and altimetry solutions the systematic errors of the data processing can be reduced and the robustness of the geodetically derived Antarctic polar motion excitation functions can be increased. The agreement with the combined gravity field solution LDCmgm90 can be significantly improved. The RSD values for the AIS amount to only \(2\%\), although the combination approaches are based on different input data: SLR, geophysical models and EOP versus satellite altimetry. Thus the impact of ice mass changes in Antarctica on polar motion can be determined accurately through the combination of different geodetic space observation techniques. Based on these investigations we found that ice mass changes in EAIS have a larger impact on the x pole coordinate whereas ice mass changes in WAIS have a larger impact on the y pole coordinate. The trend behavior of the three ice sheet subregions EAIS, WAIS and APIS is quite different. While APIS polar motion excitations show no significant interannual variations due to global climate change during the study period, the trends of the WAIS and EAIS polar motion excitations increased in 2006 and again in 2009 and started to decline slightly in 2013. Within this study we found that AIS mass changes induce the pole position vector to drift along \(59^{\circ }\)E by 2.7 mas/yr during the study period \(2003-2015\), which explains about \(45\%\) of the observed magnitude of the polar motion vector (excluding GIA). In comparison, mass variations of Greenland and the continental hydrosphere cause the pole position vector to drift approximately along \(38^{\circ }\)W by 3.6 mas/yr and along \(80.5^{\circ }\)E by 2.4 mas/yr (Adhikari and Ivins 2016), respectively, while ice mass changes of inland glaciers and the collective mass redistribution of the atmosphere and oceans have no significant impact on the drift of the polar motion vector.
Therefore it is important to reduce not only atmospheric and oceanic signals from observed polar motion, as is usually done, but also the contributions of Antarctica and Greenland in order to identify, for example, hydrological signals in observed polar motion. While there exist adequate geophysical models for the atmosphere and the oceans, up to now there exist no adequate models for the cryosphere. The current study contributes to overcoming this deficiency. Furthermore, these improved AIS polar motion excitation time series can be used to improve the modelling of ice mass redistribution in Antarctica.
All gravimetry and altimetry derived Antarctic polar motion excitation functions are available from the corresponding author on reasonable request.
A Correction to this paper has been published: https://doi.org/10.1186/s40623-021-01460-x
Adhikari S, Ivins ER (2016) Climate-driven polar motion: 2003–2015. Sci Adv 2:e1501693
Andrews SB, Moore P, King MA (2015) Mass change from GRACE: a simulated comparison of Level-1B analysis techniques. Geophys J Int 200:503–518
Barletta VR, Sørensen LS, Forsberg R (2013) Scatter of mass change estimates at basin scale for Greenland and Antarctica. Cryosphere 7(5):1411–1432
Barnes RTH, Hide R, White AA, Wilson CA (1983) Atmospheric angular momentum fluctuations, length-of-day changes and polar motion. Proc R Soc London Ser A 387:31–73
Boening C, Lebsock M, Landerer F, Stephens G (2012) Snowfall-driven mass change on the East Antarctic ice sheet. Geophys Res Lett 39:L21501
Chao BF (2016) Caveats on the equivalent water thickness and surface mascon solutions derived from the GRACE satellite-observed time-variable gravity. J Geod 90:807–813
Chen JL, Wilson CR, Ries JC, Tapley BD (2013) Rapid ice melting drives Earth's pole to the east. Geophys Res Lett 40:2625–2630
Chen W, Li J, Ray J, Cheng M (2017) Improved geophysical excitations constrained by polar motion observations and GRACE/SLR time-dependent gravity. Geodesy Geodyn 8:377–388
Chen W, Luo J, Ray J, Yu N, Li J (2019) Multiple-data-based monthly geopotential model set LDCmgm90. Sci Data 6:228
Chen W, Luo J, Ray J, Yu N, Li J (2020) LDCmgm90 monthly geopotential model set with separate GIA model. https://www.nature.com/articles/s41597-019-0239-7
Clarke PJ, Lavallée DA, Blewitt G, van Dam TM, Wahr JM (2005) Effect of gravitational consistency and mass conservation on seasonal surface mass loading models. Geophys Res Lett 32:L08306
Dahle C, Murböck M, Flechtner F, Dobslaw H, Michalak G, Neumayer H, Abrykosov O, Reinhold A, König R, Sulzbach R, Förste C (2019) The GFZ GRACE RL06 Monthly Gravity Field Time Series: processing details and quality assessment. Remote Sens 11(18):2116
Dobslaw H, Dill R (2019) Product Description Document ESMGFZ EAM. Effective angular momentum functions from Earth System Modelling at GeoForschungsZentrum in Potsdam. http://rz-vm115.gfz-potsdam.de:8080/repository/entry/show?entryid=e8e59d73-c0c2-4a9d-b53b-f2cd70f85e28
Göttl F, Schmidt M, Heinkelmann R, Savcenko R, Bouman J (2012) Combination of gravimetric and altimetric space observations for estimating oceanic polar motion excitations. J Geophys Res 117:C10022
Göttl F (2013) Kombination geodätischer Raumbeobachtungen zur Bestimmung von geophysikalischen Anregungsmechanismen der Polbewegung. In: Deutsche Geodätische Kommission, C series 741. Verlag der Bayerischen Akademie der Wissenschaften, München. https://mediatum.ub.tum.de/doc/1301105/1301105.pdf
Göttl F, Schmidt M, Seitz F, Bloßfeld M (2015) Separation of atmospheric, oceanic and hydrological polar motion excitation mechanisms based on a combination of geometric and gravimetric space observations. J Geod 89:377–390
Göttl F, Schmidt M, Seitz F (2018) Mass-related excitation of polar motion: an assessment of the new RL06 GRACE gravity field models. Earth Planets Space 70:195
Göttl F, Murböck M, Schmidt M, Seitz F (2019) Reducing filter effects in GRACE-derived polar motion excitations. Earth Planets Space 71:117
Groh A, Ewert H, Scheinert M, Fritsche M, Rülke A, Richter A, Rosenau R, Dietrich R (2012) An investigation of glacial isostatic adjustment over the Amundsen Sea sector, West Antarctica. Global Planet Change 98–99:45–53
Groh A, Horwath M (2016) The method of tailored sensitivity kernels for GRACE mass change estimates. Geophys Res Abstr 18:EGU2016-12065
Groh A, Horwath M, Horvath A, Meister R, Sørensen LS, Barletta VR, Forsberg R, Wouters B, Ditmar P, Ran J, Klees R, Su X, Shang K, Guo J, Shum CK, Schrama E, Shepherd A (2019) Evaluating GRACE mass change time series for the Antarctic and Greenland ice sheet - Methods and results. Geosciences 9:415
Gross R (2015) Earth rotation variations - long period. In: Schubert G (ed) Treaties on geophysics, vol 3E2. Elsevier, Heidelberg, pp 215–261
Holland PR, Bracegirdle TJ, Dutrieux P, Jenkins A, Steig EJ (2019) West Antarctic ice loss influenced by internal climate variability and anthropogenic forcing. Nat Geosci 12:718–724
Horwath M, Dietrich R (2009) Signal and error in mass change inferences from GRACE: the case of Antarctica. Geophys J Int 177(3):849–864
Jin S, Chamber DP, Tapley BD (2010) Hydrological and oceanic effects on polar motion from GRACE and models. J Geophys Res 115:B02403
Ligtenberg S, Helsen M, van den Broeke M (2012) An improved semi-empirical model for the densification of Antarctic firn. Cryosphere 5:809–819
Loomis BD, Rachlin KE, Wiese DN, Landerer FW, Luthcke SB (2020) Replacing GRACE/GRACE-FO with satellite laser ranging: Impacts on Antarctic Ice Sheet mass change. Geophys Res Lett 47:e2019GL085488
Luthcke SB, Sabaka TJ, Loomis BD, Arendt AA, McCarthy JJ, Camp J (2013) Antarctica, Greenland and Gulf of Alaska land-ice evolution from an iterated GRACE global mascon solution. J Glaciol 59(216):613–631
Martin-Español A, King MA, Zammit-Mangion A, Andrews SB, Moore P, Bamber JL (2016) An assessment of forward and inverse GIA solutions for Antarctica. J Geophys Res Solid Earth 121:6947–6965
Mathews PM, Buffett BA, Herring TA, Shapiro II (1991) Forced nutations of the Earth: influence of inner core dynamics, 2. Numerical results and comparisons. J Geophys Res 96:8243–8257
Mayer-Gürr T, Behzadpour S, Ellmer M, Kvas A, Klinger B, Strasser S, Zehentner N (2018) ITSG-Grace2018 - Monthly, Daily and Static Gravity Field Solutions from GRACE. https://doi.org/10.5880/ICGEM.2018.003
McMillan M, Shepherd A, Sundal A, Briggs K, Muir A, Ridout A, Hogg A, Wingham D (2014) Increased ice losses from Antarctica detected by CryoSat-2. Geophys Res Lett 41:3899–3905
Meyrath T, van Dam T (2016) A comparison of interannual hydrological polar motion excitation from GRACE and geodetic observations. J Geod 99:1–9
Nastula J, Ponte RM, Salstein DA (2007) Comparison of polar motion excitation series derived from GRACE and from analyses of geophysical fluids. Geophys Res Lett 34:L11306
Peltier WR (2015) The history of Earth's rotation: impacts of deep Earth physics and surface climate variability. In: Schubert G (ed) Treatise on Geophysics, vol 9E2. Elsevier, Oxford, pp 221–279
Petit G, Luzum B (2010) IERS Conventions (2010), IERS Technical Note 36. Verlag des Bundesamts für Kartographie und Geodäsie, Frankfurt a. M. (ISBN 978-3-89888-989-6)
Rietbroek R, Brunnabend SE, Kusche J, Schröter J (2012) Resolving sea level contributions by identifying fingerprints in time-variable gravity and altimetry. J Geodyn 59–60:72–81
Rignot E, Mouginot J, Scheuchl B (2011) Antarctic grounding line mapping from differential satellite radar interferometry. Geophys Res Lett 38:L10504
Sasgen I, Konrad H, Helm V, Grosfeld K (2019) High-Resolution Mass Trends of the Antarctic Ice Sheet through a Spectral Combination of Satellite Gravimetry and Radar Altimetry Observations. Remote Sens 11:144
Save H, Bettadpur S, Taply BD (2012) Reducing errors in the GRACE gravity solutions using regularization. J Geod 86(9):695–711
Save H, Bettadpur S, Taply BD (2016) High resolution CSR GRACE RL05 mascons. J Geophys Res Solid Earth 1:121
Save H (2019) CSR GRACE RL06 Mascon Solutions. https://doi.org/10.18738/T8/UN91VR
Schröder L, Horwath M, Dietrich R, Helm V, van den Broeke MR, Ligtenberg SRM (2019) Four decades of Antarctic surface elevation changes from multi-mission satellite altimetry. Cryosphere 13:427–449
Seitz F, Kirschner S, Neubersch D (2012) Determination of the Earth's pole tide Love number k2 from observations of polar motion using an adaptive Kalman filter approach. J Geophys Res 117:B09403
Seoane L, Nastula J, Bizourad C, Gambis D (2011) Hydrological excitation of polar motion derived from GRACE gravity field solutions. Int J Geophys 1:174396
Shepherd A, Gilbert L, Muir AS, Konrad H, McMillan M, Slater T, Briggs KH, Sundal AV, Hogg AE, Engdahl ME (2019) Trends in Antarctic Ice Sheet elevation and mass. Geophys Res Lett 46:8174–8183
Su X, Shum C, Guo J, Howat I, Kuo C, Jezek K, Duan J, Yi Y (2018) High-resolution interannual mass anomalies of the antarctic ice sheet by combining GRACE Gravimetry and ENVISAT altimetry. Geosci Remote Sens 56:539–546
Sun Y, Riva R, Ditmar P (2016) Optimizing estimates of annual variations and trends in geocenter motion and J2 from a combination of GRACE data and geophysical models. J Geophys Res Solid Earth 1:121
Swenson S, Chambers D, Wahr J (2008) Estimating geocenter variations from a combination of GRACE and ocean model output. J Geophys Res 113:B08410
Velicogna I, Wahr J (2013) Time-variable gravity observations of ice sheet mass balance: precision and limitations of the GRACE satellite data. Geophys Res Lett. 40:3055–3063
Wahr J, Molenaar M, Bryan F (1998) Time variability of the Earth's gravity field: hydrological and oceanic effects and their possible detection using GRACE. J Geophys Res 103:30205–30229
Wahr J (2005) Polar motion models: Angular momentum approach. In: Plag HP Chao B Gross R van Dam T (eds) Proceedings of the Workshop: Forcing of polar motion in the Chandler frequency band – A contribution to understanding international climate changes. Cahiers du Centre Europeen de Geodynamique et de Seismologie, Luxembourg, p 89-102
Watkins MM, Wiese DN, Yuan DN, Boening C, Landerer FW (2015) Improved methods for observing Earth's time variable mass distribution with GRACE using spherical cap mascons. J Geophys Res Solid Earth 1:120
Wiese DN, Landerer FW, Watkins MM (2016) Quantifying and reducing leakage errors in the JPL RL05M GRACE mascon solution. Water Resour Res 52:7490–7502
Wiese DN, Yuan DN, Boening C, Landerer FW, Watkins MM (2019) JPL GRACE and GRACE-FO Mascon Ocean, Ice, and Hydrology Equivalent Water Height Coastal Resolution Improvement (CRI) Filtered Release 06 Version 02. https://doi.org/10.5067/TEMSC-3JC62
Wilson CR, Vicente RO (1990) Maximum likelihood estimates of polar motion parameters, in Variations in Earth rotation. In: McCarthy DD Carter WE (eds) Variations in Earth rotation. AGU Geophysical Monograph Series, vol 59, Vancouver, p 151-155
Wingham DJ, Shepherd A, Muir A, Marshall GJ (2006) Mass balance of the Antarctic ice sheet. Phil Trans R Soc A 364:1627–1635
Yu N, Lie JC, Ray J, Chen W (2018) Improved geophysical excitation of length-of-day constrained by Earth orientation parameters and satellite gravimetry products. Geophys J Int 214:1633–1651
Zwally H, Li J, Robbins J, Saba J, Yi D, Brenner A (2015) Mass gains of the Antarctic ice sheet exceed losses. J Glaciol 61(230):1019–1036
The GRACE satellite mission was operated and maintained by NASA (National Aeronautics and Space Administration) and DLR (Deutsches Zentrum für Luft- und Raumfahrt). The GRACE gravity field solutions were provided by the GRACE science teams of CSR and JPL as well as by the ITSG of the Graz University of Technology. We are grateful to the University of Leeds for providing satellite altimetry-derived mass variation time series of the AIS basins. Thanks to Wei Chen for the LDCmgm90 data and fruitful discussions. Open access was supported by the TUM Open Access Publishing Funds. We also thank the two reviewers for their suggestions and comments, which greatly improved the manuscript.
Open Access funding enabled and organized by Projekt DEAL. These studies are performed in the framework of the project CIEROT (Combination of geodetic space observations for estimating cryospheric mass changes and their impact on Earth rotation) funded by the German Research Foundation (DFG) under grant No. GO 2707/1-1. AG acknowledges support by ESA through the Climate Change Initiative (CCI) projects Antarctic Ice Sheet CCI+ (contract number 4000126813/19/I-NB).
Technische Universität München, Deutsches Geodätisches Forschungsinstitut, Arcisstraße 21, 80333, Munich, Germany
Franziska Göttl, Michael Schmidt & Florian Seitz
Technische Universität Dresden, Institut für Planetare Geodäsie, Helmholzstraße 10, 01062, Dresden, Germany
Andreas Groh
Federal Agency for Cartography and Geodesy, Karl-Rothe-Straße 10-14, 04105, Leipzig, Germany
Ludwig Schröder
Franziska Göttl
Florian Seitz
FG determined all Antarctic polar motion excitation functions, performed all analysis and wrote the majority of the paper. LS provided multi-mission satellite altimetry data and help with the volume-to-mass conversion while AG provided equivalent water heights of AIS derived from the ITSG-Grace2018 gravity field models and help with the mass conservation. Both assisted by the interpretation of the time series. MS and FS contributed to the writing of the manuscript. All authors participated in the discussion of the results and agreed on the content. All authors read and approved the final manuscript.
Correspondence to Franziska Göttl.
The original online version of this article was revised: The graphical abstract has been added.
Göttl, F., Groh, A., Schmidt, M. et al. The influence of Antarctic ice loss on polar motion: an assessment based on GRACE and multi-mission satellite altimetry. Earth Planets Space 73, 99 (2021). https://doi.org/10.1186/s40623-021-01403-6
Antarctic polar motion excitations
6. Geodesy
|
CommonCrawl
|
$5 in 1928 → 2022
$5 in 1928 is worth $81.27 in 2022
$5 in 1928 has the same purchasing power as $81.27 in 2022. Over the 94 years this is a change of $76.27.
The average inflation rate of the dollar between 1928 and 2022 was 3.03% per year. The cumulative price increase of the dollar over this time was 1,525.43%.
The value of $5 from 1928 to 2022
So what does this data mean? It means that prices in 2022 are 16.25 times higher than the average prices in 1928. A dollar in 2022 can buy 6.15% of what it could buy in 1928.
These inflation figures use the Bureau of Labor Statistics (BLS) consumer price index to calculate the value of $5 between 1928 and 2022.
The inflation rate for 1928 was -1.72%. Over the 94 years between 1928 and 2022, prices rose at an average rate of about 3.03% per year.
The Buying Power of $5 in 1928
We can look at the buying power equivalent for $5 in 1928 to see how much you would need to adjust for in order to beat inflation. For 1928 to 2022, if you started with $5 in 1928, you would need to have $81.27 in 2022 to keep up with inflation rates.
So if we are saying that $5 is equivalent to $81.27 over time, you can see the core concept of inflation in action. The "real value" of a single dollar decreases over time. It will pay for fewer items at the store than it did previously.
In the chart below you can see how the dollar loses value over the 94 years.
Value of $5 Over Time
[Year-by-year table of the inflation-adjusted value of $5 and the corresponding annual inflation rate, from $5.00 in 1928 to $81.27 in 2022.]
If you're interested to see the effect of inflation on various 1928 amounts, the table below shows how much each amount would be worth today based on the price increase of 1,525.43%.
$1.00 in 1928 $16.25 in 2022
$10.00 in 1928 $162.54 in 2022
$100.00 in 1928 $1,625.43 in 2022
$1,000.00 in 1928 $16,254.27 in 2022
$10,000.00 in 1928 $162,542.69 in 2022
$100,000.00 in 1928 $1,625,426.90 in 2022
$1,000,000.00 in 1928 $16,254,269.01 in 2022
Calculate Inflation Rate for $5 from 1928 to 2022
To calculate the inflation rate of $5 from 1928 to 2022, we use the following formula:
$$ \dfrac{\text{1928 USD value} \times \text{CPI in 2022}}{\text{CPI in 1928}} = \text{2022 USD value} $$
We then replace the variables with the historical CPI values. The CPI in 1928 was 17.1; the CPI in 2022 implied by the figures above is about 277.95.
$$\dfrac{ \$5 \times 277.95 }{ 17.1 } \approx \text{ \$81.27 } $$
$5 in 1928 has the same purchasing power as $81.27 in 2022.
To work out the total inflation rate for the 94 years between 1928 and 2022, we can use a different formula:
$$ \dfrac{\text{CPI in 2022 } - \text{ CPI in 1928 } }{\text{CPI in 1928 }} \times 100 = \text{Cumulative rate for 94 years} $$
$$ \dfrac{ 277.95 - 17.1 }{ 17.1 } \times 100 \approx \text{ 1,525.43\% } $$
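As a quick cross-check of the arithmetic, here is a minimal Python sketch of the two formulas above. The 2022 CPI used here (about 277.95) is not quoted on this page; it is simply the value implied by the stated $5 to $81.27 conversion and the 1928 CPI of 17.1.

```python
cpi_1928 = 17.1
cpi_2022 = 277.95  # assumed: implied by 17.1 * (81.27 / 5), not an official BLS figure

def adjust(amount_1928):
    """Convert a 1928 dollar amount into its 2022 equivalent."""
    return amount_1928 * cpi_2022 / cpi_1928

cumulative_rate = (cpi_2022 - cpi_1928) / cpi_1928 * 100        # about 1,525 %
average_rate = ((cpi_2022 / cpi_1928) ** (1 / 94) - 1) * 100    # about 3.0 % per year

print(f"$5.00 in 1928 -> ${adjust(5):.2f} in 2022")
print(f"cumulative inflation 1928-2022: {cumulative_rate:.2f} %")
print(f"average annual inflation: {average_rate:.2f} %")
```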
"$5 in 1928 is worth $81.27 in 2022". StudyFinance.com. Accessed on January 28, 2022. https://studyfinance.com/inflation/us/1928/5/.
"$5 in 1928 is worth $81.27 in 2022". StudyFinance.com, https://studyfinance.com/inflation/us/1928/5/. Accessed 28 January, 2022
$5 in 1928 is worth $81.27 in 2022. StudyFinance.com. Retrieved from https://studyfinance.com/inflation/us/1928/5/.
|
CommonCrawl
|
Robotics and Biomimetics
Systematic engineering design helps creating new soft machines
Arthur Seibel
Lars Schiller
First Online: 26 October 2018
Soft robotics is an emerging field in the robotics community which deals with completely new types of robots. However, new soft robotic designs often depend on the ingenuity of the engineer rather than being systematically derived. For this reason, in order to support the engineer in the design process, we present a design methodology for general technical systems in this paper and explain it in depth in the context of soft robotics. The design methodology consists of a combination of state-of-the-art engineering concepts that are arranged in such a way that the engineer is guided through the design process. The effectiveness of a systematic approach in soft robotics is illustrated on the design of a new gecko-inspired, climbing soft robot.
Soft robotics Design methodology Gecko-inspired robot Climbing robot
The online version of this article ( https://doi.org/10.1186/s40638-018-0088-4) contains supplementary material, which is available to authorized users.
Soft robotics is a fast-growing field in the robotics sciences. This rather young discipline deals with robots made entirely of soft materials (such as silicones) or materials with soft behavior (such as granules). Often, but not necessarily, the design of a soft robot is biologically inspired. Examples are the reproduction of the tail of a fish [1] or the tentacle of an octopus [2].
Due to their flexibility, soft robots have many advantages over conventional, hard robots. For example, deformable structures play an important role in applications with high uncertainty, such as movement in impassable and unknown environments [3, 4] or gripping of unknown objects [5, 6, 7, 8]. The softness also allows a safe contact with living organisms without a potential risk of injuries [6]. In addition, deformable structures are able to store and release energy—a beneficial property for energy-efficient movement [9].
Typically, in the soft robotics literature, only the realization of the introduced system is presented, but the concrete path to this solution is not further specified, leaving the designer of new soft machines without proper guidance. For this reason, we introduce a general design methodology for technical systems in this paper and describe it in detail in the context of soft robotics. The methodology consists of several basic engineering concepts that are structured to guide the engineer through the design process. The effectiveness of this methodology in creating new solutions in soft robotics is demonstrated on the design of a climbing soft robot inspired by the gecko.
The proposed design methodology is illustrated in Fig. 1. The design process starts with defining the task, followed by searching for a suitable solution. Then, based on this solution, the conceptual design of the soft robot is carried out, whose functionality is examined by a mechanical model. Afterward, the functional concept is elaborated in the embodiment design stage, and the design process finally ends with the realization of the robot. As indicated in the figure, this process is iterative, in which steps can be merged, omitted, and skipped.
General design methodology for technical systems in the context of soft robotics
Types of soft actuators in resting (top) and actuated states (bottom). Red: stretchable part, black: non-stretchable part(s)
Gait pattern of the gecko during wall climbing (figure adapted from [58]). Gray circles represent feet attachment
In the broad engineering sense, a task means recognizing a problem or need and translating it into a technical goal. A typical task that can be well addressed within the framework of soft robotics is the design of a technical system whose intelligence is not located outside the body but is integrated into the structure itself, also known as embodied intelligence [10]. This property reduces the need for complicated sensor systems and feedback controllers and provides a significant advantage over hard robotics. Another typical task that can be well solved in a soft way is to create a robot that does not harm its environment and is also not harmed by external influences.
Solution search
A typical way for finding new solutions in soft robotics is by means of analysis of natural systems [11]. An important part of such an approach is finding a sufficient level of abstraction of the underlying principles and transferring them into a robot system by using state-of-the-art technology [12]. Other possible ways for finding suitable solutions for new soft robotic designs are by transferring already existing technical systems into soft counterparts or by using creativity methods [13].
Arrangement of bending actuators (red) and suction cups (white) to form our soft robot
Mechanical model of our soft robot in a slightly (\(\gamma =20^\circ \), front) and a fully actuated state (\(\gamma =90^\circ \), back). Filled circles represent attached feet
Gait pattern of our soft robot. Filled circles represent feet attachment
Partially cut CAD models of the bending actuator designs used for the torso (a) and the legs (b) of our soft robot. a Actuator design without side walls, b actuator design with side walls (the wall thickness is 1 mm)
A technical system can generally be described by the type, number, arrangement, and connection of elements, cf. Fig. 1. Here, the term "element" refers to a part of the system that fulfills a certain function. A function, in turn, is realized by a suitable working principle [13]. The result of this design stage is a concept of the soft robot, whose focus lies on its functionality.
Typical elements that are important for soft robotics applications are soft actuators. Basically, a soft actuator consists of a stretchable part and one (or more) non-stretchable (but bendable) part(s); see Fig. 2. Depending on the function, the non-stretchable part of the soft actuator can be arranged as follows. For bending, it is placed on the outer surface of the actuator (Fig. 2, left). For extending, it is integrated into the actuator, for example, in a zigzag manner (Fig. 2, center). And for twisting, it is wrapped around the actuator (Fig. 2, right). The actuation is realized, for example, by using length-variable tendons or by dividing the stretchable part into one (or more) inflatable chamber(s) [14]. Furthermore, other principles for soft actuators exist to perform, for example, curling [6, 15], contracting [16, 17], rotating [18, 19, 20], or other complex motions [21, 22].
Other important elements in soft robotics are suction cups. Three types of suction cups can be currently found in the literature: suction cups that are actuated by a dielectric elastomer [23], suction cups that are actuated by negative [24], and suction cups that are actuated by positive air pressure [25]. The latter type is based on the pneu-net principle from [6].
The pool of elements may also include soft pumps [26, 27, 28, 29, 30], soft valves [31, 32, 33], soft sensors [14, 34], and other possible soft devices.
According to the defined task, in this step, we have to select the required types of elements from the pool summarized above. For production reasons, as many identical parts as possible should be used.
The number of elements in technical systems basically depends on the functional requirements of the system to be designed. Ideally, as few parts as possible should be used, that is, an integral design is to be preferred.
Partially cut CAD model of the suction cup design used in our soft robot in two different views
Horizontally cut, exploded CAD model of the embodiment design of our soft robot
Partially cut, exploded CAD model of the individual parts of our soft robot
The arrangement of elements in a technical system can basically be realized in series, in parallel, or in combination of both. Examples of serial, parallel, and hybrid arrangement of bending actuators are given in [35, 36].
Photograph of our fabricated soft robot. The height of the robot is 20 mm and the weight is 200 g
Gait performance of our soft robot for different inclination angles \(\delta \) in comparison with the simulation. a Simulation, b \(\delta =0^\circ \), c \(\delta =20^\circ \), d \(\delta =40^\circ \). The box width is 8 cm
The connection of elements in technical systems can basically be achieved by material, form, or force. A typical material connection in soft robotics is by gluing the elements together [36]. Form connection can be realized, for example, by dovetail joints [37] and force connection, for example, by friction fit via click-bricks [38] or by integrated magnets [39].
Mechanical modeling
In order to analyze whether the concept from the previous design stage fulfills the required functionality, it is recommended to develop a (simplified) mechanical model of the soft robot. This model may later form the basis for the control design in the final realization stage of the design process. Helpful concepts here are the piecewise constant curvature assumption and beam theory [40].
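To make the piecewise constant curvature assumption concrete, the short sketch below computes the planar tip pose of one bending segment from its arc length and total bending angle. The function name and the planar, single-segment setting are illustrative assumptions, not part of the robot model developed later.

```python
import math

def tip_pose(length, theta):
    """Planar tip pose (x, y, heading) of a constant-curvature segment.

    `length` is the arc length of the segment and `theta` the total bending
    angle in radians; theta = 0 corresponds to a straight segment.
    """
    if abs(theta) < 1e-9:        # straight segment: avoid division by zero
        return 0.0, length, 0.0
    radius = length / theta      # radius of the circular arc
    x = radius * (1.0 - math.cos(theta))
    y = radius * math.sin(theta)
    return x, y, theta

# Example: a 50 mm actuator bent by 90 degrees
print(tip_pose(0.05, math.radians(90)))
```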
Embodiment design
In this design stage, we define the shape, material, surface, and dimension of the solution concept from the conceptual design stage. Furthermore, also the fabrication method is specified. The result of this design stage is the geometrical elaboration of the soft robot. In this context, the Soft Robotics Toolkit [41] provides a detailed collection of the embodiment designs of different elements.
The shape of an element or the entire technical system is understood as its geometrical form taking into account various constraints (like functional, manufacturing, esthetic). A typical shape of a soft robot exhibits a compliance similar to that of living organisms [42]. The goal here is to achieve a maximum functional integration into one body.
The material in technical systems is typically selected according to functional, manufacturing, and economical requirements. A small overview of (silicone-based) elastomeric materials for soft robotics applications is given in Table 1. Ecoflex and Elastosil are highly extensible under low stresses and are typically used for the elastic parts of a soft robot. In contrast, PDMS is less deformable and is best suited for the more rigid parts of soft machines. All listed materials are two-component, silicone-based elastomers that exhibit a hyperelastic and viscoelastic behavior. They are resistant to mechanical damage [43] and can withstand fire, water, and snow [4].
[Table 1: Typical (silicone-based) elastomeric materials used in soft robotics applications (Ecoflex, Elastosil, and Sylgard/PDMS), compared by Shore hardness, elongation at break, and tensile properties in MPa. The listed information is taken from the data sheets of the suppliers.]
Further material types that are typically used in soft robotics are, for example, electroactive polymers [44], hydrogels [45], granules [5, 46, 47], fibers [48, 49], fabrics [4, 48, 50], and paper [16, 51].
The surface is an important but often overlooked aspect of a technical system. For example, the traction of a soft robot can be improved by introducing a texture on the contact surfaces [6, 15]. Furthermore, flexible and stretchable electronics [52] can be placed on the surfaces of a soft machine. And even the use for camouflage and display is reported in the literature [53]. So, by utilizing the free surfaces of a soft robot, additional functions can be integrated into the system.
The dimensions of technical systems basically depend on the desired shape, functional requirements, and permissible material stresses. Typical methods for finding the suitable dimensions of the embodiment design of a soft robot are the finite element method (FEM) and experimental testing. In the context of FEM analysis, suitable hyperelastic and viscoelastic models for soft materials exist [54]. These models, however, require extensive material characterization. A detailed instruction on how to perform an FEM analysis of different soft actuators can be found in [41].
Typical methods for fabricating soft structures are lamination casting (also known as soft lithography) [36], retractable pin casting [36], lost wax casting [36], and rotational casting [55]. In principle, 3D printing of the soft structure is also possible, but currently available printing materials are too brittle compared to cast elastomers [56]. However, there are efforts to use cast elastomers directly in 3D printing [57], which seems a promising alternative to the above-mentioned methods.
Final realization
In this final stage, the soft robot is fabricated, the control is designed, and the robot is tested.
As an example, we use our proposed methodology to design a new climbing soft robot.
In our application, we define the task as follows: "Design a soft machine that is able to walk on inclined surfaces." For reasons of simplicity, however, the control system of the robot should be outsourced and consist of hard components. Furthermore, we assume the running surfaces to be smooth and free of obstacles.
As already mentioned above, a typical way for finding new solutions in soft robotics is by analyzing natural systems. A suitable natural system for fulfilling the task described above is the gecko [58]. Several examples of gecko-inspired climbing robots exist, including [59, 60]. However, all these robots are made of complex, sensitive components that are most likely to fail in harsh environments. For this reason, we will design a new gecko-inspired soft robot that is resilient to adverse conditions. But in order to do so, we have first to study the actual biological model.
Basically, the gecko consists of 11 limbs: four legs, four feet, a torso, a head, and a tail. The gait pattern of the gecko during wall climbing is illustrated in Fig. 3. We can see that the movement of the torso and legs is symmetrical to the horizontal axis through the center of the torso. Furthermore, only one pair of the diagonally opposite feet is attached to the ground at the same time, and the vertical shift in position is largely achieved by the curvature of the torso. The tail, on the other hand, is used for compensating lateral forces at fast movements.
Since the gait pattern in Fig. 3 only contains bending movements, we select bending actuators for both the legs and the torso. In order to realize attachment of the soft robot to the ground, we use suction cups for the feet. In our design, a head is not required because the control of the robot is outsourced, and a tail is not used because no high dynamics are expected, and therefore, no compensation of lateral forces is needed.
In detail, we require four bending actuators for representing the legs, two bending actuators for representing the torso, and four suction cups for representing the feet of the gecko.
A suitable arrangement of the bending actuators and suction cups for realizing the gait pattern from Fig. 3 is illustrated in Fig. 4. Note that the two bending actuators forming the soft robot's torso share a common non-stretchable part and that this part is extended to the touching ends of the bending actuators that form the legs of the robot.
For reasons of simplicity, in our design, the elements shall be glued together.
In order to realize the gait pattern from Fig. 3, we developed a mechanical model as illustrated in Fig. 5. The model consists of six bending actuators for the purpose of locomotion and four suction cups for the purpose of adhesion. Under the assumption of a constant curvature [40] of the bending actuators, this model can be described by five degrees of freedom, namely the bending actuators' curvature angles \(\alpha _1\), \(\alpha _2\), \(\beta _1\), \(\beta _2\), and \(\gamma \). Note that the two bending actuators representing the soft robot's torso are described by a common curvature angle \(\gamma \). Additionally, we also have four discrete variables, namely the fixation states of the diagonally opposite feet. In the following, we will derive a kinematic model of the soft robot for linear gait by using several constraints.
Constant orientation of the attached feet
During the robot's actuation, the orientation of the attached feet is assumed to be constant. This can be described by the following boundary conditions:
$$\begin{aligned} \alpha _i - \frac{\gamma }{2}&= C_{1,i}, \end{aligned}$$
$$\begin{aligned} \beta _i + \frac{\gamma }{2}&= C_{2,i}, \end{aligned}$$
where \(C_{1,i}\) and \(C_{2,i}\) are constants with \(i\in \{1,2\}\).
Axial symmetry to the horizontal axis through the center of the torso
In order to realize this constraint, the orientations of the right and left feet must be equal:
$$\begin{aligned} \alpha _1 = \alpha _2 = \alpha, \end{aligned}$$
$$\begin{aligned} \beta _1 = \beta _2 = \beta. \end{aligned}$$
Equal orientation of the diagonally opposite feet
This requirement can be formulated as follows:
$$\begin{aligned} C_{1,i} = C_{2,i} = C. \end{aligned}$$
Nonnegative feet orientation
Since it is technically not possible to obtain negative feet orientation, we assume \(\alpha ,\beta \ge 0^\circ \). Furthermore, \(\gamma \) is assumed to be \(\gamma \in [-90^\circ ,90^\circ ]\). In order to cover the whole \(\gamma \) domain and also realize the above equations, the constant C has to be chosen as
$$\begin{aligned} C = 45^\circ. \end{aligned}$$
With this value, we finally get the following expressions for \(\alpha \) and \(\beta \):
$$\begin{aligned} \alpha (\gamma )&= 45^\circ + \frac{\gamma }{2}, \end{aligned}$$
$$\begin{aligned} \beta (\gamma )&= 45^\circ - \frac{\gamma }{2}, \end{aligned}$$
which only depend on \(\gamma \). The resulting gait pattern of the robot is shown in Fig. 6. We can see that, for an actuator length of five boxes, one gait cycle of the robot results in a vertical shift in position of seven boxes. The small offset of the lower feet during gait that is given in Figs. 5 and 6 results from the boundary conditions and can thus not be eliminated. However, we assume that this offset is compensated by the high elasticity of the robot.
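The two closed-form expressions above are straightforward to tabulate. The following sketch (variable names and the sampled angles are illustrative choices) prints the leg curvature angles over one sweep of the torso angle $\gamma $:

```python
def leg_angles(gamma_deg):
    """Leg curvature angles (alpha, beta) in degrees for a torso angle gamma in degrees."""
    alpha = 45.0 + gamma_deg / 2.0
    beta = 45.0 - gamma_deg / 2.0
    return alpha, beta

# Sweep the torso angle from -90 deg to +90 deg, as in one half of a gait cycle.
for gamma in range(-90, 91, 45):
    alpha, beta = leg_angles(gamma)
    print(f"gamma = {gamma:4d} deg  ->  alpha = {alpha:5.1f} deg, beta = {beta:5.1f} deg")
```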
We choose the "fast pneu-net" (fPN) design [51] for the bending actuators of our soft robot because it requires less pressure for the same curvature and can achieve higher bending speeds and forces compared to similar actuator designs. In order to realize a functional integration, the supply tubes are used as the bending actuators' non-stretchable parts. Furthermore, the bending actuators forming the soft robot's legs are equipped with side walls for increased stiffness. The partially cut CAD models of the bending actuator designs used in our soft robot are depicted in Fig. 7.
The design of the suction cups is based on the cup design ESV-40-S of Festo [61]. Here, the geometry of the sealing lip has been adopted, and the upper part has been redesigned such that the suction cups can be easily glued to the bending actuators. The partially cut CAD model of the suction cup design used in our soft robot is shown in Fig. 8.
The supply tubes shall be made of polyurethane because polyurethane is hardly stretchable but flexible, and the other robot structure shall be made of Elastosil (M 4601) due to this material's linear pressure–volume behavior in combination with the fPN bending actuator design [51]. The bending actuators and the suction cups shall be actuated pneumatically with air.
In order to avoid a deflection of the soft robot due to gravity, the free bottom surface of the robot is equipped with spherical heads as spacers along the neutral fibers of the bending actuators that have the same height as the sealing lip of the suction cups. Since, compared to the suction cups, the coefficient of friction of the pinheads on different smooth surfaces can be neglected, the pinheads should hardly affect the robot kinematics.
According to [62], an fPN actuator design with larger height, thinner walls, and higher number of chambers is favorable. In this context, an FEM optimized design has already been introduced in [51]. Therefore, we adopt the dimensions from this work. The thickness and height of the bending actuators' side walls are chosen intuitively. The dimensions of the upper part of the suction cups are adapted to the bending actuators' connecting dimensions, and the (outer) diameter of the supply tubes is chosen according to the thickness of the non-stretchable layer. The horizontally cut, exploded CAD model of the embodiment design of our soft robot is shown in Fig. 9. Note that all supply tubes are located inside the robot.
Figure 10 shows a partially cut, exploded view of the individual parts of our soft robot. All parts are lamination casted and then glued together by using a thin coat of uncured Elastosil. A photograph of our fabricated robot is depicted in Fig. 11.
Due to the different loads on the individual bending actuators during gait as well as manufacturing inaccuracies, the same curvature of the bending actuators does not necessarily correspond to the same pressure level. For this reason, the pressure of each bending actuator is individually controlled by a proportional directional valve, and the valves are connected in parallel to a constant positive pressure source. Since the suction cups have only two states, namely vacuum on and vacuum off, we use direct acting solenoid valves that are parallel connected to a constant negative pressure source for their control. In order to obtain information about the pressure states in the bending actuators, digital pressure sensors are connected to all outputs of the proportional directional valves. A processing unit compares the measured data with the current reference values and then generates the corresponding control signals.
During control, only the extreme positions shown in Fig. 6 are approached (namely \(\gamma =90^\circ \) and \(\gamma =-90^\circ \)), where each \(\gamma \) is assigned a set of pressures for all bending actuators that has to be identified experimentally in advance.
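A rough sketch of this control scheme is given below: a gait sequencer alternates between the two extreme postures and, for each posture, a simple proportional regulator drives every bending actuator toward its reference pressure. All pressure values, the gain, and the hardware interface functions (read_pressure, set_valve, set_suction) are hypothetical placeholders; the experimentally identified pressure sets and the actual valve interface are not specified here.

```python
import time

# Hypothetical reference pressures (kPa) for the six bending actuators at the
# two extreme postures; the real values have to be identified experimentally.
PRESSURE_SETS = {
    +90: [55.0, 55.0, 20.0, 20.0, 60.0, 60.0],
    -90: [20.0, 20.0, 55.0, 55.0, 10.0, 10.0],
}
KP = 0.5  # proportional gain of the pressure regulator (assumed)

def regulate(read_pressure, set_valve, targets, steps=100, dt=0.01):
    """Drive each actuator pressure toward its target with a P controller."""
    for _ in range(steps):
        for i, target in enumerate(targets):
            error = target - read_pressure(i)
            set_valve(i, KP * error)   # valve command proportional to the pressure error
        time.sleep(dt)

def gait_cycle(read_pressure, set_valve, set_suction):
    """One gait cycle: attach one diagonal foot pair, sweep the torso, then swap."""
    for gamma, feet in ((+90, (0, 3)), (-90, (1, 2))):
        set_suction(feet)                                # vacuum on for one diagonal pair
        regulate(read_pressure, set_valve, PRESSURE_SETS[gamma])
```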
The experiments were performed on an inclined plate made of glass whose inclination angle could be continuously varied. A fixed camera was positioned in front of the plate so that it could optimally capture the running plane. In order to be able to track the gait of the robot, a poster with a chessboard pattern was attached under the plate. The running tests were carried out for different inclination angles \(\delta \in \{0^\circ ,10^\circ ,\dots ,90^\circ \}\).
Figure 12a shows the simulation of the soft robot's gait for one gait cycle. It can be observed that a shift in position of approximately two boxes can be achieved, which corresponds to about 16 cm.
Figure 12b–d shows snapshots of the robot during the first gait cycle for increasing inclination (see also Additional file 1). It can be seen that, for the flat and the moderately inclined plane (\(\delta \in \{0^\circ ,\dots ,20^\circ \}\)), the gait of the robot is stable and robust and consistent with the simulation. For \(\delta \in \{30^\circ ,\dots ,50^\circ \}\), the gait becomes progressively unstable because, during the gripping process, the robot begins to slip increasingly due to a slight twisting of the suction cups. Here, the increasing influence of gravity becomes evident. The motion of the robot is also not completely symmetrical, which causes a slight rotation to the left in the running direction. From \(\delta =60^\circ \) onwards, however, no stable gait can be realized.
In this paper, we introduced a general design methodology for technical systems with an emphasis on soft robotics. The methodology is composed in such a way that the design engineer is guided step by step through the design process. Due to an easy manageability of the design process and a focus on only one aspect at a time, completely new solutions can be created in this way.
The presented approach can be viewed as a framework for a more comprehensive design methodology for soft robotic systems. For example, in the final realization part of the design process, an own methodology for the control design shall be implemented. But also the other aspects should be extended by additional methods and concepts.
The application of our approach was illustrated on the design of a gecko-inspired soft robot that is capable of walking on inclined surfaces. However, our approach does not rely on an existing solution since a unique arrangement of elements can also be realized without a biological or other model [17, 18]. Furthermore, by using our approach, also other known designs can be reproduced and/or optimized, for example, [1, 3, 4, 6, 7, 30, 32, 35, 36, 43, 46, 53, 62].
AS developed the methodology. AS and LS designed the soft robot. LS designed the control system. AS and LS performed the experiments and discussed the results. AS and LS wrote and revised the paper. Both authors read and approved the final manuscript.
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Project Number 392323616, and Hamburg University of Technology (TUHH) in the funding program "Open Access Publishing".
The video supporting the conclusions of the experiments is included as Additional file 1.
40638_2018_88_MOESM1_ESM.mp4 (18 mb)
Additional file 1. Performance of the gecko-inspired soft robot at different inclination angles.
Marchese AD, Onal CD, Rus D. Autonomous soft robotic fish capable of escape maneuvers using fluidic elastomer actuators. Soft Robot. 2014;1(1):75–87.Google Scholar
Laschi C, Cianchetti M, Mazzolai B, Margheri L, Follador M, Dario P. Soft robot arm inspired by the octopus. Adv Robot. 2012;26(7):709–27.Google Scholar
Shepherd RF, Ilievski F, Choi W, Morin SA, Stokes AA, Mazzeo AD, Chen X, Wang M, Whitesides GM. Multigait soft robot. Proc Natl Acad Sci. 2011;108(51):20400–3.Google Scholar
Tolley MT, Shepherd RF, Mosadegh B, Galloway KC, Wehner M, Karpelson M, Wood RJ, Whitesides GM. A resilient, untethered soft robot. Soft Robot. 2014;1(3):213–23.Google Scholar
Brown E, Rodenberg N, Amend J, Mozeika A, Steltz E, Zakin MR, Lipson H, Jaeger HM. Universal robotic gripper based on the jamming of granular material. Proc Natl Acad Sci. 2010;107(44):18809–14.Google Scholar
Ilievski F, Mazzeo AD, Shepherd RF, Chen X, Whitesides GM. Soft robotics for chemists. Angew Chem Int Ed. 2011;50(8):1890–5.Google Scholar
Deimel R, Brock O. A novel type of compliant and underactuated robotic hand for dexterous grasping. Int J Robot Res. 2016;35(1–3):161–85.Google Scholar
Festo AG & Co. KG. MultiChoiceGripper. www.festo.com. Accessed Sept 2018.
Garofalo G, Ott C. Energy based limit cycle control of elastically actuated robots. IEEE Trans Autom Control. 2017;62(5):2490–7.MathSciNetzbMATHGoogle Scholar
Pfeifer R, Bongard J. How the body shapes the way we think: a new view of intelligence. Cambridge: MIT Press; 2007.Google Scholar
Kim S, Laschi C, Trimmer B. Soft robotics: a bioinspired evolution in robotics. Trends Biotechnol. 2013;31(5):287–94.Google Scholar
Kovač M. The bioinspiration design paradigm: a perspective for soft robotics. Soft Robot. 2014;1(1):28–37.Google Scholar
Pahl G, Beitz W, Feldhusen J, Grote K-H. Engineering design. A systematic approach. 3rd ed. London: Springer; 2007.Google Scholar
Rus D, Tolley MT. Design, fabrication and control of soft robots. Nature. 2015;521(7553):467–75.Google Scholar
Martinez RV, Branch JL, Fish CR, Jin L, Shepherd RF, Nunes RMD, Suo Z, Whitesides GM. Robotic tentacles with three-dimensional mobility based on flexible elastomers. Adv Mater. 2013;25(2):205–12.Google Scholar
Martinez RV, Fish CR, Chen X, Whitesides GM. Elastomeric origami: programmable paper-elastomer composites as pneumatic actuators. Adv Funct Mater. 2012;22(7):1376–84.Google Scholar
Yang D, Verma MS, So J-H, Mosadegh B, Keplinger C, Lee B, Khashai F, Lossner E, Suo Z, Whitesides GM. Buckling pneumatic linear actuators inspired by muscle. Adv Mater Technol. 2016;1(3):1600055.Google Scholar
Yang D, Mosadegh B, Ainla A, Lee B, Khashai F, Suo Z, Bertoldi K, Whitesides GM. Buckling of elastomeric beams enables actuation of soft machines. Adv Mater. 2015;27(41):6323–7.Google Scholar
Ainla A, Verma MS, Yang D, Whitesides GM. Soft, rotating pneumatic actuator. Soft Robot. 2017;4(3):297–304.Google Scholar
Fras J, Noh Y, Wurdemann H, Althoefer K. Soft fluidic rotary actuator with improved actuation properties. In: Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (IROS), Vancouver, Canada; 2017. pp. 5610–5.Google Scholar
Connolly F, Walsh CJ, Bertoldi K. Automatic design of fiber-reinforced soft actuators for trajectory matching. Proc Natl Acad Sci. 2017;114(1):51–6.Google Scholar
Belding L, Baytekin B, Baytekin HT, Rothemund P, Verma MS, Nemiroski A, Sameoto D, Grzybowski BA, Whitesides GM. Slit tubes for semisoft pneumatic actuators. Adv Mater. 2018;30(9):1704446.Google Scholar
Follador M, Tramacere F, Mazzolai B. Dielectric elastomer actuators for octopus inspired suction cups. Bioinspir Biomim. 2014;9(4):1–10.Google Scholar
Tramacere F, Beccai L, Mattioli F, Sinibaldi E, Mazzolai B. Artificial adhesion mechanisms inspired by octopus suckers. In: Proceedings of the IEEE international conference on robotics and automation (ICRA), Saint Paul, Minnesota, USA; 2012. pp. 3846–51.Google Scholar
Tang Y, Zhang Q, Lin G, Yin J. Switchable adhesion actuator for amphibious climbing soft robot. Soft Robot. 2018;5(5):592–600.Google Scholar
Schumacher CM, Loepfe M, Fuhrer R, Grassa RN, Stark WJ. 3D printed lost-wax casted soft silicone monoblocks enable heart-inspired pumping by internal combustion. RSC Adv. 2014;4(31):16039–42.Google Scholar
Loepfe M, Schumacher CM, Stark WJ. Design, performance and reinforcement of bearing-free soft silicone combustion-driven pumps. Ind Eng Chem Res. 2014;53(31):12519–26.Google Scholar
Stergiopulos C, Vogt D, Tolley MT, Wehner M, Barber J, Whitesides GM, Wood RJ. A soft combustion-driven pump for soft robots. In: Proceedings of the ASME conference on smart materials, adaptive structures and intelligent systems (SMASIS), paper ID 7536, Newport, Rhode Island, USA. 2014.Google Scholar
Loepfe M, Schumacher CM, Burri CH, Stark WJ. Contrast agent incorporation into silicone enables realtime flowstructure analysis of mammalian veininspired soft pumps. Adv Funct Mater. 2015;25(14):2129–37.Google Scholar
Onal CD, Chen X, Whitesides GM, Rus D. Soft mobile robots with on-board chemical pressure generation. In: Christensen HI, Khatib O, editors. Robotics research. Springer tracts in advanced robotics, Switzerland, vol. 100. Springer: Berlin; 2017. pp. 525–40.Google Scholar
Mosadegh B, Kuo C-H, Tung Y-C, Torisawa Y, Bersano-Begey T, Tavana H, Takayama S. Integrated elastomeric components for autonomous regulation of sequential and oscillatory flow switching in microfluidic devices. Nat Phys. 2010;6(6):433–7.Google Scholar
Shepherd RF, Stokes AA, Freake J, Barber J, Snyder PW, Mazzeo AD, Cademartiri L, Morin SA, Whitesides GM. Using explosions to power a soft robot. Angew Chem Int Ed. 2013;52(10):2892–6.Google Scholar
Rothemund P, Ainla A, Belding L, Preston DJ, Kurihara S, Suo Z, Whitesides GM. A soft, bistable valve for autonomous control of soft actuators. Sci Robot. 2018;3(16):7986.Google Scholar
Polygerinos P, Correll N, Morin SA, Mosadegh B, Onal CD, Petersen K, Cianchetti M, Tolley MT, Shepherd RF. Soft robotics: review of fluid-driven intrinsically soft devices; manufacturing, sensing, control, and applications in human–robot interaction. Adv Eng Mater. 2017;19(12):1700016.Google Scholar
Onal CD, Rus D. A modular approach to soft robots. In: Proceedings of the IEEE RAS/EMBS international conference on biomedical robotics and biomechatronics (BioRob), Rome, Italy; 2012. pp. 1038–45.Google Scholar
Marchese AD, Katzschmann RK, Rus D. A recipe for soft fluidic elastomer robots. Soft Robot. 2015;2(1):7–25.Google Scholar
Morin SA, Kwok SW, Lessing J, Ting J, Shepherd RF, Stokes AA, Whitesides GM. Elastomeric tiles for the fabrication of inflatable structures. Adv Funct Mater. 2014;24(35):5541–9.Google Scholar
Morin SA, Shevchenko Y, Lessing J, Kwok SW, Shepherd RF, Stokes AA, Whitesides GM. Using 'click-e-bricks' to make 3D elastomeric structures. Adv Mater. 2014;26(34):5991–9.Google Scholar
Kwok SW, Morin SA, Mosadegh B, So J-H, Shepherd RF, Martinez RV, Smith B, Simeone FC, Stokes AA, Whitesides GM. Magnetic assembly of soft robots with hard components. Adv Funct Mater. 2014;24(15):2180–7.Google Scholar
Webster RJ III, Jones BA. Design and kinematic modeling of constant curvature continuum robots: a review. Int J Robot Res. 2010;29(13):1661–83.Google Scholar
Soft Robotics Toolkit. https://softroboticstoolkit.com. Accessed Sept 2018.
Majidi C. Soft robotics: a perspective—current trends and prospects for the future. Soft Robot. 2014;1(1):5–11.Google Scholar
Martinez RV, Glavan AC, Keplinger C, Oyetibo AI, Whitesides GM. Soft actuators and robots that are resistant to mechanical damage. Adv Funct Mater. 2014;24(20):3003–10.Google Scholar
Kim KJ, Tadokoro S, editors. Electroactive polymers for robotic applications. Artificial muscles and sensors. London: Springer; 2007.Google Scholar
Calvert P. Hydrogels for soft machines. Adv Mater. 2009;21(7):743–56.Google Scholar
Steltz E, Mozeika A, Rembisz J, Corson N, Jaeger HM. Jamming as an enabling technology for soft robotics. In: Proceedings of the SPIE conference on electroactive polymer actuators and devices (EAPAD), paper ID 764225, San Diego, California, USA. 2010.Google Scholar
Cheng NG, Lobovsky MB, Keating SJ, Setapen AM, Gero KI, Hosoi AE, Iagnemma KD. Design and analysis of a robust, low-cost, highly articulated manipulator enabled by jamming of granular media. In: Proceedings of the IEEE international conference on robotics and automation (ICRA), Saint Paul, Minnesota, USA; 2012. pp. 4328–33.Google Scholar
Galloway KC, Polygerinos P, Walsh CJ, Wood RJ. Mechanically programmable bend radius for fiber-reinforced soft actuators. In: Proceedings of the international conference on advanced robotics (ICAR), Montevideo, Uruguay, Montevideo, Uruguay. 2013.Google Scholar
Connolly F, Polygerinos P, Walsh CJ, Bertoldi K. Mechanical programming of soft actuators by varying fiber angle. Soft Robot. 2015;2(1):26–32.Google Scholar
Wang Y, Gregory C, Minor MA. Improving mechanical properties of molded silicone rubber for soft robotics through fabric compositing. Soft Robot. 2018;5(3):272–90.Google Scholar
Mosadegh B, Polygerinos P, Keplinger C, Wennstedt S, Shepherd RF, Gupta U, Shim J, Bertoldi K, Walsh CJ, Whitesides GM. Pneumatic networks for soft robotics that actuate rapidly. Adv Funct Mater. 2014;24(15):2163–70.Google Scholar
Lu N, Kim D-H. Flexible and stretchable electronics paving the way for soft robotics. Soft Robot. 2014;1(1):53–62.Google Scholar
Morin SA, Shepherd RF, Kwok SW, Stokes AA, Nemiroski A, Whitesides GM. Camouflage and display for soft machines. Science. 2012;337(6096):828–32.Google Scholar
Moseley P, Florez JM, Sonar HA, Agarwal G, Curtin W, Paik J. Modeling, design, and development of soft pneumatic actuators with finite element method. Adv Eng Mater. 2016;18(6):978–88.Google Scholar
Zhao H, Li Y, Elsamadisi A, Shepherd R. Scalable manufacturing of high force wearable soft actuators. Extreme Mech Lett. 2015;3:89–104.Google Scholar
Trimmer B, Lewis JA, Shepherd RF, Lipson H. 3D printing soft materials: what is possible? Soft Robot. 2015;2(1):3–6.Google Scholar
Yirmibesoglu OD, Morrow J, Walker S, Gosrich W, Canizares R, Kim H, Daalkhaijav U, Fleming C, Branyan C, Menguc Y. Direct 3D printing of silicone elastomer soft robots and their performance comparison with molded counterparts. In: Proceedings of the IEEE-RAS international conference on soft robotics (RoboSoft), Livorno, Italy. 2018.Google Scholar
Autumn K, Hsieh ST, Dudek DM, Chen J, Chitaphan C, Full RJ. Dynamics of geckos running vertically. J Exp Biol. 2006;209(2):260–72.Google Scholar
Unver O, Uneri A, Aydemir A, Sitti M. Geckobot: a gecko inspired climbing robot using elastomer adhesives. In: Proceedings of the IEEE international conference on robotics and automation (ICRA), Orlando, Florida, USA; 2006. pp. 2329–2335.Google Scholar
Kim S, Spenko M, Trujillo S, Heyneman B, Mattoli V, Cutkosky MR. Whole body adhesion: hierarchical, directional and distributed control of adhesive forces for a climbing robot. In: Proceedings of the IEEE international conference on robotics and automation (ICRA), Rome, Italy; 2007. pp. 1268–73.Google Scholar
Festo AG & Co. KG. Suction cups, complete ESS and suction cups ESV. www.festo.com. Accessed Sept 2018.
Polygerinos P, Lyne S, Wang Z, Nicolini LF, Mosadegh B, Whitesides GM, Walsh CJ. Towards a soft pneumatic glove for hand rehabilitation. In: Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (IROS), Tokyo, Japan; 2013. pp. 1512–7.Google Scholar
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
1.Workgroup on System Technologies and Engineering Design MethodologyHamburg University of TechnologyHamburgGermany
Seibel, A. & Schiller, L. Robot. Biomim. (2018) 5: 5. https://doi.org/10.1186/s40638-018-0088-4
Accepted 09 October 2018
First Online 26 October 2018
|
CommonCrawl
|
neverendingbooks
Chinese remainders and adele classes
Published January 31, 2008 by lievenlb
Oystein Ore mentions the following puzzle from Brahma-Sphuta-Siddhanta (Brahma's Correct System) by Brahmagupta :
An old woman goes to market and a horse steps on her basket and crashes the eggs. The rider offers to pay for the damages and asks her how many eggs she had brought. She does not remember the exact number, but when she had taken them out two at a time, there was one egg left. The same happened when she picked them out three, four, five, and six at a time, but when she took them seven at a time they came out even. What is the smallest number of eggs she could have had?
Here's a similar problem from "Advanced Number Theory" by Harvey Cohn (( always, i wonder how one might 'discreetly request' these remainders… )) :
Exercise 5 : In a game for guessing a person's age x, one discreetly requests three remainders : r1 when x is divided by 3, r2 when x is divided by 4, and r3 when x is divided by 5. Then x=40 r1 + 45 r2 + 36 r3 modulo 60.
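A quick brute-force check of this formula (a small sketch, not part of Cohn's text): the coefficients work because 40, 45, and 36 are each congruent to 1 modulo one of the moduli 3, 4, 5 and to 0 modulo the other two, so the weighted sum of the remainders reproduces x modulo 60.

```python
# Verify Cohn's age-guessing formula for every age x from 0 to 59.
for x in range(60):
    r1, r2, r3 = x % 3, x % 4, x % 5
    assert (40 * r1 + 45 * r2 + 36 * r3) % 60 == x
print("x = 40*r1 + 45*r2 + 36*r3 (mod 60) holds for all x in 0..59")
```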
Clearly, these problems are all examples of the Chinese Remainder Theorem.
Chinese because one of the first such problems was posed by Sunzi [Sun Tsu] (4th century AD)
in the book Sunzi Suanjing. (( according to ChinaPage the answer is contained in the song on the left hand side. ))
There are certain things whose number is unknown.
Repeatedly divided by 3, the remainder is 2;
by 5 the remainder is 3;
and by 7 the remainder is 2.
What will be the number?
The Chinese Remainder Theorem asserts that when $N=n_1n_2 \ldots n_k $ with the $n_i $ pairwise coprime, then there is an isomorphism of abelian groups $\mathbb{Z}/N \mathbb{Z} \simeq \mathbb{Z}/n_1 \mathbb{Z} \times \mathbb{Z}/n_2 \mathbb{Z} \times \ldots \times \mathbb{Z}/n_k \mathbb{Z} $. Equivalently, given coprime numbers $n_i $ one can always solve the system of congruence identities
$\begin{cases} x \equiv a_1~(\text{mod}~n_1) \\ x \equiv a_2~(\text{mod}~n_2) \\ \vdots \\ x \equiv a_k~(\text{mod}~n_k) \end{cases} $
and all integer solutions are congruent to each other modulo $N=n_1 n_2 \ldots n_k $.
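For puzzles like Brahmagupta's, where the moduli 2, 3, 4, 5, 6 are not pairwise coprime, a direct search is the simplest route. The snippet below is a small illustrative sketch (not from the original post); it recovers the egg count given at the end of the post as well as the answer to Sunzi's problem.

```python
def smallest_solution(congruences, limit=10**6):
    """Smallest non-negative x with x % n == a for every pair (n, a)."""
    for x in range(limit):
        if all(x % n == a for n, a in congruences):
            return x
    return None

# Brahmagupta's egg puzzle: remainder 1 when divided by 2..6, nothing left over by 7.
print(smallest_solution([(2, 1), (3, 1), (4, 1), (5, 1), (6, 1), (7, 0)]))  # 301

# Sunzi's problem: remainders 2, 3, 2 modulo 3, 5, 7.
print(smallest_solution([(3, 2), (5, 3), (7, 2)]))  # 23
```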
We will need this classical result to prove that
$\mathbb{Q}/\mathbb{Z} \simeq \mathcal{A}/\mathcal{R} $
where (as last time) $\mathcal{A} $ is the additive group of all adeles and where $\mathcal{R} $ is the subgroup $\prod_p \mathbb{Z}_p $ (i'll drop all 'hats' from now on, so the p-adic numbers are $\mathbb{Q}_p = \hat{\mathbb{Q}}_p $ and the p-adic integers are denoted $\mathbb{Z}_p = \hat{\mathbb{Z}}_p $).
As we will have to do calculations with p-adic numbers, it is best to have them in a canonical form using digits. A system of digits $\mathbf{D} $ of $\mathbb{Q}_p $ consists of zero and a system of representatives of units of $\mathbb{Z}_p^* $ modulo $p \mathbb{Z}_p $. The most obvious choice of digits is $\mathbf{D} = \{ 0,1,2,\ldots,p-1 \} $ which we will use today. (( later we will use another system of digits, the Teichmuller digits using $(p-1) $-th roots of unity in $\mathbb{Q}_p $. )) Fixing a set of digits $\mathbf{D} $, any p-adic number $a_p \in \mathbb{Q}_p $ can be expressed uniquely in the form
$a_p = \sum_{n=deg(a_p)}^{\infty} a_p(n) p^n $ with all 'coefficients' $a_p(n) \in \mathbf{D} $ and $deg(a_p) $ being the lowest p-power occurring in the description of $a_p $.
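For a non-negative integer, the expansion with this digit set is just its base-p representation, so a few lines of Python (purely illustrative, not from the post) compute the first digits $a_p(0), a_p(1), \ldots $:

```python
def padic_digits(n, p, length=8):
    """First `length` digits of a non-negative integer n, seen as an element of Z_p."""
    digits = []
    for _ in range(length):
        digits.append(n % p)   # next digit in {0, 1, ..., p-1}
        n //= p
    return digits

# 301 = 1 + 0*5 + 2*5**2 + 2*5**3, so its 5-adic digits start with [1, 0, 2, 2, 0, ...]
print(padic_digits(301, 5))
```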
Recall that an adele is an element $a = (a_2,a_3,a_5,\ldots ) \in \prod_p \mathbb{Q}_p $ such that for almost all prime numbers p $a_p \in \mathbb{Z}_p $ (that is $deg(a_p) \geq 0 $). Denote the finite set of primes p such that $deg(a_p) < 0 $ with $\mathbf{P} = { p_1,\ldots,p_k } $ and let $d_i = -deg(a_{p_i}) $. Then, with $N=p_1^{d_1}p_2^{d_2} \ldots p_k^{d_k} $ we have that $N a_{p_i} \in \mathbb{Z}_{p_i} $. Observe that for all other prime numbers $q \notin \mathbf{P} $ we have $~(N,q)=1 $ and therefore $N $ is invertible in $\mathbb{Z}_q $.
Also $N = p_i^{d_i} K_i $ with $K_i \in \mathbb{Z}_{p_i}^* $. With respect to the system of digits $\mathbf{D} = { 0,1,\ldots,p-1 } $ we have
$N a_{p_i} = \underbrace{K_i \sum_{j=0}^{d_i-1} a_{p_i}(-d_i+j) p_i^j}_{= \alpha_i} + K_i \sum_{j \geq d_i} a_{p_i}(-d_i+j)p_i^j \in \mathbb{Z}_{p_i} $
Note that $\alpha_i \in \mathbb{Z} $ and the Chinese Remainder Theorem asserts the existence of an integral solution $M \in \mathbb{Z} $ to the system of congruences
$\begin{cases} M \equiv \alpha_1~\text{modulo}~p_1^{d_1} \\
M \equiv \alpha_2~\text{modulo}~p_2^{d_2} \\
\vdots \\ M \equiv \alpha_k~\text{modulo}~p_k^{d_k} \end{cases} $
But then, for all $1 \leq i \leq k $ we have $N a_{p_i} - M = p_i^{d_i} \sum_{j=0}^{\infty} b_i(j) p_i^j $ (with the $b_i(j) \in \mathbf{D} $) and therefore
$a_{p_i} - \frac{M}{N} = \frac{1}{K_i} \sum_{j=0}^{\infty} b_i(j) p_i^j \in \mathbb{Z}_{p_i} $
But for all other primes $q \notin \mathbf{P} $ we have that $a_q \in \mathbb{Z}_q $ and that $N \in \mathbb{Z}_q^* $ whence for those primes we also have that $a_q - \frac{M}{N} \in \mathbb{Z}_q $.
Finally, observe that the diagonal embedding of $\mathbb{Q} $ in $\prod_p \mathbb{Q}_p $ lies entirely in the adele ring $\mathcal{A} $ as a rational number has only finitely many primes appearing in its denominator. Hence, identifying $\mathbb{Q} \subset \mathcal{A} $ via the diagonal embedding we can rephrase the above as
$a – \frac{M}{N} \in \mathcal{R} = \prod_p \mathbb{Z}_p $
That is, any class in $\mathcal{A}/\mathcal{R} $ has a rational number as a representative. But then, $\mathcal{A}/\mathcal{R} \simeq \mathbb{Q}/\mathbb{Z} $ which will allow us to give an adelic version of the Bost-Connes algebra!
Btw. there were 301 eggs.
|
CommonCrawl
|
Behavioral Explanations for the Low-Beta Anomaly
The Low-Beta Anomaly
The Low-Beta Anomaly is a well-documented stock market anomaly in which low beta stocks outperform high beta stocks. This goes against what basic financial intuition and theory would suggest; most notably it stands in direct opposition to the Capital Asset Pricing Model, which stipulates that the return of stocks is a function positively dependent on beta. The first documentation of this anomaly dates back to the 1970s, and subsequent research has found a consistent abnormal return associated with low beta stocks to this day. To illustrate, the below chart shows a comparison of the returns of U.S. stocks when sorted by their beta.
Source: [2]
To take advantage of this phenomenon, multiple approaches can be taken, all of which can be classified as either taking a long position in low-beta stocks or taking a short position in high-beta stocks or combining the previous two approaches. A word of caution should be spoken here. As is shown by research conducted by Korn and Kuntz, depending on how one chooses to compare different stocks according to their beta, vastly different results can be obtained. Take the following figure as an example.
The "Extreme High" strategy takes only a short position in high-beta portfolios while the "Extreme Low" strategy takes only a long position in low-beta portfolios and both the "Balanced" and "No Market" strategies take both long and short positions respectively differing only in the fact that the former strategy invests the same nominal amounts in each portfolio while the latter invests amounts weighted by the betas of the high- and low-beta portfolios. Evidently, there are large differences in the performance of the different strategies; however, all studies highlighted in this article use a method that exclusively goes long on low-beta stocks, so subsequent analysis will refer to this strategy. However, changing other variables when constructing the beta-sorted portfolios also has a substantial impact on the performance of the strategy. Most importantly, the weighting of single stocks within the portfolios and the choice of an investment universe affect performance: 1. with an investment universe of the S&P 1500, value weighting has lower alpha than equal weighting which has lower alpha than beta weighting and 2. shrinking the investment universe from the S&P 1500 to S&P 500 while applying beta weighting reduces alpha so drastically that it is not statistically relevant anymore. Therefore, we have checked the investment universe and weighting used for the papers that are used.
Because financial theory could not explain the Low-Beta Anomaly, numerous explanations in the field of behavioral finance have been put forward. This article will compare three of them and discuss what an investor should pay attention to, when trying to exploit this anomaly.
Benchmarks as Limits to Arbitrage
The first study is a great resource for anyone who wants to start getting to know more about behavioral finance, as it uses three of the most well-known behavioral finance phenomena to explain parts of the Low-Beta Anomaly.
In the first part of their explanation, the authors name two irrationalities of retail investors that could contribute to the Low-Beta Anomaly. Sadly, they do not provide rigorous tests for these explanations. Nevertheless, the paper can be used as a source for the intuition behind the Low-Beta Anomaly which is expanded upon in the subsequent sections.
The first phenomenon that is drawn upon is the preference for so-called lottery-stocks by retail investors. As is the case in lotteries, the payoff of investing in highly volatile stocks exhibits positive skewness, meaning investors have a small chance of gaining large but on average will lose money. The preference of retail investors for this type of stocks is a classic example of longshot bias, where investors are overly optimistic towards winning big which drives up prices of high-beta stocks, in turn reducing their returns over the long run. The authors stipulate that this bias is a consequence of a representativeness issue, more specifically the conjunction fallacy. By this fallacy, retail investors are more inclined to invest in very volatile stocks which are seen as 'great investments' since many of the most profitable investments have been associated to volatile stocks. What investors overlook however, is that volatile stocks only make up a subset of the category of 'great investments'. In other words, investors falsely assign a higher subjective probability to the statement "a stock is highly volatile and a great investment" than to "a stock is a great investment" which overly increases demand for lottery-stocks.
Secondly, the authors mention overconfidence in one's predictions as a reason for the Low-Beta Anomaly. The reasoning goes as follows: for highly volatile stocks, the disagreement between the stock price predictions of optimists and pessimists is greater than for less volatile stocks, and since stock prices are primarily influenced by optimists, highly volatile stocks are bid up, which reduces their subsequent returns.
What clearly follows from these explanations is the question of why this anomaly is so persistent over time, even though it is very well documented. In other words, even though retail investors have these biases, why haven't institutional investors taken advantage of them? The obvious answer is that there must be some kind of limit to arbitrage – the authors propose that benchmarking could be such a limit.
Benchmarking is a common practice for institutional investors in which the returns of a portfolio manager are compared to a benchmark. A ratio that is often used is the so-called information ratio, defined as $IR=\frac{\text{Portfolio Return}-\text{Benchmark Return}}{\text{Tracking Error}}$, with the Tracking Error being defined as the standard deviation of the numerator. Assume a portfolio manager wants to overweight, by a fraction $\theta$, a low-beta ($\beta<1$) stock with a higher expected return than the market in an otherwise market-mimicking portfolio. In a CAPM framework there are two opposing effects on the numerator of the information ratio, captured by the expression $\theta\left(\alpha-\left(1-\beta\right)E\left(R_{m}-R_{f}\right)\right)$. Here, $\alpha$ represents the excess return of the low-beta stock, while $\left(1-\beta\right)E\left(R_{m}-R_{f}\right)$ represents the negative impact of that stock on the expected return of the portfolio. This means that $\alpha>\left(1-\beta\right)E\left(R_{m}-R_{f}\right)$ must hold in order for the information ratio to increase. The opposite is true for high-beta ($\beta>1$) stocks: their underperformance has to be large enough to compensate for the expected return given up by underweighting them; only then does underweighting improve the information ratio. In practice, this discourages portfolio managers from taking advantage of the Low-Beta Anomaly – in other words, benchmarking contributes to the persistence of the Low-Beta Anomaly.
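As a quick numerical illustration of the condition above (all numbers are hypothetical):

```python
theta = 0.05          # overweight assigned to the low-beta stock
beta = 0.7            # stock beta
mkt_premium = 0.06    # assumed E(R_m - R_f) of 6% per year
alpha = 0.025         # assumed stock alpha of 2.5% per year

hurdle = (1 - beta) * mkt_premium            # (1 - beta) * E(R_m - R_f) = 1.8%
numerator_effect = theta * (alpha - hurdle)  # theta * (alpha - hurdle)

print(f"alpha hurdle:           {hurdle:.2%}")
print(f"effect on IR numerator: {numerator_effect:+.3%}")
# With alpha = 2.5% > 1.8%, the tilt adds to active return; an alpha below
# 1.8% would make the same tilt lower the information ratio's numerator.
```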
Demand for Lottery-Stocks
The hypothesis of investors bidding up lottery-like stocks is explored further in paper [3]. In their investigation, the authors use a large sample of all U.S. stocks in the CRSP database with prices above $5 and apply equal weighting in the beta-sorted portfolios. Their measure of lottery demand for a given stock is calculated as the average of the five highest daily returns of that stock during the previous month, with higher averages indicating higher demand for lottery-stocks. This measure was proposed by the authors in a previous paper as a robust measure of lottery demand.
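The measure lends itself to a very short implementation. The sketch below assumes a DataFrame holding one month of daily returns with one column per stock; the function name is hypothetical, and details from the original paper (e.g., how months with few trading days are handled) are omitted.

```python
import pandas as pd

def lottery_demand(daily_returns: pd.DataFrame, k: int = 5) -> pd.Series:
    """For each stock (column), average its k highest daily returns over the
    previous month; higher values indicate stronger lottery-like appeal."""
    return daily_returns.apply(lambda r: r.dropna().nlargest(k).mean())
```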
The authors find that when controlling for the effect of a higher than usual lottery demand, the Low-Beta Anomaly disappears while it did not disappear when controlling for other firm characteristics and risk measures, indicating that demand for lottery-like stocks is an important reason for the Low-Beta Anomaly. What is more, the authors confirm the hypothesis that high-beta stocks exhibit lower returns due to the fact that they are bid up by individual investors looking to invest in lottery-like stocks. Namely, they find that high-beta stocks are disproportionately affected by price pressure stemming from demand for lottery stocks and that the Low-Beta Anomaly predominantly appears in stocks which are held by individual rather than institutional investors. The latter observation somewhat contradicts the notion that the Low-Beta Anomaly persists due to benchmarking being a limit to arbitrage for institutional investors; rather, there have to be more effects at play which are not yet uncovered.
Beta Herding through Overconfidence
Paper [4] investigates overconfidence as an explanation for the Low-Beta Anomaly further. The authors use an investment universe of all stocks traded on the NYSE, AMEX and NASDAQ with market capitalizations in the top 80% and value-weight their portfolios. The theoretical foundation in a CAPM framework for overconfidence is outlined as follows (please note that overconfidence refers to overconfidence in the precision of signals about the market outlook):
When investors receive a signal to predict market return, they update their posterior expectation of the market return accordingly – if these investors are overconfident in the precision of that signal, they overreact to it and buy or sell shares such that the betas of individual stocks are compressed towards the market beta, which is referred to as beta herding. If they are underconfident, the opposite happens, and the betas of individual stocks are dispersed away from the market beta, which is referred to as adverse beta herding. In turn, over- and underconfidence also affect expectations of market return, which is illustrated by different shifts in the Security Market Line (SML). As is shown in the graph below, when starting from the thin blue line in the middle as a base case scenario, the SMLs for underconfident investors shift too little when receiving a positive or negative signal about market performance, while those of overconfident investors shift too much:
[Figure: Security Market Line shifts for under- and overconfident investors relative to the rational base case. Source: BSIC]
When investors are underconfident and receive a negative signal, the SML for underconfident investors lies above that of rational investors and the expected return difference between high and low-beta stocks is greater for underconfident investors than rational ones; consequently, when investors regain their rationality, the expected return difference decreases, and low-beta stocks outperform high-beta stocks.
To confirm this hypothesis, the authors investigate the returns of low- and high-beta stocks following periods of adverse beta herding. The measure for beta herding used is the cross-sectional variance of standardized betas, with a low variance indicating beta herding and a high variance indicating adverse beta herding. When regressing this beta herding measure on 14 lagged fundamental factors, the authors find that only the lagged beta herding measure itself and lagged volatility have a statistically significant impact on the beta herding measure. This indicates that beta herding is largely driven by behavioral factors. Consistent with their predictions, the authors find that the Low-Beta Anomaly predominantly appears after periods of high uncertainty, that is, adverse beta herding.
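A stripped-down, single-period version of such a herding measure might look as follows. The exact standardization and any weighting used by the authors are not reproduced here, so treat this purely as a sketch of the idea; the function name and the use of estimation standard errors for standardization are assumptions.

```python
import numpy as np
import pandas as pd

def beta_herding_measure(betas: pd.Series, beta_std_errors: pd.Series) -> float:
    """Cross-sectional variance of standardized betas for one period.
    Low values suggest beta herding (betas compressed towards the market beta of 1);
    high values suggest adverse beta herding (betas dispersed away from it)."""
    standardized = (betas - 1.0) / beta_std_errors
    return float(np.var(standardized))
```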
Additionally, the authors find that the explanatory power of beta herding for the Low-Beta Anomaly does not disappear when demand for lottery-like stocks is also taken into account, indicating that both explanations for the Low-Beta Anomaly are valid. Evidently, this explanation using overconfidence differs from the one proposed in the section on benchmarking. In fact, the empirical evidence shows that the Low-Beta Anomaly is only present after periods of underconfidence rather than overconfidence, contradicting the previously proposed explanation.
Taking Advantage of the Low-Beta Anomaly
In summary, there are different behavioral explanations for the Low-Beta Anomaly that coexist next to one another. Not only are these explanations a good starting point for anyone interested in behavioral finance, but one might also be tempted to try taking advantage of the Low-Beta Anomaly. In this last section, we will give a short overview of the feasibility of such an undertaking and what an inclined investor should pay attention to. For this, we will once again reference paper [1].
As discussed in the introduction, there are many different factors that affect the profitability of trading strategies based on the Low-Beta Anomaly. Since we have so far focused on long-only strategies and these are easiest to implement for individual investors, the subsequent remarks will continue to refer to long-only strategies. Moreover, as is evident from the second graph in this article, the overall performance of long-only ("Extreme Low") vs. long-short ("No Market") strategies is similar, meaning investors are not rewarded for the additional complexity of long-short strategies.
Two important factors have already been highlighted in the introduction: first, it is beneficial to use an investment universe that is as large as possible, and second, beta weighting or equal weighting provide higher alphas, although value weighting has also proven to generate significant alpha in the past. Both of these pieces of evidence suggest that stocks with smaller market capitalizations are a driving factor of the abnormal returns of the Low-Beta Anomaly. Since these stocks are less liquid, transaction costs will be higher, so one should try to minimize the number of trades required. An obvious solution is to reduce the frequency of rebalancing the portfolio. As it turns out, this is favorable both in terms of returns and risk: when comparing yearly versus monthly rebalancing, yearly rebalancing not only exhibits higher returns but also a higher Sharpe Ratio.
Other than that, factors that could be taken into consideration are the length of the estimation period for betas and the coverage of the investment universe. In general, shorter estimation periods (e.g., one month) are preferable to longer periods (e.g., one year), since they yield higher alpha with similar Sharpe Ratios. The tradeoff for portfolio coverage is not as clear: in their study, the authors compared portfolio coverages of 2%, 10%, and 20% of the S&P 1500 and found that lower portfolio coverages correspond to higher alpha and lower Sharpe Ratios, although the alpha at 2% coverage loses its significance. Consequently, while a low coverage ratio could be appealing due to the lower number of trades needed, we think one would be better advised to go for a coverage ratio of 10%.
[1] Korn, Olaf & Kuntz, Laura-Chloé, "Low-Beta Strategies", 2016
[2] Baker, Malcolm P. & Bradley, Brendan & Wurgler, Jeffrey A., "Benchmarks as Limits to Arbitrage: Understanding the Low Volatility Anomaly", 2010
[3] Bali, Turan G. & Brown, Stephen J. & Murray, Scott & Tang, Yi, "A Lottery Demand-Based Explanation of the Beta Anomaly", 2016
[4] Hwang, Soosung & Rubesam, Alexandre & Salmon, Mark Howard, "Beta herding through overconfidence: A behavioral explanation of the low-beta anomaly", 2020
Are rings really more fundamental objects than semi-rings?
The discovery (or invention) of negatives, which took place several centuries ago among Chinese, Indian and Arab mathematicians, has of course been of fundamental importance to mathematics. From then on, it seems that mathematicians have always striven to "put the negatives" into whatever algebraic structure they came across, in analogy with the usual "numerical" structure, $\mathbb{Z}$.
But perhaps there are cases in which the notion of a semiring seems more natural than the notion of a ring (I will be very very sloppy!):
1) The Cardinals. They have a natural structure of semiring, and the usual construction that allows one to pass from $\mathbb{N}$ to $\mathbb{Z}$ cannot be performed in this case without great loss of information.
2) Vector bundles over a space; and notice that in the infinite rank case the Grothendieck ring is trivial just because negatives are allowed.
3) Tropical geometry.
4) The notion of semiring, as opposed to that of a ring, seems to be the most natural for "categorification", in two separate senses: (i) For example, the set of isomorphism classes of objects in a category with direct sums and tensor products (e.g. finitely-generated projective modules over a commutative ring) is naturally a semiring. When one constructs the Grothendieck ring of a category, one usually adds formal negatives, but this can be a very lossy operation, as in the case of vector bundles. (ii) A category with finite biproducts (products and coproducts, and a natural isomorphism between these) is automatically enriched over commutative monoids, but not automatically enriched over abelian groups. As such, it's naturally a "many object semiring", but not a "many object ring".
Do you have any examples of contexts in which semirings (which are not rings) arise naturally in mathematics?
ra.rings-and-algebras soft-question big-picture motivation semirings
Qfwfq
I like the question but not the title: it seems unnecessarily argumentative. You're not really trying to decide whether rings are "better" than semirings, are you? – Pete L. Clark Apr 7 '10 at 13:36
Two comments: (1) Since example 1 is a special case of 2, I guess it should follow that adjoining negatives to cardinal arithmetic would make it trivial? (2) I'm a bit puzzled by example 4. An abelian category looks to me -- and not just me -- like a ring with many objects since you have additive inverses. Was there some other example that you had in mind? – Donu Arapura Apr 7 '10 at 14:54
It's been many years since I looked at the precise definitions. But I understood the sum to be the cardinality of the disjoint union, which would be commutative. Is there something I'm missing? – Donu Arapura Apr 7 '10 at 15:25
@GE: I thought the cardinals had commutative addition, given by disjoint union of sets. In particular, for infinite cardinals, addition is just "max", at least if we have axiom of choice (so that any two cardinals are comparable). The noncommutative addition, I thought, was for ordinals, which are (isomorphism classes of) sets along with well-orderings. The ordered disjoint union is definitely noncommutative. – Theo Johnson-Freyd Apr 7 '10 at 15:56
With DA, I think you should amend statement 4. Heck, this is a CW question, so I feel no compunction about editing it myself. – Theo Johnson-Freyd Apr 7 '10 at 15:58
Of course the real question is whether abelian groups are really more fundamental objects than commutative monoids. In a sense, the answer is obviously no: the definition of commutative monoid is simpler and admits alternative descriptions such as the one I give here. The latter description can be adapted to other settings, such as to the 2-category of locally presentable categories, which shares many formal properties with the category of commutative monoids (such as being closed symmetric monoidal, having a zero object, having biproducts). As such I would claim that any locally presentable closed symmetric monoidal category is itself a categorified version of a semiring, not in the sense you describe, but in that it is an algebra object in a closed symmetric monoidal category, so we may talk of modules over it, etc.
However, it is undeniable that there is a large qualitative difference between the theories of abelian groups and commutative monoids. Observe that an abelian group is just a commutative monoid which is a module over $\mathbb{Z}$ (more precisely, a commutative monoid has either a unique structure of $\mathbb{Z}$-module, if it has additive inverses, or no structure of $\mathbb{Z}$-module otherwise). The situation is analogous to the (smaller) difference between abelian groups and $\mathbb{Q}$-vector spaces. I do not know of a characterization of $\mathbb{Z}$ as a commutative monoid that can be transported to other settings. It seems that there is something deep about the fact that $\mathbb{Z}$-modules are so much nicer than commutative monoids, which often is taken for granted.
Reid Barton
Semirings are pervasive throughout computer science: every notion of resource lacking a corresponding notion of debt gives rise to semiring structure in a standard way.
First, you formalize resource as a (partial) commutative monoid. That is, you have a set representing resources (for example, time bounds or memory usage of a computer program), and the monoidal structure has the unit representing "no resource", and the concatenation representing "combine these two resources".
Then you can generate a quantale from this monoid by taking its powerset: the ordering is set inclusion, meet and join are set intersection and union, the monoidal structure is $A \otimes B = \{ a \cdot b \;|\; a \in A \land b \in B \}$, and $I = \{e\}$ (for partial monoids, we can just consider the defined pairs). This quantale can be interpreted as "propositions about resources".
Note that $(I, \otimes, \bot, \vee)$ forms a semiring, with join as the addition (unit $\bot$) and $\otimes$ as the multiplication (unit $I$). As an aside, this fact is very useful for reasoning about programs.
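As a toy illustration of this construction (a minimal sketch of my own, using string concatenation as the monoid operation, so the resulting semiring is the familiar one of formal languages, restricted here to finite sets; the function names `tensor` and `join` are just illustrative):

```python
from itertools import product

def tensor(A: frozenset, B: frozenset) -> frozenset:
    """A ⊗ B = { a·b | a in A, b in B }: pairwise monoid multiplication of two sets."""
    return frozenset(a + b for a, b in product(A, B))

def join(A: frozenset, B: frozenset) -> frozenset:
    """Join is set union; the empty set plays the role of the additive zero."""
    return A | B

I = frozenset({""})              # unit: the singleton containing the monoid identity
A = frozenset({"a", "ab"})
B = frozenset({"b", ""})

assert tensor(A, I) == A                                           # I is the multiplicative unit
assert tensor(A, join(B, I)) == join(tensor(A, B), tensor(A, I))   # ⊗ distributes over join
```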
Some further observations:
If you have a notion of "debt" corresponding to your notion of resource, then you can start with a group structure in step 1, and repeat the construction to get a ring.
Mariano's example fits into this framework, too, if you relax the commutativity restriction. Then you can view words as elements of a free monoid over an alphabet, and then you get languages as forming a noncommutative quantale.
Tropical algebra is an excellent framework for modelling optimization problems (i.e., minimizing a cost function). You can often derive algorithms for such problems just by twiddling Galois connections between the tropical semiring and a semiring of data. When this works, the process is so transparent it feels like magic! (A small min-plus illustration is sketched below.)
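Here is a tiny, self-contained illustration of the tropical (min, +) semiring at work (not the Galois-connection derivation technique itself, just the standard observation that matrix powering over (min, +) computes all-pairs shortest paths; the example graph and function name are my own):

```python
import numpy as np

INF = float("inf")

def min_plus(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Matrix 'product' over the tropical (min, +) semiring."""
    n, k, m = A.shape[0], A.shape[1], B.shape[1]
    C = np.full((n, m), INF)
    for i in range(n):
        for j in range(m):
            C[i, j] = min(A[i, p] + B[p, j] for p in range(k))
    return C

# Edge-weight matrix of a small directed graph; INF means "no edge".
W = np.array([[0.0, 3.0, INF],
              [INF, 0.0, 1.0],
              [2.0, INF, 0.0]])

D = W
for _ in range(2):          # repeated tropical squaring converges to shortest path lengths
    D = min_plus(D, D)
print(D)                    # D[0, 2] == 4.0, via the path 0 -> 1 -> 2
```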
Neel Krishnaswami
I like your observation about modelling optimization problems, can you provide some reference? – Diego de Estrada Apr 26 '10 at 7:04
Roland Backhouse has written some nice papers on this subject -- I particularly like "Regular Algebra Applied to Language Problems". (cs.nott.ac.uk/~rcb/MPC/RegAlgLangProblems.ps.gz) Warning: his notation gets black-hole dense, since he likes to do everything by algebraic manipulations, including logic and quantifier manipulations. His techniques are pretty enough that it is worth persevering, though. – Neel Krishnaswami Apr 26 '10 at 8:11
The algebraic treatment of formal language theory uses systematically semi-rings of power series.
Mariano Suárez-Álvarez
Although not an example of a semiring, the renormalisation group of quantum field theory is, despite its name, really a semigroup. Moreover, there is no compelling physical reason to add inverses, since in fact physically inverses need not exist. Indeed, the process of renormalisation often loses information, admits fixed points,...
José Figueroa-O'Farrill
Nikolai Durov showed that a commutative algebraic monad with 0 "is" a semiring if and only if b(x,0)=x for all x, where b is a binary operation with b(x,y) not identically equal to x.
So semirings are in some sense easy to get.
On the other hand, the commutative algebraic monads that seem to be his motivating examples, such as the unit ball in a commutative Banach algebra, are not semirings.
Carl Weisman
This is not really an answer to the question, but you can really look at both of them together. Let $R$ be any (unital) ring, and let K$_0(R)$ be the usual Grothendieck group (the group generated by stable classes of fg projectives); let $P$ be the set of images of the projective modules in K$_0$. Then $P$ is a pre-ordering on K$_0$, i.e., $P + P = P$; $P-P = {\rm K}_0$; but not necessarily $P \cap -P = (0)$. The last condition holds if $R$ is stably finite (no fg free module on $n$ generators is a proper direct summand of a free module on $m$ generators if $n \geq m$). In this case, $P$ is a bona fide positive cone for a partial ordering on K$_0$. The ordering plays a nontrivial role in some areas.
Of course, $P$ is an abelian monoid inside a group; it can happen that even if $R$ is not even close to being commutative, K$_0(R)$ has the structure of a partially ordered ring (this is relatively rare, but occurs for some big group rings), in which case you have both a ring and inside it, a canonical semigroup, which is actually a semiring ($P\cdot P = P$). The semiring in this case is more significant than the ring, since it conveys more information about the original $R$.
David Handelman
Selective suppression of melanoma lacking IFN-γ pathway by JAK inhibition depends on T cells and host TNF signaling
Hongxing Shen1 na1,
Fengyuan Huang2 na1,
Xiangmin Zhang3,
Oluwagbemiga A. Ojo1,
Yuebin Li1,
Hoa Quang Trummell1,
Joshua C. Anderson1,
John Fiveash1,4,
Markus Bredel1,4,
Eddy S. Yang ORCID: orcid.org/0000-0002-6450-26381,4,
Christopher D. Willey ORCID: orcid.org/0000-0001-9953-02791,4,
Zechen Chong ORCID: orcid.org/0000-0001-5750-18082,4,
James A. Bonner ORCID: orcid.org/0000-0002-8413-37881,4 na2 &
Lewis Zhichang Shi ORCID: orcid.org/0000-0002-5351-87301,4,5,6,7 na2
Nature Communications volume 13, Article number: 5013 (2022) Cite this article
Cancer microenvironment
Therapeutic resistance to immune checkpoint blockers (ICBs) in melanoma patients is a pressing issue, of which tumor loss of IFN-γ signaling genes is a major underlying mechanism. However, strategies of overcoming this resistance mechanism have been largely elusive. Moreover, given the indispensable role of tumor-infiltrating T cells (TILs) in ICBs, little is known about how tumor-intrinsic loss of IFN-γ signaling (IFNγR1KO) impacts TILs. Here, we report that IFNγR1KO melanomas have reduced infiltration and function of TILs. IFNγR1KO melanomas harbor a network of constitutively active protein tyrosine kinases centered on activated JAK1/2. Mechanistically, JAK1/2 activation is mediated by augmented mTOR. Importantly, JAK1/2 inhibition with Ruxolitinib selectively suppresses the growth of IFNγR1KO but not scrambled control melanomas, depending on T cells and host TNF. Together, our results reveal an important role of tumor-intrinsic IFN-γ signaling in shaping TILs and manifest a targeted therapy to bypass ICB resistance of melanomas defective of IFN-γ signaling.
ICBs such as anti-CTLA-4 and anti-PD-1/L1 induce unprecedented clinical benefits in patients with various types of advanced cancer and are revolutionizing the field of cancer treatment1,2,3. Over the past decade or so, more than 70 approvals have been granted to ICBs by the FDA2,3,4,5,6,7, some of which are for first-line use, establishing ICBs as a major pillar of cancer care. Notwithstanding these transformative clinical successes, the overall efficacy of ICBs is limited to a small subset of cancer patients due to frequently encountered therapeutic resistance8. Using a cohort of advanced melanoma, we found that ~75% of melanoma patients did not respond to anti-CTLA-4 therapy and their tumors harbored losses of IFN-γ signaling genes9. Similar findings were reported for anti-PD-1 therapy10 and subsequently corroborated by a series of seminal studies in melanoma and colon cancer11,12,13,14. Together, these studies reveal that tumor loss of IFN-γ signaling is a major mechanism of resistance to ICBs9,10,11,12,13,14. However, therapeutic approaches to overcome this ICB resistance have remained largely unknown.
ICBs, by blocking immune checkpoints (namely, CTLA-4, PD-1, and PD-L1) hijacked by tumor cells to evade immunosurveillance, enhance the effector function (e.g., IFN-γ production)15,16 and decrease the abundance of immunosuppressive FoxP3+ regulatory T cells (Treg) in TILs17, leading to tumor rejection. In support of this, we found that the interactive loop of IFN-γ and IL-7 signaling in T cells dictates the therapeutic efficacy of anti-CTLA-4 and anti-PD-118. Although both T cell- and tumor-intrinsic IFN-γ signaling are required for ICB response, surprisingly, our original characterizations of TILs isolated from melanomas with knockdown of the essential IFN-γ receptor 1 (IFNγR1KD) did not reveal overt changes of the CD8+/Treg ratio9, a commonly used index of TILs' effector function. While this suggests that tumor IFN-γ signaling may not impart TILs, a caveat is that IFNγR1KD melanoma still has residual IFN-γ signaling and is not an ideal model to assess how the loss of IFN-γ signaling in tumor cells modulates TILs.
In this study, to circumvent the partial attenuation of IFN-γ signaling in IFNγR1KD melanoma and to unequivocally evaluate how tumor IFN-γ signaling affects TILs, we generate the B16 melanoma model with Ifngr1 knocked out by CRISPR-Cas9 (hereafter, IFNγR1KO). In contrast to IFNγR1KD melanomas, IFNγR1KO melanomas show a reduced abundance of CD8+ T cells at the baseline and lack increased infiltration and functional rejuvenation of TILs upon anti-CTLA-4 therapy. Bioinformatic analyses of human melanomas with impaired IFN-γ signaling also reveal reduced expression of T cell signature genes. Interestingly, our multi-omics studies inform a network of constitutively active PTKs centered on activated JAK1/2, downstream of the heightened mTOR signaling pathway in IFNγR1KO cells. In direct correlation, human melanomas with reduced IFN-γ signaling or ICB resistance exhibit upregulation of target genes in the mTOR and JAK1/2 pathways, indicative of their activation. Targeting activated JAK1/2 with Ruxo selectively suppresses IFNγR1KO but not scrambled control melanomas, coupled with enhanced effector functions (e.g., TNF production) and reduced Treg frequency in TILs. Subsequently, deletion of T cells and host TNF signaling completely abolish therapeutic effects of Ruxo, highlighting an indispensable role of T cells and host TNF signaling in this process. Collectively, we demonstrate that tumor-intrinsic IFN-γ signaling actively regulates infiltration and function of TILs; our results support Ruxo as a potential "targeted" therapy for ICB-resistant IFNγR1KO melanoma. Since Ruxo is clinically approved, this study may lead to a rapid repurposing of Ruxo to treat melanomas lacking IFN-γ signaling.
Creation of a "clean" melanoma model lacking IFN-γ signaling
Our previous work using the syngeneic IFNγR1KD melanoma model identified a theretofore unreported role of tumor-intrinsic IFN-γ signaling in anti-CTLA-4 response9. However, IFNγR1KD melanoma still retained some degree of IFN-γ signaling, evidenced by significant upregulation of inducible PD-L1 by IFN-γ (Supplementary Fig. 1a, the right panel), preventing us from explicitly assessing how tumor loss of IFN-γ signaling modulates TILs and ICB response. To circumvent this, we created the IFNγR1KO B16-BL6 melanoma model using CRISPR-Cas9 technology (Fig. 1a). Unlike IFNγR1KD cells, IFNγR1KO cells were completely resistant to IFN-γ stimulation, indicated by the lack of IFN-γ-induced p-JAK2 (Fig. 1b), no transcriptional upregulation of Irf1 (a direct downstream target of IFN-γ signaling, Fig. 1c), as well as no upregulation of PD-L1 (Fig. 1d), MHC II (Fig. 1e), and MHC I (Supplementary Fig. 1b). Furthermore, IFN-γ did not induce overt cell death in IFNγR1KO cells, assessed by 7-AAD and Annexin V staining (Supplementary Fig. 1c), neither did it suppress cell proliferation, indicated by no dilution of CellTrace Violet (CTV, a cell proliferation dye) (Supplementary Fig. 1d). Consequently, total numbers of viable IFNγR1KO cells were not reduced, contrasting a drastic decrease of scrambled control cells in response to IFN-γ (Fig. 1f).
Fig. 1: Generation and characterization of IFNγR1KO melanoma model lacking functional IFN-γ signaling.
B16-BL6 cells were transduced with specific single guide RNAs (sgRNAs) against exon #1 of mouse Ifngr1 or scrambled sgRNAs. a IFNγR1 expression in scrambled control and IFNγR1KO clones by flow cytometry (FACS strategy 1). b p-JAK2 in scrambled control and IFNγR1KO clones untreated (UnTx) or treated with IFN-γ (100 U/mL for 15 min) by Western blot. β-actin was the loading control. Experiments were repeated twice with similar results. c mRNA expression of Irf1 in scrambled control (n = 3) and IFNγR1KO cells (n = 3) treated with 1000 U/mL of IFN-γ for 90 min. ***p = 0.0007 by two-sided Student's t-test. d–f Scrambled control and IFNγR1KO cells were untreated (UnTx) or treated with 100 U/mL IFN-γ for 24 h to detect surface expression of PD-L1 (d) and MHC II (e) by flow cytometry (FACS strategy 1), or for 48 h to count live cells (f) (n = 4 per group). ****p = 0.00005 (Ctrl vs IFN-γ groups for the same cell type), by two-sided Student's t-test. g Tumor growth of scrambled control (n = 5) and IFNγR1KO (n = 5) melanomas in Rag-1−/− mice. h Tumor growth of scrambled control and IFNγR1KO melanomas in B6 mice treated with anti-CTLA-4 or isotype control (UnTx). N = 5 for Scrambled UnTx/Anti-CTLA-4; n = 4 for IFNγR1KO UnTx/Anti-CTLA-4. ***p = 0.0007, by two-way ANOVA with Dunnett's multiple comparisons test (with adjustment). i, j Surface expression of PD-L1 (i) and MHC II (j) on isolated tumor cells (CD45−) by flow cytometry (FACS strategy 3). N = 10 for Scrambled UnTx, n = 9 for each of the other three groups. In i, *p = 0.0415 and **p = 0.0082; in j, *p = 0.0283 and ***p = 0.0006, by two-way ANOVA with Tukey's multiple comparisons test (with adjustment). Representative data from two independent experiments are shown. The scatter plots and line graphs depict means ± SEM. Source data are provided in the Source Data file.
To examine whether IFNγR1KO affected tumor formation in vivo, we inoculated Rag-1−/− mice lacking mature T and B cells with scrambled control and IFNγR1KO cells. Consistent with a previous report showing comparable growth of melanomas lacking other important genes in the IFN-γ signaling13, we did not observe overt growth defect of IFNγR1KO melanoma (Fig. 1g). We also did not find altered growth kinetics of IFNγR1KO tumor in immunocompetent B6 mice, in the absence of ICBs (Fig. 1h). In keeping with reported ICB resistance in tumors with impaired IFN-γ signaling9,10,11,12,13,14, IFNγR1KO melanomas did not respond to anti-CTLA-4 treatment and continued to grow, whereas scrambled control melanomas were suppressed by anti-CTLA-4 (Fig. 1h). In line with our in vitro data, direct analyses of IFNγR1KO tumor cells (CD45−) did not show upregulation of PD-L1 (Fig. 1i) and MHC II (Fig. 1j) upon anti-CTLA-4, in contrast to marked upregulation in scrambled control melanoma cells. In aggregate, IFNγR1KO melanomas lack functional IFN-γ signaling and are completely resistant to ICBs and IFN-γ stimulation, presenting a "clean" system to interrogate how tumor-intrinsic loss of the IFN-γ signaling imparts TILs.
Reduced infiltration and function of TILs in IFNγR1KO melanoma
In line with an essential role of tumor IFN-γ signaling in tumor antigen presentation14, we noticed a drastic reduction of MHC molecules in IFNγR1KO cells (Fig. 1j), suggesting an inefficient process of T cell cross-priming in IFNγR1KO melanomas. However, our previous analysis of IFNγR1KD melanomas did not unveil altered ratios of CD8+/Treg9, a widely accepted indication of TILs' function. Considering IFNγR1KD melanomas still possessed IFN-γ signaling (albeit weaker) (Supplementary Fig. 1a), we revisited this issue by analyzing TILs isolated from the "clean" IFNγR1KO melanomas. Strikingly, unlike IFNγR1KD melanomas9, IFNγR1KO melanomas had markedly reduced CD8+ T cells at the baseline and no increased T cell infiltration upon anti-CTLA-4 therapy, as compared to scrambled control melanomas (Fig. 2a). In addition, anti-CTLA-4 failed to deplete intratumoral Treg (Fig. 2b), did not increase the CD8+/Treg ratio (Fig. 2b), and did not promote the production of effector cytokines by CD8+ (Figs. 2c and S2b) and CD4+ TILs (Supplementary Fig. 2a). Increasing trends of IFN-γ production by CD8+ (Supplementary Fig. 2c) and CD4+ (Supplementary Fig. 2d) TILs were noticed in scrambled control but not IFNγR1KO melanomas upon anti-CTLA-4. Similarly, anti-CTLA-4 increased the expression of T cell activation marker PD-1 on both CD8+ (Fig. 2d) and CD4+ TILs (Supplementary Fig. 2e) and concurrently reduced CD73 expression, an immunosuppressive ectoenzyme that catalyzes the conversion of immunostimulatory ATP to potent immunosuppressive adenosine19, only in scrambled control melanoma. Taken together, these data indicate that melanomas with dysfunctional IFN-γ signaling have reduced infiltration and function of TILs, pointing to an important role of tumor IFN-γ signaling in shaping TILs.
Fig. 2: Tumor-intrinsic IFN-γ signaling shapes tumor-infiltrating T cells.
Isolated tumor-infiltrating lymphocytes (TILs) from scrambled control and IFNγR1KO melanomas treated with or without (UnTx) anti-CTLA-4 were analyzed for the abundance of CD4+ and CD8+ T cells (a) (*p = 0.0173; **p = 0.0044), FoxP3+ cells among CD4+ TILs (b) (**p = 0.0025; ***p = 0.0004), TNF and perforin production by CD8+ TILs after a brief stimulation with PMA and ionomycin (c) (**p = 0.0043), and surface expression of PD-1 and CD73 on unstimulated CD8+ TILs (d) (*p = 0.033; ***p = 0.0005; ****p = 0.00004). The scatter plots in a–d depict representative data (means ± SEM) from two independent experiments. N = 5 for Scrambled UnTx/Anti-CTLA-4, n = 4 for IFNγR1KO UnTx /Anti-CTLA-4 in a, c, d. N = 10 for Scrambled UnTx, n = 9 for Scrambled Anti-CTLA-4, IFNγR1KO UnTx and IFNγR1KO Anti-CTLA-4 groups in b. One-way ANOVA with Tukey's multiple comparisons test (with adjustment) was used for statistical analyses in a and b, and two-way ANOVA with Šídák's multiple comparisons test (with adjustment) in c and d. FACS strategy 3 was applied in a–d. e Skin cutaneous melanomas (SKCMs) in the TCGA database were grouped into IFNGR1High (n = 101) and IFNGR1Low (n = 150) according to IFNGR1 expression in melanoma cells (after deconvolution using a panel of melanoma-specific genes). Comparisons of T cell signature genes in the bulk (without deconvolution) IFNGR1High vs IFNGR1Low SKCMs were presented as boxplots. f Expression of IFNGR1 (after deconvolution) and T cell signature genes (without deconvolution) in SKCMs (n = 251) vs uveal melanomas (UVMs) (n = 58) from the TCGA database. The boxes in e, f depict the first (lower) quartile, median (center line), and the third (upper) quartile, and the vertical lines indicate the minimum and maximum values. The statistical analyses in e, f were calculated using R with Mann–Whitney U-test. *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001. Source data as well as exact p values for e, f are provided in the Source Data file.
We previously reported that patients with advanced melanoma harboring loss of IFN-γ signaling genes were resistant to anti-CTLA-4 therapy9. However, how IFN-γ signaling in human melanomas regulates TILs has not been reported. Inspired by our preclinical findings, we posited that human melanomas with attenuated IFN-γ signaling would have reduced expression of T cell signature genes, including prototypical surface markers for T cells (CD3, CD4, and CD8), effector molecules (IFNG, GZMB, perforin (PRF1), and TNF), and MHC molecules (MHC I: HLA-A, HLA-B, and HLA-C; MHC II: HLA-DRA). Since our previously published database9 was derived from whole exome sequencing and did not contain gene expression data, we were unable to address this using that dataset. To circumvent this, we first analyzed the TCGA database of human skin cutaneous melanomas (SKCMs) (n = 458). Specifically, we grouped SKCMs into IFNGR1High vs IFNGR1Low using the median expression of IFNGR1 in melanoma cells after deconvolution of the bulk samples with a panel of melanoma-specific genes20. We reasoned that IFNGR1Low SKCMs would have attenuated IFN-γ signaling and thus reduced expression of T cell signature genes. Indeed, we observed significantly reduced expression of CD3, CD4, CD8, HLA-DRA, GZMB, IFNG, and TNF in bulk IFNGR1Low SKCMs, while the others (HLA-A, HLA-B, HLA-C, and PRF1) were also reduced (although not significant) (Fig. 2e). Interestingly, in correlation with their lower T cell signature, IFNGR1Low SKCMs had worse survival probabilities (p = 0.0039) (Supplementary Fig. 2f), suggesting weaker anti-tumor responses in these patients. Secondly, unlike SKCMs being responsive to ICBs, uveal melanomas (UVMs) have been known to be resistant to ICBs21. We, therefore, assessed IFNGR1 expression in UVMs vs SKCMs after the aforementioned deconvolution and found significantly reduced IFNGR1 expression in UVMs (Fig. 2f), suggestive of weaker IFN-γ signaling in UVMs than SKCMs. Importantly, bulk UVMs also had decreased expression of most T cell signature genes (except for just one: HLA-A) (Fig. 2f). These data suggest that human melanomas with attenuated IFN-γ signaling have decreased expression of T cell signature genes, reflective of reduced T cell infiltration and function, corroborating our preclinical findings. Noteworthily, dysfunctional IFN-γ signaling (IFNγR1KO) is required to impart TILs in murine melanomas, as TILs in IFNγR1KD melanoma are largely unaltered9. However, in human melanomas, attenuated IFN-γ signaling as in IFNGR1low SKCMs and in UVMs (lower IFNGR1 expression than SKCMs) is sufficient to induce appreciable effects on TILs, implying that TILs in human melanomas are more sensitive to the dysregulation of tumor IFN-γ signaling. Despite this gradient discrepancy between murine and human melanoma, our results nevertheless highlight an important role of tumor IFN-γ signaling in shaping TILs.
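For readers who want to reproduce this kind of comparison, the following pandas/SciPy sketch outlines the median-split and Mann–Whitney step described above. Variable and function names are hypothetical, and the preceding deconvolution of IFNGR1 expression with a melanoma-specific gene panel is not shown.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

def compare_signature_genes(expr_bulk: pd.DataFrame,
                            ifngr1_tumor: pd.Series,
                            signature_genes: list) -> pd.DataFrame:
    """Split samples at the median of (deconvolved) IFNGR1 expression and compare
    bulk expression of T cell signature genes between IFNGR1-high and -low groups.
    Assumes expr_bulk (samples x genes) and ifngr1_tumor share the same sample index."""
    high = ifngr1_tumor >= ifngr1_tumor.median()
    rows = []
    for gene in signature_genes:
        _, p = mannwhitneyu(expr_bulk.loc[high, gene],
                            expr_bulk.loc[~high, gene],
                            alternative="two-sided")
        rows.append({"gene": gene,
                     "median_IFNGR1_high": expr_bulk.loc[high, gene].median(),
                     "median_IFNGR1_low": expr_bulk.loc[~high, gene].median(),
                     "p_value": p})
    return pd.DataFrame(rows)
```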
Constitutively active JAK1/2 in IFNγR1KO melanoma
Although tumor loss of IFN-γ signaling has been defined as a major mechanism of resistance to anti-CTLA-4 (Fig. 1h)9 and anti-PD-110,11,12,13,14, little effort has been devoted to overcome this ICB resistance. We thus attempted to uncover therapeutic targets that can be harnessed to treat ICB-resistant melanomas lacking functional IFN-γ signaling. Considering the important role of PTKs in coordinating the IFN-γ signaling cascade, we conducted a global kinase activity analysis (kinomics). Because PTK inhibitors are readily available for pharmacological targeting, we specifically focused on activated PTKs that have positive Mean Kinase Statistics (MKS, a readout for extent and direction of change) and Mean Final Scores (MFS, indicative of specificity) greater than 0.5. Following these criteria, we found 26 activated PTKs in IFNγR1KO cells (Supplementary Table 1), including receptor tyrosine kinases (RTKs such as Ephrin receptor A and B (EphA/B)) as well as non-receptor tyrosine kinases (NRTKs: spleen tyrosine kinase (Syk) and ZAP70) that are known to be involved in carcinogenesis22. To our surprise, we also observed activated JAK1 and JAK2, essential downstream components of the IFN-γ signaling pathway23. More intriguingly, when these constitutively activated PTKs were integrated for annotated network modeling, a JAK1/2-centric network emerged (Fig. 3a), highlighting a central role of active JAK1/2 in the rewiring of these kinases. To directly confirm this finding, we analyzed phosphorylation of JAK1 and JAK2 (p-JAK1 and p-JAK2) by Western blot (WB) in cells cultured under normoxia (21% O2) and hypoxia (1% O2, mimicking hypoxic tumor microenvironment [TME]). Consistent with our kinomic data, p-JAK1 and p-JAK2 were increased in IFNγR1KO cells (Fig. 3b). Similarly, basal p-JAK1 and p-JAK2 were increased in IFNγR1KD cells (Supplementary Fig. 3a). We also assessed the three kinases (Syk, ZAP70, and EphA3) with high MFS from our kinomic study (Supplementary Table 1) by WB. Of note, basal p-Syk (Supplementary Fig 3b) and p-ZAP70 (Supplementary Fig. 3c) were very low in these cells. Although p-EphA3 was detectable (Supplementary Fig. 3c), they did not show significant increases in IFNγR1KO cells. Given these results and the central role of JAK1/2 in the PTK network, we dedicated our subsequent efforts on JAK1/2.
Fig. 3: Constitutive activation of JAK1/2 in IFNγR1KO melanoma cells.
a Identification of a JAK1/2-centric network of activated protein tyrosine kinases in IFNγR1KO cells by kinomic analysis. Input nodes (kinases) with large blue circles around them and smaller red circles on the top right corner indicate increased activity in IFNγR1KO cells. Arrowheads denote the direction of interaction and colors of the lines indicate the type of interaction (yellow: positive; red: negative; gray: context-dependent). b Scrambled and IFNγR1KO cells were cultured under normoxic (21% O2) or tumor microenvironment-mimicking hypoxic (1% O2) culture conditions, followed by Western blot (WB) analyses of p-JAK1/2 and total-JAK1/2. c p-STAT3 and total-STAT3 in scrambled and IFNγR1KO cells by WB. d, e Scrambled and IFNγR1KO cells were transduced with control lentiviruses (Ctrl) or lentiviruses encoding mouse IFNγR1 for re-expression (IFNγR1R). Successfully transduced cells were analyzed for IFNγR1 expression by flow cytometry (FACS strategy 1) (d) and p-JAK1/2 by WB (e). β-actin was used as a loading control in WB. Experiments were repeated twice with similar results in b, c, and e. Source data are provided in the Source Data file.
A classical downstream event of activated JAK1/2 is tyrosine phosphorylation of STATs, particularly STAT1 and STAT323. We thus examined p-STAT1/3 by WB. Surprisingly, we could not detect p-STAT1, even with a substantial amount of protein loading and prolonged film exposure times, indicating a low level of basal p-STAT1 in melanoma. On the other hand, although the basal level of p-STAT3 was also low, it was detectable and increased in IFNγR1KO cells, suggesting that STAT3 is a preferential target of activated JAK1/2 in IFNγR1KO cells (Fig. 3c). Because we used single IFNγR1KO clones but not mixtures in this study to avoid interference from cells with inefficient/partial deletion of Ifngr1 by CRISPR-Cas9, a potential concern would be that activated JAK1/2 may occur merely by chance in single clones rather than a direct outcome of deletion of IFN-γ signaling. To address this, we re-expressed Ifngr1 in scrambled control and IFNγR1KO cells to comparable levels (IFNγR1R) (Fig. 3d), using lentiviruses encoding mouse Ifngr1. Compellingly, IFNγR1R greatly reduced p-JAK1/2 in IFNγR1KO cells and largely rescued the overly increased p-JAK1/2 (Fig. 3e), directly linking lack of IFN-γ signaling to aberrant JAK1/2 activation in melanoma.
JAK1/2 activation in IFNγR1KO melanoma is unlikely mediated by extrinsic signals
Next, we wanted to shed light on how the JAK1/2 were activated in IFNγR1KO cells. As we recently reviewed23, the JAK-STAT pathway is a rapid membrane-to-nucleus signaling module regulated by a wide array of extracellular signals, including cytokines and growth hormones. In addition to IFN-γ, type I interferons such as IFN-α/β23 and IL-624 are among the major extrinsic signals that engage the JAK-STAT pathway. To determine whether JAK1/2 activation in IFNγR1KO cells could be due to enhanced IL-6 signaling, we analyzed the expression of Il6 and Il6r, both of which were significantly upregulated (Fig. 4a). To evaluate whether this enhanced IL-6 signaling mediated JAK1/2 activation, we blocked IL-6 and IL-6R with anti-IL-6 and anti-IL-6R antibodies, respectively, at concentrations that were sufficient to inhibit IL-6-induced p-STAT3 in melanoma cells (Supplementary Fig. 4a). Unfortunately, blocking IL-6 (Fig. 4b) and IL-6R (Fig. 4c) did not restore increased p-JAK1/2 in IFNγR1KO cells, suggesting IL-6 signaling is not involved in JAK1/2 activation.
Fig. 4: Activation of JAK1/2 in IFNγR1KO cells is not mediated by extrinsic signals.
a mRNA expression of Il6 (*p = 0.0153) and Il6r (*p = 0.0216) in scrambled control (n = 3) and IFNγR1KO (n = 3) cells by real-time RT-PCR. Representative data from two independent experiments are shown as means ± SEM. b, c p-JAK2 in scrambled control and IFNγR1KO cells pretreated with various doses of blocking antibodies against IL-6 (b) or IL-6R (c), analyzed by Western blot (WB). Experiments were repeated twice with similar results. d mRNA expression of Ifnar1 in scrambled control (n = 3) and IFNγR1KO (n = 3) cells by real-time RT-PCR. Representative data from two independent experiments are shown as means ± SEM. ***p = 0.0005. e–g Scrambled and IFNγR1KO cells were transduced with different sgRNAs against mouse Ifnar1. Successfully transduced cells were analyzed for IFNαR1 expression in untreated cells (e) and PD-L1 expression after stimulation with 100 ng/mL IFN-α for 48 h (f) by flow cytometry (FACS strategy 1) and p-JAK1/2 in untreated cells by WB (g). h p-JAK2 in scrambled and IFNγR1KO cells incubated with supernatants (SN) harvested from scrambled or IFNγR1KO cultures for 24 h, analyzed by WB. β-actin was used as a loading control in WB. All the experiments were repeated twice with similar results. A two-sided Student's t-test was used for statistical analyses in a, d. Source data are provided in the Source Data file.
We then asked if type I interferon signaling contributes to JAK1/2 activation. To this end, we analyzed IFNαR1, the essential receptor for IFN-α/β, and found it was significantly upregulated in IFNγR1KO cells (Fig. 4d). We interrogated if IFNαR1 upregulation would lead to greater IFN-α signaling. To this end, we stimulated scrambled control and IFNγR1KO cells with various doses of IFN-α, followed by an examination of p-STAT1 and p-STAT3, which did not show greater increases in IFNγR1KO cells (Supplementary Fig. 4b). Also, IFNγR1KO cells did not show enhanced sensitivity to IFN-α-induced killing (Supplementary Fig. 4c). While these results suggested that JAK1/2 activation in IFNγR1KO cells may not be due to enhanced IFN-α signaling, to explicitly rule out this, we deleted Ifnar1 in scrambled control and IFNγR1KO cells using CRISPR-Cas9 with different single guide RNAs (sg1 and sg2) (Fig. 4e). We confirmed the ablation of the IFN-α signaling in these cells, evidenced by no inducible PD-L1 upregulation after IFN-α stimulation (Fig. 4f). Importantly, this ablation of IFNαR1 did not rescue JAK1/2 activation (Fig. 4g), indicating a dispensable role of IFN-α signaling in JAK1/2 activation. Lastly, to explore the potential regulation of JAK1/2 activation by other extrinsic factors secreted by IFNγR1KO cells into the supernatant (SN) (cytokines, growth factors, extracellular vesicles, etc.), we treated scrambled control cells with SNs harvested from IFNγR1KO cultures for 24 h. This did not induce increased p-JAK2 (Fig. 4h), suggesting a nonessential role of extrinsic factors in JAK1/2 activation. Of note, increased p-JAK2 in IFNγR1KO cells persisted, irrespective of the SNs (IFNγR1KO or scrambled control) used, indicating that JAK1/2 activation is more of a cell-intrinsic event.
Augmented mTOR pathway mediates JAK1/2 activation in IFNγR1KO melanoma
In addition to extracellular signals (IFN-α, IL-6, etc.), constitutive activation of JAK1/2 can result from cell-intrinsic alterations (i.e., enhanced intracellular signaling25). To gain a global idea of this, we performed a whole transcriptome analysis of scrambled control and IFNγR1KO cells, which identified 265 downregulated genes and 332 upregulated genes (Fig. 5a). We performed a signaling pathway enrichment analysis using these differentially expressed genes (DEGs). This unsupervised analysis revealed a wide array of pathways that were significantly affected (Fig. 5b), including essential intracellular pathways in tumor aggression and therapeutic resistance (e.g., PI3K-Akt, p53, FoxO, MAPK, and mTOR pathways26,27,28,29), pathways important in tumor cell growth and proliferation (e.g., cell cycle26, glutathione metabolism30, arginine, proline metabolism31, etc.), as well as pathways involved in the formation of various types of cancer (e.g., prostate cancer, breast cancer, colorectal cancer, melanoma, gastric cancer, etc.). This confirms a widespread impact of IFN-γ signaling loss in tumor cells on tumor progression and therapy response, including its role in ICB resistance9.
Fig. 5: Heightened PI3K-AKT-mTOR axis in IFNγR1KO cells mediates JAK1/2 activation.
a, b Upregulated and downregulated genes in scrambled (n = 3) and IFNγR1KO (n = 3) cells by RNA-Seq (a) and top hits of altered signaling pathways in IFNγR1KO cells (b). The gene expression analyses were performed using DESeq2 (version 1.34.0). The Wald test was used to calculate the p values and log2 fold changes. Genes with an adjusted p value < 0.05 and absolute log2 fold change >1 were considered as differentially expressed genes (DEGs). A volcano plot was used to show all upregulated and downregulated DEGs using the ggplot2 R package. Enriched Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways of the DEGs were identified by enrichr package. Significant terms of the KEGG pathways were selected with a p value < 0.05. c Cell lysates of scrambled control (n = 4) and IFNγR1KO (n = 4) cells were subjected to mass spectrometry-based phosphoproteomic analysis. Phosphorylation sites known to be mediated by experimentally defined kinases were shown in the heatmap. Blue and red colors indicate low and high expression levels, respectively. d p-AKT, total-AKT, and p-4E-BP1 in scrambled and IFNγR1KO cells were analyzed by Western blot (WB). e p-JAK2 in scrambled and IFNγR1KO B16-BL6 cells untreated (Ctrl) or pretreated with rapamycin (1 μM) for 3 h, analyzed by WB. f Scrambled and IFNγR1KO cells were transduced with lentiviruses expressing nonspecific shRNAs (shCtrl) or mTOR shRNAs (shmTOR), followed by analyses of mTOR, p-JAK1/2 by WB. β-actin was the loading control in WB. Experiments were repeated twice (f) or thrice (d, e) with similar results. Source data are provided in the Source Data file.
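The DEG thresholds stated in the legend (adjusted p < 0.05, |log2 fold change| > 1) amount to a simple filter on a DESeq2-style results table. As a rough, hypothetical sketch of that filtering step (column names follow DESeq2's default output; everything else is illustrative and not the authors' exact pipeline):

```python
import pandas as pd

def call_degs(results: pd.DataFrame,
              padj_cutoff: float = 0.05,
              lfc_cutoff: float = 1.0) -> pd.DataFrame:
    """Label genes as up- or downregulated from a DESeq2-style results table."""
    res = results.dropna(subset=["padj", "log2FoldChange"]).copy()
    res["status"] = "not significant"
    up = (res["padj"] < padj_cutoff) & (res["log2FoldChange"] > lfc_cutoff)
    down = (res["padj"] < padj_cutoff) & (res["log2FoldChange"] < -lfc_cutoff)
    res.loc[up, "status"] = "upregulated"
    res.loc[down, "status"] = "downregulated"
    return res
```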
To directly detect the activities of these intracellular signaling pathways, we conducted phosphoproteomic studies with a special focus on serine/threonine kinases, considering their intricate interactions with PTKs22. Our analysis identified 7529 phosphosites, of which 217 showed significantly increased phosphorylation in IFNγR1KO cells (deposited to massive.ucsd.edu and also included in the source data file). We paid special attention to the ones catalyzed by experimentally well-defined kinases (Fig. 5c); targeted proteins and phosphosites were listed on the right. Uniprot IDs (mouse) for these kinases were then used to map to KEGG IDs for pathway enrichment analyses, which defined 23 signaling pathways (Supplementary Table 2). Because the same phosphopeptides can be mediated by different kinases, it is rare to have definitive cognate phosphopeptides for individual kinases. We, therefore, reason that if the activation of kinases in one pathway can explain most of the phosphorylation events, a great level of confidence can be reached to conclude that that pathway is activated. Following this logic, we sorted the 23 signaling pathways according to the number of identified phosphorylation sites known to be catalyzed by their kinase members, which identified the top five pathways as PI3K-Akt, growth hormone synthesis, ErbB, mTOR, and EGFR tyrosine kinase inhibitor resistance. Considering that our above results did not support an important role of extrinsic factors (such as cytokines and growth hormones) in JAK1/2 activation and the fact that EGFR tyrosine kinase inhibitor resistance was not relevant to our study, we dedicated our efforts to the other three signaling pathways. Notably, these pathways are intimately interconnected with each other in cancer, as ErbB signaling feeds into PI3K-Akt32 and mTOR is a major downstream module of PI3K-Akt27. Importantly, our RNA-seq and phosphoproteomic analysis converged on the PI3K-Akt and mTOR pathways, highlighting their essential roles in our system. Of note, the JAK-STAT pathway was not identified by our phosphoproteomic and RNA-seq analyses; this is likely due to the preferential enrichment of peptides with serine and/or threonine phosphorylation by the TiO2-based sample preparation for phosphoproteomics, the fact that JAK-STAT proteins are primarily activated by tyrosine phosphorylation, very low basal levels of p-STAT3/1, and the dependence of these omics analysis on protein abundance. To directly test if the PI3K-Akt-mTOR axis is activated in IFNγR1KO melanoma cells, we analyzed p-AKT and p-4E-BP1, functional readouts of mTOR action, and found both were increased (Fig. 5d). Given the co-activation of JAK1/2 and mTOR in IFNγR1KO cells and a recent study showing a positive mutual regulatory relationship between them in colorectal tumor cells33, we assessed how they interact and regulate one another in melanoma. First, we took a pharmacological approach by treating cells with rapamycin (Rapa), a well-established inhibitor for mTOR and found that Rapa profoundly suppressed p-JAK2 (Fig. 5e); conversely, inhibition of JAK1/2 with Ruxo did not change p-4E-BP1 in IFNγR1KO cells (Supplementary Fig. 5a), placing mTOR upstream of JAK1/2. To directly assess the role of mTOR in JAK1/2 activation, we knocked down mTOR using shRNAs (mTORKD) (Fig. 5f). Similar to mTOR inhibition by Rapa, mTORKD also significantly reduced p-JAK1/2 and at least partially rescued JAK1/2 activation in IFNγR1KO cells (Fig. 5f). 
Collectively, these results establish that augmentation of mTOR pathway is a major upstream regulator of JAK1/2 activation in melanoma cells lacking functional IFN-γ signaling.
To establish the clinical relevance of our findings, we rationalized that IFNGR1Low SKCMs with impaired IFN-γ signaling and patient melanomas resistant to ICBs would house activated mTOR and JAK1/2 to some extent. Because phosphorylation data of JAK1/2 and mTOR were not available in the TCGA database and in the published database of melanoma patients treated with ICB (GSE78220)34, precluding a direct examination of their activation, as an alternative approach, we constructed a list of genes that were reported to be direct downstream targets of mTOR and JAK1/2 in various tumor types, including bladder cancer, breast cancer, liver cancer, lymphoma, and chondrosarcoma. These genes encompass tumor promoter genes (ENO1, FASN, FKBP4, ODC1, JUNB, and VEGFA)35,36,37,38,39,40,41,42 and tumor suppressor gene (GADD45A)43. Notably, activation of mTOR and/or JAK-STAT leads to upregulation of tumor promoter genes (ENO1: α-Enolase, an important glycolytic enzyme; FASN: fatty acid synthase, a major enzyme for de novo fatty acids synthesis; FKBP4: FK506-binding protein 4, an HSP90-associated co-chaperone; ODC1: ornithine decarboxylase, the first biosynthetic enzyme of the polyamine pathway; JUNB: a key member in the activator protein (AP-1) family with an important role in cell cycle progression; VEGFA: vascular endothelial growth factor-A, a key regulator of angiogenesis) but downregulation of GADD45A (the founding member of the growth arrest and DNA damage-inducible 45 families with important function in promoting cell cycle arrest and apoptosis), consistent with their prominent roles in tumor formation27,44. To specifically assess their expression in human melanomas, we deconvoluted the TCGA and GSE78220 databases derived from bulk tumor samples, as described above. While understandably not all the genes showed significant changes in melanoma, we did observe upregulation of ENO1, FASN, and FKBP4, as well as downregulation of GADD45A in IFNGR1Low SKCMs (Supplementary Fig. 5b); on the other hand, patient melanomas resistant to anti-PD-1 exhibited significant increases of ENO1, FKBP4, ODC1, and VEGFA (Supplementary Fig. 5c). The other genes exhibited expected increases/decrease, which did not reach statistical significance (Supplementary Fig. 5b, c). In spite of the differences in affected genes between the TCGA and GSE78220 databases, two genes (ENO1 and FKBP4) were consistently upregulated in both IFNGR1Low SKCMs and ICB non-responders, suggesting that they may be more sensitive to attenuation of IFN-γ signaling and ICB resistance. Taken together, our RNA-seq, phosphoproteomic analysis, bioinformatic analysis, as well as pharmacological and genetic modulations of the mTOR pathway establish that malfunction of IFN-γ signaling engages the mTOR-JAK1/2 axis in melanoma cells, which may represent an attractive target for therapeutic interventions to bypassing ICB resistance in melanomas lacking functional IFN-γ signaling.
Selective suppression of IFNγR1KO melanomas by JAK inhibition
To test whether targeting this aberrantly active axis can indeed suppress melanomas lacking IFN-γ signaling, we employed Ruxo, an FDA-approved JAK1/2 inhibitor for myeloproliferative neoplasms (MPN) that is also being tested preclinically45,46 and clinically in solid tumors47, as well as in overcoming chemotherapy resistance48,49. However, its utility in ICB resistance has not been explored. To this end, we treated B6 mice bearing scrambled control and IFNγR1KO melanomas with Ruxo. Whereas Ruxo did not suppress the growth of scrambled control melanoma (Fig. 6a), it potently inhibited IFNγR1KO melanoma growth (Fig. 6b, c), highlighting a selective suppressive effect of Ruxo in the latter. Given that JAK1/2 were activated in IFNγR1KO cells at baseline (Fig. 3), we asked if IFNγR1KO cells were more sensitive to Ruxo-induced cell killing. To this end, we first titrated effective doses of Ruxo for suppressing JAK1/2 in scrambled control and IFNγR1KO cells, based on suppression of p-STAT1/3 induced by a brief stimulation with IFN-α (note: this was necessary for ready detection of p-STAT1/3, given its low basal level in these cells). As shown in Supplementary Fig. 6a, Ruxo already significantly suppressed p-STAT1/3 at 10 nM and, at 1 μM, completely blocked IFN-α-induced p-STAT1/3. However, no appreciable killing of scrambled control or IFNγR1KO cells by Ruxo (10 nM–1 μM) was observed (Supplementary Fig. 6b), nor did it cause differential suppression of colony formation between these two cell types in a 7-day colony forming assay (Supplementary Fig. 6c). These data indicate that the selective suppression of IFNγR1KO melanoma by Ruxo is unlikely to result from preferential killing of IFNγR1KO cells.
Fig. 6: Ruxo suppresses IFNγR1KO but not scrambled control melanomas.
a Growth of scrambled control melanomas in B6 mice treated with vehicle (UnTx, n = 5) or with Ruxo (90 mg/kg by oral gavage twice daily) (n = 5) for 10 days. b–e B6 mice bearing IFNγR1KO melanoma were treated as in a. b Tumor growth: n = 8 per group; ****p = 0.0002 by two-way ANOVA with Šídák's multiple comparisons test. c Tumor weights at euthanization (n = 8 per group; *p = 0.0422). Isolated TILs from these mice were analyzed for frequency of FoxP3+ Treg (d) (n = 8 per group; ****p = 0.0003) and cytokine production of TNF, IFN-γ, Perforin, and IL-2 in CD4+ TILs (e) (n = 5 per group; *p = 0.0233 for TNF; *p = 0.011 for IFN-γ; *p = 0.0311 for Perforin; *p = 0.0319 for IL-2) after a brief stimulation with PMA and ionomycin. f, g Isolated TILs were cultured with 100 U/mL IL-2, ±1 μM Ruxo, for 3 days, to analyze FoxP3+ Treg (f) (n = 3 per group; *p = 0.0346; ****p = 0.00009) and IFN-γ/TNF production (g) (n = 3 per group; *p = 0.0488) in CD4+ TILs after a brief stimulation with PMA and ionomycin by flow cytometry (FACS strategy 3). A two-sided Student's t-test was used in c–g for statistical analyses. The scatter plots and line graphs depict means ± SEM. Source data are provided in the Source Data file.
Next, we wondered if Ruxo treatment of IFNγR1KO melanoma could render TILs more functional. To this end, single-cell suspensions prepared from untreated and Ruxo-treated IFNγR1KO melanomas were analyzed. In line with the fact that Ruxo is a well-established JAK1/2 inhibitor, we observed the expected suppression of p-JAK2 and p-STAT3 in tumor (CD45−) cells by Ruxo (Supplementary Fig. 6d). Interestingly, Ruxo resulted in a pronounced reduction of Treg in CD4+ TILs (Fig. 6d) and a milder but still significant reduction in CD4+ splenocytes (Supplementary Fig. 6e), consistent with previously reported Ruxo suppression of Treg in humans50 and mice51. Moreover, Ruxo increased production of TNF, IFN-γ, perforin, and IL-2, essential effector molecules in anti-tumor immunity, by CD4+ TILs (Fig. 6e); similar increases of IFN-γ (Supplementary Fig. 6f), perforin (Supplementary Fig. 6g), and GzmB (Supplementary Fig. 6h) were also noticed in CD8+ TILs. Intrigued by these prominent in vivo Ruxo effects on TILs, we asked if Ruxo could directly reprogram TILs in vitro. To this end, TILs isolated from untreated melanomas were cultured with 100 U/mL IL-2, ±1 μM Ruxo (a concentration with potent suppression of p-STAT1/3 in vitro), for 3 days and then analyzed for FoxP3 expression (Fig. 6f) and production of IFN-γ/TNF (Fig. 6g). Although not as striking as the in vivo effects, this in vitro Ruxo regimen nevertheless reduced FoxP3 expression and enhanced effector function of TILs. Considering the reported on-target suppressive effects of Ruxo on MPN-associated splenomegaly, which raise the possibility of toxicity toward mature T cells52, we assessed the abundance of CD4+ and CD8+ T cells in the spleens and did not observe an overt reduction (Supplementary Fig. 6i), suggesting negligible toxicity from this short-term Ruxo therapy. Because our results revealed minimal direct killing of tumor cells and substantial modulation of TILs by Ruxo, we posit that Ruxo relies on TILs to mediate its efficacy.
T cells and host TNF signaling control Ruxo efficacy
To directly assess the importance of T cells in Ruxo therapy, we treated IFNγR1KO melanoma-bearing mice with anti-CD4 and anti-CD8 neutralizing antibodies prior to and during Ruxo therapy. Strikingly, depletion of either CD4+ or CD8+ T cells completely abolished Ruxo efficacy (Fig. 7a), supporting a pivotal role of T cells in orchestrating the therapeutic effects of Ruxo. Next, we wanted to delineate the molecular mechanism(s) underlying Ruxo efficacy. To this end, we focused on TNF for the following considerations: (1) TNF has long been regarded as an important effector molecule in mediating tumor necrosis53 and has been previously shown to be important in anti-tumor immune responses54. (2) TNF has been reported to suppress Treg in both mouse and human systems55,56, which coincides with the prominent effect of Ruxo therapy (Fig. 6d and Supplementary Fig. 6e), implying an intricate connection between Ruxo and TNF. (3) Both Ruxo and anti-CTLA-4 induced prominent production of TNF by TILs (Fig. 2c, Supplementary Fig. 2a, and Fig. 6e). Because Ruxo was systemically administered in our study, we further assessed if Ruxo impacted TNF production by other immune cells such as intratumoral CD8+ T cells (Supplementary Fig. 7a), dendritic cells (DCs: CD11c+MHC-II+, Supplementary Fig. 7b), and macrophages (CD11b+F4/80+, Supplementary Fig. 7c). Interestingly, no increase of TNF production by these immune cells was induced by Ruxo, suggesting a selective promotion of TNF production by Ruxo in CD4+ TILs. Although Ruxo did not boost TNF production in these immune cells, they (in particular CD8+ TILs and macrophages, and likely other immune cells) still produced abundant TNF, highly comparable to that of CD4+ TILs (Fig. 6e), which can contribute to the overall T cell-dependent anti-tumor responses elicited by Ruxo therapy. To directly examine how host TNF signaling affects Ruxo efficacy, we inoculated TNF−/− mice, which lack TNF in host cells, including immune cells (T cells, myeloid cells, etc.), with IFNγR1KO melanoma cells, followed by Ruxo treatment. In contrast to the significant suppression of IFNγR1KO melanomas by Ruxo in B6 mice (Fig. 6b), Ruxo was unable to suppress IFNγR1KO melanoma in TNF−/− mice (if anything, the effect was reversed) (Fig. 7b, c), highlighting a crucial role of host TNF signaling in this process. To assess whether TNF deficiency abrogates the modulatory effects of Ruxo on TILs, we analyzed TILs from TNF−/− mice treated with Ruxo and did not observe Ruxo-driven depletion of Treg (Fig. 7d). Also, there was no increase of IFN-γ production by CD4+ TILs (Fig. 7e) or CD8+ TILs (Supplementary Fig. 7d). Similar findings were noticed for IL-2 production by CD4+ (Fig. 7f) and CD8+ TILs (Supplementary Fig. 7e). Considering the potentially detrimental effects of chronic TNF deficiency in TNF−/− mice, we took a complementary approach by temporarily blocking TNF with in vivo anti-TNF neutralizing antibodies, treating mice before tumor inoculation and throughout the duration of Ruxo therapy. As shown in Supplementary Fig. 7f, as in TNF−/− mice, in vivo neutralization of TNF also largely abolished the therapeutic effects of Ruxo. Lastly, considering the well-recognized role of TNF in inducing tumor necrosis, we determined if TNF could induce greater killing of IFNγR1KO melanoma cells as an additional mechanism beyond the aforementioned immunomodulatory effects. To this end, both scrambled control and IFNγR1KO cells were treated with TNF in vitro.
Surprisingly, no obvious killing was seen, even when TNF was used at a supraphysiological dose (10,000 U/mL) (Supplementary Fig. 7g), suggesting that direct killing of tumor cells by TNF may not be important for Ruxo efficacy. In sum, these results indicate that Ruxo selectively suppresses the growth of IFNγR1KO melanoma in a T cell- and TNF-dependent manner.
Fig. 7: Ruxo-induced suppression of IFNγR1KO melanomas relies on T cells and host TNF.
a Growth of IFNγR1KO melanomas in B6 mice treated with Ruxo, ±neutralizing antibodies against CD4+ (α-CD4) or CD8+ (α-CD8) T cells (n = 5 for UnTx, n = 5 for Ruxo, n = 10 for α-CD4, n = 10 for α-CD4+Ruxo, n = 9 for α-CD8, n = 5 for α-CD8+Ruxo). ****p = 0.00007 by two-way ANOVA with Tukey's multiple comparisons test (with adjustment). b–f TNF−/− mice bearing IFNγR1KO melanoma were treated with vehicle (UnTx, n = 4) or Ruxo (n = 4) (90 mg/kg by oral gavage twice daily) for 10 days. b Tumor growth (***p = 0.0007 by two-way ANOVA with Šídák's multiple comparisons test with adjustment). c Tumor weights at euthanization (*p = 0.0391) were shown. Isolated CD4+ TILs from these mice were analyzed for FoxP3+ Treg frequencies (d) and production of IFN-γ (e) (*p = 0.0473) and IL-2 (f) after a brief PMA and ionomycin stimulation by flow cytometry (FACS strategy 3). A two-sided Student's t-test was used in c–f for statistical analyses. Representative results from two independent experiments are shown as means ± SEM in the scatter plots and line graphs. Source data are provided in the Source Data file.
Discussion
Paradigm-shifting ICBs have brought great promise to patients with advanced melanoma, a tumor type that had been largely incurable until the approval of anti-CTLA-4 in 2011. However, therapeutic resistance to ICBs is common8, and loss of IFN-γ signaling in melanoma cells has been reported to be a major mechanism of resistance9,10,11,12,13,14. Despite this key information, little is known about why this resistance occurs and how to overcome it. Here, we identify that melanomas defective in IFN-γ signaling are not only resistant to IFN-γ-induced cell death but also show reduced infiltration of CD8+ T cells and lack anti-CTLA-4-induced functional rejuvenation of TILs, posing a dual resistance to ICBs. Surprisingly, IFNγR1KO melanomas harbor an aberrantly active mTOR-JAK1/2 axis, which, when targeted with the FDA-approved JAK1/2 inhibitor Ruxo, results in potent and selective suppression of IFNγR1KO but not scrambled control melanomas, in a T cell- and host TNF-dependent fashion. Moreover, human melanomas with attenuated IFN-γ signaling or ICB resistance exhibit reduced expression of T cell signature genes and altered expression of target genes downstream of the mTOR and JAK1/2 pathways, suggestive of their activation. Our results establish an important role of tumor IFN-γ signaling in modulating TILs and point to a potential "targeted" therapy for ICB-resistant IFNγR1KO melanomas.
Tumors lacking functional IFN-γ signaling have been shown to evade endogenous immunosurveillance57,58,59 and anti-tumor immunity elicited by ICBs9,10. However, it is unknown whether tumor-intrinsic IFN-γ signaling modulates TILs. On one hand, IFN-γ, by upregulating MHC molecules and activating the tumor antigen processing and presentation machinery60,61,62,63,64, promotes anti-tumor immunity; on the other hand, it can also suppress anti-tumor immunity by inducing various regulatory mechanisms such as PD-L1 upregulation in stromal and tumor cells65. We observed a pronounced reduction of both MHC molecules and PD-L1 in IFNγR1KO melanoma, with the former being more pronounced. Our study corroborates an early pioneering study by Bob Schreiber and colleagues, which demonstrated that IFNγR1 truncation in methA fibrosarcoma decreased tumor immunogenicity and responsiveness to LPS therapy59. Although our results suggest that the lack of inducible PD-L1 upregulation in IFNγR1KO melanomas plays a seemingly nonessential role in promoting TILs, this is likely a context-dependent finding, as incongruous results have been reported for the importance of tumor PD-L1 in anti-tumor immunity66,67,68. Given these findings of reduced T cell infiltration and function in IFNγR1KO melanoma, it would be interesting to delineate the specific molecular and biochemical mechanisms underlying the immunomodulation of TILs by tumor IFN-γ signaling in the future. For example, what is the role of MHC downregulation in this process? How would tumor cell-intrinsic IFN-γ signaling regulate stemness, survival, and metabolic fitness of tumor cells, as these features have been associated with therapeutic resistance45 and suppression of TILs' function69? To this end, a recent study showed that melanoma cells defective in IFN-γ signaling outgrew wild-type tumor cells when treated with anti-PD-170, indicating a survival advantage of IFNγR1KO cells.
We identified a JAK1/2-centered network of constitutively active PTKs in IFNγR1KO melanomas, which offers a "personalized" therapeutic target that can be harnessed to treat these ICB-resistant melanomas. Indeed, short-term Ruxo therapy selectively suppressed IFNγR1KO melanomas, coupled with improved effector function of TILs and reduced frequency of intratumoral Treg. Our results established an essential role of T cells and host TNF signaling in governing Ruxo efficacy. Although we observed that Ruxo selectively promoted TNF production by CD4+ TILs but not by CD8+ TILs and myeloid cells (i.e., macrophages and DCs), it is noteworthy that these immune cells (especially CD8+ TILs and macrophages) produced abundant TNF, comparable to (if not higher than) that of CD4+ TILs, which can in turn act on TILs, in an autocrine or paracrine manner, to mediate the therapeutic effects of Ruxo. Additionally, other immune cells such as γδ T cells, iNKT cells, NK cells, and innate lymphoid cells (ILCs) can also produce ample TNF that can be regulated by Ruxo. Additional mechanistic studies using mice with selective deletion of TNF in different immune cell populations (e.g., CD4+ T cells, CD8+ T cells, DCs, macrophages, and other immune cells) are needed to explicitly pinpoint the major cellular sources of TNF that underscore Ruxo efficacy. Importantly, Ruxo has been utilized preclinically to treat solid tumors, with promising effects reported in ovarian cancer by suppressing stemness45, in aggressive carcinoma by antagonizing TGF-β-induced production of leukemia inhibitory factor46, and in KRAS-driven lung adenocarcinoma by decreasing tumor-promoting chemokines, cytokines, and immunosuppressive myeloid-derived suppressor cells71. Here, we report that Ruxo can also be utilized to overcome ICB resistance derived from tumor loss of IFN-γ signaling. Currently, Ruxo is being clinically tested in patients with advanced solid tumors (NCT02646748), non-small cell lung cancer (NCT02917993), and triple-negative breast cancer (NCT02876302)47. Our results justify further testing of Ruxo in patients with advanced melanoma resistant to ICBs, who account for ~75% of all patients9. Although our short-term Ruxo therapy was effective and did not incite overt immunosuppressive toxicity, we argue that it likely needs to be combined with other therapeutic modalities to achieve a long-term cure. To this end, preclinical studies have shown that JAKi can improve the therapeutic efficacy of radiotherapy72,73,74,75 and, when rationally combined with chemotherapies or oncolytic virus immunotherapy, induce synergistic effects in different types of cancer76,77,78.
Interestingly, a similar counterintuitive reactivation of the JAK-STAT pathway was previously identified in MPN cells chronically treated with a JAK2 inhibitor49. Perhaps long-term JAK inhibition and chronic functional deficiency (as in IFNγR1KO melanoma) engage other mechanisms to reactivate this essential pathway and sustain crucial functions such as cell division and differentiation23. We show here that the augmented mTOR pathway represents such a key compensatory mechanism, resulting in JAK1/2 activation in IFNγR1KO melanoma. However, how loss of IFNγR1 activates the PI3K-Akt-mTOR axis remains to be delineated. In a patient with myelodysplastic syndrome, the constitutively active fusion protein TEL-Syk was associated with activated PI3K-AKT79, and ectopic knock-in of TEL-Syk or overexpression of Syk in various lymphoma cells80 directly leads to activation of mTOR. Although our kinomic studies revealed active Syk, additional WB analyses showed that p-Syk was extremely low and did not differ significantly between scrambled control and IFNγR1KO cells. Future studies with genetic knockdown/knockout of Syk may be worth pursuing to directly pinpoint its involvement in mTOR activation. In addition, our phosphoproteomic studies identified activation of ErbB signaling as a top hit in IFNγR1KO melanoma, which is known to feed signals into the PI3K-Akt-mTOR pathway32 and may represent a potential underlying mechanism of mTOR activation. As we recently described23, JAK1/2, principal gatekeepers of various cellular signaling pathways, are delicately regulated at different levels, including post-translational modifications, the inhibitory function of the pseudokinase domain, and many regulators such as phosphatases, Protein Inhibitors of Activated STAT (PIAS) that inhibit STAT-DNA binding, and suppressors of cytokine signaling (SOCS)81. It would be interesting to investigate how the IFNγR1KO-mTOR axis affects these regulatory mechanisms in the future, especially the activity of protein tyrosine phosphatases (PTPs) in mediating JAK1/2 activation.
In summary, we demonstrate that ICB-resistant melanomas lacking IFN-γ signaling have reduced infiltration and effector function of TILs but exhibit an aberrantly active mTOR-JAK1/2 axis. Inhibiting the activated JAK1/2 with Ruxo selectively suppresses IFNγR1KO melanomas, providing a "targeted" therapy for these ICB-resistant melanomas. Ruxo relies on T cells and host TNF signaling, rather than direct killing of tumor cells, to exert its selective efficacy. Since Ruxo is clinically approved to treat MPN and is actively being tested preclinically and clinically in solid tumors47, our findings lay a solid foundation for repurposing Ruxo and testing it clinically in patients with advanced melanoma resistant to ICBs, addressing a pressing unmet medical need.
Methods
Mice and cell lines
Seven-week-old C57BL/6 (Stock No: 000664), Rag-1−/− (Stock No: 002216), and TNF−/− (Stock No: 005540) mice were purchased from The Jackson Laboratory (Bar Harbor, ME) and housed in specific pathogen-free conditions in the animal facility of The University of Alabama at Birmingham (UAB) under a 12 h/12 h light/dark cycle at ambient room temperature (22 °C) with 40–70% humidity. The animal protocol (APN-21945) was approved by the Institutional Animal Care and Use Committee at UAB. All tumor-bearing mice were humanely euthanized before their tumors reached the maximally allowed tumor size (20 mm in diameter) in our animal protocol. The B16-BL6 murine melanoma cells were kindly provided by Dr. I. Fidler at MD Anderson Cancer Center and cultured in MEM supplemented with 10% FBS, 2 mM l-glutamine, 1 mM sodium pyruvate, 1% nonessential amino acids, 1% vitamin solution, 100 units/mL of penicillin, and 100 µg/mL of streptomycin (all from Invitrogen) in a humidified 37 °C incubator with 5% CO2. B16-BL6 IFNγR1KD and scrambled control cells were similarly maintained and used as we previously described9. All cells were regularly tested using the MycoAlert detection kit (Lonza, LT07-118) and kept free of mycoplasma.
Generation of genetically engineered cell lines
Gene knockout cell lines were generated using CRISPR-Cas9 technology, as we previously described in ref. 82. Briefly, single guide RNA sequences (sgRNAs) were inserted into the lentiCRISPR v2 plasmid (Addgene, #52961). Lentiviruses were packaged by co-transfecting 293T cells with lentiCRISPR v2, pMD2.G (Addgene, #12259), and psPAX2 (Addgene, #12260). B16-BL6 cells were then transduced with lentiviruses containing scramble sgRNAs (5′-GCACTACCAGAGCTAACTCA-3′, targeting GFP) or sgRNAs against genes of interest. Cells were selected with 2 µg/mL of puromycin and then seeded on 96-well plates at ~1 cell per well. The grown single clones were then screened based on PD-L1 expression after IFN-γ and IFN-α stimulation for IFNγR1KO and IFNαR1KO, respectively, with further confirmation of their IFNγR1 and IFNαR1 expression by flow cytometry. The sgRNAs used against mouse Ifngr1 were sgRNA #2 (5′-TGGAGCTTTGACGAGCACTG-3′) and sgRNA #5 (5′-AGCTGGCAGGATGATTCTGC-3′). The sgRNAs used against mouse Ifnar1 were sgRNA #1 (5′-TCAGTTACACCATACGAATC-3′) and sgRNA #2 (5′-GCTTCTAAACGTACTTCTGG-3′). For mTOR knockdown, lentiviruses containing shRNAs against mouse mTOR or scramble shRNA were purchased from Santa Cruz Biotechnology (#sc-35410-V). Transduction of scrambled control or IFNγR1KO B16-BL6 cells was performed following the manufacturer's instructions. Briefly, cells were seeded in 6-well plates and cultured until ~70% confluency. Ten microliters of scramble or shmTOR lentivirus were added to the medium containing 8 μg/mL of polybrene from Santa Cruz Biotechnology (#sc-134220). Forty-eight hours later, cells were transferred to a 10-cm plate and selected with 2 μg/mL puromycin until no further cell death was observed under puromycin selection. Successfully transduced cells were maintained in medium containing 1 μg/mL puromycin. mTOR knockdown was confirmed by WB. For IFNγR1 restoration, we subcloned mouse Ifngr1 cDNA into pLenti CMV GFP Puro between the BamHI and SalI restriction enzyme sites. Lentiviruses were packaged by co-transfection with pMD2.G and psPAX2. Scrambled control and IFNγR1KO cells were transduced with lentiviruses, selected under 2 μg/mL puromycin, and maintained in medium containing 1 μg/mL puromycin. In some experiments, scrambled control and IFNγR1KO B16-BL6 cells were seeded on a six-well plate, left untreated or treated with 10 or 50 μg/mL of anti-IL-6 (Bio X Cell, clone MP5-20F3, #BE0046) or anti-IL-6R (Bio X Cell, clone 15A7, #BE0047) antibodies for 48 h, and then lysed for WB analysis of phospho-JAK2 (see WB section below). To prove effective blocking with anti-IL-6 and anti-IL-6R, we treated B16-BL6 cells with IL-6 (100 ng/mL; Biolegend, #575702) in the presence or absence of 10 μg/mL anti-IL-6/IL-6R antibodies; harvested cell lysates were analyzed for phospho-STAT3 (see WB section below).
In vivo tumor inoculation and treatment
Seven-week-old C57BL/6 or Rag-1−/− mice were shaved and inoculated intradermally in the right flank with 1.25 × 105 B16-BL6 cells on day 0. Mice were left untreated or treated with anti-CTLA-4 (Bio X Cell, clone 9H10, #BE0131) intraperitoneally (i.p.) on days 3, 6, and 9 with 200, 100, and 100 µg per mouse, concurrently with vaccination using GVAX (GM-CSF-expressing B16-BL6 cells irradiated with 150 Gy), as we previously reported in ref. 9. C57BL/6 and TNF−/− mice bearing palpable melanoma were treated with Ruxolitinib (LC Laboratories, #R-6600; reconstituted evenly in ORA-Plus Suspending Vehicle) by oral gavage, twice daily at 90 mg/kg for 10 days. In vivo TNF blocking (Bio X Cell, clone XT3.11, #BE0058) was initiated 1 day before tumor inoculation at a dose of 250 µg per mouse i.p. and repeated every three days until mice were euthanized. In vivo neutralizing antibodies against CD4 (Bio X Cell, clone GK1.5, #BE0003-1) and CD8 (Bio X Cell, clone 2.43, #BE0061) were given at a dose of 250 µg per mouse i.p. 1 day prior to tumor inoculation and on days 1, 3, and 10 post tumor inoculation. Tumors were measured by caliper every other day starting from day 6, and tumor volumes (mm3) were calculated using the formula 0.52 × length × width^2, with caliper readings in mm. The tumor-bearing mice were sacrificed when the tumor reached 20 mm in diameter. Tumors and spleens were collected at indicated times, and tumor weights were recorded.
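For readers who want to reproduce the tumor-volume calculation, the formula above can be written as a short R helper; this is a minimal sketch with hypothetical caliper readings (the function name and example values are ours, not part of the study protocol).

```r
# Minimal sketch of the tumor volume formula used above:
# volume (mm^3) = 0.52 x length x width^2, with caliper readings in mm.
tumor_volume <- function(length_mm, width_mm) {
  0.52 * length_mm * width_mm^2
}

# Hypothetical example: caliper measurements from one mouse at three time points
length_mm <- c(6.0, 8.5, 11.0)
width_mm  <- c(5.0, 7.0, 9.5)
tumor_volume(length_mm, width_mm)  # returns one volume (mm^3) per time point
```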
TILs isolation and splenocyte preparation
Tumors were collected in ice-cold RPMI 1640 containing 2% FBS and minced into fine pieces, followed by digestion with 400 U/mL collagenase D (Worthington Biochemical Corporation, #LS004186) and 20 µg/mL DNase I (Sigma, #10104159001) at 37 °C for 40 min with periodic shaking. EDTA (Sigma, #1233508) was then added to a final concentration of 10 mM to stop digestion. Cell suspensions were filtered through 70 µm cell strainers, and TILs were obtained by collecting the cells at the interphase after Ficoll density gradient centrifugation (MP Biomedicals, #091692254). Spleens were collected in ice-cold HBSS containing 2% FBS to prepare single-cell suspensions after lysis of red blood cells and filtering through 70 µm nylon mesh. Both TILs and splenocytes were resuspended in complete Click's culture medium (Irvine Scientific, #9195-500 mL) for flow cytometric analyses. In some experiments, isolated TILs were cultured with 100 U/mL IL-2, with or without 1 μM Ruxo, for 3 days and analyzed for FoxP3 expression and production of IFN-γ/TNF by flow cytometry, as described below.
Flow cytometric analysis
Surface staining of TILs and splenocytes was done in DPBS containing 2% BSA for 30 min on ice. To analyze FoxP3, following surface staining, cells were fixed using the Foxp3/Transcription Factor Staining Buffer Set (Invitrogen, #00-5523-00) and stained for FoxP3, according to the manufacturer's instructions. To detect intracellular cytokines, cells were briefly stimulated for 4–5 h with Phorbol 12-myristate 13-acetate (PMA, final concentration: 50 ng/mL; Sigma, #P8139-5MG) plus ionomycin (final concentration: 1 μM; Sigma, #I0634-1MG) in the presence of monensin (BD Biosciences, #51-2092KZ) for the last 2 h. Stimulated cells were stained with surface markers, fixed using the BD Cytofix/Cytoperm Plus Fixation/Permeabilization Kit (BD Biosciences, #554715), and stained for cytokines according to the manufacturer's instructions. Antibodies and reagents used include the LIVE/DEAD™ Fixable Aqua Dead Cell Stain Kit (1:200, Thermo Fisher, #L34966), CD4-BV421 (1:200, clone RM4-5, BioLegend, #100544), CD8-BV786 (1:200, clone 53-6.7, BD Biosciences, #563332), CD45-PerCP-Cyanine5.5 (1:200, clone 30-F11, Thermo Fisher, #45-0451-82), CD11b-PE (1:200, clone M1/70, BioLegend, #101208), CD11c-APC (1:200, clone N418, BioLegend, #117310), F4/80-BV785 (1:200, clone BM8, BioLegend, #123141), TCRβ-APC Cy7 (1:200, clone H57-597, BioLegend, #109220), CD3-BV711 (1:200, clone 145-2C11, BioLegend, #100349), IFNγR1-BV605 (1:200, clone GR20, BD Biosciences, #745111), IFNαR1-APC (1:200, clone MAR1-5A3, BioLegend, #127313), PD-L1-APC (1:200, clone 10F.9G2, BioLegend, #124312), MHC I-BV650 (1:200, clone SF1-1.1, BD Biosciences, #742434), MHC II-BV785 (1:200, clone M5/114.15.2, BioLegend, #107645), FoxP3-eFluor™ 450 (1:100, clone FJK-16s, Thermo Fisher, #48-5773-82), Perforin-PE (1:100, clone S16009A, BioLegend, #154306), TNF-APC Cy7 (1:100, clone MP6-XT22, BioLegend, #506344), PD-1-APC (1:100, clone RMP1-30, Thermo Fisher, #17-9981-82), CD73-BV605 (1:200, clone TY/11.8, BioLegend, #127215), Granzyme B-FITC (1:100, clone QA16A02, BioLegend, #372206), IFN-γ-BV650 (1:100, clone XMG1.2, BioLegend, #505832), IL-2-BV711 (1:100, clone JES6-5H4, BioLegend, #503837), phospho-JAK2 (Tyr1007/Tyr1008)-APC (1:100, clone E132, Abcam, #ab200340), and phospho-STAT3 (Tyr705)-FITC (1:100, clone LUVNKLA, Thermo Fisher, #11-9033-42). For cell apoptosis analysis, cells treated with or without IFN-γ (100 U/mL), IFN-α (100 ng/mL), Ruxo (10–1000 nM), and TNF (100–10,000 U/mL) were washed once with DPBS and then washed again with 1× Annexin V binding buffer. Afterward, cells resuspended in the Annexin V binding buffer were stained with Annexin V (1:50, Thermo Fisher, #17-8007) and 7-AAD (1:200, Sigma, #129935) for 30 min at room temperature. For cell proliferation analysis, cells were pre-labeled with 4 μM CellTrace Violet (CTV, Thermo Fisher, #C34557) for 20 min with periodic mixing. After incubation, cells were washed twice with complete culture medium to remove soluble CTV. CTV-labeled tumor cells (10,000 cells) were seeded onto a six-well plate to evaluate cell proliferation (CTV dilution) after being cultured in a hypoxic (1% O2) or a normoxic (21% O2) incubator for 72 h, in the absence or presence of IFN-γ (100 U/mL). All flow cytometric data were acquired using the built-in software of the Attune NxT Flow Cytometer (Invitrogen, A24860) from Thermo Fisher and analyzed using FlowJo (version 10.8.1).
Western blot (WB)
Western blot was performed as previously described in ref. 82. Briefly, 0.5 million scrambled control and IFNγR1KO B16-BL6 tumor cells were seeded onto a 6-cm plate and cultured for 24 h. Cells were washed with cold DPBS twice before being lysed directly on the plate with M-PER buffer (Thermo Scientific, #78501) containing the protease inhibitor cocktail cOmplete (Roche, #11836170001) and phosphatase inhibitors (Sigma, P2850 and P5726). Lysates were then collected, transferred to 1.5 mL Eppendorf tubes, and briefly sonicated. Protein concentration was determined by BCA quantification (Thermo Scientific, #23225). Fifty µg of total protein was loaded onto each lane of a 10% SDS-PAGE gel; after electrophoresis, proteins were transferred to a 0.22 µm nitrocellulose membrane (Bio-Rad, #1620112) in a sponge sandwich. Membranes were then blocked with 5% non-fat milk (Bio-Rad, #170-6404) and probed with primary antibodies overnight on a shaker in a cold room. After that, membranes were washed and incubated with HRP-conjugated secondary antibodies at room temperature for 1 h. The membranes were then incubated with Western HRP substrate (Millipore, WBLUR0500) for 2–5 min before imaging on X-ray film. For p-STAT1/3 detection, substantially more total protein (100 µg and above) was loaded onto each lane of the gel and membranes were exposed for a much longer time (20 min or longer) to enhance the signals. For JAK-STAT signaling activation, 100 U/mL IFN-γ was added for the last 15 min before phospho-JAK2 detection. Cells were treated with or without 10 μM of Ruxo for 30 min or 1 h, 0–1000 nM of Ruxo for 2.5 h with 10 ng/mL of IFN-α for the last 15 min, 1 μM of Rapamycin for 3 h, or 0–100 ng/mL of IFN-α for 15 min, as indicated in individual experiments. For supernatant treatment experiments, supernatants collected from ~70% confluent cultures of scrambled control and IFNγR1KO cells were spun down, filtered with a 0.22 μm PVDF membrane, and used to treat cells for 24 h. The antibodies used for WB were: phospho-JAK1 (Tyr1022) (1:1000, Santa Cruz Biotechnology, polyclonal, #sc-101716), total-JAK1 (1:1000, Santa Cruz Biotechnology, clone HR-785, #sc-277), phospho-JAK2 (Tyr1007/Tyr1008) (1:1000, Santa Cruz Biotechnology, polyclonal, #sc-16566-R), total-JAK2 (1:1000, Santa Cruz Biotechnology, clone C-10, #sc-390539), phospho-AKT (Ser473) (1:1000, Cell Signaling Technology, polyclonal, #9271), total-AKT (1:1000, Cell Signaling Technology, polyclonal, #9272), phospho-4EBP1 (Thr37/46) (1:5000, Cell Signaling Technology, clone 236B4, #2855), phospho-STAT1 (Tyr701) (1:1000, Cell Signaling Technology, clone 58D6, #9167), total-STAT1 (1:1000, Cell Signaling Technology, polyclonal, #9172), phospho-STAT3 (Tyr705) (1:1000, Cell Signaling Technology, clone D3A7, #9145), total-STAT3 (1:1000, Cell Signaling Technology, clone 79D7, #4904), phospho-Syk (Tyr525/526) (1:1000, Cell Signaling Technology, clone C87C1, #2710), phospho-ZAP70 (Tyr493) (1:1000, Cell Signaling Technology, polyclonal, #2704T), phospho-EphA3 (Tyr779) (1:1000, Cell Signaling Technology, clone D10H1, #8862S), mTOR (1:1000, Cell Signaling Technology, clone 7C10, #2983), and β-actin (1:10000, Santa Cruz Biotechnology, #sc-47778 HRP). β-actin was run on the same blot as the proteins of interest. Uncropped and unprocessed scans of all blots are provided in the Source Data file.
Quantitative RT-PCR
Total RNAs were extracted from scrambled control and IFNγR1KO cells using the RNeasy Plus Mini kit (QIAGEN, #74136). First-strand cDNAs were synthesized with SuperScript III reverse transcriptase (Invitrogen, #11752250). Quantitative RT-PCR was performed using a Bio-Rad one-step protocol with primers synthesized by IDT. Primers used were Irf-1 (Forward: 5′-CAGAGGAAAGAGAGAAAGTCC-3′; Reverse: 5′-CACACGGTGACAGTGCTGG-3′), Il-6 (Forward: 5′-CTGCAAGAGACTTCCATCCAG-3′; Reverse: 5′-AGTGGTATAGACAGGTCTGTTGG-3′), Il-6r (Forward: 5′-GCCCAAACACCAAGTCAACT-3′; Reverse: 5′-TATAGGAAACAGCGGGTTGG-3′), and IFNαR1 (Forward: 5′-CATGTGTGCTTCCCACCACT-3′; Reverse: 5′-TGGAATAGTTGCCCGAGTCC-3′). β-actin was used as the housekeeping gene (Forward: 5′-CATTGCTGACAGGATGCAGAAGG-3′; Reverse: 5′-TGCTGGAAGGTGGACAGTGAGG-3′). Gene expression levels were calculated using the 2^(−ΔΔCT) method.
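To illustrate the 2^(−ΔΔCT) calculation described above, the following minimal R sketch computes relative expression from raw CT values; all CT numbers and object names are hypothetical and serve only to show the arithmetic.

```r
# Minimal sketch of the 2^(-ddCT) relative quantification described above.
# dCT  = CT(gene of interest) - CT(beta-actin), per sample
# ddCT = dCT(IFNgR1KO) - mean dCT(scrambled control)
# relative expression = 2^(-ddCT)
ct_target_ctrl <- c(24.1, 24.3, 24.0)   # hypothetical CT values, scrambled control
ct_actin_ctrl  <- c(16.2, 16.4, 16.1)
ct_target_ko   <- c(22.8, 22.9, 23.1)   # hypothetical CT values, IFNgR1KO
ct_actin_ko    <- c(16.3, 16.2, 16.4)

dct_ctrl <- ct_target_ctrl - ct_actin_ctrl
dct_ko   <- ct_target_ko   - ct_actin_ko
ddct     <- dct_ko - mean(dct_ctrl)

fold_change <- 2^(-ddct)   # expression in KO relative to scrambled control
fold_change
```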
Colony formation assay
Three hundred scrambled control and IFNγR1KO B16-BL6 cells per well were seeded on six-well plates; triplicates were set up for each condition. Ruxo (100 nM) or an equal volume of solvent (DMSO) was added to the cells after seeding. Cells were cultured for 7 days and then stained with crystal violet. Stained plates were washed with DPBS and dried on filter paper before being photographed.
Kinomic analysis
Kinomic profiling was performed in the UAB Kinome Core. Scrambled control and IFNγR1KO B16-BL6 cells were lysed on ice as described in the sample preparation for WB. Lysates were loaded at 15 μg per array. Each array had a porous 3D surface imprinted with tethered phosphorylatable targets. These 12–15 amino acid targets (as listed in the attached array layout file) were imprinted as "spots" in a 12 × 12 grid. Each of these spots contained thousands of identical peptide targets, with residues that could be phosphorylated as lysates were pumped through the porous array; phosphorylation was detected with phospho-specific FITC-conjugated antibodies. After each pumping cycle, the lysate itself was pumped behind an opaque membrane, and an image of the array was captured over multiple exposure times (10, 20, 50, 100, and 200 ms). Gridding of whole array images was done with Evolve 2 image analysis software prior to import into BioNavigator, where signal-by-exposure slopes were calculated, multiplied by 100, and log2 transformed to generate a single value per peptide, per sample. These values were used for upstream kinase identification. Specifically, peptides with acceptable curve fit and signal were used to identify upstream kinases using BioNavigator Upkin PTK v6.0. Scores derived from Kinexus (www.phosphonet.ca) for each phosphorylatable peptide residue (links in the array layout file) were queried for amino acid sequences with greater than 90% homology. Kinases with PhosphoNET V2 scores greater than 300 and rank ordered in the top 12 were retained. Individually in vitro identified peptide targets of kinases on-chip from PamGene's proprietary database were given a rank order of 0. For each kinase (e.g., ALK), a difference between experimental groups (τ; the mean kinase statistic [MKS]) was calculated from the sample mean \({\bar{P}}_{{i}_{j}}\) and variance \({s}_{{i}_{j}}^{2}\) of peptide i in each comparative group j (j = 1, 2), as shown in the equation below. A significance score was based on permutations of samples and measured how much τ depends on the experimental grouping of the samples. A specificity score was based on permutation of peptides and measured how much τ depends on the peptide-to-kinase mapping.
$${\tau }_{{KINASE}}=\frac{1}{n}\mathop{\sum }\limits_{i=1}^{n=9}\frac{{\bar{P}}_{{i}_{1}}-{\bar{P}}_{{i}_{2}}}{\sqrt{{s}_{{i}_{1}}^{2}+{s}_{{i}_{2}}^{2}}}$$
The combined overall or mean final score (MFS) was either specificity (if singlicate) or the sum of significance and specificity. Kinases identified were uploaded as seed nodes by UniProt ID to GeneGo MetaCore, where they were overlaid on literature annotated interactions, in an auto-expand network model where sub-networks were generated from the seed node list, expanded iteratively with preference given to objects with more connectivity to the initial seed nodes. The expansion was halted when the sub-networks intersected or when the network reached a selected size (n < 50 nodes). Networks were named by their most centric (interconnected) node.
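The mean kinase statistic defined above can be computed directly from per-peptide group means and variances. The R sketch below illustrates that calculation; the peptide values are hypothetical and the helper function is ours, not part of the BioNavigator software.

```r
# Minimal sketch of the mean kinase statistic (tau) defined above:
# for each of the n peptides mapped to a kinase, take the difference of group
# means divided by the pooled standard error, then average across peptides.
kinase_statistic <- function(mean_g1, mean_g2, var_g1, var_g2) {
  stopifnot(length(mean_g1) == length(mean_g2))
  per_peptide <- (mean_g1 - mean_g2) / sqrt(var_g1 + var_g2)
  mean(per_peptide)
}

# Hypothetical log2 signal summaries for 5 peptides mapped to one kinase
mean_ko   <- c(7.9, 8.4, 6.7, 7.2, 8.0)     # group 1: IFNgR1KO
mean_ctrl <- c(7.1, 7.8, 6.5, 6.6, 7.3)     # group 2: scrambled control
var_ko    <- c(0.04, 0.06, 0.05, 0.03, 0.07)
var_ctrl  <- c(0.05, 0.04, 0.06, 0.05, 0.04)

kinase_statistic(mean_ko, mean_ctrl, var_ko, var_ctrl)
```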
RNA-seq analysis
Scrambled control and IFNγR1KO B16 melanoma cells were seeded overnight in triplicate before RNA extraction. Cells were directly lysed on the plate and total RNA was extracted immediately with the RNeasy Plus Mini Kit (QIAGEN, Inc.). Standard RNA-seq was performed by GENEWIZ, Inc. Briefly, total RNA was enriched by poly(A) selection and sequencing was performed on the Illumina platform. For RNA-seq data analysis, paired-end transcriptome sequences were mapped to the Mus musculus GRCm38 reference genome available on ENSEMBL using the STAR aligner (version 2.7.5a). Read counts per gene were calculated using htseq-count in the HTSeq package (version 0.11.2)83 and used for downstream differential gene expression and pathway enrichment analyses. The analysis of differentially expressed genes (DEGs) between the scrambled control and IFNγR1KO samples was performed using DESeq2 (version 1.34.0)84 in R (version 3.6.0). The Wald test was used to calculate p values and log2 fold changes. Genes with an adjusted p value < 0.05 and an absolute log2 fold change > 1 were considered DEGs. A volcano plot was used to show all upregulated and downregulated DEGs using the ggplot2 package (version 3.3.6) (ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York. ISBN 978-3-319-24277-4, https://ggplot2.tidyverse.org). Enriched Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways85 of the DEGs were identified with the enrichR package86 (version 3.0), a comprehensive gene set enrichment analysis tool. Significant KEGG pathway terms were selected with a p value < 0.05.
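As an illustration of the differential expression step described above, a minimal DESeq2 sketch in R is shown below; the count-matrix file name and sample labels are placeholders, and the deposited analysis code (see Code availability) remains the authoritative version.

```r
# Minimal sketch of the DESeq2 differential expression analysis described above.
# "htseq_counts.csv" is a placeholder for a genes x samples matrix of integer counts.
library(DESeq2)

counts  <- as.matrix(read.csv("htseq_counts.csv", row.names = 1))
coldata <- data.frame(
  row.names = colnames(counts),
  condition = factor(c("scr", "scr", "scr", "KO", "KO", "KO"),
                     levels = c("scr", "KO"))
)

dds <- DESeqDataSetFromMatrix(countData = counts, colData = coldata,
                              design = ~ condition)
dds <- DESeq(dds)                                        # Wald test, as in the text
res <- results(dds, contrast = c("condition", "KO", "scr"))

# DEGs: adjusted p < 0.05 and |log2 fold change| > 1, as defined above
degs <- subset(as.data.frame(res),
               !is.na(padj) & padj < 0.05 & abs(log2FoldChange) > 1)
nrow(degs)
```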
Multiplexed phosphoproteomic analysis
Cells collected at 90–95% confluence were washed with ice-cold DPBS thrice and lysed in 8 M urea buffer. The protein concentration was measured with the Bradford method using Pierce™ Coomassie Plus Assay Reagent (Thermo Fisher, #23238). For each sample, 1 mg of protein was digested with TPCK-trypsin at a ratio of 50:1 (w/w) overnight at 37 °C. The peptide concentration was quantified using the Pierce™ Quantitative Colorimetric Peptide Assay Kit (Thermo Fisher, #23275). From each quantified peptide sample, 70 μg of peptides was labeled using the TMTpro™ 16plex Label Reagent Set (Thermo Fisher, #A44520) according to the manufacturer's manual. Labeled peptides were pooled (4 samples/group × 4 groups) and dried by Speedvac. Dried peptides were then dissolved in 0.1% trifluoroacetic acid (TFA), with pH values adjusted to 3.5 using 5% TFA. Phosphorylated peptides were enriched using TiO2 beads as described previously87. Enriched phosphopeptides were then fractionated using the Pierce Reversed-Phase Peptide Fractionation Kit (Thermo Fisher, #84868). The fractions of total phosphopeptides were dried by Speedvac and purified using Millipore ZipTips with 0.6 µL C18 resin (Thermo Fisher, #ZTC18S096) according to the manufacturer's manual. Purified peptides were analyzed using the SPS-MS3 approach on an Orbitrap Fusion Lumos mass spectrometer88. MaxQuant (version 1.6.17.0) was used to search against mouse protein databases downloaded from uniprot.org. Protein phosphosites were compared among groups based on corrected reporter ion intensities. The phosphoproteomic data generated in this study have been deposited in the Mass Spectrometry Interactive Virtual Environment (MassIVE) database under accession ID MSV000087796.
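The between-group comparison of phosphosite reporter ion intensities mentioned above can be illustrated with a simple per-site test. The R sketch below assumes a two-sample t-test on log2 intensities with Benjamini–Hochberg correction, which is one common choice and not necessarily the exact procedure used in this study; all data and object names are simulated.

```r
# Illustrative per-phosphosite comparison of TMT reporter ion intensities
# (simulated data; the exact statistical procedure used in the study may differ).
set.seed(1)
n_sites <- 1000
intens  <- matrix(2^rnorm(n_sites * 8, mean = 20, sd = 1), nrow = n_sites,
                  dimnames = list(paste0("site_", seq_len(n_sites)),
                                  c(paste0("scr_", 1:4), paste0("KO_", 1:4))))

log2_int <- log2(intens)
grp_scr  <- 1:4
grp_ko   <- 5:8

pvals <- apply(log2_int, 1, function(x) t.test(x[grp_ko], x[grp_scr])$p.value)
lfc   <- rowMeans(log2_int[, grp_ko]) - rowMeans(log2_int[, grp_scr])
padj  <- p.adjust(pvals, method = "BH")

# phosphosites with significantly increased phosphorylation in the KO group
sig_up <- names(which(padj < 0.05 & lfc > 0))
length(sig_up)
```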
Bioinformatic analysis
Gene expression data of The Cancer Genome Atlas (TCGA) Skin Cutaneous Melanoma (SKCM) and Uveal Melanoma (UVM) cohorts were downloaded from the National Cancer Institute Genomics Data Commons (GDC) [https://gdc.cancer.gov/about-data/publications/pancanatlas]. The clinical information for each patient in TCGA was obtained from the Genomic Data Commons (GDC) Data Portal [https://portal.gdc.cancer.gov/]. The gene expression profiles of published pretreatment melanomas from patients undergoing anti-PD-1 therapy89 were retrieved from the Gene Expression Omnibus (GEO) database using the accession number GSE78220. The SKCM samples were grouped into IFNGR1High and IFNGR1Low groups based on the median IFNGR1 expression in tumor cells across all samples. The statistical significances for gene expression in IFNGR1High vs IFNGR1Low SKCMs, SKCMs vs UVMs, and anti-PD-1 responders vs non-responders were calculated in R with the Mann–Whitney U-test. To identify malignant cells from the TCGA and GSE78220 datasets, the CIBERSORTx tool20 was used to assess cell type abundance from the transcriptomes of the bulk tumor tissues. Specifically, a matrix of reference gene expression signatures was provided as input to CIBERSORTx (deconvolution) and used to estimate the proportions of melanoma cells and other stromal cells, including immune cells. The number of permutations was set to 1000, and the B-mode of batch correction was applied. Samples with a p value < 0.05 were considered successfully deconvoluted. For the TCGA cohort, tumor-dominant samples were identified as those with a relative signature score of the malignant cell (melanoma cell) > 80%. For the GSE78220 cohort, tumor-dominant samples were those with a relative signature score of the malignant cell (melanoma cell) > 60%.
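The median split of SKCM samples by IFNGR1 expression and the Mann–Whitney U-test comparison described above can be sketched in R as follows; the expression table and gene columns are hypothetical placeholders for the downloaded TCGA matrix.

```r
# Minimal sketch of the IFNGR1-based grouping and Mann-Whitney comparison above.
# 'expr' stands in for a per-sample gene expression table (placeholder values).
set.seed(42)
expr <- data.frame(
  IFNGR1 = rexp(100, rate = 0.1),   # hypothetical expression values
  ENO1   = rnorm(100, mean = 10),
  FKBP4  = rnorm(100, mean = 8)
)

# Split samples into IFNGR1-high vs IFNGR1-low at the median, as in the text
expr$group <- ifelse(expr$IFNGR1 >= median(expr$IFNGR1), "IFNGR1High", "IFNGR1Low")

# Two-sided Mann-Whitney U-test (wilcox.test in R) for one target gene
wilcox.test(ENO1 ~ group, data = expr)
```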
Statistics and reproducibility
For animal experiments, five mice were included in each group; for in vitro studies with cells, triplicates were set up to ensure consistency and reproducibility. All experiments were repeated two to five times. Preclinical results were expressed as mean ± SEM. Data were analyzed using a two-sided Student's t-test, one-way ANOVA, or two-way ANOVA after confirming their normal distribution. The log-rank test was used to analyze survival data from the preclinical studies. All analyses were performed using Prism 9.4.0 (GraphPad Software, Inc.), and p < 0.05 was considered statistically significant. TCGA data and GSE78220 data were expressed as boxplots, with the box depicting the first (lower) quartile, median, and third (upper) quartile, and the lines indicating the minimum and maximum scores. To assess the overall survival of patients with clinical information from TCGA, the survival time was calculated based on their vital status. The overall survival of patients with IFNGR1High or IFNGR1Low SKCMs was estimated with Kaplan–Meier analysis, and the differences between the cohorts were assessed with a log-rank test using the "Surv" function in the R package "survival" (version 3.2.13). A p value threshold of 0.05 was used to identify significantly different survival rates between groups.
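The Kaplan–Meier and log-rank analysis described above, based on the Surv function of the R survival package, can be sketched as follows; the survival table and its values are hypothetical placeholders for the TCGA-derived clinical data.

```r
# Minimal sketch of the Kaplan-Meier / log-rank analysis described above,
# using the 'survival' package (hypothetical survival table 'surv_df').
library(survival)

surv_df <- data.frame(
  time   = c(250, 400, 800, 120, 950, 300, 610, 720),  # days, hypothetical
  status = c(1, 0, 1, 1, 0, 1, 0, 1),                  # 1 = death, 0 = censored
  group  = c("IFNGR1High", "IFNGR1High", "IFNGR1High", "IFNGR1High",
             "IFNGR1Low", "IFNGR1Low", "IFNGR1Low", "IFNGR1Low")
)

fit     <- survfit(Surv(time, status) ~ group, data = surv_df)   # Kaplan-Meier curves
logrank <- survdiff(Surv(time, status) ~ group, data = surv_df)  # log-rank test
logrank  # p < 0.05 taken as significantly different survival
```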
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
The publicly available skin cutaneous melanoma and uveal melanoma TCGA data used in this study are available from the National Cancer Institute Genomics Data Commons (GDC) [https://gdc.cancer.gov/about-data/publications/pancanatlas]. The publicly available gene expression profiles of published pretreatment melanomas from patients undergoing anti-PD-1 therapy used in this study are available in the GEO database under accession code GSE78220. The RNA-seq data generated in this study have been deposited in the Gene Expression Omnibus (GEO) database under accession code GSE201078. The phosphoproteomic data generated in this study have been deposited in the Mass Spectrometry Interactive Virtual Environment (MassIVE) database under accession ID MSV000087796. All deposited data are publicly available. The remaining data in this study are available within the manuscript or Supplementary Information, with source data provided herein. Source data are provided with this paper.
Code availability
The code for analyzing the TCGA, GSE78220, and GSE201078 data has been deposited and is publicly available at https://github.com/huang1990/IFNGR1_NC_paper90.
References
Hodi, F. S. et al. Improved survival with ipilimumab in patients with metastatic melanoma. N. Engl. J. Med. 363, 711–723 (2010).
Wolchok, J. D. et al. Overall survival with combined nivolumab and ipilimumab in advanced melanoma. N. Engl. J. Med. 377, 1345–1356 (2017).
Shen, H. et al. Predictive biomarkers for immune checkpoint blockade and opportunities for combination therapies. Genes Dis. 6, 232–246 (2019).
Cella, D. et al. Patient-reported outcomes of patients with advanced renal cell carcinoma treated with nivolumab plus ipilimumab versus sunitinib (CheckMate 214): a randomised, phase 3 trial. Lancet Oncol. 20, 297–310 (2019).
Antonia, S. J. et al. Durvalumab after chemoradiotherapy in stage III non-small-cell lung cancer. N. Engl. J. Med. 377, 1919–1929 (2017).
Socinski, M. A. et al. Atezolizumab for first-line treatment of metastatic nonsquamous NSCLC. N. Engl. J. Med. 378, 2288–2301 (2018).
Gandhi, L. et al. Pembrolizumab plus chemotherapy in metastatic non-small-cell lung cancer. N. Engl. J. Med. 378, 2078–2092 (2018).
Sharma, P., Hu-Lieskovan, S., Wargo, J. A. & Ribas, A. Primary, adaptive, and acquired resistance to cancer immunotherapy. Cell 168, 707–723 (2017).
Gao, J. et al. Loss of IFN-gamma pathway genes in tumor cells as a mechanism of resistance to anti-CTLA-4 therapy. Cell 167, 397–404 e399 (2016).
Zaretsky, J. M. et al. Mutations associated with acquired resistance to PD-1 blockade in melanoma. N. Engl. J. Med. 375, 819–829 (2016).
Patel, S. J. et al. Identification of essential genes for cancer immunotherapy. Nature 548, 537–542 (2017).
Pan, D. et al. A major chromatin regulator determines resistance of tumor cells to T cell-mediated killing. Science 359, 770–775 (2018).
Manguso, R. T. et al. In vivo CRISPR screening identifies Ptpn2 as a cancer immunotherapy target. Nature 547, 413–418 (2017).
Kearney, C. J. et al. Tumor immune evasion arises through loss of TNF sensitivity. Sci. Immunol. 3, eaar3451 (2018).
Curran, M. A., Montalvo, W., Yagita, H. & Allison, J. P. PD-1 and CTLA-4 combination blockade expands infiltrating T cells and reduces regulatory T and myeloid cells within B16 melanoma tumors. Proc. Natl Acad. Sci. USA 107, 4275–4280 (2010).
Peggs, K. S., Quezada, S. A., Korman, A. J. & Allison, J. P. Principles and use of anti-CTLA4 antibody in human cancer immunotherapy. Curr. Opin. Immunol. 18, 206–213 (2006).
Simpson, T. R. et al. Fc-dependent depletion of tumor-infiltrating regulatory T cells co-defines the efficacy of anti-CTLA-4 therapy against melanoma. J. Exp. Med. 210, 1695–1710 (2013).
Shi, L. Z. et al. Interdependent IL-7 and IFN-gamma signalling in T-cell controls tumour eradication by combined alpha-CTLA-4+alpha-PD-1 therapy. Nat. Commun. 7, 12335 (2016).
Beavis, P. A., Stagg, J., Darcy, P. K. & Smyth, M. J. CD73: a potent suppressor of antitumor immune responses. Trends Immunol. 33, 231–237 (2012).
Newman, A. M. et al. Determining cell type abundance and expression from bulk tissues with digital cytometry. Nat. Biotechnol. 37, 773–782 (2019).
Wessely, A. et al. The role of immune checkpoint blockade in uveal melanoma. Int. J. Mol. Sci. 21, 879 (2020).
Mohapatra, B. et al. Protein tyrosine kinase regulation by ubiquitination: critical roles of Cbl-family ubiquitin ligases. Biochim. Biophys. Acta 1833, 122–139 (2013).
Shi, L. Z. & Bonner, J. A. Bridging radiotherapy to immunotherapy: the IFN-JAK-STAT axis. Int. J. Mol. Sci. 22, 12295 (2021).
Johnson, D. E., O'Keefe, R. A. & Grandis, J. R. Targeting the IL-6/JAK/STAT3 signalling axis in cancer. Nat. Rev. Clin. Oncol. 15, 234–248 (2018).
Sabaawy, H. E., Ryan, B. M., Khiabanian, H. & Pine, S. R. JAK/STAT of all trades: linking inflammation with cancer development, tumor progression, and therapy resistance. Carcinogenesis 42, 1411–1419 (2021).
Sanchez-Vega, F. et al. Oncogenic signaling pathways in the cancer genome atlas. Cell 173, 321–337.e310 (2018).
Zhang, Y. et al. A pan-cancer proteogenomic atlas of PI3K/AKT/mTOR pathway alterations. Cancer Cell 31, 820–832.e823 (2017).
Sever, R. & Brugge, J. S. Signal transduction in cancer. Cold Spring Harb. Perspect. Med. 5, a006098 (2015).
Dong, C., Wu, J., Chen, Y., Nie, J. & Chen, C. Activation of PI3K/AKT/mTOR pathway causes drug resistance in breast cancer. Front. Pharm. 12, 628690 (2021).
Bansal, A. & Simon, M. C. Glutathione metabolism in cancer progression and treatment resistance. J. Cell Biol. 217, 2291–2298 (2018).
Kuo, M. T., Chen, H. H. W., Feun, L. G. & Savaraj, N. Targeting the proline-glutamine-asparagine-arginine metabolic axis in amino acid starvation cancer therapy. Pharmaceuticals 14, 72 (2021).
Miricescu, D. et al. PI3K/AKT/mTOR signaling pathway in breast cancer: from molecular landscape to clinical aspects. Int. J. Mol. Sci. 22, 173 (2020).
Zhang, Z. H. et al. Convallatoxin promotes apoptosis and inhibits proliferation and angiogenesis through crosstalk between JAK2/STAT3 (T705) and mTOR/STAT3 (S727) signaling pathways in colorectal cancer. Phytomedicine 68, 153172 (2020).
Hugo, W. et al. Genomic and transcriptomic features of response to anti-PD-1 therapy in metastatic melanoma. Cell 168, 542 (2017).
Tao, T. et al. Down-regulation of PKM2 decreases FASN expression in bladder cancer cells through AKT/mTOR/SREBP-1c axis. J. Cell Physiol. 234, 3088–3104 (2019).
Staber, P. B. et al. The oncoprotein NPM-ALK of anaplastic large-cell lymphoma induces JUNB transcription via ERK1/2 and JunB translation via mTOR signaling. Blood 110, 3374–3383 (2007).
Raab, S. et al. Dual regulation of fatty acid synthase (FASN) expression by O-GlcNAc transferase (OGT) and mTOR pathway in proliferating liver cancer cells. Cell Mol. Life Sci. 78, 5397–5413 (2021).
Origanti, S. et al. Ornithine decarboxylase mRNA is stabilized in an mTORC1-dependent manner in Ras-transformed cells. Biochem. J. 442, 199–207 (2012).
Mangé, A. et al. FKBP4 connects mTORC2 and PI3K to activate the PDK1/Akt-dependent cell proliferation signaling in breast cancer. Theranostics 9, 7003–7015 (2019).
Liu, Y. et al. α-Enolase lies downstream of mTOR/HIF1α and promotes thyroid carcinoma progression by regulating CST1. Front. Cell Dev. Biol. 9, 670019 (2021).
Lee, H. P. et al. Adiponectin promotes VEGF-A-dependent angiogenesis in human chondrosarcoma through PI3K, Akt, mTOR, and HIF-α pathway. Oncotarget 6, 36746–36761 (2015).
Frank, D. A. STAT3 as a central mediator of neoplastic cellular transformation. Cancer Lett. 251, 199–210 (2007).
Tamura, R. E. et al. GADD45 proteins: central players in tumorigenesis. Curr. Mol. Med. 12, 634–651 (2012).
Lu, L. et al. Gene regulation and suppression of type I interferon signaling by STAT3 in diffuse large B cell lymphoma. Proc. Natl Acad. Sci. USA 115, E498–e505 (2018).
McLean, K. et al. Leukemia inhibitory factor functions in parallel with interleukin-6 to promote ovarian cancer growth. Oncogene 38, 1576–1584 (2019).
Albrengues, J. et al. LIF mediates proinvasive activation of stromal fibroblasts in cancer. Cell Rep. 7, 1664–1678 (2014).
Hammaren, H. M., Virtanen, A. T., Raivola, J. & Silvennoinen, O. The regulation of JAKs in cytokine signaling and its breakdown in disease. Cytokine 118, 48–63 (2019).
Hu, Y. et al. Inhibition of the JAK/STAT pathway with ruxolitinib overcomes cisplatin resistance in non-small-cell lung cancer NSCLC. Apoptosis 19, 1627–1636 (2014).
Koppikar, P. et al. Heterodimeric JAK-STAT activation as a mechanism of persistence to JAK2 inhibitor therapy. Nature 489, 155–159 (2012).
Keohane, C. et al. JAK inhibition induces silencing of T Helper cytokine secretion and a profound reduction in T regulatory cells. Br. J. Haematol. 171, 60–73 (2015).
Gritsina, G. et al. Targeted blockade of JAK/STAT3 signaling inhibits ovarian carcinoma growth. Mol. Cancer Ther. 14, 1035–1047 (2015).
Yang, Y. et al. Low-dose ruxolitinib shows effective in treating myelofibrosis. Ann. Hematol. 100, 135–141 (2021).
Lejeune, F. J., Rüegg, C. & Liénard, D. Clinical applications of TNF-alpha in cancer. Curr. Opin. Immunol. 10, 573–580 (1998).
Calzascia, T. et al. TNF-alpha is critical for antitumor but not antiviral T cell immunity in mice. J. Clin. Invest. 117, 3833–3845 (2007).
Valencia, X. et al. TNF downmodulates the function of human CD4+CD25hi T-regulatory cells. Blood 108, 253–261 (2006).
Nie, H. et al. Phosphorylation of FOXP3 controls regulatory T cell function and is inhibited by TNF-α in rheumatoid arthritis. Nat. Med. 19, 322–328 (2013).
Shankaran, V. et al. IFNgamma and lymphocytes prevent primary tumour development and shape tumour immunogenicity. Nature 410, 1107–1111 (2001).
We would like to acknowledge other members of Shi lab and the Department of Radiation Oncology at UAB for their constructive input. We are grateful for the Startup fund from the Department of Radiation Oncology and the O'Neal Invests pre-R01 Grant from the UAB-O'Neal Comprehensive Cancer Center granted to Shi lab. This study is also funded by National Institutes of Health grants (1R21CA230475-01A1 and 1R21CA259721-01A1), the V Foundation Scholar Award (V2018-023), a DoD-Congressionally Directed Medical Research Programs grant (ME210108), a Cancer Research Institute CLIP Grant (CRI4342), an American Cancer Society Institutional Research Grant (91-022-19), and National Institute of General Medical Sciences (1R35GM138212).
These authors contributed equally: Hongxing Shen, Fengyuan Huang.
These authors jointly supervised this work: James A. Bonner, Lewis Zhichang Shi.
Department of Radiation Oncology, Heersink School of Medicine, University of Alabama at Birmingham (UAB-SOM), Birmingham, AL, 35233, USA
Hongxing Shen, Oluwagbemiga A. Ojo, Yuebin Li, Hoa Quang Trummell, Joshua C. Anderson, John Fiveash, Markus Bredel, Eddy S. Yang, Christopher D. Willey, James A. Bonner & Lewis Zhichang Shi
Department of Genetics and Informatics Institute, UAB-SOM, Birmingham, AL, USA
Fengyuan Huang & Zechen Chong
Department of Pharmaceutical Sciences, Wayne State University, Detroit, MI, 48201, USA
Xiangmin Zhang
O'Neal Comprehensive Cancer Center, UAB-SOM, Birmingham, AL, USA
John Fiveash, Markus Bredel, Eddy S. Yang, Christopher D. Willey, Zechen Chong, James A. Bonner & Lewis Zhichang Shi
Department of Microbiology, UAB-SOM, Birmingham, AL, USA
Lewis Zhichang Shi
Department of Pharmacology and Toxicology, UAB-SOM, Birmingham, AL, USA
Programs in Immunology, UAB-SOM, Birmingham, AL, USA
H.S. designed and did the experiments with cells and mice, analyzed data, and contributed to writing the manuscript; F.H. and Z.C. performed the bioinformatic analyses of the TCGA and GSE78220 datasets and contributed to writing the manuscript; X.Z. conducted the phosphoproteomic studies, analyzed data, and contributed to writing the manuscript; O.A.O., Y.L., and H.Q.T. did the experiments with cells and/or mice; J.C.A. and C.D.W. performed the kinomic studies, analyzed data, and contributed to writing the manuscript; J.F., E.S.Y., and M.B. contributed to manuscript construction and discussion; L.Z.S. and J.A.B. were responsible for the original conceptualization of this study, overall data presentation, and manuscript construction; L.Z.S. acquired funding for this study, designed experiments, supervised laboratory studies and data analyses, wrote, and edited the manuscript. All authors have met the requirements for authorship and are in consensus on the content of this publication.
Correspondence to Zechen Chong, James A. Bonner or Lewis Zhichang Shi.
Nature Communications thanks Miguel Quintela-Fandino and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Peer Review File
Shen, H., Huang, F., Zhang, X. et al. Selective suppression of melanoma lacking IFN-γ pathway by JAK inhibition depends on T cells and host TNF signaling. Nat Commun 13, 5013 (2022). https://doi.org/10.1038/s41467-022-32754-7
Nature Communications (Nat Commun) ISSN 2041-1723 (online)
$\left(1+ \frac {1}{n}\right)^n$
This doesn't appear to work on all models of calculator. Let me know whether yours handles it properly…
"I saw this thing about Euler's identity," said the student, and the words "ut" and "oh" forced themselves, unbidden, into my head.
You've maybe seen it: it's $e^{\pi i} + 1 \equiv 0$, or - if you prefer $e^{\pi i} \equiv -1$. I can go either way. It's frequently voted the most beautiful equation in all of maths (even though it's an identity rather than an equation).
I was thinking ut-oh because, in honesty, I didn't fancy explaining the idea of an imaginary power to someone who has only just about grasped fractional ones. But he surprised me with his follow-up: "what's $e$?"
Well, that's a freebie
"It's about $2.718281828$," I said, the repeating 1828 making for a handy mnemonic for a constant in analysis.
"I know that," said the student, as if he knew that. "But why is it important?"
My rabbit-hole alarm went off. This was an opportunity for a 10-minute lecture on the intricacies of calculus, in the hope of inspiring the student to see that there's way more to maths than what's in the GCSE, but I picked a different route, since we'd just been talking about compound interest.
"Let's say you've got £100 invested at 100% interest in some weird Ponzi-scheme bank account," I said, brushing off his "what's a Ponzi-scheme?" as irrelevant. "How much would you have after a year?"
"£200!" he said, brightly.
"Good. How about if you had interest payments every six months?"
"So 50% after 6 months, and 50% compounded after another six?"
"Yup." I rolled my eyes as he reached for a calculator, but he was going to need it shortly anyway.
"£225," he said.
"What about every month?"
"Um…" tappity tap, "… so that's 8.3% every month? Do I have to add that on each time? No, wait, we talked about this - you can just make it $1.083^{12}$, can't you?"
"Use your answer button rather than rounding it, but yes."
"£261.30," he said, dollar signs starting to form in his eyes. "The more often you compound it, the more you get!"
I nodded. "How about daily?"
"Mutter mutter 0.274 mutter mutter… £271.46 - it's not gone up by much, has it?"
A brief aside about calculator use
If I was doing this - for example, to research a blog post - I'd be working out the number of interest periods (say, $365 \times 24$, ignoring that there are more than 365 days in a year, and less than 24 hours in a day) and then typing in $((Ans + 1) ÷ Ans)^{Ans}$ - and making copious use of the scroll-back function on my trusty Casio.
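If you'd rather let a machine do the button-mashing, here's a rough Python version of the same experiment - entirely my own sketch, not part of the original post. The "careful" column uses `log1p` to sidestep the rounding in $1 + \frac{1}{n}$ that plausibly trips up the calculator.

```python
import math

# Compound £100 at 100% nominal interest, n times per year, the naive way.
periods = [
    ("yearly", 1),
    ("monthly", 12),
    ("daily", 365),
    ("hourly", 365 * 24),
    ("every minute", 365 * 24 * 60),
    ("every second", 365 * 24 * 60 * 60),
    ("every millisecond", 365 * 24 * 60 * 60 * 1000),
]

for label, n in periods:
    naive = 100 * (1 + 1 / n) ** n                   # what the calculator is doing
    careful = 100 * math.exp(n * math.log1p(1 / n))  # log1p avoids rounding away the 1/n
    print(f"{label:>17}: naive £{naive:.7f}   careful £{careful:.7f}")

print(f"{'limit, 100e':>17}: £{100 * math.e:.7f}")
```

On a double-precision machine the naive column only wobbles in the last couple of digits at the millisecond step - which looks like the same kind of effect, just with far more digits in hand than a pocket calculator has.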
Back to the action
"Not really, no. It's almost as if there's a hard limit!"
"How about every hour?"
"Go for it."
"£271.81… that's hardly any change. And isn't it…?"
Luckily, I'd written $e$ on the board. "It's almost $e$ times what you started with."
"Minutes!" he insisted. "£271.8279! Within a fraction of a penny! We must be nearly there! Seconds! £271.8281615!"
"That's pretty close."
"Milliseconds!" he nearly yelled, triumphantly. "Two hundred and seventy one pounds… 74? That's gone down." He looked deflated.
I raised an eyebrow. "That's not meant to happen," I said. "I fear, my friend, you've got a defective calculator."
Does your calculator break like this? Why? ((Yes, I have an idea why.))
Optimal spectrum access and power control of secondary users in cognitive radio networks
Yang Yang ORCID: orcid.org/0000-0002-9817-34731,
Linglong Dai1,
Jianjun Li2,
Shahid Mumtaz3 &
Jonathan Rodriguez3
EURASIP Journal on Wireless Communications and Networking volume 2017, Article number: 98 (2017) Cite this article
In future 5G communication systems, radio resources can be reused effectively through cognitive radio networks (CRNs), in which many secondary users (SUs) are able to access the spectrum of primary users (PUs). In this paper, we analyze the optimal spectrum access and power control of SUs on multiple bands with the goal of maximizing the average sum rate (ASR) of SUs. Specifically, based on stochastic geometry, the random distributions of PUs and SUs are modeled by Poisson point processes (PPPs), from which we derive closed-form outage probabilities and obtain the ASR of SUs. We then formulate the maximization of the ASR over multiple bands under outage probability constraints. With the help of convex optimization, the optimal density of SUs is obtained in closed form when the power of SUs is fixed. The convexity of the ASR is also verified, and we evaluate the optimal power of SUs when the density of SUs is fixed. Based on these two results, a spectrum access and power control algorithm is further proposed to maximize the ASR of SUs on multiple bands. Simulation results demonstrate that the proposed algorithm achieves a higher maximum ASR of SUs than the average power allocation algorithm, and that the density and power boundaries of SUs are constrained by the PUs as well as by the interference in the networks.
The rapid growth of mobile traffic and the explosion in the number of mobile users make the spectrum shortage more and more serious [1]. Cognitive radio networks (CRNs) are one of the most promising solutions for improving spectrum efficiency, since they can effectively alleviate traffic demands by reusing the spectrum of primary users (PUs). In CRNs, secondary users (SUs) are able to access the spectrum of PUs and transmit signals without causing serious interference to the PUs [2]. CRNs also bring many benefits such as improved data rates, reduced power consumption, and enhanced spectral efficiency, which make them an important part of future 5G wireless communications [3].
In CRNs, the random distribution of a large number of SUs significantly aggravates the interference caused by spectrum reuse. It is very difficult to model the interference seen by every SU under a random geographical distribution, especially when the number of SUs is very large. Many research works have modeled and analyzed CRNs with Poisson point processes (PPPs), characterizing network performance metrics such as the signal to interference plus noise ratio (SINR) distribution [4], the coverage probability [5], and the average spectral efficiency [6]. However, those performance analyses largely assume CRNs in which the PUs and SUs have already accessed the network. Moreover, if the number of SUs is large, optimizing spectrum access for performance enhancement in CRNs is challenging [7].
On the other hand, SUs accessing the spectrum of PUs make power control in CRNs difficult to operate, especially when multiple frequency bands are considered. This is due to the fact that different numbers of users may access different bands, and each user uses a different power to transmit on each band. Moreover, each terminal in CRNs has its own power limitation, and the reliability of the PUs' transmissions must also be guaranteed. Therefore, power control is necessary to improve spectral efficiency [8] and network throughput [9] and to reduce interference in CRNs [10]. However, combining the spectrum access and power control of SUs to enhance network performance, particularly on multiple bands with power limitations and random user distributions, still needs further investigation.
Spectrum access is very important for SUs in CRNs, since the communication of PUs must be protected first [11]. Some early research works focused on spectrum access in the uplink [12] and downlink [13] of PUs, where both control and scheduling were managed by the base stations (BSs) in CRNs. Building on these works, cooperative spectrum sensing among SUs was introduced to reduce the false detections caused by shadow fading [14]. Furthermore, it was proposed that both PUs and SUs can perform spectrum access in CRNs to improve the transmission rate, which also enhances network connectivity and flexibility [15].
While there has been considerable work on spectrum access scheduled by a central operator such as the BSs in CRNs, very little attention has been paid to spectrum access under random conditions. The authors in [16] adopted random spectrum access of SUs in CRNs, where the network latency can be reduced by some extra MAC-layer information. Besides, a hybrid spectrum access method was investigated in [17], where both opportunistic spectrum access and exclusive spectrum access were used to optimize the network performance. In addition, some spectrum access schemes were extended to multiple bands [18], relay scenarios [19], and so on. However, most of the existing works considered spectrum access in only one or several cells, while spectrum access of SUs over a large area, especially with a large, randomly distributed user population, still requires further investigation.
This study aims to optimize the spectrum access and power control of SUs to maximize the average sum rate (ASR) in CRNs. Different from most existing works in the literature, we not only model the random distributions of the PUs and SUs by PPPs but also allow the SUs to access the spectrum of PUs on multiple bands. We evaluate the optimal density and power of SUs under outage probability, power, and density constraints. Based on this analysis, we also propose an algorithm that considers both the spectrum access and the power control of SUs. More specifically, the contributions of this work are summarized as follows:
Assuming random distributions of both PUs and SUs in CRNs, we model the system with PPPs and use the Laplace transform to derive the outage probabilities of both PUs and SUs on a single band. These results capture the requirements on the transmissions of PUs and SUs, and they form the basic constraints for optimizing the spectrum access of SUs. Note that our derived results can be further optimized over multiple bands, which is more general and includes many previously published works [4, 5, 15, 20, 21], restricted to a single band, as special cases.
Power control is optimized to maximize the ASR of SUs in CRNs. Our analysis not only considers the constraints on outage probabilities, power, and density on multiple bands, but also incorporates the optimal spectrum access of SUs. We further prove the convexity of the ASR over the power definition domain of the secondary network. We then obtain the optimal density and the optimal power of SUs on each band. Based on these results, a spectrum access and power control algorithm is proposed, which accounts for the joint interaction between the density and the power of SUs. Simulation results demonstrate that the proposed algorithm achieves a higher maximum ASR of SUs than the average power allocation algorithm. In addition, the density and power boundaries are constrained by the PUs as well as by the interference in the networks, and the ASR of SUs is affected by different powers of PUs. This method differs from previous works that only consider power control on multiple bands [16, 17, 22].
The rest of the paper is organized as follows. Section 2 describes the scenario and the network model. Section 3 presents the outage probabilities and the definition of the ASR of SUs on multiple bands. In Section 4, we derive the optimal density and power of SUs accessing the spectrum of PUs; moreover, a spectrum access and power control algorithm is proposed to obtain the maximum ASR of SUs. Simulation results are shown in Section 5. Finally, conclusions are summarized in Section 6.
Scenario description and network model
In this section, we first describe the scenario of the cognitive radio networks, and then we model the networks with PPPs. Last, the propagation model is explained in detail.
As shown in Fig. 1, the CRNs contain the primary network \(S_{1}\) and the secondary network \(S_{0}\). The primary network is deployed on \(N\) independent bands, whose bandwidths are denoted as \(W_{i}, i=1,2,\ldots,N\). SUs can access the frequency spectrum of the PUs. The SUs are subject to power control because the communication of the PUs must not be seriously interfered with. On each band, the frequency spectrum of the primary network is divided into several frequency-flat sub-channels using orthogonal frequency-division multiplexing. The full set of sub-channels can be reused by the SUs as an underlay sharing the spectrum with the primary network.
Scenario of cognitive radio networks
Network models
Based on stochastic geometry theory, following assumptions are made:
Assumption 1
The transmitters of the secondary network form a PPP on the two-dimensional plane \(\Re\), denoted as \(\Pi_{0}\), with density \(\lambda_{0,i}\) on band \(i, i=1,2,\ldots,N\). The transmission powers of the SUs are denoted as \(P_{0,i}, (i=1,2,\ldots,N)\) on each band.
The transmitters of the PUs form a series of stationary PPPs on each band, denoted as \(\Pi _{1}^{i}\), with density \(\lambda_{1,i}, (i=1,2,\ldots,N)\) on \(\Re\). The transmission powers of the PUs are denoted as \(P_{1,i}, (i=1,2,\ldots,N)\) on each band.
According to Palm theory [23], a typical receiver of \(S_{j}, j\in \{0,1\}\) is located at the origin, which does not influence the statistics of the PPP.
Propagation models
We consider both path loss and Rayleigh fading as the propagation models with the following form:
$$ P_{rx} = \delta {P_{tx}}{\left| D \right|^{-\alpha}}, $$
where \(P_{tx}\) and \(P_{rx}\) represent the transmit and receive powers, respectively, \(\alpha\) is the path-loss exponent, and \(|D|\) is the distance between the transmitter and the receiver. \(\delta\) stands for the Rayleigh fading coefficient, which has an independent exponential distribution with unit mean for every communication link in the CRNs.
When SUs access the spectrum of PUs, a receiver suffers from the interference generated by transmitters in both the primary and the secondary networks. The distribution of the interfering users of a single network \(S_{j}, j\in\{0,1\}\) on band \(i\) can be modeled by a marked PPP, denoted as \(\Pi_{j} = \{(X_{jk},\delta_{jk})\}\), where \(\delta_{jk}\) and \(X_{jk}\) are the Rayleigh fading and the distance from the origin to node \(k\) of network \(S_{j}\), respectively.
Average sum rate of secondary network on multi-bands
The outage probability on one single band
The interference received at a typical receiver is generated by both the primary and the secondary networks occupying the specific band; the SINR of network \(S_{n}\) (\(n\) is 0 or 1) on the \(i\)th band at the receiver is
$$ {SINR}_{n,i}= \frac{{{P_{n,i}}{\delta_{n0,i}}R_{n0,i}^{- \alpha }}}{{\sum\limits_{j \in \left\{ {0,1} \right\}} {\sum\limits_{\left({{X_{jk}},{\delta_{jk}}} \right) \in {\Pi_{j}}} {{P_{j,i}}{\delta_{jk}}{{\left| {{X_{jk}}} \right|}^{- \alpha }}}} + {N_{0}}}}, $$
where \(\delta_{n0,i}\) and \(R_{n0,i}\) are the Rayleigh fading and the distance from the desired transmitter to the typical receiver of network \(S_{n}\) on the \(i\)th band, respectively, and \(N_{0}\) is the thermal noise. Because the spectrum access of SUs is our main concern, meaning that the CRNs are interference limited, the thermal noise is negligible. The SINR is then replaced by the SIR (signal-to-interference ratio) as follows:
$$ SIR_{n,i} = \frac{{{\delta_{n0,i}}R_{n0,i}^{- \alpha }}}{{{I_{n,i,0}} + {I_{n,i,1}}}}, $$
where \(I_{n,i,0} = \sum\limits_{\left(X_{0k},\delta_{0k}\right)\in\Pi_{0}} \left(\frac{P_{0,i}}{P_{n,i}}\right)\delta_{0k}\left|X_{0k}\right|^{-\alpha}\) and \(I_{n,i,1} = \sum\limits_{\left(X_{1k},\delta_{1k}\right)\in\Pi_{1}} \left(\frac{P_{1,i}}{P_{n,i}}\right)\delta_{1k}\left|X_{1k}\right|^{-\alpha}\). Let \(T_{n,i}\) denote the SIR threshold on the \(i\)th band; the following lemma gives the outage probability of a typical receiver:
Lemma 1
The outage probability of a typical receiver of \(S_{n}\) (\(n\) is 0 or 1) on the \(i\)th band (\(i=1,2,\ldots,N\)) satisfies
$$ \Pr \left({SI{R_{n,i}} \le {T_{n,i}}} \right) = 1 - e^{\left\{ { - {\varsigma_{n,i}}\sum\limits_{j \in \left\{ {0,1} \right\}} {{\lambda_{j,i}}{{\left({\frac{{{P_{j,i}}}}{{{P_{n,i}}}}} \right)}^{\frac{2}{\alpha }}}}} \right\}}, $$
where \(\Pr(\bullet)\) denotes probability and \(\varsigma_{n,i} = \pi \Gamma\left(1 + \frac{2}{\alpha}\right)\Gamma\left(1 - \frac{2}{\alpha}\right)T_{n,i}^{\frac{2}{\alpha}}R_{n0,i}^{2}\).
From Eq. (3), the outage probability satisfies:
$$ \begin{aligned} &\Pr \left({SI{R_{n,i}} \le {T_{n,i}}} \right)\\ &\quad= 1 - \Pr \left({SI{R_{n,i}} \ge {T_{n,i}}} \right)\\ &\quad= 1 - \Pr \left({{\delta_{n0,i}} \ge {T_{n,i}}R_{n0,i}^{\alpha} \left({{I_{n,i,0}} + {I_{n,i,1}}} \right)} \right) \\ &\quad= 1 - \int_{0}^{\infty} {{e^{- s{T_{n,i}}R_{n0,i}^{\alpha} }}d\left[ {\Pr \left({{I_{n,i,0}} + {I_{n,i,1}} \le s} \right)} \right]}\\ &\quad= 1 - {\psi_{{I_{n,i,0}}}}\left({{T_{n,i}}R_{n0,i}^{\alpha}} \right){\psi_{{I_{n,i,1}}}}\left({{T_{n,i}}R_{n0,i}^{\alpha}} \right) \\ \end{aligned} $$
where \({\psi _{I_{n,i,0}}}(\bullet)\) and \(\psi _{{I_{n,i,1}}}(\bullet)\) are the Laplace transforms of \(I_{n,i,0}\) and \(I_{n,i,1}\), respectively. Because the analysis is carried out on the two-dimensional plane and \(\delta_{n0,i}\) follows an independent exponential distribution, we have [24]
$$ \psi_{I_{n,i,0}}\left(s \right) = e^{\left\{ { - {\lambda_{0,i}}\pi {{\left({\frac{{s{P_{0,i}}}}{{{P_{n,i}}}}} \right)}^{\frac{2}{\alpha }}}\Gamma \left({1 + \frac{2}{\alpha }} \right)\Gamma \left({1 - \frac{2}{\alpha }} \right)} \right\}}. $$
Here, Γ(∙) is the gamma function with the form \(\Gamma \left (z\right)=\int _{0}^{\infty } {e^{-t}{t^{z-1}}dt}\). Similarly,
$$ \psi_{I_{n,i,1}}\left(s \right) = e^{\left\{ { - {\lambda_{1,i}}\pi {{\left({\frac{{s{P_{1,i}}}}{{{P_{n,i}}}}} \right)}^{\frac{2}{\alpha }}}\Gamma \left({1 + \frac{2}{\alpha }} \right)\Gamma \left({1 - \frac{2}{\alpha }} \right)} \right\}}. $$
Substituting (6) and (7) back into (5), the result is
$$ \begin{aligned} &\Pr \left({SI{R_{n,i}} \le {T_{n,i}}} \right)\\ &\quad= 1 - e^{\left\{ { - \pi \Gamma \left({1 + \frac{2}{\alpha }} \right)\Gamma \left({1 - \frac{2}{\alpha }} \right)T_{n,i}^{\frac{2}{\alpha }}R_{n0,i}^{2}\sum\limits_{j \in \left\{ {0,1} \right\}} {{\lambda_{j,i}}{{\left({\frac{{{P_{j,i}}}}{{{P_{n,i}}}}} \right)}^{\frac{2}{\alpha }}}}} \right\}} \end{aligned}. $$
Denote \(\varsigma _{n,i} = \pi \Gamma \left ({1 + \frac {2}{\alpha }} \right)\Gamma \left ({1 - \frac {2}{\alpha }} \right)T_{n,i}^{\frac {2}{\alpha }}R_{n0,i}^{2}\), Eq. (4) is obtained. □
Based on Lemma 1, the successful transmission probability of a typical receiver of S n , (n is 0 or 1) on the ith band (i=1,2,…,N) is expressed as
$$ \begin{aligned} &\text{Pr}_{n,i}^{suc}\left({{\lambda_{n,i}},{\lambda_{j,i}}} \right)\\ &\quad= 1 - \Pr \left({SI{R_{n,i}} \le {T_{n,i}}} \right)\\ &\quad= \Pr \left({SI{R_{n,i}} \ge {T_{n,i}}} \right), \end{aligned} $$
where \(\lambda_{n,i}\) is the node density of \(S_{n}\) on the \(i\)th band.
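As a sanity check on Lemma 1, the closed-form expression in Eq. (4) can be compared against a direct Monte Carlo simulation of the two PPPs. The following Python sketch is my own illustration, not code from the paper, and the parameter values are made up; noise is ignored, matching the interference-limited assumption behind Eq. (3).

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# illustrative parameters for one band (not taken from the paper)
alpha = 4.0                  # path-loss exponent
R = 10.0                     # link distance R_{n0,i} in metres
T = 1.0                      # SIR threshold T_{n,i}
P = {0: 1.0, 1: 1.0}         # transmit powers P_{0,i}, P_{1,i}
lam = {0: 1e-4, 1: 1e-4}     # densities lambda_{0,i}, lambda_{1,i} in nodes/m^2
n = 0                        # evaluate the outage of the secondary network S_0

def outage_closed_form():
    """Eq. (4): 1 - exp(-varsigma_{n,i} * sum_j lambda_{j,i} (P_{j,i}/P_{n,i})^(2/alpha))."""
    vs = math.pi * math.gamma(1 + 2 / alpha) * math.gamma(1 - 2 / alpha) * T ** (2 / alpha) * R ** 2
    return 1 - math.exp(-vs * sum(lam[j] * (P[j] / P[n]) ** (2 / alpha) for j in (0, 1)))

def outage_monte_carlo(trials=5000, L=2000.0):
    """Drop both interfering PPPs on a disc of radius L around the typical receiver."""
    failures = 0
    for _ in range(trials):
        interference = 0.0
        for j in (0, 1):
            k = rng.poisson(lam[j] * math.pi * L ** 2)   # number of interferers of S_j
            r = L * np.sqrt(rng.uniform(size=k))         # radii of points uniform on the disc
            interference += np.sum(P[j] * rng.exponential(1.0, size=k) * r ** (-alpha))
        signal = P[n] * rng.exponential(1.0) * R ** (-alpha)   # Eq. (1) for the desired link
        failures += signal / interference <= T
    return failures / trials

print("closed form :", round(outage_closed_form(), 3))
print("Monte Carlo :", round(outage_monte_carlo(), 3))
```

For these settings both numbers should come out around 0.09; the residual gap is Monte Carlo noise plus the (negligible, for \(\alpha = 4\)) truncation of interferers beyond the disc.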
Average sum rate of the secondary network on multiple bands
In the CRNs, the ASR of the secondary network on multiple bands is defined as [25]:
$$ f\left({{\lambda_{0,i}},{P_{0,i}}} \right) = \sum\limits_{i = 1}^{N} {{\omega_{i}}{\lambda_{0,i}}{e^{- {\varsigma_{0,i}}\left[ {{\lambda_{0,i}} + {{\left({\frac{{{P_{1,i}}}}{{{P_{0,i}}}}} \right)}^{\frac{2}{\alpha }}}{\lambda_{1,i}}} \right]}}}, $$
where \(\omega _{i} = \frac {{{W_{i}}}}{{\sum \limits _{m=1}^{N} {{W_{m}}}}}, W_{i}\) is the bandwidth of the ith band which is reused by SUs, P 0,i is the power of SUs, and λ 0,i is the density of SUs on the ith band.
The communication of PUs must be protected when SUs access the spectrum. Then, we get the following constraints:
$$ 1 - {e^{- {\varsigma_{0,i}}\left[ {{\lambda_{0,i}} + {{\left({\frac{{{P_{1,i}}}}{{{P_{0,i}}}}} \right)}^{\frac{2}{\alpha }}}{\lambda_{1,i}}} \right]}} \le {\theta_{0}}, $$
$$ 1 - {e^{- {\varsigma_{1,i}}\left[ {{\lambda_{1,i}} + {{\left({\frac{{{P_{0,i}}}}{{{P_{1,i}}}}} \right)}^{\frac{2}{\alpha }}}{\lambda_{0,i}}} \right]}} \le {\theta_{1}}, $$
$$ 0 \le {\lambda_{0,i}} \le {\lambda_{\max,i}}, \left({i = 1,2,\ldots,N} \right), $$
$$ 0 \le {P_{0,i}} \le {P_{\max,i}}, \left({i = 1,2,\ldots,N} \right), $$
where \(\theta_{0}\) and \(\theta_{1}\) are the outage probability thresholds of the SUs and PUs, respectively, and \(\lambda_{\max,i}\) and \(P_{\max,i}\) are the maximum density and power of SUs on each band, respectively.
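For concreteness, the following short Python sketch (my own, with illustrative numbers rather than the paper's Table 1 values) evaluates the objective in Eq. (10) and the per-band outage constraints (11) and (12) for a candidate configuration.

```python
import math

def varsigma(T, R, alpha):
    """varsigma_{n,i} from Lemma 1: pi * Gamma(1+2/a) * Gamma(1-2/a) * T^(2/a) * R^2."""
    return math.pi * math.gamma(1 + 2 / alpha) * math.gamma(1 - 2 / alpha) * T ** (2 / alpha) * R ** 2

def asr(lam0, P0, lam1, P1, W, vs0, alpha):
    """Eq. (10): ASR of the secondary network over N bands, with omega_i = W_i / sum(W)."""
    wsum = sum(W)
    return sum((W[i] / wsum) * lam0[i]
               * math.exp(-vs0[i] * (lam0[i] + (P1[i] / P0[i]) ** (2 / alpha) * lam1[i]))
               for i in range(len(W)))

def outages(lam0, P0, lam1, P1, vs0, vs1, alpha, i):
    """SU and PU outage probabilities on band i, the left-hand sides of (11) and (12)."""
    su = 1 - math.exp(-vs0[i] * (lam0[i] + (P1[i] / P0[i]) ** (2 / alpha) * lam1[i]))
    pu = 1 - math.exp(-vs1[i] * (lam1[i] + (P0[i] / P1[i]) ** (2 / alpha) * lam0[i]))
    return su, pu

# two-band toy example (illustrative numbers only)
alpha, R, T = 4.0, 10.0, 1.0
vs = varsigma(T, R, alpha)
lam0, lam1 = [1e-4, 2e-4], [1e-4, 1e-4]
P0, P1 = [0.5, 0.5], [1.0, 1.0]
print("ASR:", asr(lam0, P0, lam1, P1, [1, 1], [vs, vs], alpha))
print("outages on band 0:", outages(lam0, P0, lam1, P1, [vs, vs], [vs, vs], alpha, 0))
```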
Maximizing the ASR of the secondary network on multiple bands
In this section, the ASR of the secondary network on multiple bands is maximized under the density and power constraints. Then, we get the optimal density and power of SUs in closed-form. Last, a spectrum access and power control algorithm is proposed to get the maximum ASR of the secondary network.
Maximizing ASR of the secondary network on multi-bands with density constraints
First, we fix the power of SUs. Then, from (11) and (12), we have:
$$ \lambda_{0,i} \le \frac{{ - 1}}{{{\varsigma_{0,i}}}}\ln \left({1 - {\theta_{0}}} \right) - {\left({\frac{{{P_{1,i}}}}{{{P_{0,i}}}}} \right)^{\frac{2}{\alpha }}}{\lambda_{1,i}}, $$
$$ \lambda_{0,i}\le {\left({\frac{{{P_{1,i}}}}{{{P_{0,i}}}}} \right)^{\frac{2}{\alpha }}}\left({\frac{{ - 1}}{{{\varsigma_{1,i}}}}\ln \left({1 - {\theta_{1}}} \right) - {\lambda_{1,i}}} \right). $$
Make \(\lambda _{0,i,{{\sup }_{1}}} \,=\, \frac {{ - 1}}{{{\varsigma _{0,i}}}}\ln \left ({1 \,-\, {\theta _{0}}} \right) \,-\, {\left (\! {\frac {{{P_{1,i}}}}{{{P_{0,i}}}}} \!\right)^{\frac {2}{\alpha }}}{\!\lambda _{1,i}}\) and \(\lambda _{0,i,{{\sup }_{2}}}={\left ({\frac {{{P_{1,i}}}}{{{P_{0,i}}}}} \right)^{\frac {2}{\alpha }}}\left ({\frac {{ - 1}}{{{\varsigma _{1,i}}}}\ln \left ({1 - {\theta _{1}}} \right) - {\lambda _{1,i}}} \right)\), from constraint (13), the upper density limit of SUs on a single band is \(\phantom {\dot {i}\!}{\lambda _{0,i,\sup }} = \min \left \{ {{\lambda _{0,i,{{\sup }_{1}}}},{\lambda _{0,i,{{\sup }_{2}}}},{\lambda _{\max,i}}} \right \}, \left ({i = 1,2,\ldots,N} \right)\).
Denoting the total density of SUs as \(\lambda_{0}\), we get \(\sum \limits _{i = 1}^{N} {{\lambda _{0,i}}} = {\lambda _{0}}\) and \({\lambda _{0}}\le \sum \limits _{i=1}^{N} {{\lambda _{0,i,\sup }}}\). When \({\lambda _{0}} > \sum \limits _{i = 1}^{N} {{\lambda _{0,i,\sup }}}\), the network should control the spectrum access of SUs to satisfy \(0\le \lambda_{0,i} \le \lambda_{0,i,\sup }, (i = 1,2,\ldots,N)\). We then distinguish the following two cases:
(1) When \(\lambda _{0} > \sum \limits _{i = 1}^{N} {{\lambda _{0,i,\sup }}}\), we have
$$ \begin{array}{ccc} {\max} & {} & f\left({{\lambda_{0,i}},{P_{0,i}}} \right)= \sum\limits_{i = 1}^{N} {{\omega_{i}}{\lambda_{0,i}}{e^{- {\varsigma_{0,i}}\left[ {{\lambda_{0,i}} + {{\left({\frac{{{P_{1,i}}}}{{{P_{0,i}}}}} \right)}^{\frac{2}{\alpha }}}{\lambda_{1,i}}} \right]}}} \\ {s.t.} & {} & {0 \le {\lambda_{0,i}} \le {\lambda_{0,i,\sup }}, \left({i = 1,2,\ldots,N} \right)}. \\ \end{array} $$
Taking the partial derivation of f(λ 0,i ,P 0,i ) with respect to λ 0,i ,
$$ \frac{{\partial f\left({{\lambda_{0,i}},{P_{0,i}}} \right)}}{{\partial {\lambda_{0,i}}}} = \left({1 - {\varsigma_{0,i}}{\lambda_{0,i}}} \right){\omega_{i}}{e^{- {\varsigma_{0,i}}\left[ {{\lambda_{0,i}} + {{\left({\frac{{{P_{1,i}}}}{{{P_{0,i}}}}} \right)}^{\frac{2}{\alpha }}}{\lambda_{1,i}}} \right]}}. $$
Setting \(\frac {{\partial f\left ({{\lambda _{0,i}},{P_{0,i}}} \right)}}{{\partial {\lambda _{0,i}}}}=0\) yields \({\lambda _{0,i}} = \frac {1}{{{\varsigma _{0,i}}}}\), so the optimal density of SUs on the \(i\)th band, \(\lambda _{0,i,opt1}^{*}\), is
$$ \lambda_{0,i,opt1}^{*} = \left\{ {\begin{array}{cc} {{\lambda_{0,i,\sup }}} & {{\lambda_{0,i,\sup }} < \frac{1}{{{\varsigma_{0,i}}}}} \\ {\frac{1}{{{\varsigma_{0,i}}}}} & {{\lambda_{0,i,\sup }} \ge \frac{1}{{{\varsigma_{0,i}}}}} \\ \end{array}} \right.\left({i = 1,2,\ldots,N} \right). $$
(2) When \(\lambda _{0}\le \sum \limits _{i = 1}^{N} {{\lambda _{0,i,\sup }}}\), we have
$$ \begin{array}{ccc} {\max} & {} & {f\left({{\lambda_{0,i}},{P_{0,i}}} \right) = \sum\limits_{i = 1}^{N} {{\omega_{i}}{\lambda_{0,i}}{e^{- {\varsigma_{0,i}}\left[ {{\lambda_{0,i}} + {{\left({\frac{{{P_{1,i}}}}{{{P_{0,i}}}}} \right)}^{\frac{2}{\alpha }}}{\lambda_{1,i}}} \right]}}} }\\ {s.t.} & {} & {0 \le {\lambda_{0,i}} \le {\lambda_{0,i,\sup }},\left({i = 1,2,\ldots,N} \right)}\\ {} & {} & {\sum\limits_{i = 1}^{N} {{\lambda_{0,i}}} = {\lambda_{0}}}\\ \end{array}. $$
Then, we get the optimal density of SUs in the following theorem:
Theorem 1
When the power of SUs on the ith band (i=1,2,…,N) is fixed, the optimal density of SUs on the ith band \(\lambda _{0,i,opt2}^{*}\) is
$$ \begin{array}{l} \lambda_{0,i,opt2}^{*} = \left\{ {\begin{array}{ccc} {{\lambda_{0,i,\sup }}} &, & {0 \le \rho < \psi_{u}} \\ {\frac{\left({1 - \sqrt \rho} \right)}{{{\varsigma_{0,i}}}}} &, & {\psi_{u}\le \rho < 1} \\ 0 &, & {1 \le \rho} \\ \end{array} } \right., \end{array} $$
where \(\rho = \frac {v}{{{A_{i}}}}, {A_{i}} = {\omega _{i}}{e^{- {\varsigma _{0,i}}{{\left ({\frac {{{P_{1,i}}}}{{{P_{0,i}}}}} \right)}^{\frac {2}{\alpha }}}{\lambda _{1,i}}}}, \left ({i = 1,2,\ldots,N} \right)\) and \(\psi _{u}=\left ({1 - {\varsigma _{0,i}}{\lambda _{0,i,\sup }}} \right){e^{- {\varsigma _{0,i}}{\lambda _{0,i,\sup }}}}\). \(v\) is a Lagrange multiplier coefficient which satisfies \(\sum \limits _{i = 1}^{N} {\lambda _{0,i,opt2}^{*}} = {\lambda _{0}}\).
See Appendix 1. □
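To make Theorem 1 concrete, here is a small numerical sketch (my own code and made-up numbers, not the authors'). It evaluates Eq. (21) and finds the multiplier \(v\) by bisection on the total-density condition; the extra cap at \(\lambda_{0,i,\sup}\) in the middle branch is a safeguard I added, since Eq. (21) relies on the first-order approximation used in Appendix 1.

```python
import math

def lambda_star(v, A, vs, lam_sup):
    """Per-band optimum of Eq. (21) for a given multiplier v (rho = v / A_i)."""
    rho = v / A
    psi_u = (1 - vs * lam_sup) * math.exp(-vs * lam_sup)
    if rho >= 1:
        return 0.0
    if rho < psi_u:
        return lam_sup
    return min((1 - math.sqrt(rho)) / vs, lam_sup)   # cap added for numerical safety

def allocate_density(lam0_total, A, vs, lam_sup, iters=60):
    """Split the total SU density lam0_total over the bands (Theorem 1)."""
    # Case (1) of the text: enough headroom, use the per-band optimum of Eq. (19)
    cap = [min(ls, 1.0 / v_) for ls, v_ in zip(lam_sup, vs)]
    if lam0_total >= sum(cap):
        return cap
    # Case (2): bisect on the multiplier v so that the densities sum to lam0_total
    lo, hi = 0.0, max(A)                     # at v >= max(A_i) every band gets 0
    for _ in range(iters):
        v = 0.5 * (lo + hi)
        total = sum(lambda_star(v, a, v_, ls) for a, v_, ls in zip(A, vs, lam_sup))
        lo, hi = (v, hi) if total > lam0_total else (lo, v)
    v = 0.5 * (lo + hi)
    return [lambda_star(v, a, v_, ls) for a, v_, ls in zip(A, vs, lam_sup)]

# toy two-band example (all numbers are illustrative only)
A = [0.5, 0.4]            # A_i = omega_i * exp(-vs_{0,i} (P_{1,i}/P_{0,i})^(2/alpha) * lam_{1,i})
vs = [1500.0, 1500.0]     # varsigma_{0,i}
lam_sup = [4e-4, 4e-4]    # lambda_{0,i,sup}
print(allocate_density(6e-4, A, vs, lam_sup))
```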
Maximizing the ASR of the secondary network on multi-bands with power constraints
In the following, we maximize the ASR of the secondary network under the power constraints of SUs. Similarly, we fix the density of SUs. According to (11) and (12), we have
$$ {P_{0,i}} \ge {P_{1,i}}{\left[ {\frac{{ - \ln \left({1 - {\theta_{0}}} \right)}}{{{\lambda_{1,i}}{\varsigma_{0,i}}}} - \frac{{{\lambda_{0,i}}}}{{{\lambda_{1,i}}}}} \right]^{\frac{{ - \alpha }}{2}}}, $$
$$ P_{0,i} \le {P_{1,i}}{\left[ {\frac{{ - \ln \left({1 - {\theta_{1}}} \right)}}{{{\lambda_{0,i}}{\varsigma_{1,i}}}} - \frac{{{\lambda_{1,i}}}}{{{\lambda_{0,i}}}}} \right]^{\frac{\alpha }{2}}}. $$
Denote \({P_{0,i,{{\inf }_{1}}}} \,=\, {P_{1,i}}{\left [ {\frac {{ - \ln \left ({1 - {\theta _{0}}} \right)}}{{{\lambda _{1,i}}{\varsigma _{0,i}}}} - \frac {{{\lambda _{0,i}}}}{{{\lambda _{1,i}}}}} \right ]^{\frac {{ - \alpha }}{2}}}\) and \({P_{0,i,{{\sup }_{1}}}} = {P_{1,i}}{\left [ {\frac {{ - \ln \left ({1 - {\theta _{1}}} \right)}}{{{\lambda _{0,i}}{\varsigma _{1,i}}}} - \frac {{{\lambda _{1,i}}}}{{{\lambda _{0,i}}}}} \right ]^{\frac {\alpha }{2}}}\), from constraint (14), the lower and upper power limit of SUs in a single band are \(\phantom {\dot {i}\!}{P_{0,i,\inf }} = \max \left \{ {0,{P_{0,i,{{\inf }_{1}}}}} \right \}\) and \(\phantom {\dot {i}\!}{P_{0,i,\sup }} = \min \left \{ {{P_{\max,i}},{P_{0,i,{{\sup }_{1}}}}} \right \}, i=1,2,\ldots,N\), respectively.
Define \(P_{0}\) as the total power of one SU over all bands, so that \(P_{0}\le \sum \limits _{i=1}^{N} {{P_{0,i,\sup }}}\); otherwise, when \(P_{0} > \sum \limits _{i = 1}^{N} {{P_{0,i,\sup }}}\), we control the spectrum access of SUs on each band so that \(P_{0,i,\inf}\le P_{0,i} \le P_{0,i,\sup}, (i=1,2,\ldots,N)\) holds. Similarly, we distinguish the following two cases:
(1) When the power constraint \(P_{0} > \sum \limits _{i = 1}^{N} {P_{0,i,\sup }}\), we have
$$ \begin{array}{cc} {\max} & {f\left({{\lambda_{0,i}},{P_{0,i}}} \right) = \sum\limits_{i = 1}^{N} {{\omega_{i}}{\lambda_{0,i}}{e^{- {\varsigma_{0,i}}\left[ {{\lambda_{0,i}} + {{\left({\frac{{{P_{1,i}}}}{{{P_{0,i}}}}} \right)}^{\frac{2}{\alpha }}}{\lambda_{1,i}}} \right]}}}} \\ {s.t.} & {{P_{0,i,\inf }} \le {P_{0,i}} \le {P_{0,i,\sup }},\left({i = 1,2,\ldots,N} \right)} \\ \end{array}. $$
When \(P_{0,i} = P_{0,i,\sup}\), \(f\left({{\lambda_{0,i}},{P_{0,i}}}\right)\) reaches its maximum value over the definition domain of \(P_{0,i}\), so we get the optimal power of SUs on the \(i\)th band as:
$$ P_{0,i,opt1}^{*} = {P_{0,i,\sup }},\left({i = 1,2,\ldots,N} \right). $$
According to the above analysis, the ASR can be maximized in this way only when \(P_{0,i,\inf} \le P_{0,i,\sup}, (i=1,2,\ldots,N)\). When \(P_{0,i,\inf} > P_{0,i,\sup}\) on the \(i\)th band, the following inequality holds:
$$ {\left[ {\frac{{ - \ln \left({1 - {\theta_{0}}} \right)}}{{{\lambda_{1,i}}{\varsigma_{0,i}}}} - \frac{{{\lambda_{0,i}}}}{{{\lambda_{1,i}}}}} \right]^{\frac{{ - \alpha }}{2}}} > {\left[ {\frac{{ - \ln \left({1 - {\theta_{1}}} \right)}}{{{\lambda_{0,i}}{\varsigma_{1,i}}}} - \frac{{{\lambda_{1,i}}}}{{{\lambda_{0,i}}}}} \right]^{\frac{\alpha }{2}}}. $$
Let \({\xi _{0,i}} = \frac {{ - \ln \left ({1 - {\theta _{0}}} \right)}}{{{\varsigma _{0,i}}}}\) and \({\xi _{1,i}} = \frac {{ - \ln \left ({1 - {\theta _{1}}} \right)}}{{{\varsigma _{1,i}}}}\); rearranging (26), we have
$$ \left({\frac{{{\xi_{0,i}}}}{{{\lambda_{1,i}}}} - \frac{{{\lambda_{0,i}}}}{{{\lambda_{1,i}}}}} \right)\left({\frac{{{\xi_{1,i}}}}{{{\lambda_{0,i}}}} - \frac{{{\lambda_{1,i}}}}{{{\lambda_{0,i}}}}} \right) < 1. $$
Remark 1
When more SUs access the spectrum, the interference becomes more and more serious. Once the density of SUs on the \(i\)th band is large enough to make inequality (27) hold, the communication of the PUs cannot be guaranteed if SUs transmit on that band. Power control must then be applied, so \(P_{0,i}\) should be zero under this condition.
(2) When the power constraints of SUs \({P_{0}} < \sum \limits _{i = 1}^{N} {{P_{0,i,\sup }}}\), we have
$$ \begin{array}{ccc} {\max} & {} & {f\left({{\lambda_{0,i}},{P_{0,i}}} \right) = \sum\limits_{i = 1}^{N} {{\omega_{i}}{\lambda_{0,i}}{e^{- {\varsigma_{0,i}}\left[ {{\lambda_{0,i}} + {{\left({\frac{{{P_{1,i}}}}{{{P_{0,i}}}}} \right)}^{\frac{2}{\alpha }}}{\lambda_{1,i}}} \right]}}}} \\ {s.t.} & {} & {{P_{0,i,\inf }} \le {P_{0,i}} \le {P_{0,i,\sup }},\left({i = 1,2,\ldots,N} \right)} \\ {} & {} & {\sum\limits_{i = 1}^{N} {{P_{0,i}}} = {P_{0}}} \\ \end{array}. $$
In cognitive radio networks, the outage probability threshold of PUs, \(\theta_{1}\), is usually set to a very small value because the priority of the PUs must be guaranteed. Then, denoting \(B_{i} = {\omega _{i}}{\lambda _{0,i}}{e^{- {\varsigma _{0,i}}{\lambda _{0,i}}}}\) and \(D_{i} = {\varsigma _{0,i}}P_{1,i}^{\frac {2}{\alpha }}{\lambda _{1,i}}, (i = 1,2,\ldots,N)\), we get the following lemma and theorem:
Lemma 2
When the outage probability threshold satisfies \({\theta _{1}} \in \left ({0,1 - {e^{- {\lambda _{1,i}}{\varsigma _{1,i}}}}} \right)\), the negative ASR of the secondary network, \(-f\left ({{\lambda _{0,i}},{P_{0,i}}} \right) = - \sum \limits _{i = 1}^{N} {{\omega _{i}}{\lambda _{0,i}}{e^{- {\varsigma _{0,i}}\left [ {{\lambda _{0,i}} + {{\left ({\frac {{{P_{1,i}}}}{{{P_{0,i}}}}} \right)}^{\frac {2}{\alpha }}}{\lambda _{1,i}}} \right ]}}}\), is convex in the power definition domain of SUs \([P_{0,i,\inf},P_{0,i,\sup}]\).
Then, the following theorem gives the optimal power of SUs on each band:
Theorem 2
When the density of SUs on each band is fixed, the optimal power of SUs on the \(i\)th band (\(i=1,2,\ldots,N\)), \(P_{0,i,opt2}^{*}\), is
$$ P_{0,i,opt2}^{*} = \left\{ {\begin{array}{ccc} {{P_{0,i,\sup }}} &, & {u \le {h_{0,i,\min }}} \\ {P_{0,i,solution}^{*}} &, & {{h_{0,i,\min }} < u \le {h_{0,i,\max }}} \\ {{P_{0,i,\inf }}} &, & {{h_{0,i,\max }} < u} \\ \end{array}} \right., $$
where \([h_{0,i,\min},h_{0,i,\max}]\) is the range of the function \(h\left ({{P_{0,i}}} \right) = \frac {{2{B_{i}}{D_{i}}}}{\alpha }{e^{- {D_{i}}P_{0,i}^{\frac {{ - 2}}{\alpha }}}}P_{0,i}^{- \left ({1 + \frac {2}{\alpha }} \right)}\), and \(P_{0,i,solution}^{*}\) is the solution of \(u-h\left({{P_{0,i}}}\right)=0\). Here, \(u\) is a Lagrange multiplier coefficient determined by the condition \(\sum \limits _{i = 1}^{N} {{P_{0,i}}}={P_{0}}\).
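Similarly, a rough numerical sketch of Theorem 2 follows (my own code and illustrative numbers, assuming \(P_{0,i,\inf} \le P_{0,i,\sup}\) on every band, i.e., skipping the infeasible case of Remark 1). Because \(-f\) is convex (Lemma 2), \(h\) is decreasing on the feasible interval, so both the inner equation \(u = h(P_{0,i})\) and the multiplier \(u\) itself can be found by bisection.

```python
import math

def h(P, B, D, alpha):
    """h(P_{0,i}) = (2 B_i D_i / alpha) * exp(-D_i P^(-2/alpha)) * P^(-(1+2/alpha))."""
    return (2 * B * D / alpha) * math.exp(-D * P ** (-2 / alpha)) * P ** (-(1 + 2 / alpha))

def power_star(u, B, D, alpha, P_inf, P_sup, iters=60):
    """Per-band optimum of Eq. (29) for a given multiplier u."""
    # h is decreasing on [P_inf, P_sup] because -f is convex (Lemma 2)
    if u <= h(P_sup, B, D, alpha):
        return P_sup
    if u > h(P_inf, B, D, alpha):
        return P_inf
    lo, hi = P_inf, P_sup
    for _ in range(iters):                    # solve u = h(P) by bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if h(mid, B, D, alpha) < u else (mid, hi)
    return 0.5 * (lo + hi)

def allocate_power(P0_total, B, D, alpha, P_inf, P_sup, iters=60):
    """Split the per-user power budget P0_total so that sum_i P_{0,i} = P_0 (Theorem 2)."""
    bands = range(len(B))
    hi_u = max(h(P_inf[i], B[i], D[i], alpha) for i in bands)  # above this every band hits P_inf
    lo_u = 0.0
    for _ in range(iters):                    # bisect on the Lagrange multiplier u
        u = 0.5 * (lo_u + hi_u)
        total = sum(power_star(u, B[i], D[i], alpha, P_inf[i], P_sup[i]) for i in bands)
        lo_u, hi_u = (u, hi_u) if total > P0_total else (lo_u, u)
    u = 0.5 * (lo_u + hi_u)
    return [power_star(u, B[i], D[i], alpha, P_inf[i], P_sup[i]) for i in bands]

# toy two-band example (numbers are illustrative only)
alpha = 4.0
B = [2e-4, 1.5e-4]        # B_i = omega_i * lam_{0,i} * exp(-vs_{0,i} * lam_{0,i})
D = [0.15, 0.15]          # D_i = vs_{0,i} * P_{1,i}^(2/alpha) * lam_{1,i}
print(allocate_power(1.0, B, D, alpha, P_inf=[0.05, 0.05], P_sup=[0.8, 0.8]))
```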
Spectrum access and power control algorithm for maximizing ASR of the secondary network
Based on the previous analysis, maximizing the ASR of the secondary network is a convex problem when either the density or the power of SUs is fixed. We therefore propose a spectrum access and power control algorithm for maximizing the ASR of the secondary network. The details are described in Algorithm 1.
The algorithm uses the optimal results of Theorems 1 and 2 to obtain the optimal spectrum access and the optimal power of SUs, respectively. Several remarks are in order:
Steps 4 to 8 calculate the optimal density of SUs on each band (\(i=1,2,\ldots,N\)), i.e., the optimal number of SUs that access the spectrum of PUs. Step 9 updates the upper density bound of SUs on each band. Steps 12 to 16 then calculate the optimal power of SUs on each band (\(i=1,2,\ldots,N\)) according to Theorem 2, and step 17 updates the lower and upper power bounds of SUs on each band.
According to the two if-else-end parts, each time the optimal density of SUs is computed, the power constraints change; similarly, the newly computed optimal power of SUs changes the density constraints. We therefore iterate between optimizing the density and the power of SUs, controlled by the flag bit in step 21. We also compute the ASR gap between two consecutive iterations, denoted \(\Delta C\) in step 20. The final optimal density and power of SUs are reached when \(\Delta C < \varepsilon\), where \(\varepsilon\) is a pre-defined threshold. In the end, we obtain the optimal density and power of SUs in the CRNs.
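Algorithm 1 is given as a listing in the original paper and is not reproduced here, so the following skeleton is my own reconstruction from the description above, not the authors' code. It reuses the `asr` helper from the sketch after Eq. (14) and the `allocate_density` / `allocate_power` helpers from the sketches after Theorems 1 and 2, and it assumes every band remains feasible (\(P_{0,i,\inf} \le P_{0,i,\sup}\), cf. Remark 1).

```python
import math

# Assumes asr(...), allocate_density(...) and allocate_power(...) as sketched earlier.

def density_bounds(P0, P1, lam1, vs0, vs1, theta0, theta1, lam_max, alpha):
    """lambda_{0,i,sup}: the minimum of Eqs. (15), (16) and the cap lambda_{max,i}."""
    return [max(0.0, min(-math.log(1 - theta0) / vs0[i] - (P1[i] / P0[i]) ** (2 / alpha) * lam1[i],
                         (P1[i] / P0[i]) ** (2 / alpha) * (-math.log(1 - theta1) / vs1[i] - lam1[i]),
                         lam_max[i]))
            for i in range(len(P0))]

def power_bounds(lam0, P1, lam1, vs0, vs1, theta0, theta1, P_max, alpha):
    """P_{0,i,inf} and P_{0,i,sup} from Eqs. (22), (23) and the cap P_{max,i}."""
    P_inf, P_sup = [], []
    for i in range(len(lam0)):
        l0 = max(lam0[i], 1e-12)                          # guard against an empty band
        a = -math.log(1 - theta0) / (lam1[i] * vs0[i]) - l0 / lam1[i]
        b = -math.log(1 - theta1) / (l0 * vs1[i]) - lam1[i] / l0
        P_inf.append(P1[i] * a ** (-alpha / 2) if a > 0 else float("inf"))   # a <= 0: infeasible band
        P_sup.append(min(P_max[i], P1[i] * b ** (alpha / 2)) if b > 0 else 0.0)
    return P_inf, P_sup

def algorithm1(lam0_total, P0_total, P1, lam1, W, vs0, vs1,
               theta0, theta1, lam_max, P_max, alpha, eps=1e-6, max_iter=50):
    """Alternate the density step (Theorem 1) and the power step (Theorem 2)
    until the ASR gain of a full iteration (Delta C) drops below eps."""
    N = len(W)
    omega = [w / sum(W) for w in W]
    P0 = [P0_total / N] * N                               # start from an even power split
    prev = -math.inf
    for _ in range(max_iter):
        lam_sup = density_bounds(P0, P1, lam1, vs0, vs1, theta0, theta1, lam_max, alpha)
        A = [omega[i] * math.exp(-vs0[i] * (P1[i] / P0[i]) ** (2 / alpha) * lam1[i]) for i in range(N)]
        lam0 = allocate_density(lam0_total, A, vs0, lam_sup)            # Theorem 1
        P_inf, P_sup = power_bounds(lam0, P1, lam1, vs0, vs1, theta0, theta1, P_max, alpha)
        B = [omega[i] * lam0[i] * math.exp(-vs0[i] * lam0[i]) for i in range(N)]
        D = [vs0[i] * P1[i] ** (2 / alpha) * lam1[i] for i in range(N)]
        P0 = allocate_power(P0_total, B, D, alpha, P_inf, P_sup)        # Theorem 2
        current = asr(lam0, P0, lam1, P1, W, vs0, alpha)                # Eq. (10)
        if abs(current - prev) < eps:                                   # Delta C < eps
            break
        prev = current
    return lam0, P0
```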
Simulation results and discussions
In this section, the outage probability and the ASR of the secondary network on a single band are analyzed. Then, the density and power boundaries that confine the optimization are discussed. Moreover, we present the maximum ASR of the secondary network on multiple bands. Finally, the maximum ASR on five bands obtained with the proposed algorithm is compared with that of the average power allocation method to make the results more insightful.
Simulation analysis of outage probability, density and power boundaries of SUs on the single band
The basic parameters for a single band are listed in Table 1. We consider a simulation scenario of a 500 × 500 m\(^2\) square region. The traffic model is full buffer. The numbers of PUs and SUs on this single band are both set to 25. The band is assumed to have a bandwidth normalized to 1. The subscript \(i\) denotes the index of this band.
Table 1 Simulation parameters on one single band
Figure 2 illustrates the relationship between the outage probability and the density of SUs on a single band. The outage probability rises as the density of SUs increases, because a higher density of SUs causes more serious interference to the secondary network. Furthermore, Fig. 2 shows the outage probability of SUs under different densities of PUs on the same band. With an increasing density of PUs, the secondary network suffers more interference from the primary network, so the outage probability of SUs is larger when the density of PUs is higher. Moreover, the higher the power of SUs, the lower their outage probability: the secondary network obtains a better communication quality with a higher SU power for the same density of SUs in the CRNs.
Outage probability of SUs vs. density of SUs on one single band
Figure 3 shows the relationship between the outage probability of SUs and the power of SUs on a single band. The outage probability of SUs decreases as the power of SUs increases, because a higher SU power improves the SIR in the secondary network under the same conditions. In addition, the figure reveals that a larger density of SUs leads to a higher outage probability for the same SU power; this is because the interference between SUs grows as their density increases. Moreover, as the power of SUs keeps increasing, the outage probability of SUs approaches a stable value: increasing the SU power can only reduce the harmful influence of the primary network on the secondary network, but cannot remove the interference within the secondary network itself.
Outage probability of SUs vs. power of SUs on one single band
Next, Fig. 4 illustrates the admissible range of the density of SUs, i.e., how many SUs can access the spectrum of PUs. The two curves of the same color are based on Eqs. (15) and (16). The black ellipses mark the density boundaries of SUs, which are the maximum densities of SUs we can choose. Furthermore, SUs can use a lower power when the power of PUs is low, because the interference from the primary network is then small. In addition, when the power of PUs is high, SUs can reach a larger density with a high SU power, since the primary network can bear more interference when the power of PUs is high.
Density boundary of SUs vs. power of SUs on one single band
In Fig. 5, the power boundaries of SUs on a single band are shown. The black circles mark the intersection of the upper and lower power bounds. When the density of SUs exceeds that point, SUs cannot access the spectrum of PUs, in order to protect the communication of the PUs. From Fig. 5, the boundary curve for a larger PU power lies above the curve for a low PU power: the primary network suffers more interference from the SUs, while the secondary network needs a high power to resist the interference from the primary network.
Power boundary of SUs vs. density of SUs on one single band
Simulation analysis of ASR and maximum ASR of the secondary network
Figure 6 shows how the ASR of the secondary network changes with the power of SUs on a single band. The ASR of SUs increases with the power of SUs, because a higher SU power improves the SIR in the secondary network. More specifically, Fig. 6 also indicates that a high density of SUs leads to a high ASR of SUs. When the power of SUs is high, the curves tend to stable values, because the interference among SUs cannot be completely eliminated by increasing the power of SUs.
ASR of the secondary network vs. power of SUs on one single band
In Fig. 7, the relationship between the ASR of the secondary network and the density of SUs on a single band is considered. When the density of SUs is low, the ASR of the secondary network increases with the density of SUs, because more SUs accessing the spectrum of PUs enhances the performance of the secondary network. When the density of SUs is high and continues to increase, the interference among the SUs becomes large and harms the secondary network, so the ASR of the secondary network begins to decrease. Furthermore, the figure illustrates that, when the power of PUs decreases, the ASR of SUs rises, owing to the reduction of interference from the PUs to the secondary network.
ASR of secondary network vs. Density of SUs on one single band
Next, the simulation results for CRNs on multiple bands are discussed. We again consider a 500 × 500 m\(^2\) square region and use Monte Carlo simulation. The spectrum of the PUs is divided into five sub-bands, with bandwidths configured according to three cases. Each band is assumed to have a bandwidth normalized to 1. The key parameters are listed in Table 2. In cases 1 and 2, the average number of PUs on each band is 20, while in case 3 it is 70.
Table 2 Key parameters of the simulation on five bands
In Fig. 8, a comparison between the proposed algorithm and the average power allocation algorithm is illustrated. The simulation is based on five bands with different densities and powers of PUs; the detailed parameters are shown in Table 2. Both algorithms calculate the optimal density or transmission power of SUs in the same way; the only difference is that, in each iteration of the average power allocation algorithm, the change in density or power is distributed evenly over all five bands. From Fig. 8, we can see that the sum ASR of the secondary network can reach its highest values, while under constraints on the total density and power of SUs the ASR decreases, because the constraints from the PUs become stricter, which prevents some SUs from accessing the spectrum of the PUs. Furthermore, the proposed algorithm yields a better ASR of the secondary network, since it allows a proper number of SUs to access the spectrum. In addition, comparing case 1 with case 2, the PUs can endure more interference when the power of PUs becomes larger, so more SUs can access the spectrum of PUs. But when both the density and the power of PUs become large, the spectrum access of SUs is confined because the constraints from the PUs become strict; hence the ASR of the secondary network in case 3 is lower than that in case 2.
Maximum ASR of secondary network on all five bands
Figure 9 shows the relationship between the maximum ASR of SUs and the average power of PUs on five bands. The densities of PUs on the five bands are as in cases 1 and 2 of Table 2. At first, the maximum ASR of SUs increases with the average power of PUs: when the PU power is low, the PUs cannot endure much interference from the SUs, the constraints on the SUs are strict, and the maximum ASR of SUs is therefore not very high. As the power continues to increase, the maximum ASR of SUs reaches its peak value. Our proposed algorithm outperforms the average power allocation algorithm because it lets a proper number of SUs access the spectrum of PUs. In addition, the maximum ASR decreases when the average power of PUs continues to increase, since the interference from the PUs becomes serious when their average power becomes large.
Maximum ASR of SUs vs. average power of PUs on all five bands
In Fig. 10, the maximum ASR of SUs decreases as the average density of PUs increases. The powers of PUs on the five bands are as in cases 2 and 3 of Table 2. As the black ellipse shows, when the average density of PUs is low, the interference among the PUs is not high; in other words, when the SIR of PUs is high, the PUs in the network can endure more interference from the SUs. Thus, more SUs can access the spectrum of PUs, and the maximum ASR of SUs is high. As the density of PUs increases, the interference at the PUs becomes large, which makes the constraints on the SUs stricter, and the curves decline. In addition, the proposed algorithm achieves a better performance than the average power allocation algorithm, which also confirms the results in Fig. 8.
Maximum ASR of SUs vs. average density of PUs on all five bands
In this paper, we have studied the optimal spectrum access and power control of SUs on multiple bands. The outage probabilities and the ASR of SUs have been obtained for CRNs modeled by PPPs. We have then obtained the constraints imposed by both PUs and SUs, and the convexity of the target ASR has been verified, allowing us to derive the optimal densities and powers of SUs in closed form. Finally, a spectrum access and power control algorithm has been proposed to obtain the maximum ASR of SUs. The simulations show that the outage probability, ASR, density, and power of SUs on each band are all constrained by the primary network and by the interference in the CRNs. The optimal values on multiple bands are reduced when the sum constraints are added. Simulation results have also verified the superiority of the proposed algorithm over the average power allocation algorithm.
Proof of Theorem 1
Let \(\lambda_{0,i} = x_{i}\) and write the optimization problem (20) in the standard form
$$ \begin{array}{ccc} {\min} & {} & { - f\left({{x_{i}}} \right) = - \sum\limits_{i = 1}^{N} {{A_{i}}{x_{i}}{e^{- {\varsigma_{0,i}}{x_{i}}}}}} \\ {s.t.} & {} & {{x_{i}} \ge 0} \\ {} & {} & {{x_{i}} - {\lambda_{0,i,\sup }} \le 0,\left({i = 1,2,\ldots,N} \right)} \\ {} & {} & {\sum\limits_{i = 1}^{N} {{x_{i}}} = {\lambda_{0}}} \\ \end{array}. $$
The second derivative of the target function with respect to \(x_{i}\) satisfies
$$ - f^{\prime\prime}\left({{x_{i}}} \right) = {A_{i}}{\varsigma_{0,i}}\left({2 - {\varsigma_{0,i}}{x_{i}}} \right){e^{- {\varsigma_{0,i}}{x_{i}}}}. $$
In practical CRNs, the outage probability thresholds \(\theta_{0}\) and \(\theta_{1}\) are both very small values, which leads to \(\varsigma_{0,i}x_{i} < 2\), so
$$ - f^{\prime\prime}\left({{x_{i}}} \right) > 0, 0\le{x_{i}}\le {\lambda_{0,i,\sup }}. $$
The problem is therefore convex. Denoting the optimum of \(x_{i}\) by \(x_{i}^{*}\), we construct the Lagrange function as follows:
$$ \begin{aligned} &L\left({x_{i}^{*},{k_{i}},{l_{i}},v} \right)\\ &= - \sum\limits_{i = 1}^{N} {{A_{i}}x_{i}^{*}{e^{- {\varsigma_{0,i}}x_{i}^{*}}}} - \sum\limits_{i = 1}^{N} {{k_{i}}x_{i}^{*}}\\ &\quad+ \sum\limits_{i = 1}^{N} {{l_{i}}\left({x_{i}^{*} - {\lambda_{0,i,\sup }}} \right) +} v\left({\sum\limits_{i = 1}^{N} {x_{i}^{*} - {\lambda_{0}}}} \right). \end{aligned} $$
According to the KKT conditions, we get:
(1) \(x_{i}^{*} \ge 0, i = 1,2,\ldots,N\);
(2) \(k_{i} \ge 0, i=1,2,\ldots,N\);
(3) \(l_{i} \ge 0, i=1,2,\ldots,N\);
(4) \(x_{i}^{*}-{\lambda _{0,i,\sup }}\le 0, i=1,2,\ldots,N\);
(5) \({k_{i}}x_{i}^{*} = 0, i=1,2,\ldots,N\);
(6) \({l_{i}}\left ({x_{i}^{*}-{\lambda _{0,i,\sup }}}\right)=0, i=1,2,\ldots,N\);
(7) \(-{A_{i}}\left ({1-{\varsigma _{0,i}}x_{i}^{*}}\right){e^{-{\varsigma _{0,i}}x_{i}^{*}}}-{k_{i}}+{l_{i}}+v=0, i=1,2,\ldots,N\);
(8) \(\sum \limits _{i=1}^{N}{x_{i}^{*}}=\lambda _{0}\).
From (7) we have
$$ {k_{i}} = {l_{i}} + v - {A_{i}}\left({1 - {\varsigma_{0,i}}x_{i}^{*}} \right){e^{- {\varsigma_{0,i}}x_{i}^{*}}}. $$
From (6), we have
$$ {l_{i}}x_{i}^{*} = {l_{i}}{\lambda_{0,i,\sup }}. $$
Substituting the two equations above into (5) and rearranging, we obtain
$$ \left[ {v - {A_{i}}\left({1 - {\varsigma_{0,i}}x_{i}^{*}} \right){e^{- {\varsigma_{0,i}}x_{i}^{*}}}} \right]x_{i}^{*} + {l_{i}}{\lambda_{0,i,\sup }} = 0. $$
Combining this with (1)-(5), we can conclude:
If \(v\ge A_{i}, v-{A_{i}}\left ({1-{\varsigma _{0,i}}x_{i}^{*}} \right){e^{-{\varsigma _{0,i}}x_{i}^{*}}}>0\), so \(x_{i}^{*}=0, l_{i}=0\).
If \(v < A_{i}\), the following results are obtained:
If \(v \ge {A_{i}}\left ({1 - {\varsigma _{0,i}}\lambda _{0,i,\sup }} \right){e^{- {\varsigma _{0,i}}\lambda _{0,i,\sup }}}\), then \(l_{i}=0\). According to (36), we have
$$ v - {A_{i}}\left({1 - {\varsigma_{0,i}}x_{i}^{*}} \right){e^{- {\varsigma_{0,i}}x_{i}^{*}}} = 0. $$
Substituting the first-order approximation \({e^{- {\varsigma _{0,i}}x_{i}^{*}}} \approx \left ({1 - {\varsigma _{0,i}}x_{i}^{*}} \right)\) into the equation, we get \(x_{i}^{*} = \frac {1}{{{\varsigma _{0,i}}}}\left ({1 - \sqrt {\frac {v}{{{A_{i}}}}}} \right)\). Otherwise, we have \(x_{i}^{*} = {\lambda _{0,i,\sup }}\).
So from above, the results in Eq. (21) are obtained. □
Proof of Lemma 2
For the target function in (28), we get
$$ \begin{aligned} &-{\omega_{i}}{\lambda_{0,i}}{e^{- {\varsigma_{0,i}}\left[ {{\lambda_{0,i}} + {{\left({\frac{{{P_{1,i}}}}{{{P_{0,i}}}}} \right)}^{\frac{2}{\alpha }}}{\lambda_{1,i}}} \right]}}\\ &\quad= - {\omega_{i}}{\lambda_{0,i}}{e^{- {\varsigma_{0,i}}{\lambda_{0,i}}}} \cdot {e^{- {\varsigma_{0,i}}{{\left({\frac{{{P_{1,i}}}}{{{P_{0,i}}}}} \right)}^{\frac{2}{\alpha }}}{\lambda_{1,i}}}}. \end{aligned} $$
The expression above can then be written as \(-f\left ({{\lambda _{0,i}},{P_{0,i}}} \right) = - {B_{i}}{e^{- {D_{i}}P_{0,i}^{\frac {{ - 2}}{\alpha }}}}\). Calculating its second partial derivative with respect to \(P_{0,i}\), we get
$$ \begin{aligned} &-f^{\prime\prime}\left({{\lambda_{0,i}},{P_{0,i}}} \right)\\ &\quad= \frac{{2{B_{i}}{D_{i}}{e^{- {D_{i}}P_{0,i}^{\frac{{ - 2}}{\alpha }}}}P_{0,i}^{\frac{{ - 2\left({\alpha + 2} \right)}}{\alpha }}}}{{{\alpha^{2}}}}\left[ { - 2{D_{i}} + \left({\alpha + 2} \right)P_{0,i}^{\frac{2}{\alpha }}} \right]. \end{aligned} $$
The domain of \(P_{0,i}\) is \(P_{0,i,\inf} \le P_{0,i} \le P_{0,i,\sup}\), and \(-2D_{i} + (\alpha+2)P_{0,i}^{\frac{2}{\alpha}}\) is clearly monotonically increasing in \(P_{0,i}\); hence, if the lower limit of \(P_{0,i}\) makes the second partial derivative greater than zero, then so do all values of \(P_{0,i}\) in the domain. Because \({P_{0,i,\inf }} = \max\left \{ {0,{P_{1,i}}{{\left [ { - \frac {{\ln \left ({1 - {\theta _{0}}} \right)}}{{{\lambda _{1,i}}{\varsigma _{0,i}}}} - \frac {{{\lambda _{0,i}}}}{{{\lambda _{1,i}}}}} \right ]}^{\frac {{ - \alpha }}{2}}}} \right \}\), we have the following two cases:
(1) When \(P_{0,i,\inf} = 0\), it is obvious that \(-f''\left({{\lambda_{0,i}},{P_{0,i}}}\right) \ge 0\).
(2) When \(P_{0,i,\inf} > 0\), we have
$$ \begin{aligned} &- 2{D_{i}} + \left({\alpha + 2} \right)P_{0,i,\inf }^{\frac{2}{\alpha }}\\ &\quad= - {\varsigma_{0,i}}P_{1,i}^{\frac{2}{\alpha }}{\lambda_{1,i}}\left[ {2 + \frac{{\left({\alpha + 2} \right)}}{{\ln \left({1 - {\theta_{0}}} \right) + {\lambda_{0,i}}{\varsigma_{0,i}}}}} \right]. \end{aligned} $$
From \(P_{0,i,\inf} > 0\), we know
$$ {P_{0,i,\inf }} = {P_{1,i}}{\left[ { - \frac{{\ln \left({1 - {\theta_{0}}} \right)}}{{{\lambda_{1,i}}{\varsigma_{0,i}}}}-\frac{{{\lambda_{0,i}}}}{{{\lambda_{1,i}}}}} \right]^{\frac{{-\alpha }}{2}}}> 0. $$
So, we have
$$ {P_{1,i}}{\left({\frac{1}{{{\lambda_{1,i}}}}} \right)^{\frac{{ - \alpha }}{2}}}{\left({\frac{1}{{{\varsigma_{0,i}}}}} \right)^{\frac{{ - \alpha }}{2}}}{\left[ { - \ln \left({1 - {\theta_{0}}} \right) - {\varsigma_{0,i}}{\lambda_{0,i}}} \right]^{\frac{{ - \alpha }}{2}}} > 0. $$
So \(\ln\left(1-\theta_{0}\right) + \varsigma_{0,i}\lambda_{0,i} < 0\). The outage probability threshold of SUs, \(\theta_{0}\), is a very small value, which ensures the communication quality of the secondary network. Here, we can take \(0 < {\theta _{0}} < 1 - {e^{- \left ({1 + {\varsigma _{0,i}}{\lambda _{0,i}}} \right)}}\), so we get
$$ -1 < \ln \left({1 - {\theta_{0}}} \right) + {\varsigma_{0,i}}{\lambda_{0,i}} < 0. $$
Then, \(2+\frac {{\alpha + 2}}{{\ln \left ({1-\theta _{0}} \right)+{\varsigma _{0,i}}{\lambda _{0,i}}}}<0\), and we have
$$ -2{D_{i}} + \left({\alpha + 2} \right)P_{0,i,\inf }^{\frac{2}{\alpha }} > 0. $$
Thus, \(-f''\left({{\lambda_{0,i}},{P_{0,i}}}\right) > 0\) when \(P_{0,i} = P_{0,i,\inf}\). Since \(-2{D_{i}} + \left ({\alpha + 2} \right)P_{0,i}^{\frac {2}{\alpha }}\) is monotonically increasing in \(P_{0,i}\), \(-f\left({{\lambda_{0,i}},{P_{0,i}}}\right)\) is convex for \(P_{0,i} \in [P_{0,i,\inf},P_{0,i,\sup}]\). □
Proof of Theorem 2
Let \(P_{0,i} = p_{i}\) and write the optimization problem (28) in the standard form as follows:
$$ \begin{array}{ccc} {\min} & {} & { - f\left({{p_{i}}} \right) = - \sum\limits_{i = 1}^{N} {{\omega_{i}}{\lambda_{0,i}}{e^{- {\varsigma_{0,i}}\left[ {{\lambda_{0,i}} + {{\left({\frac{{{P_{1,i}}}}{{{P_{0,i}}}}} \right)}^{\frac{2}{\alpha }}}{\lambda_{1,i}}} \right]}}}} \\ {s.t.} & {} & {{p_{i}} - {P_{0,i,\inf }} \ge 0,i = 1,2,\ldots,N} \\ {} & {} & {{p_{i}} - {P_{0,i,\sup }} \le 0,i = 1,2,\ldots,N} \\ {} & {} & {\sum\limits_{i = 1}^{N} {{p_{i}}} = {P_{0}}} \\ \end{array}. $$
From Lemma 2, we know that the target function is convex. Denoting the optimum of \(p_{i}\) by \(p_{i}^{*}\), we construct the Lagrange function as follows:
$$ \begin{aligned} &L\left({p_{i}^{*},{s_{i}},{t_{i}},u} \right)\\ &\quad= - \sum\limits_{i = 1}^{N} {{B_{i}}{e^{- {D_{i}}{{p_{i}^{*}}^{\frac{{ - 2}}{\alpha }}}}}}- \sum\limits_{i = 1}^{N} {{s_{i}}\left({p_{i}^{*} - {P_{0,i,\inf }}} \right)}\\ &\qquad+ \sum\limits_{i = 1}^{N} {{t_{i}}\left({p_{i}^{*} - {P_{0,i,\sup }}} \right)} + u\left({\sum\limits_{i = 1}^{N} {p_{i}^{*}} - {P_{0}}} \right). \end{aligned} $$
From the KKT conditions, we get:
(1) \(s_{i} \ge 0, i=1,2,\ldots,N\);
(2) \(t_{i} \ge 0, i=1,2,\ldots,N\);
(3) \(p_{i}^{*} - {P_{0,i,\inf }} \ge 0, i=1,2,\ldots,N\);
(4) \(p_{i}^{*} - {P_{0,i,\sup }} \le 0, i=1,2,\ldots,N\);
(5) \({s_{i}}\left ({p_{i}^{*} - {P_{0,i,\inf }}} \right) = 0, i=1,2,\ldots,N\);
(6) \({t_{i}}\left ({p_{i}^{*} - {P_{0,i,\sup }}} \right) = 0, i=1,2,\ldots,N\);
(7) \(- \frac {{2{B_{i}}{D_{i}}}}{\alpha }{e^{- {D_{i}}{{p_{i}^{*}}^{\frac {- 2}{\alpha }}}}}{{p_{i}^{*}}^{- \left ({1 + \frac {2}{\alpha }} \right)}} - {s_{i}} + {t_{i}} + u = 0, i=1,2,\ldots,N\);
(8) \(\sum \limits _{i = 1}^{N} {p_{i}^{*}} = {P_{0}}\).
Transform (7) into
$$ s_{i}= -\frac{2B_{i}D_{i}}{\alpha}e^{-D_{i}p_{i}^{*\frac{-2}{\alpha}}}{p_{i}^{*}}^{-\left(1+\frac{2}{\alpha}\right)}+t_{i}+u. $$
From (6), we have
$$ {t_{i}}p_{i}^{*} = {t_{i}}{P_{0,i,\sup }}. $$
Substituting the two equations above into (5), we get
$$ \begin{aligned} &\left[ {u - \frac{{2{B_{i}}{D_{i}}}}{\alpha }{e^{- {D_{i}}{{p_{i}^{*}}^{\frac{{ - 2}} {\alpha }}}}}{{p_{i}^{*}}^{- \left({1 + \frac{2}{\alpha }} \right)}}} \right]\left({p_{i}^{*} - {P_{0,i,\inf }}} \right)\\ &\quad+ {t_{i}}\left({{P_{0,i,\sup }} - {P_{0,i,\inf }}} \right) = 0. \end{aligned} $$
If \(P_{0,i,\sup}\le P_{0,i,\inf}\), we get \(p_{i}^{*}=0\); otherwise, let \(h(p_{i}^{*})=\frac {2B_{i}D_{i}}{\alpha }e^{-D_{i}{p_{i}^{*}}^{\frac {-2}{\alpha }}}{p_{i}^{*}}^{-(1+\frac {2}{\alpha })}\); this is a continuous function whose domain is a compact closed set. Denoting its range by \([h_{0,i,\min},h_{0,i,\max}]\), we have the following:
(1) When \(u\le h_{0,i,\min}\), we have \(p_{i}^{*} = {P_{0,i,\inf }}\) and \(t_i=0\).
(2) When \(h_{0,i,\min} < u \le h_{0,i,\max}\), we have \(t_i = 0\) and \(u- \frac {2B_{i}D_{i}}{\alpha }e^{-D_{i}{p_{i}^{*}}^{\frac {-2}{\alpha }}}{p_{i}^{*}}^{-(1+\frac {2}{\alpha })} = 0\). Using the equivalent infinitesimal \(e^{-D_{i}{p_{i}^{*}}^{\frac {-2}{\alpha }}} \approx 1 - {D_{i}}{{p_{i}^{*}}^{\frac {{ - 2}}{\alpha }}}\) and substituting it into the previous equation, we have \(u-\frac {2B_{i}D_{i}}{\alpha }\left(1 - {D_{i}}{{p_{i}^{*}}^{\frac {{ - 2}}{\alpha }}}\right){p_{i}^{*}}^{-(1+\frac {2}{\alpha })}=0\). When all the parameters are fixed, the numerical solution of \(p_{i}^{*}\) can be obtained, and we denote it by \(P_{0,i,solution}^{*}\).
(3) When \(u>h_{0,i,\max}\), we have \(p_{i}^{*} = {P_{0,i,\sup }}\), which satisfies \(u-\frac {2B_{i}D_{i}}{\alpha }e^{-D_{i}P_{0,i,\sup}^{\frac {-2}{\alpha }}}P_{0,i,\sup}^{-(1+\frac {2}{\alpha })}+t_{i}=0\).
Substituting the results above into (8), i.e., \(\sum \limits _{i = 1}^{N} {p_{i}^{*}}=P_{0}\), we can obtain the numerical solution of u and hence the specific value of \(p_{i}^{*}\) on each band. □
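To make the case analysis above concrete, the following Python sketch searches numerically for the multiplier u that satisfies condition (8). It is only an illustration: the helper names (h, band_power, allocate) and the toy parameter values are ours, and the interior case (2) is solved on a grid of the exact stationarity condition instead of the linearized form used in the text.

import numpy as np

def h(p, B, D, alpha):
    # h(p) = (2*B*D/alpha) * exp(-D * p**(-2/alpha)) * p**(-(1 + 2/alpha)), as defined above
    return 2.0 * B * D / alpha * np.exp(-D * p ** (-2.0 / alpha)) * p ** (-(1.0 + 2.0 / alpha))

def band_power(u, B, D, alpha, p_inf, p_sup, n_grid=2048):
    """p_i* for a single band at multiplier u, following cases (1)-(3)."""
    grid = np.linspace(p_inf, p_sup, n_grid)
    hv = h(grid, B, D, alpha)
    if u <= hv.min():
        return p_inf                              # case (1): lower bound active
    if u > hv.max():
        return p_sup                              # case (3): upper bound active
    return grid[np.argmin(np.abs(hv - u))]        # case (2): grid approximation of h(p) = u

def allocate(P0, B, D, alpha, p_inf, p_sup, n_u=4000):
    """Scan candidate multipliers u and keep the one whose per-band powers sum closest to P0 (condition (8))."""
    h_ranges = [h(np.linspace(lo, hi, 2048), b, d, alpha) for b, d, lo, hi in zip(B, D, p_inf, p_sup)]
    u_grid = np.linspace(min(hv.min() for hv in h_ranges), max(hv.max() for hv in h_ranges), n_u)
    best = min(
        ((u, [band_power(u, b, d, alpha, lo, hi) for b, d, lo, hi in zip(B, D, p_inf, p_sup)]) for u in u_grid),
        key=lambda up: abs(sum(up[1]) - P0),
    )
    return best  # (u, [p_1*, ..., p_N*])

# toy usage with made-up parameters for N = 3 bands
u_star, p_star = allocate(P0=6.0, B=[1.0, 1.2, 0.8], D=[0.5, 0.7, 0.6],
                          alpha=4.0, p_inf=[0.5, 0.5, 0.5], p_sup=[3.0, 3.0, 3.0])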
This work was supported by the the National Key Basic Research Program of China (Grant No. 2013CB329203), the National Natural Science Foundation of China (Grant Nos. 61571270 and 61271266), the China Postdoctoral Science Foundation (Grant No. 2016M591177), and the British Telecom and Tsinghua SEM Advanced ICT LAB. The research leading to these results also received funding from the European Commission H2020 programme under grant agreement no. 671705 (SPEED-5G project).
Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
Yang Yang
& Linglong Dai
School of Electric and Information Engineering, Zhongyuan University of Technology, Zhengzhou, 450007, China
Jianjun Li
Instituto de Telecomunicações, Aveiro, 1049-001, Portugal
Shahid Mumtaz
& Jonathan Rodriguez
Correspondence to Linglong Dai.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Yang, Y., Dai, L., Li, J. et al. Optimal spectrum access and power control of secondary users in cognitive radio networks. J Wireless Com Network 2017, 98 (2017) doi:10.1186/s13638-017-0876-5
Accepted: 05 May 2017
Poisson point process
Convex optimization
Dynamic Spectrum Access and Cognitive techniques for 5G
Some examples of generalized reflectionless Schrödinger potentials
Russell Johnson 1, and Luca Zampogni 2,
Dipartimento di Matematica e Informatica, Università di Firenze, Via di Santa Marta 3, 50139 Firenze
Dipartimento di Matematica e Informatica, Università degli Studi di Perugia, Italy
Received May 2015 Revised October 2015 Published August 2016
The class of generalized reflectionless Schrödinger operators was introduced by Lundina in 1985. Marchenko worked out a useful parametrization of these potentials, and Kotani showed that each such potential is of Sato-Segal-Wilson type. Nevertheless, the dynamics under translation of a generic generalized reflectionless potential is still not well understood. We give examples which show that certain dynamical anomalies can occur.
Keywords: reflectionless potential in the sense of Craig, glueing property, divisor map, generalized reflectionless potential.
Mathematics Subject Classification: 37B55, 34B20, 34L40, 31A3.
Citation: Russell Johnson, Luca Zampogni. Some examples of generalized reflectionless Schrödinger potentials. Discrete & Continuous Dynamical Systems - S, 2016, 9 (4) : 1149-1170. doi: 10.3934/dcdss.2016046
Model-free detection of unique events in time series
Zsigmond Benkő1,2,
Tamás Bábel1 &
Zoltán Somogyvári1
Scientific Reports volume 12, Article number: 227 (2022) Cite this article
Astronomy and planetary science
Computational biology and bioinformatics
Recognition of anomalous events is a challenging but critical task in many scientific and industrial fields, especially when the properties of anomalies are unknown. In this paper, we introduce a new anomaly concept called "unicorn" or unique event and present a new, model-free, unsupervised detection algorithm to detect unicorns. The key component of the new algorithm is the Temporal Outlier Factor (TOF) to measure the uniqueness of events in continuous data sets from dynamic systems. The concept of unique events differs significantly from traditional outliers in many aspects: while repetitive outliers are no longer unique events, a unique event is not necessarily an outlier; it does not necessarily fall out from the distribution of normal activity. The performance of our algorithm was examined in recognizing unique events on different types of simulated data sets with anomalies and it was compared with the Local Outlier Factor (LOF) and discord discovery algorithms. TOF had superior performance compared to LOF and discord detection algorithms even in recognizing traditional outliers and it also detected unique events that those did not. The benefits of the unicorn concept and the new detection method were illustrated by example data sets from very different scientific fields. Our algorithm successfully retrieved unique events in those cases where they were already known such as the gravitational waves of a binary black hole merger on LIGO detector data and the signs of respiratory failure on ECG data series. Furthermore, unique events were found on the LIBOR data set of the last 30 years.
Anomalies in time series are rare and non-typical patterns that deviate from normal observations and may indicate a transiently activated mechanism different from the generating process of normal data. Accordingly, recognition of anomalies is often important or critical, invoking interventions in various industrial and scientific applications.
Anomalies can be classified according to various aspects1,2,3. These non-standard observations can be point outliers, whose amplitude is out of range from the standard amplitude or contextual outliers, whose measured values do not fit into some context. A combination of values can also form an anomaly named a collective outlier. Thus, in the case of point outliers, a single point is enough to distinguish between normal and anomalous states, whilst in the case of collective anomalies, a pattern of multiple observations is required. Two characteristic examples of extreme events are black swans and dragon kings, distinguishable by their generation process4,5. Black swans are generated by a power law process and they are usually unpredictable by nature. In contrast, the dragon king, such as stock market crashes, occurs after a phase transition and it is generated by different mechanisms from normal samples making it more predictable. Both black swans and dragon kings are extreme events easily recognizable post-hoc (retrospectively), but not all the anomalies are so effortless to detect. Even post-hoc detection can be a troublesome procedure when the amplitude of the event does not fall out of the data distribution.
Although the definition of an anomaly is not straightforward, two of its key features include rarity and dissimilarity from normal data.
Most, if not all the outlier detection algorithms approach the anomalies from the dissimilarity point of view. They search for the most distant and deviant points without much emphasis on their rarity. In contrast, our approach is the opposite: we quantify the rarity of a state, largely independent of the dissimilarity.
Here we introduce a new type of anomaly, the unique event, which is not an outlier in the classical sense of the word: it does not necessarily lie out from the background distribution, neither point-wise nor collectively. A unique event is defined as a unique pattern that appears only once during the investigated history of the system. Based on their hidden nature and uniqueness one could call these unique events "unicorns" and add them to the strange zoo of anomalies. Note that unicorns can be both traditional outliers appearing only once or patterns that do not differ from the normal population in any of their parameters.
But how do you find something you have never seen before, when the only thing you know about it is that it appeared only once?
The answer would be straightforward for discrete patterns, but for continuous variables, where none of the states are exactly the same, it is challenging to distinguish the really unique states from a dynamical point of view.
Classical supervised, semi-supervised, and unsupervised strategies have been used to detect anomalies1,6,7 and recently deep neural networks8,9,10 were applied to detect extreme events11,12,13,14,15,16. Supervised outlier detection techniques can be applied to identify anomalies when labeled training data is available for both normal and outlier classes. Semi-supervised techniques also utilize labeled training data, but this is limited to the normal or the outlier class. Some of the semi-supervised methods do not need perfectly anomaly-free data to learn the normal class but allow some outlier-contamination even in the training data17. Model-based pattern matching techniques can be applied to detect specific anomalies with best results when the mechanism causing the anomaly is well known and simple18. However, when the background is less well known or the system is too complex to get analytical results (or to run detailed simulations), it is hard to detect even specific types of anomalies with model-based techniques due to the unknown nature of the waveforms. Model-free unsupervised outlier detection techniques can be applied to detect unexpected events from time series in cases when no tractable models or training data is available.
The closest concept to our unicorns in the anomaly detection literature is the discord, defined as the unique subsequence, which is the farthest from the rest of the (non-overlapping) time series19. Multiple model-free unsupervised anomaly detection methods have been built based on the discord concept19,20. Other unsupervised anomaly detection techniques, such as the Local Outlier Factor (LOF) algorithm21 are based on k Nearest Neighbor (kNN) distances. The LOF algorithm was also adapted to time series data by Oehmcke et al.22.
In the followings, we present a new model-free unsupervised anomaly detection algorithm to detect unicorns (unique events), that builds on nonlinear time series analysis techniques such as time delay embedding23 and upgrades time-recurrence based non-stationarity detection methods24 by defining a local measure of uniqueness for each point.
We validate the new method on simulated data, compare its performance with other model-free unsupervised algorithms19,20,21 and we apply the new method to real-world data series, where the unique event is already known.
Time delay embedding
To adapt collective outlier detection to time series data, nonlinear time series analysis provides the possibility to generate the multivariate state space from scalar observations. The dynamical state of the system can be reconstructed from scalar time series25 by taking the temporal context of each point according to Takens' embedding theorem23. This can be done via time delay embedding:
$$\begin{aligned} X(t) = [ x(t), x(t+\tau ), x(t+2\tau ), \ldots x(t+(E-1)\tau )] \end{aligned}$$
where X(t) is the reconstructed state at time t, x(t) is the scalar time series. The procedure has two parameters: the embedding delay (\(\tau\)) and the embedding dimension (E).
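As an illustration of Eq. (1), a minimal delay-embedding sketch in Python is given below; the function name is ours and is not taken from any particular package.

import numpy as np

def time_delay_embedding(x, E=3, tau=1):
    """Return the delay vectors X(t) = [x(t), x(t+tau), ..., x(t+(E-1)*tau)] as rows (Eq. 1)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (E - 1) * tau          # number of reconstructed states
    return np.column_stack([x[i * tau: i * tau + n] for i in range(E)])

# example: embed a short sine wave with E = 3 and tau = 2
X = time_delay_embedding(np.sin(np.linspace(0, 20, 200)), E=3, tau=2)  # shape (196, 3)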
Starting from an initial condition, the state of a dynamical system typically converges to a subset of its state space and forms a lower-dimensional manifold, called the attractor, which describes the dynamics of the system in the long run. If E is sufficiently large (\(E > 2d\)) compared to the dimension of the attractor (d), then the embedded (reconstructed) space is topologically equivalent to the system's state space, provided that some mild conditions on the observation function generating the x(t) time series are also met23.
As a consequence of Takens' theorem, small neighborhoods around points in the reconstructed state-space also form neighborhoods in the original state space, therefore a small neighborhood around a point represents nearly similar states. This topological property has been leveraged to perform nonlinear prediction26, noise filtering27,28 and causality analysis29,30,31,32. Naturally, time delay embedding can be introduced as a preprocessing step before outlier detection (with already existing methods i.e. LOF) to create the contextual space for collective outlier detection from time series.
Besides the spatial information preserved in reconstructed state space, temporal relations in small neighborhoods can contain clues about the dynamics. For example, recurrence time statistics were applied to discover nonstationary time series24,33, to measure attractor dimensions34,35,36 and to detect changes in dynamics37,38.
Temporal Outlier Factor
The key question in unicorn search is how to measure the uniqueness of a state, as this is the only attribute of a unique event. The simplest possible definition would be that a unique state appears only once in the time series. A problem with this definition arises in the case of continuous-valued observations, where almost every state is visited only once. Thus, a different strategy should be applied to find the unicorns. Our approach is based on measuring the temporal dispersion of the state-space neighbors. If state-space neighbors are separated by large time intervals, then the system returns to the same state time-to-time. In contrast, if all the state space neighbors are temporal neighbors as well, then the system never returned to that state again. This concept is shown on an example ECG data series from a patient with Wolff–Parkinson–White (WPW) Syndrome (Fig. 1). The WPW syndrome is due to an aberrant atrio-ventricular connection in the heart. Its diagnostic signs are shortened PR-interval and appearance of the delta wave, a slurred upstroke of the QRS complex. However, for our representational purpose, we have chosen a data segment, which contained one strange T wave with uniquely high amplitude (Fig. 1A).
To quantify the uniqueness on a given time series, the Temporal Outlier Factor (TOF) is calculated in the following steps (Fig. 1 and Fig. S1): firstly, we reconstruct the system's state by time delay embedding (Eq. 1), resulting in a manifold, topologically equivalent to the attractor of the system (Fig. 1C-D and Fig. S1B).
Secondly, we search for the kNN in the state space at each time instance on the attractor. A standard choice for the distance metric is the Euclidean distance (Eq. 2).
$$\begin{aligned} d\left( X(t), X(t') \right) =\sqrt{\sum _{l=1}^{E}{(X_l(t)- X_l(t'))^2}} \end{aligned}$$
where d is the distance between the X(t) and \(X(t')\) points, with \(X_l\) as coordinate components in the reconstructed state space. We save the time index of the k nearest points around each sample to use it later on. Two examples are shown on Fig. 1C: a red and a blue diamond and their 6 nearest neighbors marked by orange and green diamonds respectively.
Thirdly, the Temporal Outlier Factor (TOF) is computed from the time indices of the kNN points (Fig. S1C):
$$\begin{aligned} \hbox {TOF}(t)=\root q \of {\frac{\sum _{i=1}^{k}{|t-t_{i}|}^{q}}{k}} \end{aligned}$$
where t is the time index of the sample point (X(t)), \(t_i\) is the time index of the i-th nearest neighbor in reconstructed state-space, and \(q\in {\mathcal {R}}^{+}\); in our case we use \(q=2\) (Fig. 1E).
As a final step for identifying unicorns, a proper threshold \(\theta\) should be defined for TOF (Fig. 1E, dashed red line), to mark unique events (orange dots, Fig. 1F).
TOF measures an expected temporal distance of the kNN neighbors in reconstructed state-space (Eq. 3), thus it has time dimension. A high or medium value of TOF implies that neighboring points in state-space were not close in time, therefore the investigated part of state-space was visited on several different occasions by the system. In our example, green diamonds on (Fig. 1C) mark states which were the closest points to the blue diamond in the state space, but were evenly distributed in time, on Fig. 1A. Thus the state marked by the blue diamond was not a unique state, the system returned there several times.
However a small value of TOF implies that neighboring points in state-space were also close in time, therefore this part of the space was visited only once by the system. On Fig. 1C,D orange diamonds mark the closest states to the red diamond and they are also close to the red diamond in time, on the (Fig. 1B). This results in a low value of TOF in the state marked by the red diamond and means that it was a unique state never visited again. Thus, small TOF values feature the uniqueness of sample points in state-space and can be interpreted as an outlier factor. Correspondingly, TOF values exhibit a clear breakdown at the time interval of the anomalous T wave (Fig. 1F).
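A minimal sketch of the neighbor search (Eq. 2) and the TOF score (Eq. 3) is given below. It assumes a kD-tree neighbor search, as in the implementation mentioned later in the text; excluding the query point from its own neighborhood is our convention here and may differ from the reference code.

import numpy as np
from scipy.spatial import cKDTree

def tof_scores(X, k=20, q=2, dt=1.0):
    """Temporal Outlier Factor (Eq. 3) for each embedded state X(t) (rows of X)."""
    t = np.arange(len(X)) * dt
    tree = cKDTree(X)                      # Euclidean kNN search in reconstructed state space (Eq. 2)
    _, idx = tree.query(X, k=k + 1)        # k+1 because the closest hit is the point itself
    nbr_times = t[idx[:, 1:]]              # drop the self-match, keep the k neighbor time stamps
    return np.mean(np.abs(t[:, None] - nbr_times) ** q, axis=1) ** (1.0 / q)

# unique events are the points whose TOF falls below a threshold theta (Eq. 8), e.g.:
# X = time_delay_embedding(x, E=3, tau=1); tof = tof_scores(X, k=4); unicorns = tof <= theta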
The number of neighbors (k) used during the estimation procedure sets the minimal possible TOF value:
$$\begin{aligned} \mathrm{TOF}_{\mathrm{min}} = \sqrt{\frac{\sum _{i=-\lfloor k/2 \rfloor }^{\lfloor k/2 \rfloor + k \bmod 2}{i^{2}}}{k}}\, \Delta t \end{aligned}$$
where \(\lfloor k/2 \rfloor\) is the integer part of k/2, \(\bmod\) is the modulo operator and \(\Delta t\) is the sampling period.
The approximate maximal possible TOF value is determined by the length (T) of the embedded time series and the neighborhood size (k):
$$\begin{aligned} \mathrm{TOF}_{\mathrm{max}} = \sqrt{\frac{\sum _{i=0}^{k-1} (T - i \Delta t)^ 2 }{k} } \end{aligned}$$
TOF shows a time-dependent mean baseline and variance (Fig. 1E, Fig. S2), which can be computed if stationary activity without any anomaly is assumed. In this case, the time indices of the nearest points are evenly distributed along the whole time series. The approximate mean baseline is a square-root-quadratic expression; it has its lowest value in the middle and its highest value at the edges (see the exact derivation for the continuous-time limit and \(q=1\) in the Supporting Information, Figs. S2-S3):
$$\begin{aligned}{}&\sqrt{\left\langle \hbox {TOF}_{\mathrm{noise}}\left( t\right) ^{2} \right\rangle } =\sqrt{t^2 - t T + \frac{T^2}{3}} \end{aligned}$$
$$\begin{aligned}&\mathrm{VAR} \left( {\mathrm{TOF}^{2}_{\mathrm{noise}}} \left( t \right) \right) =\frac{1}{k} \left( \frac{t^5 + (T-t)^5}{5 T} - \left( t^2 - tT + \frac{T^2}{3} \right) ^2 \right) \end{aligned}$$
Based on the above considerations, imposing a threshold \(\theta\) on \(TOF_{k}\) has a straightforward meaning: it sets a maximum detectable event length (M) or vice versa:
$$\begin{aligned} \theta = \sqrt{ \frac{\sum _{i=0}^{k-1}{\left( M-i \Delta t\right) ^2}}{k}} \quad \bigg | \quad k \Delta t {\mathop {\le }\limits ^{!}} M \end{aligned}$$
where, in the continuous limit, the threshold and the event length become equivalent:
$$\begin{aligned} \lim _{\Delta t \rightarrow 0}{\theta (M)} = M \end{aligned}$$
Also, the parameter k sets a necessary detection criterion on the minimal length of the detectable events: only events with length \(M\ge k \Delta t\) may be detected. This property comes from the requirement that there must be at least k neighbors within the unique dynamic regime of the anomaly.
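A small helper illustrating Eq. (8), i.e., how an expected maximal event length M translates into a threshold \(\theta\); the numeric values in the example call are purely illustrative and are not taken from the figures.

import numpy as np

def tof_threshold(M, k, dt):
    """Threshold theta for a maximum detectable event length M (Eq. 8); requires k*dt <= M."""
    assert k * dt <= M, "only events of length M >= k*dt can be detected"
    i = np.arange(k)
    return np.sqrt(np.mean((M - i * dt) ** 2))

theta = tof_threshold(M=0.3, k=20, dt=0.004)   # illustrative values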
The current implementation of the TOF algorithm contains a time delay embedding, a kNN search, the computation of TOF scores from the neighborhoods, and a threshold application for it. The time-limiting step is the neighbor-search, which uses the scipy cKDTree implementation of the kDTree algorithm39. The most demanding task is to build the data-structure; its complexity is \(O(k n \log {n})\)40, while the nearest neighbor search has \(O(\log n)\) complexity.
Schema of our unique event detection method and the Temporal Outlier Factor (TOF). (A) An ECG time series from a patient with Wolff-Parkinson-White Syndrome, a strange and unique T wave zoomed on graph (B). (C) The reconstructed attractor in the 3D state space by time delay embedding (\(E=3, \tau =0.011\,{\text{s}}\)). Two example states (red and blue diamonds) and their 6 nearest neighbors in the state space (orange and green diamonds respectively) are shown. The system returned several times back to the close vicinity of the blue state, thus the green diamonds are evenly distributed in time, on graph (A). In contrast, the orange state-space neighbors of the red point (zoomed on graph D) are close to the red point in time as well on graph (A). These low temporal distances show that the red point marks a unique event. (E) TOF measures the temporal dispersion of the k nearest state-space neighbors (\(k=20\)). The red dashed line is the threshold \(\theta =0.28\,{\text{s}}\). Low values of TOF below the threshold mark the unique events, denoted by orange dots on the original ECG data on graph (F).
Box 1: TOF analysis workflow
Preprocessing and applicability check
Time delay embedding (Eq. 1)
kNN Neighbor search (Eq. 2)
TOF score computation (Eq. 3)
Threshold application on TOF score to detect unicorns (Eq. 8).
Previous methods to compare
We compare our method to widely used model-free, unsupervised outlier detection methods: the Local Outlier Factor (LOF) and two versions of discord detection algorithms19,20 (see SI). The main purpose of the comparison is not to show that our method is superior to the others in outlier detection, but to present the fundamental differences between the previous outlier concepts and the unicorns.
The first steps of all three algorithms are parallel: while TOF and LOF use time-delay embedding as a preprocessing step to define a state-space, discord detection algorithms achieve the same by defining subsequences with a sliding window. As a next step, state-space distances are calculated in all three methods, but with a slightly different focus. Both LOF and TOF search for the kNNs in the state-space for each time instance. As a key difference, the LOF calculates the distance of the actual points in state-space from their nearest neighbors and normalizes it with the mean distance of those nearest neighbors from their nearest neighbors, resulting in a relative local density measure. LOF values around 1 are considered signs of normal behavior, while higher LOF values mark the outliers. While LOF concentrates on the densities of the nearest neighbors in the state-space, the discord concept is based on the distances directly. For each time instance, it searches for the closest, but temporally non-overlapping subsequence (state). This distance defines the distance of the actual state from the whole sequence and is called the matrix profile41. Finally, the top discord is defined as the state that is the most distant from the whole data sequence by this means. Besides this top discord, any predefined number of discords can be defined by finding the next most distant subsequence which does not overlap with the already found discords.
The only parameter of this brute force discord detection algorithm is the expected length of the anomaly, which is given as the length of the subsequences used for the distance calculation. Senin et al.20,42 extended Keogh's method by calculating the matrix profile for different subsequence lengths, then normalizing the distances by the length of the subsequences, and finally choosing the most distant subsequence according to the normalized distances. Through this method, Senin's algorithm provides an estimation of the anomaly length as well. Both Keogh's and Senin's algorithms can be implemented either in a slower but exact way by calculating all the distances (the brute-force approach) or in a faster way by using the Symbolic Aggregate approXimation (SAX) method. In our comparisons, Keogh's brute force method was calculated exactly, while SAX was used for Senin's algorithm only.
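For comparison purposes, an LOF baseline on delay-embedded data can be approximated with scikit-learn's LocalOutlierFactor, as sketched below; this is a generic stand-in, not the exact LOF variant, parametrization, or thresholding used in the comparisons reported in this paper.

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# x: scalar time series; X: its time delay embedding (see the embedding sketch above)
x = np.sin(np.linspace(0, 50, 2000)) + 0.05 * np.random.randn(2000)
X = np.column_stack([x[:-2], x[1:-1], x[2:]])          # E = 3, tau = 1

lof = LocalOutlierFactor(n_neighbors=30)                # k nearest neighbors in state space
pred = lof.fit_predict(X)                               # -1 for outliers, +1 for inliers (sklearn convention)
lof_score = -lof.negative_outlier_factor_               # ~1 for normal points, larger for outliers
outliers = lof_score > np.quantile(lof_score, 0.99)     # ad-hoc threshold, e.g. the top 1 %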
Simulated data series for validation
We tested the TOF method on various types of simulated data series to demonstrate its wide applicability. These simulations are examples of deterministic discrete-time systems, continuous dynamics, and a stochastic process.
We simulated two datasets with deterministic chaotic discrete-time dynamics generated by a logistic map43 (\(N=2000\), 100 instances of each) and inserted variable-length (\(l=20\)–200 step) outlier segments into the time series at random times (Fig. 2A,B). Two types of outliers were used in these simulations: the first type was generated from tent-map dynamics (Fig. 2A) and the second type was simply a linear segment with a low gradient (Fig. 2B); for simulation details see the Supporting Information (SI). The tent map demonstrates the case where the underlying dynamics is changed for a short interval but generates oscillatory activity (periodic or chaotic, depending on the parameters) that is very similar to the original dynamics. This type of anomaly is hard to distinguish by the naked eye. In contrast, a linear outlier is easy to identify for a human observer but not for many traditional outlier detection algorithms. The linear segment is a collective outlier and all of its points represent a state that was visited only once during the whole data sequence, therefore they are unique events as well.
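A sketch of such a logistic-map series with an inserted tent-map segment is given below; the map parameters and the insertion scheme are illustrative assumptions, since the exact values are specified only in the SI.

import numpy as np
rng = np.random.default_rng(42)

def logistic_with_tent_anomaly(N=2000, l=100, r=4.0, mu=1.8):
    """Chaotic logistic-map series with an inserted tent-map segment of length l (r, mu are illustrative)."""
    x = np.empty(N)
    x[0] = rng.uniform(0.1, 0.9)
    start = rng.integers(N // 4, 3 * N // 4)            # random onset of the anomaly
    for n in range(1, N):
        if start <= n < start + l:
            x[n] = mu * min(x[n - 1], 1.0 - x[n - 1])   # tent-map dynamics inside the anomaly
        else:
            x[n] = r * x[n - 1] * (1.0 - x[n - 1])      # logistic-map dynamics elsewhere
    labels = np.zeros(N, dtype=bool)
    labels[start:start + l] = True
    return x, labels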
As a continuous deterministic dynamics with realistic features, we simulated electrocardiograms with short tachycardic periods where beating frequency was higher (Fig. 2C). The simulations were carried out according to the model of Rhyzhii and Ryzhii44, where the three heart pacemakers and muscle responses were modeled as a system of nonlinear differential equations (see SI). We generated 100 s of ECG and randomly inserted 2–20 s long faster heart-rate segments, corresponding to tachycardia (\(n=100\) realizations).
Takens' time delay embedding theorem is valid for time series generated by deterministic dynamical systems, but not for stochastic ones. In spite of this, we investigated the applicability of time delay embedded temporal and spatial outlier detection on stochastic signals with deterministic dynamics as outliers. We established a dataset of multiplicative random walks (\(n=100\) instances, \(T=2000\) steps each) with randomly inserted variable length linear outlier segments (\(l=20\)–200, see SI). As a preprocessing step, to make the random walk data series stationary, we took the log-difference of time series as is usually the case with economic data series (Fig. 2D).
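A sketch of the multiplicative random walk with an inserted linear anomaly and the log-difference preprocessing follows; the noise level and slope are illustrative assumptions, as the actual simulation parameters are given in the SI.

import numpy as np
rng = np.random.default_rng(7)

def random_walk_with_linear_anomaly(T=2000, l=100, sigma=0.01, slope=0.001):
    """Multiplicative random walk with a linear segment of length l inserted at a random position."""
    x = np.cumprod(1.0 + sigma * rng.standard_normal(T))
    start = rng.integers(T // 4, 3 * T // 4)
    x[start:start + l] = x[start] + slope * np.arange(l)     # linear (collective) anomaly
    return x, start

x, start = random_walk_with_linear_anomaly()
log_returns = np.diff(np.log(x))      # stationarizing preprocessing applied before embedding and TOF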
Model evaluation metrics
TOF and LOF calculate scores on which thresholds should be applied to reach final detections. In contrast, the discord detection algorithms do not apply a threshold on the matrix profile values but choose the highest peak as the top discord. The effectiveness of TOF and LOF scores in distinguishing anomalous points from the background can be evaluated by measuring the Area Under the Receiver Operating Characteristic Curve45 (ROC AUC). The ROC curve consists of point pairs of True Positive Rate (TPR, recall) and False Positive Rate (FPR) parametrized by a threshold (\(\alpha\), Eq. 10).
$$\begin{aligned} ROC(\alpha ) := \left( \mathrm{FPR}(\alpha ), \mathrm{TPR}(\alpha ) \right) \end{aligned}$$
where \(\alpha \in [-\infty , \infty ]\). The area under the ROC curve can be computed as the Riemann integral of the TPR in the function of FPR on the (0, 1) interval.
This evaluation method considers all the possible thresholds, thus providing a threshold-independent measure of the detection potential for a score, where 1 means that a threshold can separate all the anomalous points from the background. Thus, we applied ROC AUC to evaluate TOF and LOF scores on the four datasets mentioned above with fixed embedding parameters \(E=3\) and \(\tau =1\) and determined its dependency on the neighborhood size (\(k=1\)–200) that was used for the calculations.
After choosing the optimal neighborhood parameter which maximises the ROC AUC values, precision, recall, and \(\mathrm{F}_1\) score were used to evaluate the detection performance of the methods on the simulated datasets:
The precision metrics measures the ratio of true positive hits among all the detections:
$$\begin{aligned} \mathrm{precision}(\alpha ) = \frac{\mathrm{true} \, \mathrm{positives}(\alpha )}{\mathrm{true} \, \mathrm{positives}(\alpha ) + \mathrm{false} \, \mathrm{positives}(\alpha )} \end{aligned}$$
The recall evaluates what fraction of the points to be detected were actually detected:
$$\begin{aligned} \mathrm{recall}(\alpha ) = \frac{\mathrm{true} \, \mathrm{positives}(\alpha )}{\mathrm{true} \, \mathrm{positives}(\alpha ) + \mathrm{false} \, \mathrm{negatives}(\alpha )} \end{aligned}$$
\(\mathrm{F}_1\) score is the harmonic mean of precision and recall and it provides a single scalar to rate model performance:
$$\begin{aligned} F_1(\alpha ) = 2 \, \frac{\mathrm{precision}(\alpha ) \times \mathrm{recall}(\alpha )}{\mathrm{precision}(\alpha ) + \mathrm{recall}(\alpha )} \end{aligned}$$
where the threshold (\(\alpha\)) was chosen to correspond to the actual mean number of anomalous points, or the expected length of the anomaly.
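The evaluation metrics of Eqs. (10)-(13) can be computed, for example, with scikit-learn as sketched below; note that low TOF values mark anomalies, so the score is negated for the ROC AUC. The arrays are toy data, not results from the paper.

import numpy as np
from sklearn.metrics import roc_auc_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
labels = np.zeros(2000, dtype=int); labels[900:1000] = 1   # 1 for anomalous time points, 0 for background
tof = rng.random(2000); tof[900:1000] *= 0.1               # toy scores: anomalies get low TOF

auc = roc_auc_score(labels, -tof)            # low TOF indicates an anomaly, hence the negation
theta = np.sort(tof)[labels.sum() - 1]       # threshold matching the expected number of anomalous points
pred = (tof <= theta).astype(int)

precision = precision_score(labels, pred)
recall = recall_score(labels, pred)
f1 = f1_score(labels, pred)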
We implemented these steps in the Python programming language (Python 3); the software is available at github.com/phrenico/uniqed. A detailed description of the data generation process and analysis steps can be found in the Supporting Information.
Detection examples on simulated time series with anomalies of different kinds. (A) Logistic map time series with tent-map anomaly. (B) Logistic map time series with linear anomaly. (C) Simulated ECG time series with tachycardia. (D) Random walk time series with linear anomaly, where TOF was measured on the discrete-time log derivative (\(\Delta \log x_t\)). Each subplot shows an example time series of the simulations (black) in arbitrary units and in three forms. Top left: the return map, which is the result of the 2D time delay embedding and defines the dynamics of the system or its 2D projection. Bottom: full length of the simulated time series (black) and the corresponding TOF values (green). Shaded areas show anomalous sections. Top right: zoom to the onset of the anomaly. In all graphs, the outliers detected by TOF, LOF, and Keogh's brute force discord detection algorithms are marked by orange dots, blue plus, and red x signs, respectively. While anomalies form clear outliers in A and B, D shows an example where the unique event is clearly not an outlier, but is located in the center of the distribution. All three algorithms detected the example anomaly well in case A; TOF and discord detected the anomalies well in cases B and C, but only TOF was able to detect all four anomaly examples.
Validation and comparison on simulated data series
Figure 3A shows the performance of the two methods in terms of mean ROC AUC and SD for \(n=100\) realizations. TOF produced higher maximal ROC AUC than LOF in all four experimental setups. The ROC AUC values reached their maxima at small k neighborhood sizes in all of the four cases and decreased with increasing k afterward. In contrast, LOF resulted in reasonable ROC AUC values in only three cases (logmap-tent anomaly, logmap-linear anomaly, and ECG tachycardia), and it was not able to distinguish the linear anomaly from the random walk background at all. The ROC AUC values reached their maxima at typically higher k neighborhood size in the instances where LOF worked (Table 1).
Performance evaluation of TOF, LOF, and Keogh's discord detection algorithms on four simulated datasets. (A) Mean Receiver Operating Characteristic Area Under Curve (ROC AUC) score and SD for TOF (orange) and LOF (blue) are shown as a function of neighborhood size (k). TOF showed the best results for small neighborhoods. In contrast, LOF showed better results for larger neighborhoods in the case of the logistic map and ECG datasets but did not reach reasonable performance on random walk with linear outliers. (B) Mean \(\mathrm{F}_1\) score for TOF (orange), LOF (blue), and Keogh's discord detection (red) algorithms as a function of the expected anomaly length (for TOF) given in either data percentage (for LOF) or window length parameter (for discord). Black dashed lines show the theoretical maximum of the mean \(\mathrm{F}_1\) score for algorithms with prefixed detection numbers or lengths (LOF and discord), but this upper limit does not apply to TOF. The \(\mathrm{F}_1\) score of TOF was very high for the linear anomalies and slightly lower for the logistic map—tent map anomaly and ECG datasets, but it was higher than the \(\mathrm{F}_1\) score of the two other methods and their theoretical limits in all cases. Note that the only comparable performance was shown by discord detection on the ECG anomaly, while neither the discord-based algorithms nor LOF was able to detect the linear anomaly on random background.
In order to evaluate the final detection performance, as well as the type of errors made and the parameter dependency of these algorithms, \(\mathrm{F}_1\) score, precision and recall were computed for all four algorithms. \(\mathrm{F}_1\) score is especially useful to evaluate detection performance in cases of highly unbalanced datasets as in our case, see Methods.
As TOF showed the best performance in terms of ROC AUC with lower k neighborhood sizes, the \(\mathrm{F}_1\) scores were calculated at a fixed \(k=4\) neighborhood forming a simplex in the 3-dimensional embedding space29. In contrast, as LOF showed stronger dependency on neighborhood size, the optimal neighborhood sizes were used for \(\mathrm{F}_1\) score calculations. The brute force discord detection algorithm uses no separate neighborhood parameter, as it calculates all-to-all distances between points in the state space.
Three among the four investigated algorithms require an estimation of the expected length of the anomaly, however, this estimation becomes effective through different parameters within the different algorithms. In the case of LOF, the expected length of the anomaly can be translated into a threshold, which determines the number of time instances above the threshold. In the absence of this information, the threshold is hard to determine in any principled way. In the case of Keogh's brute force discord detection algorithm, the length of the anomaly is the only parameter and no further threshold is required. Both LOF and Keogh's algorithm find the predefined number of time instances exactly. While the discord finds them in one continuous time interval, LOF detects independent points along the whole data. The expected maximal anomaly length is necessary to determine the threshold in the case of TOF as well (Eq. 8). As Senin's discord detection algorithm does not require predefined anomaly length, it was omitted from this test, and we calculated the \(\hbox {F}_1\) score at the self-determined window length.
Figure 3B shows the mean \(\mathrm{F}_1\) scores for \(\hbox {n}=100\) realizations, as a function of the expected anomaly length, for the three algorithms and for all the four test datasets. Additionally, Fig. S8 shows the precision and the recall, which are the two constituents of the \(\mathrm{F}_1\) score as a function of the expected anomaly length as well. The actual length of the anomalies was randomly chosen between 20 and 200 time steps for each realization in three of our four test cases and between 200 and 2000 time steps in ECG realizations, thus the effect of the expected length parameters was examined up to these lengths as well.
Table 1 Detection performance on simulations in terms of ROC AUC scores and the optimal neighborhood parameter k. Maximal mean ROC AUC values and the corresponding SDs are shown. LOF was able to distinguish tent map and linear outliers from logistic background and tachycardia from the normal rhythm with reasonable reliability but TOF outperformed LOF for all data series. Linear outliers can not be detected on random walk background by the LOF method at all, while TOF detected them almost perfectly. TOF reached its maximal performance mostly for low k values, while LOF required larger k for optimal performance on those three data series, on which it worked reasonably. While the ROC AUC was maximal at \(k=30\) in the case of random walk with linear outlier, the performance was not significantly lower for lower k values.
While it is realistic that we only have a rough estimate of the expected length of the anomaly, it turns out that the randomness in the anomaly length sets an upper bound (Fig. 3B, black dashed lines, Fig. S6) on the mean \(\mathrm{F}_1\) scores for those algorithms that work with an exact predefined number of detections, i.e., LOF and Keogh's discord detection. Although the expected length parameter and the randomness in the actual anomaly length affect the detection performance of TOF as well, they do not set a strict upper bound, as the number of detections is not in a one-to-one correspondence with the expected anomaly length.
For all the four test datasets, TOF algorithm reached higher maximal \(\mathrm{F}_1\) scores than the LOF and Keogh's discord detection method (Fig. 3B, Fig. S8, orange lines). The maximal \(\mathrm{F}_1\) score was even higher than the theoretical limit imposed by the variable anomaly lengths to the other methods. Similar to the results on ROC AUC values, the performance of the TOF algorithm was excellent on the linear type anomalies and very good for the logmap-tent map and the simulated ECG-tachycardia datasets.
In contrast, the LOF algorithm showed good performance on the logmap-tent map data series and mediocre results on logmap-linear anomalies and on the ECG-tachycardia data series. The linear outlier on random walk background was completely undetectable for the LOF method (Fig. 3B, Fig. S8, blue lines).
Keogh's discord detection algorithm displayed good \(\mathrm{F}_1\) scores on three datasets, but weak results were given in case of the linear anomaly on the random walk background (Fig. 3B, Fig. S8, red lines).
The simulated ECG dataset was the only one, where any of the competitor methods showed comparable performance to TOF: Keogh's brute force discord detection reached its theoretical maximum, thus TOF resulted in an only slightly higher maximal \(\mathrm{F}_1\) score in an optimal range of the length parameter. If the expectation significantly overestimated the actual length, the results of discord detection were slightly better.
Table 2 Performance evaluation by \(\mathrm{F}_1\), precision and recall scores on simulations. The optimal expected anomaly length parameter (M) in time steps, mean scores, and their standard deviations are shown for all methods and datasets; the highest scores are highlighted in bold. In case of TOF, \(k=4\) neighbour number is used, while for LOF, the k resulted the best ROC AUC were used from Table 1: \(k=42\) for logmap-tent map, \(k=199\) for logmap-linear, \(k=129\) for ECG tachycardia and \(k=1\) for random walk-linear datasets. TOF resulted in the highest \(\mathrm{F}_1\) scores and highest precision for all datasets and the highest recall in three of the four cases but the simulated ECG tachycardia, where Keogh's brute force discord detection algorithm reached a slightly higher recall score. The only comparable performance was reached by Keogh's discord detection algorithm on ECG tachycardia in terms of \(\mathrm{F}_1\) score while LOF produced reasonable results on logmap-tent map anomaly series. Although Senin's discord detection algorithm resulted in reasonable mean estimations for the lengths of the anomalies, its detection performance was worse than the other three algorithms.
For all algorithms and all detectable cases in which the \(\mathrm{F}_1\) score showed a clear peak, the maxima were reached when the expected anomaly length parameter was close to the mean of the actual anomaly lengths (Table 2).
As we have seen, the variable and unknown length of the anomalies had a significant effect on the detection performance of all methods, but especially of LOF and brute force discord detection. Senin et al.20,46 extended the discord detection method to overcome the problem of a predefined anomaly length and to allow the algorithm to find the length of the anomalies itself. We therefore tested Senin's algorithm on our test data series and included the anomaly lengths found by this algorithm, as well as the performance measures, in the comparison in Table 2. While the mean estimated anomaly lengths were not far from the mean of the actual lengths, the performance of this algorithm lagged well behind that of all three previously tested ones on all four types of test data series.
We identified several factors that could explain the different detection patterns of the algorithms. Table S1 shows that the tent map and the tachycardia produce lower-density, thus more dispersed, points in the state space, presumably making them more detectable by LOF. In contrast, the linear segments resulted in a point density similar to that of the normal logistic activity, or a higher density of points compared to the random walk background. In the latter case, detrending via differentiation of the logarithm was applied as a preprocessing step, making the data series stationary and drastically increasing the state-space density of the anomaly.
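This detrending step can be sketched as follows (a minimal illustration assuming a strictly positive series; the exact preprocessing used for the random walk and LIBOR analyses may differ in details):

```python
import numpy as np

def log_difference(x):
    """Detrend a positive-valued series by differencing its logarithm.

    Multiplicative trends (e.g. a geometric random walk) become an
    approximately stationary series of log-returns.
    """
    x = np.asarray(x, dtype=float)
    return np.diff(np.log(x))

# Example: geometric random walk with a linear anomalous segment inserted
rng = np.random.default_rng(0)
walk = np.exp(np.cumsum(rng.normal(0.0, 0.01, 2000)))
walk[1000:1100] = np.linspace(walk[999], 1.2 * walk[999], 100)  # linear anomaly
stationary = log_difference(walk)  # the anomaly becomes a near-constant segment
```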
LOF relies solely on the local density, thus it only counts low-density sets as outliers. In contrast, as the discord detection method identifies anomalies based on distances in the state space, it was able to detect the linear anomaly on the chaotic background, the tent-map anomaly on the logistic-map data series, and the tachycardia on the simulated ECG data, but it failed to detect the linear anomaly on the random walk background. The state-space points belonging to the well-detected anomalies are truly farther from the points in the manifolds of the background dynamics (Fig. 1A-C). In contrast, after discrete-time differentiation of the logarithms, the points belonging to a linear anomaly are placed near the center of the background distribution (Fig. 1D), making them undetectable by either the LOF or the discord algorithm.
The detection performance of TOF was less affected by the relation between the expected and the actual length of the anomalies in the linear cases. The reason is that each point of a linear segment is a unique state in itself, thus it always falls below the expected maximal anomaly length. In contrast, the tent map and tachycardic anomalies produce short but stationary segments, which are detected less effectively if they are longer than the preset expected length.
We can conclude that (1) TOF achieved better anomaly detection performance in all the investigated cases, and (2) there are special types of anomalies that can be detected only by TOF and can be considered unicorns but not outliers or discords.
TOF detects unicorns only
TOF detects unique events only. Detection performance measured by ROC AUC as a function of the minimum Inter-Event Interval (IEI) between two inserted tent-map outlier segments. TOF was able to distinguish outliers from the background very well when IEIs were below 300 steps and the two events could be considered a single one. However, the detection performance of TOF decreased for higher IEIs. In contrast, LOF's peak performance was lower, but independent of the IEI.
To show that TOF enables the detection of unique events only, additional simulations were carried out in which two, instead of one, tent-map outlier segments were inserted into the logistic map simulations. We detected outliers with TOF and LOF and subsequently analyzed the ROC AUC values as a function of the Inter-Event Interval (IEI, Fig. 4) of the outlier segments. LOF performed independently of the IEI, but TOF's performance showed a strong IEI-dependence. The highest TOF ROC AUC values were found at small IEIs, and the AUC decreased with increasing IEI. Also, the variance of the ROC AUC values increased with the IEI. This result shows that the TOF algorithm can detect only unique events: if two outlier events are close enough to each other, they can together be considered one unique event. In this case, TOF detects it with higher precision than LOF. However, if the events are farther apart than the time limit determined by the detection threshold, the detection performance decreases rapidly.
The results also showed that anomalies can be found by TOF only if they are alone; a second appearance decreases the detection rate significantly.
Application examples on real-world data series
Detecting apnea event on ECG time series
To demonstrate that the TOF method can reveal unicorns in real-world data, we have chosen data series where the existence and the position of the unique event are already known.
We applied TOF to ECG measurements from the MIT-BIH Polysomnographic Database47,48 to detect an apnea event. Multichannel recordings were taken at a 250 Hz sampling frequency, and the ECG and respiratory signals of the first recording were selected for further analysis (\(n=40{,}000\) data points, 1600 s).
While the respiratory signal clearly showed the apnea, there were no observable changes in the parallel ECG signal.
We applied time delay embedding with \(E_{\mathrm{TOF}}=3\), \(E_{\mathrm{LOF}}=7\) and \(\tau =0.02\,\mathrm{s}\), chosen according to the first zero crossing of the autocorrelation function (Fig. S9). TOF successfully detected the apnea event in the ECG time series; interestingly, the unique behaviour was found mostly during T waves, when the breathing activity was almost shut down (Fig. 5, \(k=11\), \(M=5\,\mathrm{s}\)). In contrast, LOF was sensitive to the increased and irregular breathing before the apnea (\(k=200\), threshold\(=0.5{\,\%}\)), while the top discord (\(M=5\,\mathrm{s}\)) was found at the transient between the irregular breathing and the apnea. This example shows that our new method could be useful for biomedical signal processing and sensor data analysis.
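The delay selection and embedding steps described above can be sketched as follows (a minimal example; the published implementation may differ, and the helper names are ours):

```python
import numpy as np

def first_zero_crossing_delay(x):
    """Smallest lag at which the autocorrelation of x crosses zero."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]
    below = np.where(acf <= 0)[0]
    return int(below[0]) if below.size else 1

def delay_embed(x, dim, delay):
    """Takens delay embedding: each row is one reconstructed state-space point."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])

# For a 250 Hz ECG record, tau = 0.02 s corresponds to 5 samples:
# tau = first_zero_crossing_delay(ecg)
# points = delay_embed(ecg, dim=3, delay=tau)
```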
Detecting apnea with arousal on ECG. (A) ECG time series with unique events detected by TOF (orange dots, \(E=3, \tau =0.02 \,\mathrm{s}, k=11, M=5 \,\mathrm{s}\)), outliers detected by LOF (blue + signs, \(E=7,\tau =0.02 \,\mathrm{s}, k=100\), threshold \(=0.5 \%\)) and the top discord (red x signs, \(M=5\,\mathrm{s}\)). The inset shows the detections in more detail: unique behavior appears mainly on the T waves. (B–D) Breathing airflow time series parallel to the above ECG recording, colored according to the scores of the three anomaly detection methods. The anomaly starts with a period of irregular breathing at 340 s, followed by the apnea, when breathing almost stops (350–370 s). After this anomalous period, arousal restores normal breathing. (B) Airflow colored according to the TOF score at each sample. Low values (darker colors) mark the anomaly corresponding to the period of apnea. (C) Airflow time series with coloring corresponding to the LOF score at each sample. Higher LOF values mark the outliers. LOF finds the irregular breathing preceding the apnea. (D) Airflow time series colored according to the matrix profile values of the discord detection. The discord detection algorithm finds the point of transition from irregular breathing to the apnea.
Detecting gravitational waves
As a second example of a real-world dataset with a known unique event, we analyzed gravitational wave detector time series around the GW150914 merger event18 (Fig. 6). The LIGO Hanford detector's signal (4096 Hz) was downloaded from the GWOSC database49. A 12 s long segment of strain data around the GW150914 merger event was selected for further analysis. As a preprocessing step, the signal was bandpass-filtered (50–300 Hz). Time delay embedding was carried out with an embedding delay of 8 time steps (1.953 ms) and embedding dimensions of \(E=6\) and \(E=11\) for TOF and LOF, respectively. The neighbor parameter was set to \(k=12\) for TOF and \(k=100\) for LOF. The length of the event was set to \(M=146.484\,{\text{ms}}\) for TOF and discord detection and, correspondingly, the threshold to \(0.5 \%\) for LOF (Fig. S10).
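The band-pass preprocessing step can be sketched as below (a hedged example using a zero-phase Butterworth filter; the original analysis may have used a different filter design):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(strain, fs, low=50.0, high=300.0, order=4):
    """Zero-phase band-pass filter of a strain time series."""
    nyquist = fs / 2.0
    b, a = butter(order, [low / nyquist, high / nyquist], btype="bandpass")
    return filtfilt(b, a, strain)

# fs = 4096.0                      # Hz, sampling rate of the analyzed segment
# filtered = bandpass(strain, fs)  # then embed with tau = 8 samples, E = 6 (TOF)
```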
All three algorithms detected the merger event, albeit with some differences. LOF found the whole period, while TOF selectively detected the period when the chirp of the spiraling black holes was the loudest. Interestingly, the top discord found the end of the event (Fig. 6B-D).
To investigate the performance of TOF in detecting the noise bursts called blips in LIGO detector data series, we applied the algorithm to the Gravity Spy50 blip data series downloaded from the GWOSC database49 (Fig. S7). We determined the optimal threshold value on the training set (\(N=128\)), then measured the precision, F\(_1\) score, recall, and block-recall metrics on the test set (\(N=29\)). We set the threshold value by the maximum precision (\(M=36\), Fig. S7A). On the test set, TOF reached high precision (1), a low \(\hbox {F}_1\) score, low recall, and high block-recall (0.9) values (Fig. S7B). The high precision shows that a detected anomaly is likely to be a real blip, and the high block recall (hit rate) implies that TOF found blips in the majority of the sample time series.
Detection of the GW150914 event on LIGO open data with TOF, LOF and discord. (A) Strain time series (black) from the Hanford detector around the GW150914 event (grey vertical line) with TOF (orange dots), LOF (blue plus) and discord (red x) detections. TOF score values (B), LOF scores (C) and matrix profile scores (D) are mapped onto the time series (orange, blue and red colors respectively); the strongest colors show the detected event around 0 s. (E) The Q-transform of the event shows a rapidly increasing frequency bump in the power spectrum right before the merger event (grey). The grey horizontal dashed lines show the lower (50 Hz) and upper (300 Hz) cutoff frequencies of the bandpass filter, which was applied to the time series as a preprocessing step before anomaly detection. (F) Filtered strain data in a 0.1 s neighborhood around the event. The TOF, LOF, and discord detection algorithms detected the merger event with different sensitivity. LOF detected more points of the event, while TOF found the period with the highest power in the power spectrum, and the discord was detected at the end of the event. (\(E_{\mathrm{TOF}}=6\), \(\tau _{\mathrm{TOF}}=1.953\) ms, \(k_{\mathrm{TOF}}=12\), \(M_{\mathrm{TOF}}=146.484\) ms, \(w=7\); \(E_{\mathrm{LOF}}=11\), \(\tau _{\mathrm{LOF}}=1.953\) ms, \(k_{\mathrm{LOF}}=100\), threshold\(=0.5\)%, \(M_{discord}=146.484\,{\text{ms}}\)).
London InterBank Offer Rate dataset
Our final real-world example is the application of the TOF, LOF, and discord detection algorithms to the London InterBank Offered Rate (LIBOR) dataset. In this case we had no exact a priori knowledge about the occurrence of unique events, but we assumed that unique states found by the TOF algorithm may have unique economic characteristics.
As a preprocessing step, the discrete time derivative was calculated to eliminate global trends; then we applied TOF (\(E=3, \tau =1, k=5, M=30\) months) and LOF (\(E=3, \tau =1, k=30\), \(\hbox {threshold}=18.86\,\%\)) to the derivative (Figs. S11-S12). TOF found the rising period prior to the 2008 crisis and the slowly rising period from 2012 onwards as outlier segments. LOF detected several points, but no informative pattern emerged from the detections (Fig. 7). Discord detection marked a period between 1993 and 1999 with no obvious characteristic.
Analysis of the LIBOR dataset. The detections were run on the temporal derivative of the LIBOR time series. (A) Time series with detections. (B) TOF score values. (C) LOF score values. (D) Matrix profile scores from the discord detection algorithm. TOF detected two rising periods: the first between 2005 and 2007 and a second that started in 2012 and has lasted until now. While both periods exhibit unique dynamics, they also differ from each other.
While the ground truth is not known in this case, the two periods highlighted by TOF show specific patterns of monotonic growth. Moreover, the fact that both periods were detected by TOF shows that the dynamics of each are unique, and therefore also different from each other.
In this paper we introduced a new concept of anomalous event, called the unicorn; unicorns are unique states of the system, which were visited only once. A new anomaly concept is valid only if a proper detection algorithm is provided: we defined the Temporal Outlier Factor to quantify the uniqueness of a state. We demonstrated that TOF is a model-free, non-parametric, domain-independent anomaly detection tool, which can detect unicorns.
TOF measures the temporal dispersion of the state-space neighbors of each point. If the state-space neighbors are temporal neighbors as well, then the system has never returned to that state; therefore the point is a unique event, i.e. a unicorn.
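To make the idea concrete, a minimal sketch of such a temporal-dispersion score is given below. It scores each embedded point by the mean absolute difference between its time index and the time indices of its k nearest state-space neighbours; the published TOF definition may use a different form of averaging, so this is an illustration of the principle rather than a reference implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def temporal_outlier_score(points, k=4):
    """Temporal dispersion of the k nearest state-space neighbours of each point.

    `points` is an (n, dim) array of delay-embedded states. Small scores mean
    that the state-space neighbours are also temporal neighbours, i.e. the
    state was visited only once (a candidate unicorn).
    """
    nn = NearestNeighbors(n_neighbors=k + 1).fit(points)
    _, idx = nn.kneighbors(points)            # column 0 is the point itself
    t = np.arange(len(points))
    return np.mean(np.abs(idx[:, 1:] - t[:, None]), axis=1)  # in time steps
```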
Unicorns are not just outliers in the usual sense; they are conceptually different. As an example of their inherently different behavior, one can consider a simple linear data series: all points of this series are unique events; they are visited only once and the system never returns to any of them. Whilst this property may seem counter-intuitive, it ensures that our algorithm finds unique events regardless of their other properties, such as amplitude or frequency. This example also shows that unique events are not necessarily rare: in fact, all points of a time series can be unique. This clearly differs from other anomaly concepts, most of which assume that a normal background behavior generates the majority of the measurements and that outliers form only a small minority.
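Using the sketch above, this property is easy to reproduce: on a pure ramp every state-space neighbour of a point is also a temporal neighbour, so the score stays small everywhere, whereas for a noisy periodic signal the neighbours are spread over many cycles (purely illustrative; the parameter values are arbitrary):

```python
# A ramp: every point is a unique state -> uniformly small temporal scores
ramp = np.linspace(0.0, 1.0, 1000)[:, None]
print(temporal_outlier_score(ramp, k=4).mean())   # ~ a few time steps

# A noisy sine: states recur on every cycle -> much larger temporal scores
t = np.arange(1000)
sine = (np.sin(2 * np.pi * t / 50) + 0.01 * np.random.randn(1000))[:, None]
print(temporal_outlier_score(sine, k=4).mean())   # ~ hundreds of time steps
```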
Keogh's discord detection algorithm19 differs from our method in an important aspect: Keogh's algorithm finds one, or any other predefined number of, anomalies in any dataset. Thus it cannot be used to decide whether there are any anomalies in the data at all; it will always find at least one. This property makes it inappropriate in many real-world applications, since usually we do not know whether the dataset at hand contains any anomalies. In contrast, our algorithm can return any number of anomalies, including zero.
The comparison of the detection performance of TOF, LOF, and the two discord detection algorithms on different simulated datasets also highlighted the conceptual difference between traditional outliers and unique events. As our simulations showed, TOF with the same parameter settings was able to find both higher- and lower-density anomalies, based on the sole property that they were unique events. The algorithm has a very low false detection rate, but it does not find all points of an anomalous event when not all of those points are unique. As an example, the QRS waves of the ECG simulations did not appear different from normal waves, hence the algorithm did not find them.
Of course, our aim was not to compete with the specific algorithms that have been developed to detect sleep apnea events from the ECG signal51. Most of these methods extract and classify specific features of the R-R interval series, called heart rate variability (HRV). It has been shown that sympathetic activation during apnea episodes leaves its mark on HRV52, its spectral components, its sample entropy53 and its correlation dimension54. Song et al.55 used discriminative hidden Markov models to classify HRV signals and reached \(97\%\) precision for per-recording classification.
While ECG analysis mostly concentrates on the temporal relations of the identified wave components, here we applied the detection methods to the continuous ECG data. It has previously been shown that apnea is associated with morphological changes of the P waves and the QRS complex in the ECG signal51,56,57.
Interestingly, TOF marked mainly the T waves of the heart cycle as anomalous points. T waves are signs of ventricular repolarization and are known to be highly variable, thus they are often omitted from ECG analysis. This example shows that they can carry relevant information as well.
The already identified gravitational wave GW150914 event was used to demonstrate the ability of our method to find another type of anomaly without prior knowledge about it.
Clearly, specific model-based algorithms (such as matched filter methods58) or the unmodelled algorithms originally used to recognize gravitational waves, such as coherent WaveBurst, omicron-LALInference-Bursts, and BayesWave, are much more sensitive to the actual waveforms generated by the merger of black holes or neutron stars than our TOF method59. The unmodelled methods have only two basic assumptions. First, the gravitational wave background (unlike the ECG signal) is essentially silent, so the detectors measure only Gaussian noise in the absence of an event; thus, any increase in the observed wave power needs to be detected and classified. Second, an increase in the coherent power between the far-apart detectors is the hallmark of candidate events of astrophysical origin: the detectors should observe similar waveforms with a phase difference corresponding to waves traveling between them at the speed of light. In contrast, increased power in only one of the detectors should have a terrestrial origin; such events are called glitches. After the unmodelled detection of candidate waveforms, more specific knowledge about the possible waveforms can be incorporated into the analysis pipeline, such as analyzing the time evolution of the central frequency of the signal, or comparing the waveform to a model database containing simulated waveforms generated by merger events. Model-free methods can detect events with unpredicted waveforms and may help to find glitches. The presence of different types of glitches significantly increases the noise level and decreases the useful data length of the detectors, thus limiting their sensitivity.
In contrast to apnea and gravitational wave detection, the nature of anomalies is much less well known in the economic context. Most anomaly detection methods in this domain concentrate on fraud detection in transaction or network traffic records and utilize clustering techniques to distinguish normal and fraudulent behaviors60.
Whilst LOF showed no specific detection pattern, TOF detected two rising periods on the temporal derivative of the USD LIBOR dataset: one preceding the 2008 crisis and another from 2012 onwards. Both detected periods showed unique dynamics: the large fluctuations were replaced by steady growth during these periods; the dynamics were 'frozen'. Note that the growth rates differ between the two periods. The period between 2005 and 2007 can be considered unique in many ways; not only was there an upswing of the global market, but investigations revealed that several banks colluded in the manipulation and rigging of LIBOR rates in what came to be known as the infamous LIBOR scandal61. Note that this was not the only case in which LIBOR was manipulated: during the economic breakdown in 2008, Barclays Bank submitted artificially low rates to project a healthier appearance62,63,64. As a consequence of these scandals, a significant reorganization of the control of the LIBOR calculation took place, starting from 2012.
To sum up, the gravitational waves of the merging black holes in the filtered dataset formed a traditional outlier that was well detectable by the TOF, LOF, and discord detection algorithms alike, while LIBOR exhibited longer periods of unique events detectable only by TOF. Apnea generated a mixed event on the ECG: the period of irregular breathing formed outliers detectable by LOF, while the period of failed respiration generated a unique event detectable only by TOF. Meanwhile, the top discord was found at the transition between the two states.
Comparing the TOF, LOF, and discord detection algorithms showed that temporal scoring has advantageous properties and adds a new aspect to anomaly detection. One advantage of TOF becomes apparent in threshold selection. Since the TOF score has a time dimension, a given threshold value corresponds to the maximal expected length of the event to be found. Conversely, the neighborhood size parameter k sets the minimal event length. Because of these properties, domain knowledge about plausible event lengths renders threshold selection a simple task.
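In practice, this means detection reduces to flagging the points whose temporal score falls below a value on the order of the expected event length (a hedged sketch; the exact mapping between the expected length M and the threshold used in the paper may differ):

```python
import numpy as np

def detect_unicorns(scores, expected_length):
    """Indices whose temporal score (in time steps) is below the expected event length."""
    return np.where(np.asarray(scores) < expected_length)[0]

# e.g. for the 250 Hz ECG example above, M = 5 s corresponds to 1250 samples:
# unicorn_idx = detect_unicorns(temporal_outlier_score(points, k=11), expected_length=1250)
```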
While TOF and LOF have similar computational complexity (\(O(k n \log (n))\)), the smaller embedding dimensions and neighborhood sizes make TOF computations faster and less memory-intensive. While the brute force discord detection algorithm has \(O(k n^2 \log {n})\) complexity19, the running time of discord detection has been significantly reduced by the SAX approximation19 and later by the DRAG algorithm, which is essentially linear in the length of the time series65. However, our results may indicate that the SAX approximation seriously limited the precision of Senin's algorithm.
To measure the running time empirically, we applied the TOF algorithm to random noise with sample sizes from \(10^2\) to \(10^6\), 15 instances each (\(E=3\), \(\tau =1\), \(k=4\)). The runtime on the longest tested dataset of \(10^6\) points was \(15.144\pm 0.351\) s (Fig. S4) on a laptop powered by an Intel® Core\(^{\mathrm{TM}}\) i5-8265U. The fitted exponent of the scaling was 1.3. Based on these results, we estimated that, if memory issues could be solved, running a unicorn search on the whole 3-month length of the LIGO O1 data downsampled to 4096 Hz would take 124 days on a single CPU (8 threads). A search through one week of ECG data would take 3 hours. As the calculations on the ECG data take much less time than the length of the recording, online processing is feasible as well.
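Assuming the reported power-law scaling (runtime proportional to n^1.3), the quoted extrapolation can be reproduced directly from the measured 10^6-point runtime:

```python
t_ref, n_ref, alpha = 15.144, 1e6, 1.3          # measured runtime (s), size, fitted exponent
n_ligo = 90 * 24 * 3600 * 4096                  # ~3 months of strain data at 4096 Hz
t_days = t_ref * (n_ligo / n_ref) ** alpha / 86400
print(f"estimated runtime: {t_days:.0f} days")  # on the order of the ~124 days quoted above
```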
Time indices of the k nearest neighbors have been utilized before, in different ways, in nonlinear time series analysis: to diagnose nonstationary time series24,33,66, to measure the intrinsic dimensionality of a system's attractor34,35,36, to monitor changes in dynamics37 and even for fault detection38. Rieke et al.33,66 used a statistic closely resembling TOF: the average absolute temporal distance of the k nearest neighbors from each point. However, they analyzed the distribution of temporal distances to determine nonstationarity and did not interpret the resulting distance scores locally. Gao & Hu37 and Martínez-Rego et al.38 used recurrence times to monitor dynamical changes in time series locally, but these statistics are not specialized for detecting extremely rare unique events. TOF utilizes the temporal distances of the k nearest neighbors at each point, thus providing a locally interpretable outlier score, which takes small values when the system visits an undiscovered territory of the state space for a short time period.
The minimal detectable event length might be the strongest limitation of the TOF method. We have shown that the TOF method has a lower bound on the detectable event length (\(\Theta _{min}\)), which depends on the number of neighbors (k) used in the TOF calculations. This means that TOF is not well suited to detect point-outliers, which are easily detectable by many traditional outlier detection methods.
Furthermore, the shorter the analyzed time series and the smaller the k used, the higher the chance that the background random or chaotic dynamics spontaneously produce a unique event. A smaller k results in higher fluctuations of the baseline TOF values, which makes the algorithm prone to false-positive detections.
A further limitation arises from the difficulty of finding optimal parameters for the time delay embedding: the time delay \(\tau\) and the embedding dimension E. Figure S5 shows the sensitivity of the \(F_1\) score to the time delay embedding parameters and the relation between the used and the optimal parameter pairs. This post hoc evaluation, which can be done for simulations but not for real-life data, showed that the general parameter setting (\(E=3\), \(\tau =1\)) used during our tests was suboptimal for the simulated ECG-tachycardia dataset. The optimal parameter settings (\(E=7\), \(\tau =6\)) would have resulted in a maximal \(F_1\) score of 0.94 instead of the 0.83 shown in Table 2.
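Such a post hoc sensitivity analysis amounts to a grid search over the embedding parameters; a hedged sketch, reusing the helpers defined in the earlier snippets and assuming ground-truth anomaly labels are available (i.e. simulated data), is given below:

```python
from sklearn.metrics import f1_score

def embedding_grid_search(x, labels, dims, delays, k, expected_length):
    """Post hoc search for the (E, tau) pair that maximizes the F1 score."""
    best = (None, None, -1.0)
    for dim in dims:
        for delay in delays:
            points = delay_embed(x, dim, delay)
            scores = temporal_outlier_score(points, k=k)
            pred = scores < expected_length          # same detection rule as above
            f1 = f1_score(labels[: len(pred)], pred)
            if f1 > best[2]:
                best = (dim, delay, f1)
    return best

# best_E, best_tau, best_f1 = embedding_grid_search(
#     sim_series, true_labels, dims=range(2, 9), delays=range(1, 8), k=4, expected_length=200)
```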
The model-free nature of these algorithms can be an advantage and a limitation at the same time. Specific detection algorithms, which are purpose-built and exploit a priori knowledge about the target pattern to be detected, can be much more effective than a model-free algorithm. Model-free methods are preferred when the nature of the anomaly is unknown. Consequently, detecting a unicorn tells us that the detected state of the system is unique and differs from all other observed states, but it is often not obvious in what sense; post hoc analysis or domain experts are needed to interpret the results.
Preprocessing can eliminate information from the data series and can thus filter out aspects considered uninteresting. For example, we have seen that a strong global trend in the data can make all points unique. By detrending the data, as done for the random walk and LIBOR datasets, we specified that points should not be considered unique solely on the basis of this feature. Similarly, band-pass filtering of the gravitational wave data specified that states should not be considered unique on the basis of out-of-band waveforms.
A future direction for developing TOF would be to form a model that can represent uncertainty over the detections by defining temporal outlier probabilities, just as Local Outlier Probabilities67 were created from LOF. Moreover, an interesting possibility would be to make TOF applicable to other classes of data, such as multichannel data or point processes like spike trains, network traffic timestamps or earthquake dates.
Chandola, V., Banerjee, A. & Kumar, V. Anomaly detection: A survey. ACM Comput. Surv. 41, 1–58 (2009). http://portal.acm.org/citation.cfm?doid=1541880.1541882.
Blázquez-García, A., Conde, A., Mori, U. & Lozano, J. A. A review on outlier/anomaly detection in time series data. arXiv:2002.04236 (2020).
Shaukat, K. et al. A review of time-series anomaly detection techniques: A step to future perspectives. Adv. Intell. Syst. Comput. 1363 AISC, 865–877 (2021).
Taleb, N. N. The Black Swan: The Impact of the Highly Improbable (2007).
Sornette, D. Dragon-kings, black swans and the prediction of crises. Int. J. Terraspace Sci. Eng. 2, 1–18 (2009) arXiv:0907.4290.
Hodge, V. J. & Austin, J. A survey of outlier detection methodologies. Artif. Intell. Rev. 22, 85–126. https://doi.org/10.1007/s10462-004-4304-y (2004).
Pimentel, M. A. F., Clifton, D. A., Clifton, L. & Tarassenko, L. A review of novelty detection. Signal Process. 99, 215–249. https://doi.org/10.1016/j.sigpro.2013.12.026 (2014).
Chalapathy, R. & Chawla, S. Deep learning for anomaly detection: A survey (2019). arXiv:1901.03407.
Kwon, D. et al. A survey of deep learning-based network anomaly detection. Cluster Comput. 22, 949–961. https://doi.org/10.1007/s10586-017-1117-8 (2019).
Braei, M. & Wagner, S. Anomaly detection in univariate time-series: A survey on the state-of-the-art (2020). arXiv:2004.00433.
Qi, D. & Majda, A. J. Using machine learning to predict extreme events in complex systems. Proc. Natl. Acad. Sci. U.S.A. 117, 52–59 (2020).
Memarzadeh, M., Matthews, B. & Avrekh, I. Unsupervised anomaly detection in flight data using convolutional variational auto-encoder. Aerospace 7, 115 (2020).
Moreno, E. A., Vlimant, J.-R., Spiropulu, M., Borzyszkowski, B. & Pierini, M. Source-agnostic gravitational-wave detection with recurrent autoencoders. arXiv:2107.12698 (2021).
Zhang, M., Guo, J., Li, X. & Jin, R. Data-driven anomaly detection approach for time-series streaming data. Sensors (Switzerland) 20, 1–17 (2020).
Han, K., Li, Y. & Xia, B. A cascade model-aware generative adversarial example detection method. Tsinghua Sci. Technol. 26, 800–812 (2021).
Guezzaz, A., Asimi, Y., Azrour, M. & Asimi, A. Mathematical validation of proposed machine learning classifier for heterogeneous traffic and anomaly detection. Big Data Min. Anal. 4, 18–24 (2021).
Beggel, L., Kausler, B. X., Schiegg, M., Pfeiffer, M. & Bischl, B. Time series anomaly detection based on shapelet learning. Comput. Stat 34, 945–976. https://doi.org/10.1007/s00180-018-0824-9 (2019).
Abbott, B. P. et al. Observation of gravitational waves from a binary black hole merger. Phys. Rev. Lett. 116, 061102 (2016).
Keogh, E., Lin, J. & Fu, A. HOT SAX: Efficiently finding the most unusual time series subsequence. In Proceedings—IEEE International Conference on Data Mining, ICDM (2005).
Senin, P. et al. Time series anomaly discovery with grammar-based compression. In EDBT 2015—18th International Conference on Extending Database Technology, Proceedings 481–492 (2015).
Breunig, M. M., Kriegel, H.-P., Ng, R. T. & Sander, J. LOF: Identifying density-based local outliers. In SIGMOD Record (ACM Special Interest Group on Management of Data) (2000).
Oehmcke, S., Zielinski, O. & Kramer, O. Event detection in marine time series data. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Vol. 9324. 279–286 (2015).
Takens, F. Detecting strange attractors in turbulence. Dyn. Syst. Turbul. Warwick 1980 898, 366–381 (1981).
Kennel, M. B. Statistical test for dynamical nonstationarity in observed time-series data (1997). arXiv:9512005.
Packard, N. H., Crutchfield, J. P., Farmer, J. D. & Shaw, R. S. Geometry from a time series. Phys. Rev. Lett. 45, 712–716. https://doi.org/10.1103/PhysRevLett.45.712 (1980).
Ye, H. et al. Equation-free mechanistic ecosystem forecasting using empirical dynamic modeling. Proc. Natl. Acad. Sci. U.S.A. 112, E1569–E1576 (2015).
Schreiber, T. & Kaplan, D. T. Nonlinear noise reduction for electrocardiograms. Chaos Interdiscip. J. Nonlinear Sci. 6, 87–92. https://doi.org/10.1063/1.166148 (1996).
Hamilton, F., Berry, T. & Sauer, T. Ensemble Kalman filtering without a model. Phys. Rev. X, 6, 011021 (2016).
Sugihara, G. et al. Detecting causality in complex ecosystems. Science (New York, N.Y.) 338, 496–500 (2012).
Benkő, Z. et al. Causal relationship between local field potential and intrinsic optical signal in epileptiform activity in vitro. Sci. Rep. 9, 1–12 (2019).
Selmeczy, G. B. et al. Old sins have long shadows: Climate change weakens efficiency of trophic coupling of phyto- and zooplankton in a deep oligo-mesotrophic lowland lake (Stechlin, Germany)—a causality analysis. Hydrobiologia (2019).
Benkő, Z. et al. Complete inference of causal relations between dynamical systems. 1–43. arXiv:1808.10806 (2018).
Rieke, C. et al. Measuring nonstationarity by analyzing the loss of recurrence in dynamical systems. Phys. Rev. Lett. 88, 4 (2002).
Gao, J. B. Recurrence time statistics for chaotic systems and their applications. Phys. Rev. Lett. 83, 3178–3181 (1999).
Carletti, T. & Galatolo, S. Numerical estimates of local dimension by waiting time and quantitative recurrence. Physica A Stat. Mech. Appl. 364, 120–128 (2006).
Marwan, N., Carmenromano, M., Thiel, M. & Kurths, J. Recurrence plots for the analysis of complex systems. Phys. Rep. 438, 237–329 (2007).
Gao, J. & Hu, J. Fast monitoring of epileptic seizures using recurrence time statistics of electroencephalography. Front. Comput. Neurosci. 7, 1–8 (2013).
Martínez-Rego, D., Fontenla-Romero, O., Alonso-Betanzos, A. & Principe, J. C. Fault detection via recurrence time statistics and one-class classification. Pattern Recogn. Lett. 84, 8–14 (2016).
Bentley, J. L. Multidimensional binary search trees used for associative searching. Commun. ACM 18, 509–517. https://doi.org/10.1145/361002.361007 (1975).
Brown, R. A. Building a balanced \(k\)-d tree in \(O(kn \log n)\) time. J. Comput. Graph. Techn. (JCGT) 4, 50–68 (2015).
Yeh, C. C. M. et al. Matrix profile I: All pairs similarity joins for time series: A unifying view that includes motifs, discords and shapelets. In Proceedings—IEEE International Conference on Data Mining, ICDM (2017).
Senin, P. jmotif. https://github.com/jMotif/jmotif-R (2020).
May, R. M. Simple mathematical models with very complicated dynamics. Nature 261, 459–467. https://doi.org/10.1038/261459a0 (1976).
Ryzhii, E. & Ryzhii, M. A heterogeneous coupled oscillator model for simulation of ECG signals. Comput. Methods Prog. Biomed. 117, 40–49. https://doi.org/10.1016/j.cmpb.2014.04.009 (2014).
Bradley, A. P. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recogn. 30, 1145–1159 (1997).
Senin, P. et al. GrammarViz 2.0: A tool for grammar-based pattern discovery in time series. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (2014).
Ichimaru, Y. & Moody, G. B. Development of the polysomnographic database on CD-ROM. Psychiatry Clin. Neurosci. 53(2), 175–7. https://doi.org/10.1046/j.1440-1819.1999.00527.x (1999).
Goldberger, A. L. et al. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation 101, e215–e220 (2000).
Abbott, R. et al. Open data from the first and second observing runs of advanced LIGO and advanced Virgo (2019). arXiv:1912.11716.
Zevin, M. et al. Gravity spy: Integrating advanced ligo detector characterization, machine learning, and citizen science. Class. Quantum Gravit. 34, 064003. https://doi.org/10.1088/1361-6382/aa5cea (2017).
Sharma, H. & Sharma, K. K. An algorithm for sleep apnea detection from single-lead ECG using Hermite basis functions. Comput. Biol. Med. 77, 116–24. https://doi.org/10.1016/j.compbiomed.2016.08.012 (2016).
Penzel, T. Is heart rate variability the simple solution to diagnose sleep apnoea? Eur Respir J. 22(6), 870–1. https://doi.org/10.1183/09031936.03.00102003 (2003).
Al-Angari, H. M. & Sahakian, A. Use of sample entropy approach to study heart rate variability in obstructive sleep apnea syndrome. IEEE Trans. Biomed. Eng. 54(10), 1900–4. https://doi.org/10.1109/TBME.2006.889772 (2007).
Bock, J. & Gough, D. A. Toward prediction of physiological state signals in sleep apnea. IEEE Trans. Biomed. Eng. 45(11), 1332–41. https://doi.org/10.1109/10.725330 (1998).
Song, C., Liu, K., Zhang, X., Chen, L. & Xian, X. An obstructive sleep apnea detection approach using a discriminative hidden Markov model from ECG signals. IEEE Trans. Biomed. Eng. 63(7), 1532–42. https://doi.org/10.1109/TBME.2015.2498199 (2016).
Penzel, T. et al. Systematic comparison of different algorithms for apnoea detection based on electrocardiogram recordings. Med. Biol. Eng. Comput. 40(4), 402–7. https://doi.org/10.1007/BF02345072 (2002).
Boudaoud, S., Rix, H., Meste, O., Heneghan, C. & O'Brien, C. Corrected integral shape averaging applied to obstructive sleep apnea detection from the electrocardiogram. Eurasip J. Adv. Signal Process. 032570. https://doi.org/10.1155/2007/32570 (2007).
Abbott, B. et al. GW150914: First results from the search for binary black hole coalescence with Advanced LIGO. Phys. Rev. D (2016). https://doi.org/10.1103/PhysRevD.93.122003.
Abbott, B. P. et al. Observing gravitational-wave transient GW150914 with minimal assumptions. Phys. Rev. D (2016). arXiv:1602.03843.
Ahmed, M., Mahmood, A. N. & Islam, M. R. A survey of anomaly detection techniques in financial domain. Future Gen. Comput. Syst. 55, 278–288. https://doi.org/10.1016/j.future.2015.01.001 (2016).
Department of Justice of The United States. Barclays bank PLC admits misconduct related to submissions for the London interbank offered rate and the euro interbank offered rate and agrees to pay \$160 million penalty. https://www.justice.gov/opa/pr/barclays-bank-plc-admits-misconduct-related-submissions-london-interbank-offered-rate-and (2012).
Snider, C. & Youle, T. Diagnosing the libor: Strategic manipulation member portfolio positions. Working paper- faculty.washington.edu (2009).
Snider, C. & Youle, T. Does the libor reflect banks' borrowing costs? Social Science Research Network: SSRN.1569603 (2010).
Snider, C. & Youle, T. The fix is in: Detecting portfolio driven manipulation of the libor. Social Science Research Network: SSRN.2189015 (2012).
Yankov, D., Keogh, E. & Rebbapragada, U. Disk aware discord discovery: Finding unusual time series in terabyte sized datasets. Knowl. Inf. Syst. 17, 241–262. https://doi.org/10.1007/s10115-008-0131-9 (2008).
Rieke, C., Andrzejak, R. G., Mormann, F. & Lehnertz, K. Improved statistical test for nonstationarity using recurrence time statistics. Phys. Rev. E Stat. Phys. Plasmas Fluids Relat. Interdiscip. Top. 69, 9 (2004).
Kriegel, H. P., Kröger, P., Schubert, E. & Zimek, A. LoOP: Local outlier probabilities. In International Conference on Information and Knowledge Management, Proceedings (2009).
We are grateful to prof. Róbert Gábor Kiss MD, PhD for his helpful comments on ECG data sets and to Roberta Rehus for her help on MS preparation. This research has made use of data, software and/or web tools obtained from the Gravitational Wave Open Science Center (https://www.gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration and the Virgo Collaboration. LIGO is funded by the U.S. National Science Foundation. Virgo is funded by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale della Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by Polish and Hungarian institutes. This research was supported by grants from the Hungarian National Research, Development and Innovation Fund NKFIH K 113147, K 135837, the Human Brain Project associative grant CANON, under grant number NN 118902 and the Hungarian National Brain Research Program KTIA NAP 2017-1.2.1-NKP-2017-00002. Authors thank the support of Eötvös Loránd Research Network.
Department of Computational Sciences, Wigner Research Centre for Physics, Budapest, 1121, Hungary
Zsigmond Benkő, Tamás Bábel & Zoltán Somogyvári
János Szentágothai Doctoral School of Neurosciences, Semmelweis University, Ullői road 26, Budapest, 1085, Hungary
Zsigmond Benkő
Tamás Bábel
Zoltán Somogyvári
Z.S. designed the methods, Z.B., T.B. and Z.S. conceived the analysis, ran the simulations and the analysis, wrote and reviewed the manuscript.
Correspondence to Zoltán Somogyvári.
Benkő, Z., Bábel, T. & Somogyvári, Z. Model-free detection of unique events in time series. Sci Rep 12, 227 (2022). https://doi.org/10.1038/s41598-021-03526-y
Accepted: 29 November 2021
Zigzag carbon as efficient and stable oxygen reduction electrocatalyst for proton exchange membrane fuel cells
Longfei Xue ORCID: orcid.org/0000-0003-4577-49771,
Yongcheng Li1,
Xiaofang Liu1,
Qingtao Liu1,
Jiaxiang Shang1,
Huiping Duan1,
Liming Dai2,3 &
Jianglan Shui1
Nature Communications volume 9, Article number: 3819 (2018) Cite this article
Electrocatalysis
Non-precious-metal or metal-free catalysts with high stability are desirable but challenging for proton exchange membrane fuel cells. Here we partially unzip a multiwall carbon nanotube to synthesize zigzag-edged graphene nanoribbons with a carbon nanotube backbone for electrocatalysis of oxygen reduction in proton exchange membrane fuel cells. Zigzag carbon exhibits a peak areal power density of 0.161 W cm−2 and a peak mass power density of 520 W g−1, superior to most non-precious-metal electrocatalysts. Notably, the stability of zigzag carbon is improved in comparison with a representative iron-nitrogen-carbon catalyst in a fuel cell with hydrogen/oxygen gases at 0.5 V. Density functional theory calculations coupled with experiments reveal that a zigzag carbon atom is the most active site for oxygen reduction among several types of carbon defects on graphene nanoribbons in acid electrolyte. This work demonstrates that zigzag carbon is a promising electrocatalyst for low-cost and durable proton exchange membrane fuel cells.
Proton exchange membrane fuel cells (PEMFCs) attract worldwide attention because they can efficiently convert hydrogen energy into electricity. The device uses an acidic solid electrolyte (a sulfonated tetrafluoroethylene-based fluoropolymer-copolymer) and is the most mature type of fuel cell. In the PEMFC, the oxygen reduction reaction (ORR) is the rate-determining step. The noble metal platinum is generally employed as the catalyst, and its high cost has hindered the wide application of PEMFCs. To solve this problem, a variety of non-precious metal catalysts (NPMCs), including nitrogen-coordinated transition metals (M-N-C, M = Fe, Co, etc.)1,2,3,4,5,6,7, and even metal-free catalysts (mainly heteroatom-doped carbon nanomaterials) have been developed8,9,10,11,12. The NPMCs present satisfactory activity and stability during half-cell characterization (a three-electrode system) in both alkaline and acidic electrolytes. However, poor stability in the actual PEMFC device is still one of the biggest challenges for NPMCs, especially for M-N-C electrocatalysts13,14,15. Recently, an N-doped graphene/carbon nanotube (CNT) composite was reported to exhibit stable PEMFC performance, though at relatively low activity, which attracted a great deal of interest in metal-free electrocatalysts for PEMFCs16.
In addition to the heteroatom-doped carbon nanomaterials, recent studies demonstrated that defect-rich carbon could also effectively catalyze the ORR in both alkaline and acidic electrolytes17,18,19,20,21. Thus, graphene nanoribbons (GNRs) could be promising metal-free catalysts owing to their large aspect ratio and the numerous defects along their edges22,23. GNRs with different edge structures can be obtained by unzipping CNTs in different ways24,25,26,27. In particular, Tour et al. created a zigzag-type edge on unzipped multiwall carbon nanotubes (MWCNTs) using H2SO4 and KMnO425,28. This offers an opportunity to study the specific activity of a particular type of carbon defect. In spite of their abundant defect sites, GNRs have not shown any decent ORR activity in the acid half cell. The easy restacking of GNRs could lead to low activity, since most of the GNR edges in the restacked materials are blocked from the reactants, leading to extremely low utilization of the active sites. This blocking effect is more serious in a PEMFC because the catalyst layer in a PEMFC is much thicker than that in a half cell. Furthermore, in contrast to the liquid electrolyte in a half cell, the solid ionomer electrolyte in PEMFCs causes a severe mass transport barrier. So far, the promise of GNRs (or graphitic carbon defects) for electrocatalysis in PEMFCs has yet to be realized.
Here a composite of zigzag-edged GNRs on CNTs (GNR@CNT) is synthesized for use as the cathode catalyst in H2/O2 PEMFCs. The CNT backbone in the middle of each GNR, coupled with a carbon black spacer, effectively exposes the zigzag carbon for the ORR. Compared with previously reported metal-free electrocatalysts16, GNR@CNT delivers an unprecedented peak power density of 520 W g−1 in PEMFCs, even better than that of its N-doped counterpart. More importantly, GNR@CNT presents remarkable stability in PEMFCs. A density functional theory (DFT) calculation reveals that zigzag carbon is indeed the active site on the GNR.
Fabrication and characterization of catalysts
Figure 1 schematically illustrates the partial unzipping of a MWCNT into a zigzag-edged GNR with a CNT backbone. Following the published method25, purified commercial MWCNTs (diameters ~20 nm and lengths of 0.5–2 μm) were partially unzipped by concentrated sulfuric acid and potassium permanganate, forming an oxidized GNR@CNT hybrid (Fig. 1b). As shown in Supplementary Fig. 1a, b, the oxidized GNR has a width of approximately 60 nm and is attached to the CNT backbone. Finally, the oxidized GNR was reduced at high temperature, producing the GNR@CNT composite. According to previous reports25,28, this unzipping method creates zigzag-type carbon along the GNR edges. The GNR@CNT thus obtained was then used as the cathode catalyst in a PEMFC. The rigid CNT backbone could prevent the stacking of GNRs in the catalyst layer, as schematically shown in Fig. 1d. Meanwhile, carbon black was introduced into the catalyst layer to further separate the GNR@CNT, facilitating mass transport in the catalyst layer and ensuring a high utilization of the zigzag carbon defects on the GNR edges16. As a result, GNR@CNT exhibited efficient and stable ORR catalytic activity in PEMFCs, as discussed below. For comparison, three control samples were prepared, including a completely unzipped MWCNT (denoted GNR), obtained using twice the KMnO4 concentration used for GNR@CNT, and the N-doped counterparts (N-GNR@CNT and N-GNR), obtained by annealing the corresponding materials in NH3 at 800 °C29.
Schematic illustration. The synthetic route of zigzag-type graphene nanoribbons on carbon nanotubes (GNR@CNT) from a MWCNT to b partially unzipped oxidized CNT and to c GNR@CNT. d The application as oxygen reduction reaction catalyst in a proton exchange membrane fuel cell (PEMFC). Carbon black XC-72 is used as spacer to prevent the stacking of active materials
Figure 2 and Supplementary Fig. 2 compare the morphology of GNR@CNT, GNR, and their N-doped counterparts. As shown in Fig. 2a and Supplementary Fig. 2a, the partially unzipped CNTs formed ribbon-like graphene nanosheets closely attached to the remaining CNT backbones. NH3 treatment severely corroded the GNR, leaving the CNT skeleton as the dominant material, as observed in Fig. 2c and Supplementary Fig. 2b. Fourier transform infrared (FT-IR) spectra of GNR@CNT and N-GNR@CNT suggested the reduction of the oxidized GNR after Ar and NH3 annealing (Supplementary Fig. 3), showing the removal of oxygen functional groups. Different from the partially unzipped CNTs, the fully unzipped CNTs displayed a typical graphene-like morphology without any residual CNTs (Fig. 2e and Supplementary Fig. 2c). Meanwhile, NH3 corrosion created dense holes with diameters of 5–10 nm on the N-GNR sheets (Fig. 2g and Supplementary Fig. 2d). The surface compositions of GNR@CNT and N-GNR@CNT were analyzed by X-ray photoelectron spectroscopy (XPS). The survey scan did not detect any Ni signal (catalyst for the CNT growth) after the acid pretreatment (Supplementary Fig. 4a). The high-resolution C 1s spectrum in Supplementary Fig. 4b reveals the dominance of sp2 carbon, suggesting a highly graphitic structure of GNR@CNT. The XPS spectra of N-GNR@CNT indicate successful doping with 3.09 at.% N, composed of pyridinic N (398.5 eV), pyrrolic N (399.8 eV), quaternary N (401.2 eV), and intercalated nitrogen molecules and/or oxides (403.5 eV) (Supplementary Fig. 5).
Transmission electron microscopy images and schematic diagrams. a, b Partially unzipping multiwall carbon nanotube (MWCNT) to graphene nanoribbons on carbon nanotube (GNR@CNT), c, d nitrogen-doped GNR@CNT (N-GNR@CNT), e, f totally unzipped MWCNT to graphene nanoribbons (GNR), and g, h nitrogen-doped GNR (N-GNR). Scale bar: 100 nm, and 10 nm for inset micrograph in g
Electrochemical performance evaluation
The electrocatalytic activity of the resultant catalysts was first thoroughly investigated in a half cell. The optimal annealing temperatures of 900 °C for GNR@CNT in Ar and 800 °C for N-GNR@CNT in NH3 were deduced by comparing the linear sweep voltammetry (LSV) curves in Supplementary Figs. 6 and 7, respectively. Catalysts obtained under the optimized conditions were used for subsequent characterization. As shown in the above transmission electron microscopy (TEM) characterizations, ammonia treatment created an abundance of nanoholes, defects, and N-dopants in the GNRs, which can facilitate mass transfer in the stacked catalyst layer and introduce significant pseudocapacitance30,31. Moreover, the CNT backbones also improved the capacitances of GNR@CNT and N-GNR@CNT, compared with GNR and N-GNR, by alleviating the stacking of the GNRs (Supplementary Fig. 8)32,33. Benefitting from the ammonia treatment and the CNT backbones, the N-doped catalysts presented higher ORR activities than the undoped counterparts; GNR@CNT and N-GNR@CNT also exhibited superior ORR activities in comparison with the GNR and N-GNR counterparts in an alkaline electrolyte (Fig. 3a). It is worth noting that GNR@CNT displayed a high onset potential of 0.960 V and a half-wave potential of 0.819 V, only slightly lower than those of N-GNR@CNT (0.990 and 0.839 V) and even close to those of Pt/C (20 wt% Pt, 1.030 and 0.841 V). As shown in Supplementary Fig. 9, all these metal-free catalysts had a low H2O2 yield (<5.9%) and a high electron transfer number (n = 3.87–3.96), indicating a 4e− ORR process in the alkaline electrolyte.
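The electron transfer number and H2O2 yield quoted here are conventionally derived from rotating ring-disk electrode (RRDE) currents; a short sketch of the standard relations is shown below (we assume the standard RRDE formulas apply; the collection efficiency and the example values are instrument-specific and hypothetical):

```python
def rrde_metrics(i_disk, i_ring, collection_efficiency):
    """Standard RRDE relations for ORR selectivity.

    n        : apparent electron transfer number (4 = complete reduction)
    h2o2_pct : fraction of O2 reduced only to peroxide, in percent
    """
    i_r = i_ring / collection_efficiency   # ring current corrected for collection efficiency
    n = 4.0 * i_disk / (i_disk + i_r)
    h2o2_pct = 200.0 * i_r / (i_disk + i_r)
    return n, h2o2_pct

# hypothetical example values (A): n, h2o2 = rrde_metrics(5.2e-3, 1.0e-4, 0.37)
```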
Half-cell characterization of the catalysts. Linear sweep voltammetry curves of graphene nanoribbons on carbon nanotubes (GNR@CNT), N-doped GNR@CNT (N-GNR@CNT), and N-doped graphene nanoribbons (N-GNR) for a oxygen reduction reaction (ORR) activity, b peroxide reduction reaction (PRR) activity with 1.3 or 10 mM H2O2, and c ORR activity at 5, 25, and 35 °C in 0.1 M KOH; d ORR activity, e PRR activity with 1.3 or 10 mM H2O2, and f ORR activity at 5, 25, and 35 °C in 0.5 M H2SO4. Electrolyte was O2-saturated, except for PRR experiments with Ar-saturated electrolyte. Rotating speed: 1600 rpm. Scan rate: 10 mV s−1
To further investigate the ORR mechanism, we carried out peroxide reduction reaction (PRR) tests for GNR@CNT, N-GNR@CNT (Fig. 3b), and Pt/C (Supplementary Fig. 10)34. It is clear that N-GNR@CNT and Pt/C showed high PRR activity, whereas GNR@CNT displayed very low PRR activity at H2O2 concentrations of 1.3 mM (the concentration of saturated O2 in the electrolyte for the normal ORR test) and 10 mM. This suggests that the zigzag carbon did not prefer a "2e− + 2e−" ORR process (O2 → H2O2 → H2O). Therefore, a direct 4e− mechanism should dominate the ORR process of GNR@CNT (zigzag carbon) in the alkaline electrolyte. The temperature dependence of the catalytic activity was investigated for GNR@CNT and N-GNR@CNT in the range of 5–35 °C. As presented in Fig. 3c, the half-wave potential and the limiting current density of GNR@CNT increased with increasing operation temperature. In contrast, N-GNR@CNT was insensitive to the temperature. Additionally, GNR and N-GNR displayed temperature dependences similar to those of GNR@CNT and N-GNR@CNT, respectively (Supplementary Fig. 11), further confirming the distinctly different active sites of GNR@CNT and N-GNR@CNT.
Our carbon catalysts were further characterized in an acid electrolyte, i.e. 0.5 M H2SO4. The capacitance tests in Supplementary Fig. 12 also confirmed the effects of the ammonia treatment and the CNT backbones, as in the alkaline media. As shown in Fig. 3d, GNR@CNT exhibited considerable activity, with an onset potential of 0.760 V and a half-wave potential of 0.633 V, although still lower than Pt/C (20 wt% Pt) in the acid electrolyte35. It is worth noting that both the onset potential and the limiting current of GNR@CNT were higher than those of N-GNR@CNT in acid, which was opposite to the tendency in alkaline media. This phenomenon could be explained by the protonation of the pyridinic nitrogen of N-GNR@CNT in the acid solution. As previously reported, pyridinic nitrogen can adsorb protons in acidic medium, which reduces the affinity of O2 to the active carbon atoms adjacent to the pyridinic N, thus decreasing the ORR activity of N-GNR@CNT36,37. In contrast, GNR@CNT should be tolerant to the protonation reaction. The electron transfer number of GNR@CNT was 3.6–3.9, suggesting the formation of an H2O2 byproduct (Supplementary Fig. 13). Different from the PRR results in the alkaline electrolyte (Fig. 3b), both GNR@CNT and N-GNR@CNT showed quite weak PRR currents (<0.4 mA cm−2), regardless of whether the H2O2 concentration was high or low. This indicates that GNR@CNT and N-GNR@CNT had no PRR activity in the acid electrolyte, which differs from the Pt/C catalyst (Supplementary Fig. 14). Therefore, both GNR@CNT and N-GNR@CNT should undergo a direct 4e− ORR with a negligible "2e− + 2e−" process in acid. The temperature dependence of the electrocatalytic activities of GNR@CNT, N-GNR@CNT, GNR, and N-GNR is illustrated in Fig. 3f and Supplementary Figs. 15-16. All these catalysts displayed increasing ORR activities with increasing temperature from 5 to 35 °C. Since practical PEMFCs are usually operated at several tens of degrees Celsius, this temperature dependence implies a promising future for metal-free electrocatalysts to achieve high PEMFC performance38.
After the half-cell characterization, GNR@CNT, GNR, N-GNR@CNT, and N-GNR were assembled into membrane electrode assemblies (MEAs) for evaluation in an actual PEMFC. In our experience, good separation of the GNRs in the catalyst layer is crucial for good PEMFC performance. Here carbon black (XC-72) was added to the catalyst layer as a spacer to separate the GNRs. GNR@CNT exhibited the best polarization curve and the highest peak power density of 520 W g−1, better than the 430 W g−1 of N-GNR@CNT at a catalyst loading of 0.25 mg cm−2 (Fig. 4a). N-GNR and GNR presented relatively poor polarization curves and low power performance (<200 W g−1) due to the lack of CNT backbones. To the best of our knowledge, the gravimetric peak power density obtained for GNR@CNT in the PEMFC is the highest value among all reported metal-free electrocatalysts and comparable to most NPMCs (Supplementary Table 4). Moreover, we found a strong dependence of fuel cell performance on the catalyst loading. When the catalyst loading was doubled to 0.5 mg cm−2, the power density of GNR@CNT declined markedly (Fig. 4b), due to the worsened oxygen transport in the thick catalyst layer. For N-GNR@CNT, the performance decay was relatively small in comparison with GNR@CNT, which could be ascribed to the nanoholes on N-GNR@CNT (Fig. 2b, c), which facilitate mass transport in the catalyst layer39. When expressed in areal units, GNR@CNT and N-GNR@CNT could reach 161 and 241 mW cm−2, respectively (Supplementary Fig. 17), which are among the top performances of metal-free catalysts16,40. It should be noted that the open circuit voltages of the nanoribbon catalysts, as shown in Supplementary Table 5, were still lower than those of Pt/C and most NPMCs. Finally, GNR@CNT, N-GNR@CNT, and a representative Fe/N/C catalyst were subjected to a stability test in the PEMFC at a constant voltage of 0.5 V with pure H2/O2 as fuel gases. As shown in Fig. 4c, N-GNR@CNT exhibited more stable current performance than Fe/N/C, which is consistent with previous reports16,41. It is interesting to find that GNR@CNT possessed even better PEMFC stability than N-GNR@CNT at 0.5 V. To the best of our knowledge, this is the only zigzag carbon that exhibits both high activity and stability in a PEMFC.
Proton exchange membrane fuel cell evaluation. Polarization and power density curves of graphene nanoribbons on carbon nanotubes (GNR@CNT), N-doped GNR@CNT (N-GNR@CNT), and N-doped graphene nanoribbons (N-GNR) as a function of the areal current density with cathode catalyst loading of a 0.25 mg cm−2 and b 0.50 mg cm−2 in a proton exchange membrane fuel cell (PEMFC); c stability of the indicated catalysts in PEMFC measured at 0.5 V. The absolute current densities before durability tests (at 100%) were 136, 80 and 1216 mA cm−2 for graphene nanoribbons on carbon nanotube (GNR@CNT), nitrogen-doped catalyst (N-GNR@CNT), and reference catalyst iron-nitrogen-carbon (Fe/N/C), respectively. Weight ratio of Nafion/catalyst/carbon black (XC-72) = 5/1/4. Cell: 80 °C; H2/O2: 80 °C, 100% relative humidity, 2 bar back pressure
The observed high PEMFC performance prompted us to identify the active sites of GNR@CNT. Since both defective carbon atoms and oxidized defective carbon atoms are active toward the ORR19,42, and defective carbon atoms can be oxidized after long exposure to air, we prepared a pair of samples with controlled oxidative defects as active sites. GNR@CNT was protected under an Ar atmosphere during synthesis, storage, and ink preparation, while the control sample was exposed to air for 5 days after synthesis (denoted GNR@CNT-5days) to enhance its oxidation43. It has previously been reported that enhanced oxidation of CNTs decreases the sp2 peak and increases the sp3 peak in the C 1s XPS spectrum, accompanied by hydroxyl groups (C–OH) converting to carboxyl groups (O = C–O) and then to CO2 and H2O44,45. The decreased sp2/sp3 ratio and the increased O = C–O/H–O–H groups indicated successful oxidation of GNR@CNT-5days, as shown in Supplementary Fig. 18 and Table 1. In the acid electrolyte, the ORR activity of GNR@CNT-5days was clearly lower than that of GNR@CNT in terms of half-wave potential, current density, electron transfer number, and H2O2 yield, as shown in Supplementary Fig. 19. Hence, the observed high ORR activity of GNR@CNT in the PEMFC should be ascribed mainly to the zigzag carbon rather than to the oxidized zigzag carbon.
Theoretical calculations of free energy
To further identify the active site on GNR in acid electrolyte, DFT was employed to calculate the free energy of the ORR process on representative carbon atoms. Although the zigzag edges of GNR@CNT could be decorated by a small amount of –O (or –OH), the OH–C site can be excluded because it cannot adsorb O2 for the ORR42. Five types of carbon atoms on GNR were therefore considered as possible active sites: (a) a zigzag carbon atom, (b) a carbon atom in the basal plane, (c) a carbon atom near an O-doped zigzag edge, (d) a carbon atom at an armchair edge, and (e) a carbon atom near a void, as shown in Fig. 5. The minimum energy pathways for the ORR on each type of carbon atom were calculated (for computational details see the Supporting Information). As shown in Fig. 5a, the most active site at pH = 0.25 (0.5 M H2SO4) was found to be the zigzag carbon atom (a) (see Supplementary Fig. 20 for the ORR process on zigzag carbon), which possessed the smallest limiting ΔGOOH* of 0.54 eV for the O2* to OOH* step at UNHE = 0.745 V (the onset potential for GNR@CNT in 0.5 M H2SO4; * stands for an active site on the catalyst). The obtained ΔGOOH* is very close to the value reported for pure graphene in 0.1 M KOH46. In contrast, the carbon atom in the GNR basal plane, the carbon atom near the O-doped zigzag edge, and the armchair carbon showed significantly larger free energies for the O2* to OOH* step, with ΔG of 1.78, 1.71, and 1.46 eV, respectively (Fig. 5b–d). For the carbon atom near a void, the rate-limiting ΔG was 1.24 eV for the OH* to H2O step. These large ΔG values hinder the formation of OOH* (the key intermediate) or H2O and thus exclude these sites as ORR active sites at the onset potential.
Theoretical calculations. Models (top) and the corresponding free energy diagrams (bottom) for the circled carbon atoms at electrode potentials UNHE = 0 and 0.745 V in 0.5 M H2SO4 (NHE, normal hydrogen electrode; RHE, reversible hydrogen electrode; UNHE = URHE − 0.0591 × pH, pH = 0.25). a Carbon atom at a zigzag edge, b carbon atom in the basal plane, c carbon atom at an O-doped zigzag edge, d carbon atom at an armchair edge, and e carbon atom near a void
In summary, zigzag-edged GNRs with CNT backbones (GNR@CNT) have been developed by partially unzipping MWCNTs and used as metal-free ORR electrocatalysts for H2/O2 PEMFCs. With the assistance of the CNT backbone and a carbon black spacer, mass transport in the catalyst layer was enhanced and the exposure of zigzag carbon active sites along the GNR edges was increased. Among all carbon-based metal-free electrocatalysts, GNR@CNT achieved, to the best of our knowledge, the highest gravimetric power density of 520 W g−1 in PEMFCs. More importantly, GNR@CNT demonstrated improved stability compared with a Fe/N/C catalyst in PEMFCs. DFT calculations combined with the experimental results revealed that zigzag carbon atoms possess higher electrocatalytic activity toward the ORR in acid electrolyte than oxidized zigzag carbon, basal-plane carbon, armchair-edge carbon, and carbon atoms near a void. This study demonstrates the great potential of defective graphitic carbon for PEMFC applications.
MWCNTs (length: 0.5–2 μm, outer diameter: 10–20 nm) were obtained from Chengdu Organic Chemicals Co. Ltd. (Chengdu). Concentrated sulfuric acid, concentrated hydrochloric acid, potassium permanganate, and hydrogen peroxide were obtained from Sinopharm Chemical Reagent Beijing Co. Ltd. (Beijing). Pt/C (20 wt% Pt) and Pt/C (40 wt% Pt) were purchased from Sigma-Aldrich. Besides CNTs, all of the other materials were of analytical grade and used without further purification.
Preparation of catalysts
To purify the nanotubes (i.e., to remove metal impurities), raw MWCNTs were stirred in concentrated HCl for 24 h. Then 200 mg of purified MWCNTs was placed in a flask and mixed with 36 mL concentrated H2SO4. After 30 min of ultrasonication and 1 h of stirring, 800 mg KMnO4 (1600 mg for complete unzipping) was slowly added to the mixture, which was stirred for another 1 h. The mixture was then heated in an oil bath at 55 °C for 15 min and at 70 °C for 45 min. After cooling to room temperature, it was poured into 100 mL ice water (containing 3 mL 30 wt% H2O2), washed five times in 10 wt% HCl solution, collected by centrifugation at 12,000 rpm, and finally dialyzed for 1 week. The sample solution was freeze-dried and annealed either in an Ar atmosphere at 900 °C for 30 min to obtain the defective carbons GNR@CNT and GNR, or in an NH3 atmosphere at 800 °C for 30 min to obtain the N-doped catalysts N-GNR@CNT and N-GNR.
Fe/N/C was synthesized according to the literature47. Specifically, 100 mg of zeolitic imidazolate framework (ZIF-8) was ball-milled together with 10 mg of tris(1,10-phenanthroline)iron(II) perchlorate for 1 h; the mixture was subsequently heated in Ar at 1000 °C for 1 h and then at 900 °C under NH3 for 15 min.
Electrochemical measurements in half cell
Electrochemical properties were characterized with a rotating ring-disk electrode (model 3A, ALS Co.) in a three-electrode beaker cell equipped with a Pt wire counter electrode and a saturated Ag/AgCl reference electrode. Either 0.1 M KOH or 0.5 M H2SO4 was used as the electrolyte. Before the electrochemical experiments, the Ag/AgCl reference electrode was calibrated, and potentials were converted to the reversible hydrogen electrode (RHE) scale (VRHE = VAg/AgCl + 0.0591 × pH + 0.197 V). The catalyst ink was prepared by dispersing 1 mg of catalyst in Nafion solution (100 μL, 1 mg mL−1). The working electrode was prepared by dropping the catalyst ink (5 μL) onto the glassy carbon disk electrode (4 mm diameter) and drying at room temperature. The catalyst loading was 398 μg cm−2 (99 μg cm−2 for Pt/C, 20 wt% Pt). LSV curves were recorded at a scan rate of 10 mV s−1 in O2-saturated electrolyte (rotation speed: 1600 rpm, scan range: 0.2–1.1 VRHE). PRR polarization curves were recorded at 10 mV s−1 and 1600 rpm in Ar-saturated 0.1 M KOH and 0.5 M H2SO4 containing 1.3 and 10 mM H2O2. ORR and PRR results are presented after subtraction of the capacitive background measured in Ar-saturated 0.1 M KOH or 0.5 M H2SO4.
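As a small aid to reproducing the potential scale used above, the following sketch applies the stated conversion VRHE = VAg/AgCl + 0.0591 × pH + 0.197 V. The example potential and the pH values assumed for the two electrolytes (roughly 13 for 0.1 M KOH, 0.25 for 0.5 M H2SO4, the latter taken from the DFT section) are illustrative assumptions only.

def to_rhe(v_ag_agcl, ph):
    # Convert a potential measured vs. saturated Ag/AgCl to the RHE scale
    return v_ag_agcl + 0.0591 * ph + 0.197

if __name__ == "__main__":
    print("0.30 V vs Ag/AgCl at pH 0.25 ->", round(to_rhe(0.30, 0.25), 3), "V vs RHE")
    print("0.30 V vs Ag/AgCl at pH 13.0 ->", round(to_rhe(0.30, 13.0), 3), "V vs RHE")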
The electron transfer number (n) and OOH− intermediate production percentage (%OOH−) were determined by:
$$n = 4 \times \frac{I_{\mathrm{d}}}{I_{\mathrm{d}} + I_{\mathrm{r}}/N}$$
$$\%\mathrm{OOH}^- = 200 \times \frac{I_{\mathrm{r}}/N}{I_{\mathrm{d}} + I_{\mathrm{r}}/N}$$
where Id is the disk current, Ir is the ring current, and N is the current collection efficiency of the Pt ring, which was determined to be 0.4.
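The two RRDE expressions above translate directly into code. The sketch below computes n and %OOH− from a disk current, a ring current, and the collection efficiency N = 0.4 reported here; the example currents are invented for illustration.

def electron_transfer_number(i_d, i_r, n_collect=0.4):
    # n = 4 * Id / (Id + Ir/N)
    return 4.0 * i_d / (i_d + i_r / n_collect)

def peroxide_yield_percent(i_d, i_r, n_collect=0.4):
    # %OOH- = 200 * (Ir/N) / (Id + Ir/N)
    return 200.0 * (i_r / n_collect) / (i_d + i_r / n_collect)

if __name__ == "__main__":
    i_disk, i_ring = 1.0e-3, 3.0e-5  # hypothetical disk and ring currents in A
    print("n     =", round(electron_transfer_number(i_disk, i_ring), 2))
    print("%OOH- =", round(peroxide_yield_percent(i_disk, i_ring), 1), "%")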
PEMFC test
The synthesized catalyst was used as the cathode for a PEMFC. A typical catalyst ink was prepared by dispersing 1.25 mg catalyst, 5 mg XC-72, and 125 mg Nafion solution (5 wt% Nafion) in deionized water (300 μL) and isopropanol (600 μL) by sonication and stirring. The catalyst ink was coated on 5 cm2 carbon paper at a loading of 0.25 mg cm−2. For the anode, the catalyst was a commercial Pt/C (40 wt% Pt) catalyst with a loading of 0.2 mg Pt per cm2. MEAs were fabricated by hot-pressing the anode, cathode, and a Nafion membrane (model: NRE 211) at 1.5 MPa for 120 s at 130 °C. The performance of the fuel cell was assessed using a Model 850e fuel cell test system (Scribner Associates Inc.) operated at 80 °C. The H2 and O2 flow rates were 0.3 and 0.5 L min−1 at 100% relative humidity and 2 bar back pressure.
Computational methods
The DFT calculations were performed with the Perdew–Burke–Ernzerhof generalized gradient approximation as implemented in the Vienna ab initio simulation package. The projector-augmented wave method was used to describe the ionic potentials. The simulation box measured 9.86 Å × 29.00 Å × 18.00 Å for the zigzag carbon, basal-plane carbon, oxidized zigzag carbon, and carbon near a void, and 8.54 Å × 29.00 Å × 18.00 Å for the armchair carbon. A plane-wave cutoff energy of 500 eV was used throughout. The Brillouin zone was sampled with a 1 × 1 × 1 Γ-centered k-point grid. All calculations were spin-polarized, and the force convergence criterion for atomic relaxation was 0.01 eV Å−1. The ORR can proceed either through a two-step, two-electron pathway that reduces O2 to H2O2 or via a direct four-electron process in which O2 is reduced to H2O without an H2O2 intermediate. Here we study the complete reduction cycle because GNR@CNT shows a transfer number of about 3.7 in the ORR, close to the four-electron process. In an acidic environment, the ORR can be written as:
$$\mathrm{O}_2(\mathrm{g}) + {}^* \to \mathrm{O}_2^*$$
$$\mathrm{O}_2^* + \mathrm{H}^+ + \mathrm{e}^- \to \mathrm{OOH}^*$$
$$\mathrm{OOH}^* + \mathrm{H}^+ + \mathrm{e}^- \to \mathrm{O}^* + \mathrm{H}_2\mathrm{O}(\mathrm{l})$$
$$\mathrm{O}^* + \mathrm{H}^+ + \mathrm{e}^- \to \mathrm{OH}^*$$
$$\mathrm{OH}^* + \mathrm{H}^+ + \mathrm{e}^- \to \mathrm{H}_2\mathrm{O}(\mathrm{l}) + {}^*$$
where * stands for an active site on the graphene surface or at zigzag edge, (l) and (g) refer to liquid and gas phases, respectively, and O*, OH*, and OOH* are adsorbed intermediates.
It has been reported that the ORR rate-determining step can be either the reduction of O2* to OOH* or the reduction of OH* to water48. The overpotential of the ORR can be determined by examining the reaction free energies (ΔG) of the individual elementary steps. For each step, the reaction free energy is defined as the difference between the free energies of the initial and final states and is given by the expression:
$$\Delta G = \Delta E + \Delta \mathrm{ZPE} - T\Delta S + \Delta G_{\mathrm{U}} + \Delta G_{\mathrm{pH}}$$
where ΔE is the reaction energy of the reactant and product molecules adsorbed on the catalyst surface, obtained from the DFT calculations; ΔGU = −eU, where U is the electrode potential and e is the transferred charge; T is the temperature; ΔZPE is the change in zero-point energy computed by density functional perturbation theory; and ΔS is the entropy change upon adsorption obtained from the DFT simulation. ΔGpH (here pH = 0.25) is the correction to the free energy of H+ arising from the concentration dependence of the entropy, where kB is the Boltzmann constant:
$$\Delta G_{\mathrm{pH}} = -k_{\mathrm{B}}T\,\ln\left[\mathrm{H}^+\right]$$
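A minimal numerical sketch of this correction scheme is given below: it assembles ΔG = ΔE + ΔZPE − TΔS + ΔG_U + ΔG_pH for a single electrochemical step, with ΔG_U = −eU per transferred electron and ΔG_pH = kBT ln(10) × pH (equivalent to −kBT ln[H+]). All numerical inputs are placeholders rather than values from the paper.

import math

K_B = 8.617333262e-5  # Boltzmann constant in eV K^-1

def reaction_free_energy(delta_e, delta_zpe, t, delta_s, u, ph, n_electrons=1):
    # dG = dE + dZPE - T*dS + dG_U + dG_pH, in eV
    dg_u = -n_electrons * u                      # -eU per transferred electron
    dg_ph = K_B * t * math.log(10.0) * ph        # equals -kB*T*ln[H+]
    return delta_e + delta_zpe - t * delta_s + dg_u + dg_ph

if __name__ == "__main__":
    # hypothetical step evaluated at the onset potential 0.745 V and pH 0.25
    dg = reaction_free_energy(delta_e=-0.10, delta_zpe=0.05, t=298.15,
                              delta_s=-1.0e-4, u=0.745, ph=0.25)
    print("dG =", round(dg, 2), "eV")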
Physical characterization
The morphology and structure were characterized by TEM (JEM-2100F, Japan). XPS was performed on an ESCALAB 250Xi (Thermo Fisher Scientific Company, US) using a monochromated Al Kα source and a pass energy of 50 eV at a base pressure of 1 × 10−8 mbar. FT-IR spectra were recorded using a Nicolet iN10 MX (Thermo Fisher Scientific Company).
Chung, H. T. et al. Direct atomic-level insight into the active sites of a high-performance PGM-free ORR catalyst. Science 357, 479–484 (2017).
Zhu, Y. P., Guo, C., Zheng, Y. & Qiao, S. Z. Surface and interface engineering of noble-metal-free electrocatalysts for efficient energy conversion processes. Acc. Chem. Res. 50, 915–923 (2017).
Yan, D. et al. Defect chemistry of nonprecious-metal electrocatalysts for oxygen reactions. Adv. Mater. 29, 1606459 (2017).
Liu, Q., Liu, X., Zheng, L. & Shui, J. The solid-phase synthesis of an Fe-N-C electrocatalyst for high-power proton-exchange membrane fuel cells. Angew. Chem. Int. Ed. 57, 1204–1208 (2018).
Zhang, H. et al. Single atomic iron catalysts for oxygen reduction in acidic media: particle size control and thermal activation. J. Am. Chem. Soc. 139, 14143–14149 (2017).
Yang, L., Zeng, X., Wang, W. & Cao, D. Recent progress in MOF-derived, heteroatom-doped porous carbons as highly efficient electrocatalysts for oxygen reduction reaction in fuel cells. Adv. Funct. Mater. 28, 1704537 (2017).
Fu, X. et al. In situ polymer graphenization ingrained with nanoporosity in a nitrogenous electrocatalyst boosting the performance of polymer-electrolyte-membrane fuel cells. Adv. Mater. 29, 1604456 (2017).
Guo, D. et al. Active sites of nitrogen-doped carbon materials for oxygen reduction reaction clarified using model catalysts. Science 351, 361–365 (2016).
Yang, H. B. et al. Identification of catalytic sites for oxygen reduction and oxygen evolution in N-doped graphene materials: development of highly efficient metal-free bifunctional electrocatalyst. Sci. Adv. 2, e1501122 (2016).
Li, J. C., Hou, P. X. & Liu, C. Heteroatom-doped carbon nanotube and graphene-based electrocatalysts for oxygen reduction reaction. Small 13, 1702002 (2017).
Ricke, N. D. et al. Molecular-level insights into oxygen reduction catalysis by graphite-conjugated active sites. ACS Catal. 7, 7680–7687 (2017).
Sinthika, S., Waghmare, U. V. & Thapa, R. Structural and electronic descriptors of catalytic activity of graphene-based materials: first-principles theoretical analysis. Small 14, 1703609 (2018).
Zeng, X. et al. Single-atom to single-atom grafting of Pt1 onto Fe-N4 center: Pt1@Fe-N-C multifunctional electrocatalyst with significantly enhanced properties. Adv. Energy Mater. 8, 1701345 (2018).
Chenitz, R. et al. A specific demetalation of Fe–N4 catalytic sites in the micropores of NC_Ar+NH3 is at the origin of the initial activity loss of the highly active Fe/N/C catalyst used for the reduction of oxygen in PEM fuel cells. Energy Environ. Sci. 11, 365–382 (2018).
Armel, V. et al. Structural descriptors of zeolitic-imidazolate frameworks are keys to the activity of Fe-N-C catalysts. J. Am. Chem. Soc. 139, 453–464 (2017).
Shui, J., Wang, M., Du, F. & Dai, L. N-doped carbon nanomaterials are durable catalysts for oxygen reduction reaction in acidic fuel cells. Sci. Adv. 1, e1400129 (2015).
Shen, A. et al. Oxygen reduction reaction in a droplet on graphite: direct evidence that the edge is more active than the basal plane. Angew. Chem. Int. Ed. 53, 10804–10808 (2014).
Waki, K. et al. Non-nitrogen doped and non-metal oxygen reduction electrocatalysts based on carbon nanotubes: mechanism and origin of ORR activity. Energy Environ. Sci. 7, 1950–1958 (2014).
Jiang, Y. et al. Significant contribution of intrinsic carbon defects to oxygen reduction activity. ACS Catal. 5, 6707–6712 (2015).
Benzigar, M. R. et al. Highly crystalline mesoporous C60 with ordered pores: a class of nanomaterials for energy applications. Angew. Chem. Int. Ed. 57, 569–573 (2018).
Seredych, M., Szczurek, A., Fierro, V., Celzard, A. & Bandosz, T. J. Electrochemical reduction of oxygen on hydrophobic ultramicroporous polyHIPE carbon. ACS Catal. 6, 5618–5628 (2016).
Zhao, Y. et al. Graphitic carbon nitride nanoribbons: graphene-assisted formation and synergic function for highly efficient hydrogen evolution. Angew. Chem. Int. Ed. 53, 13934–13939 (2014).
Tang, C. & Zhang, Q. Nanocarbon for oxygen reduction electrocatalysis: dopants, edges, and defects. Adv. Mater. 29, 1604103 (2017).
Jiao, L., Zhang, L., Wang, X., Diankov, G. & Dai, H. Narrow graphene nanoribbons from carbon nanotubes. Nature 458, 877–880 (2009).
Kosynkin, D. V. et al. Longitudinal unzipping of carbon nanotubes to form graphene nanoribbons. Nature 458, 872–876 (2009).
Kim, K., Sussman, A. & Zettl, A. Graphene nanoribbons obtained by electrically unwrapping carbon nanotubes. ACS Nano 4, 1362–1366 (2010).
Elias, A. L. et al. Longitudinal cutting of pure and doped carbon nanotubes to form graphitic nanoribbons using metal clusters as nanoscalpels. Nano Lett. 10, 366–372 (2010).
Rangel, N. L., Sotelo, J. C. & Seminario, J. M. Mechanism of carbon nanotubes unzipping into graphene ribbons. J. Chem. Phys. 131, 031105 (2009).
Li, X. et al. Simultaneous nitrogen doping and reduction of graphene oxide. J. Am. Chem. Soc. 131, 15939–15944 (2009).
Jeong, H. M. et al. Nitrogen-doped graphene for high-performance ultracapacitors and the importance of nitrogen-doped sites at basal planes. Nano Lett. 11, 2472–2477 (2011).
Xu, Y. et al. Holey graphene frameworks for highly efficient capacitive energy storage. Nat. Commun. 5, 4554 (2014).
Lin, L.-Y. et al. A novel core–shell multi-walled carbon nanotube@graphene oxide nanoribbon heterostructure as a potential supercapacitor material. J. Mater. Chem. A 1, 11237–11245 (2013).
Wang, J. et al. Nitrogen and sulfur co-doping of partially exfoliated MWCNTs as 3-D structured electrocatalysts for the oxygen reduction reaction. J. Mater. Chem. A 4, 5678–5684 (2016).
Choi, C. H. et al. Unraveling the nature of sites active toward hydrogen peroxide reduction in Fe-N-C catalysts. Angew. Chem. Int. Ed. 56, 8809–8812 (2017).
Zhang, J., Zhao, Z., Xia, Z. & Dai, L. A metal-free bifunctional electrocatalyst for oxygen reduction and oxygen evolution reactions. Nat. Nanotechnol. 10, 444–452 (2015).
Liu, G., Li, X. G. & Popov, B. N. Stability study of nitrogen-modified carbon composite catalysts for oxygen reduction reaction in polymer electrolyte membrane fuel cells. ECS Trans. 25, 1251–1259 (2009).
Rauf, M. et al. Insight into the different ORR catalytic activity of Fe/N/C between acidic and alkaline media: protonation of pyridinic nitrogen. Electrochem. Commun. 73, 71–74 (2016).
Fleige, M., Holst-Olesen, K., Wiberg, G. K. H. & Arenz, M. Evaluation of temperature and electrolyte concentration dependent oxygen solubility and diffusivity in phosphoric acid. Electrochim. Acta 209, 399–406 (2016).
Shui, J. L., Chen, C., Grabstanowicz, L., Zhao, D. & Liu, D. J. Highly efficient nonprecious metal catalyst prepared with metal-organic framework in a continuous carbon nanofibrous network. Proc. Natl Acad. Sci. USA 112, 10629–10634 (2015).
Ding, W. et al. Space-confinement-induced synthesis of pyridinic- and pyrrolic-nitrogen-doped graphene for the catalysis of oxygen reduction. Angew. Chem. Int. Ed. 52, 11755–11759 (2013).
Geng, D. et al. High oxygen-reduction activity and durability of nitrogen-doped graphene. Energy Environ. Sci. 4, 760–764 (2011).
Liu, Z. et al. In situ exfoliated, edge-rich, oxygen-functionalized graphene from carbon fibers for oxygen electrocatalysis. Adv. Mater. 29, 16006207 (2017).
Miwa, R. H., Veiga, R. G. A. & Srivastava, G. P. Structural, electronic, and magnetic properties of pristine and oxygen-adsorbed graphene nanoribbons. Appl. Surf. Sci. 256, 5776–5782 (2010).
Larciprete, R., Gardonio, S., Petaccia, L. & Lizzit, S. Atomic oxygen functionalization of double walled C nanotubes. Carbon N. Y. 47, 2579–2589 (2009).
Wang, W.-H., Huang, B.-C., Wang, L.-S. & Ye, D.-Q. Oxidative treatment of multi-wall carbon nanotubes with oxygen dielectric barrier discharge plasma. Surf. Coat. Technol. 205, 4896–4901 (2011).
Jiao, Y., Zheng, Y., Jaroniec, M. & Qiao, S. Z. Origin of the electrocatalytic oxygen reduction activity of graphene-based catalysts: a roadmap to achieve the best performance. J. Am. Chem. Soc. 136, 4394–4403 (2014).
Proietti, E. et al. Iron-based cathode catalyst with enhanced power density in polymer electrolyte membrane fuel cells. Nat. Commun. 2, 416 (2011).
Calle-Vallejo, F. & Koper, M. T. M. First-principles computational electrochemistry: achievements and challenges. Electrochim. Acta 84, 3–11 (2012).
This work was supported by the National Thousand Talents Plan of China, the National Natural Science Foundation of China (Grant Nos. 21673014 and 51732002), the 111 project (B17002) funded by the Ministry of Education of China and the Fundamental Research Funds for the Central Universities of China.
School of Materials Science and Engineering, Beihang University, No. 37 Xueyuan Road, Beijing, 100083, China
Longfei Xue, Yongcheng Li, Xiaofang Liu, Qingtao Liu, Jiaxiang Shang, Huiping Duan & Jianglan Shui
Department of Macromolecular Science and Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
Liming Dai
UNSW-CWRU International Joint Laboratory, School of Chemical Engineering, University of New South Wales, Sydney, NSW, 2051, Australia
L.X. and Q.L. synthesized the catalysts, prepared the fuel cell test materials, conducted the fuel cell tests, and processed the data. X.L. and H.D. performed the physical characterizations (XPS, FT-IR, and TEM). Y.L. and J. Shang contributed the theoretical calculations. X.L., J. Shui, and L.D. wrote and edited the manuscript. All authors contributed to discussions of the results and the manuscript. The project was conceived and supervised by J. Shui.
Correspondence to Huiping Duan or Liming Dai or Jianglan Shui.
The authors declare no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Xue, L., Li, Y., Liu, X. et al. Zigzag carbon as efficient and stable oxygen reduction electrocatalyst for proton exchange membrane fuel cells. Nat Commun 9, 3819 (2018). https://doi.org/10.1038/s41467-018-06279-x
Accepted: 27 August 2018
DOI: https://doi.org/10.1038/s41467-018-06279-x
|
CommonCrawl
|
doi: 10.3934/jdg.2020033
A Mean Field Games model for finite mixtures of Bernoulli and categorical distributions
Laura Aquilanti 1, Simone Cacace 2, Fabio Camilli 1,*, and Raul De Maio 3
SBAI, Sapienza Università di Roma, Via A. Scarpa 16, 00161 Roma, Italy
Dip. di Matematica e Fisica, Università degli Studi Roma Tre, Largo S. L. Murialdo 1, 00146 Roma, Italy
IConsulting, Via della Conciliazione 10, 00193 Roma, Italy
* Corresponding author: Fabio Camilli
Received May 2020 Revised November 2020 Published December 2020
Finite mixture models are an important tool in the statistical analysis of data, for example in data clustering. The optimal parameters of a mixture model are usually computed by maximizing the log-likelihood functional via the Expectation-Maximization algorithm. We propose an alternative approach based on the theory of Mean Field Games, a class of differential games with an infinite number of agents. We show that the solution of a finite state space multi-population Mean Field Games system characterizes the critical points of the log-likelihood functional for a Bernoulli mixture. The approach is then generalized to mixture models of categorical distributions. Hence, the Mean Field Games approach provides a method to compute the parameters of the mixture model, and we show its application to some standard examples in cluster analysis.
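As a point of reference for the log-likelihood maximization mentioned in the abstract, the sketch below implements the standard Expectation-Maximization algorithm for a K-component Bernoulli mixture on binary data. It is the classical baseline, not the Mean Field Games scheme proposed in the article, and the initialization and iteration count are arbitrary choices.

import numpy as np

def em_bernoulli_mixture(x, k, n_iter=100, seed=0, eps=1e-9):
    # x: (n_samples, n_features) binary array; k: number of mixture components
    rng = np.random.default_rng(seed)
    n, d = x.shape
    weights = np.full(k, 1.0 / k)                 # mixing proportions
    probs = rng.uniform(0.25, 0.75, size=(k, d))  # Bernoulli parameters

    for _ in range(n_iter):
        # E-step: responsibilities (posterior over components), computed in log space
        log_lik = (x @ np.log(probs + eps).T
                   + (1 - x) @ np.log(1 - probs + eps).T
                   + np.log(weights + eps))
        log_lik -= log_lik.max(axis=1, keepdims=True)  # numerical stability
        resp = np.exp(log_lik)
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: re-estimate mixing proportions and Bernoulli parameters
        nk = resp.sum(axis=0) + eps
        weights = nk / n
        probs = (resp.T @ x) / nk[:, None]

    return weights, probs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = np.vstack([rng.binomial(1, 0.9, size=(50, 10)),
                   rng.binomial(1, 0.1, size=(50, 10))])
    w, p = em_bernoulli_mixture(x, k=2)
    print("mixing weights:", np.round(w, 2))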
Keywords: Mixture models, Bernoulli distribution, categorical distribution, cluster analysis, Expectation-Maximization algorithm, Mean Field Games.
Mathematics Subject Classification: 62H30, 60J10, 49N80, 91C20.
Citation: Laura Aquilanti, Simone Cacace, Fabio Camilli, Raul De Maio. A Mean Field Games model for finite mixtures of Bernoulli and categorical distributions. Journal of Dynamics & Games, doi: 10.3934/jdg.2020033
Figure 1. Samples of hand-written digits from the MNIST database
Figure 2. Different samples of hand-written digits from the MNIST database
Figure 3. Clusterization histogram for digits $ \mathbf{1},\mathbf{3} $ and the corresponding Bernoulli parameters
Figure 5. Clusterization histogram for even digits and the corresponding Bernoulli parameters
Figure 6. Samples of fashion products from the Fashion-MNIST database
Figure 7. Averaged categorical distributions for the Fashion-MNIST database
Figure 8. Clusterization histogram for types T-shirt, Trouser and the corresponding categorical parameters
Figure 9. Clusterization histogram for types Dress, Sneaker, Bag, Boot and the corresponding categorical parameters
|
CommonCrawl
|
<< Tuesday, November 26, 2019 >>
Computer Health Matters: User Friendly Workstations (BEUHS400)
Workshop | November 26 | 9-10 a.m. | Tang Center, University Health Services, Class of '42
Speaker: Greg Ryan, Be Well at Work - Ergonomics
Sponsor: Be Well at Work - Ergonomics
Learn how to set up a user-friendly workstation and practice stretches to help relieve computer-related aches and pains. This workshop is required to qualify for computer ergonomics matching funds.
BPM 201 Employee Engagement
Workshop | November 26 | 9 a.m.-12:30 p.m. | #24 University Hall
Sponsor: Human Resources
This workshop is for UC Berkeley Staff. The content covers an overview of employee engagement, new employee onboarding, the use of ongoing assessments of engagement, creation of an engagement action plan, and communicating the engagement strategy.
Being Communist, Being Other - Seminar
Seminar | November 26 | 10 a.m.-12 p.m. | 3401 Dwinelle Hall
Speaker/Performer: Etienne Balibar, Anniversary Chair Professor at the Center for Research in Modern European Philosophy (CRMEP), Kingston University and Visiting Professor, Department of French and Romance Philology, Columbia University
Sponsor: The Program in Critical Theory
Advance registration for the seminar is required. To register, please contact [email protected].
Etienne Balibar, in discussion with Zeynep Gambetti (Boğazici University, Turkey) and Jacques Lezra (UC Riverside).
Balibar will reflect on his relationship to reading Marx, starting with Reading Capital, his early work co-written with Louis Althusser. He will seek to... More >
Keyboards and Mice: Ergonomic Alternatives (BEUHS401)
Workshop | November 26 | 10:10-11 a.m. | Tang Center, University Health Services, Class of '42
Speaker/Performer: Greg Ryan, Campus Ergonomist, Be well at Work - Ergonomics
Learn about the ergonomics of keyboards and pointing devices, including appropriate workstation set-up, postures, and techniques for using them. Find out about the keyboards and pointing devices covered by the Computer Ergonomics Matching Funds Program. Enroll online at the UC Learning Center.
Student Faculty Macro Lunch - NO MEETING
Presentation | November 26 | 12-1 p.m. | 597 Evans Hall | Canceled
Sponsor: Clausen Center
Space Physics Seminar
Seminar | September 17 – December 3, 2019 every Tuesday | 1-2 p.m. | 325 LeConte Hall
Sponsor: Space Sciences Laboratory (SSL)
Seminar 237, Macro: No Seminar
Seminar | November 26 | 2-3:30 p.m. | 597 Evans Hall | Canceled
Sponsor: Department of Economics
Hai Wang — Multi-Objective Online Ride-Matching
Seminar | November 26 | 3:30-4:30 p.m. | 405 Soda Hall
Speaker/Performer: Hai Wang, Carnegie Mellon University
Sponsor: Industrial Engineering & Operations Research
Abstract: We propose a general framework to study the on-demand shared ride-sourcing transportation systems, and focus on the multi-objective matching between demand and supply. The platforms match passengers and drivers in real time without observing future information, considering multiple objectives such as pick-up time, platform revenue, and service quality. We develop an efficient online... More >
Harmonic Analysis and Differential Equations Student Seminar: Small data global regularity for simplified 3-D Ericksen-Leslie's compressible hyperbolic liquid crystal model
Seminar | November 26 | 3:40-5 p.m. | 740 Evans Hall
Speaker: Jiaxi Huang, USTC
Sponsor: Department of Mathematics
In this talk, we will consider the Ericksen-Leslie hyperbolic system for the compressible liquid crystal model in three spatial dimensions. Global regularity for small and smooth initial data near equilibrium is proved for the case that the system is a nonlinear coupling of compressible Navier-Stokes equations with a wave map to $\mathbb{S}^2$. Our argument is a combination of vector field method... More >
Commutative Algebra and Algebraic Geometry: The Fellowship of the Ring: Finding local summands
Seminar | November 26 | 3:45-4:45 p.m. | 939 Evans Hall
Speaker: Mengyuan Zhang, UC Berkeley
The theory of basic elements developed by Eisenbud-Evans is concerned with finding local free summands of a module. A modification of the arguments by Bruns allows one to find local free summands up to a given codimension (or depth). In this expository talk, we discuss this problem in the graded case, where the degrees of the free local summands give extra structure not present in the affine... More >
Effect of surfaces and osmolytes in modulating peptide assembly
Seminar | November 26 | 4-5 p.m. | 120 Latimer Hall
Featured Speaker: Joan-Emma Shea, Department of Chemistry and Biochemistry, UC Santa Barbara
Sponsor: College of Chemistry
Intrinsically disordered peptides are a special class of proteins that do not fold to a unique three-dimensional shape. These proteins play important roles in the cell, from signaling to serving as structural scaffolds. Under pathological conditions, these proteins are capable of self-assembling into structures that are toxic to the cell, and a number of neurodegenerative diseases, such as... More >
Commutative Algebra and Algebraic Geometry: The Fellowship of the Ring: Moduli Spaces of Toric Vector Bundles
Seminar | November 26 | 5-6 p.m. | 939 Evans Hall
Speaker: Lauren Heller, UC Berkeley
I will discuss Klyachko's classification of toric vector bundles with given equivariant Chern class, as well as the application of this classification to the construction of moduli spaces, as described by Sam Payne. As an example, I will illustrate the possibilities for rank 2 vector bundles on $\mathbb P^1 \times \mathbb P^1$.
|
CommonCrawl
|
Has the notion of "non-cheating" quines been formalised?
Informally, it is often said that some quines "cheat", such as those that read their own source code from memory, or the case where a program consisting of the empty string happens to output the empty string.
Quines that are considered to be "non-cheating" always seem to have the same structure. There is some data structure (usually, but not always, a string) which contains some representation of the source code. This structure is read twice, once to reformat it into the code part of the final output, and a second time where it's copied verbatim to reproduce the data part. For example, the following Python example (which I just invented) has this structure:
x = """x = {0}{1}{0}
print x.format(chr(34)*3,x)"""
print x.format(chr(34)*3,x)
The string 'x' is referred to twice in the last line, once to reformat it, and a second time to paste it verbatim into the reformatted string.
My question is whether this notion of "cheating" or "non-cheating" quines has been, or can be, formalised. That is to say, if we are given a formal specification of a language and its semantics, and we are also given an example of a quine in that language, is there a well-defined way to say, in principle, whether that quine is "cheating" or not? Or is this notion of "cheating" inherently too vague to admit a formal definition?
If this notion can be formally defined, is it possible to have a language that only admits "non-cheating" quines, and is an example of such a language known?
formal-languages
Nathaniel
$\begingroup$ I wonder if a quines tag would be useful? If it would, could someone with the necessary rep create it and apply it to my question? $\endgroup$ – Nathaniel Aug 20 '16 at 8:12
Most formal semantics for programing languages tend to keep things simple, in particular, it's pretty rare to have the ability to express "grabbing your own source code" in a formal semantics.
This has the somewhat dubious advantage of disallowing some kinds of "cheating" by construction: the "cheat" is simply not part of the language semantics. In particular, a simple lambda calculus with simple string concatenation and printing primitives cannot implement many (any?) of the "cheats" you have in mind. In such a language, a quine might be a program which prints the exact string representation of its own code. You can even avoid the print primitive if you allow returning the string rather than printing it. In that case, you only need string literals and concatenation.
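In the spirit of that restricted setting, the two-line Python 3 program below follows the "read the data twice" pattern from the question: a single string literal is formatted into the output once as code and once, via its repr, as data, without any access to the program's file or memory. It is offered purely as an illustration rather than a formal criterion, and it deliberately carries no comments, since any extra text would also have to be reproduced in the output.

s = 's = {0!r}\nprint(s.format(s))'
print(s.format(s))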
It's a useful exercise to figure out various properties of quines in such a language. It's pretty easy to see that some variable will need to be repeated twice: there are no "linear" quines. However the statement
This structure is read twice, once to reformat it into the code part of the final output, and a second time where it's copied verbatim to reproduce the data part.
is a bit vague, and I'm not sure it's true in general, depending on the precise meaning of "copied verbatim".
cody
$\begingroup$ I can think of several ways that "cheating" could exist in such a language. In particular, depending on the details of the formalism, I would expect the empty string to output the empty string. $\endgroup$ – Nathaniel Aug 22 '16 at 0:30
$\begingroup$ I'm aware that the concept I'm getting at is vague, but then, if it were precise I would have no need to ask a question about how to formalise it. To my mind, my second question (can we define a language with no cheating quines?) can only be answered after my first question (can we turn this vague concept of 'cheating' into a formal one?). $\endgroup$ – Nathaniel Aug 22 '16 at 0:35
$\begingroup$ I think you can exclude the first case by definition: a quine needs to be a function definition $q:\mathrm{String}\rightarrow\mathrm{String}$ that prints its own code. This prevents the "empty" loophole. For the "reads its own code" loophole, I think this simply isn't possible in the simple language I outlined. Of course, other forms of cheating might exist (depending on the judge), but I feel this excludes the ones mentioned at least. $\endgroup$ – cody Aug 27 '16 at 1:52
|
CommonCrawl
|
Virtual pathway explorer (viPEr) and pathway enrichment analysis tool (PEANuT): creating and analyzing focus networks to identify cross-talk between molecules and pathways
Marius Garmhausen1,
Falko Hofmann2,
Viktor Senderov4,5,
Maria Thomas3,
Benjamin A. Kandel3,6 &
Bianca Hermine Habermann4
Interpreting large-scale studies from microarrays or next-generation sequencing for further experimental testing remains one of the major challenges in quantitative biology. Combining expression with physical or genetic interaction data has already been successfully applied to enhance knowledge from all types of high-throughput studies. Yet, toolboxes for navigating and understanding even small gene or protein networks are poorly developed.
We introduce two Cytoscape plug-ins, which support the generation and interpretation of experiment-based interaction networks. The virtual pathway explorer viPEr creates so-called focus networks by joining a list of experimentally determined genes with the interactome of a specific organism. viPEr calculates all paths between two or more user-selected nodes, or explores the neighborhood of a single selected node. Numerical values from expression studies assigned to the nodes serve to score identified paths. The pathway enrichment analysis tool PEANuT annotates networks with pathway information from various sources and calculates enriched pathways between a focus and a background network. Using time-series expression data of atorvastatin-treated primary hepatocytes from six patients, we demonstrate the handling and applicability of viPEr and PEANuT. Based on our investigations using viPEr and PEANuT, we suggest a role of the FoxA1/A2/A3 transcriptional network in the cellular response to atorvastatin treatment. Moreover, we find an enrichment of metabolic and cancer pathways in the Fox transcriptional network and demonstrate a patient-specific reaction to the drug.
The Cytoscape plug-in viPEr integrates –omics data with interactome data. It supports the interpretation and navigation of large-scale datasets by creating focus networks, facilitating mechanistic predictions from –omics studies. PEANuT provides an up-front method to identify underlying biological principles by calculating enriched pathways in focus networks.
The integration and biological interpretation of large-scale datasets is currently one of the main challenges in bioinformatics research. How can we extract meaningful information from a list of differentially regulated genes? One possibility to understand how (co-)regulated genes relate to each other is to view them in the context of their physical, genetic or regulatory interactions: network-based analysis using data from protein-protein or regulatory interactions can open new perspectives for further experimental studies.
Quantitative values from a functional screen or a list of mutated genes identified in a cancer genomics study can be used to generate sub-networks from a large, biological interaction network. These sub- or focus networks can be termed 'disease' or 'state' networks, as they describe the modules in the cell or the organism, which are affected by a certain experimental condition or by a particular disease. This approach has for instance been employed by software like the database and web-tool String [1] or the command-line based tool Netbox [2].
Focus networks can also be created based on a specific biological question: how are two specific proteins - or two groups of proteins - connected with each other? This approach allows an even more biologically focused view on the changes in the cellular network under different conditions.
Focus networks allow us moreover to understand the cross-talk between two molecules or pathways, which in this context is defined by all paths between two proteins or two groups of proteins.
Typically, some form of shortest-path algorithm like Dijkstra's algorithm [3] is used to create sub-networks between two or more nodes. The numeric values from functional genomics studies are used to score paths between two nodes. Methods like Pathfinder [4] or the Reactome browser [5] have implemented this functionality of connecting two molecules with each other within a biological network. Both tools use numeric values also to visualize regulatory changes that take place during state changes of the cell/organism under study.
Focus networks can be further enriched using Gene Ontology (GO) terms [6] or pathways from different sources to provide additional functional information for data interpretation. GO biological processes can also be used to explore cross-connections between two or more pathways and find missing pathway components. This provides a more integrative view of a biological network.
The drug family of the statins is currently widely used to lower cholesterol levels in the treatment of hypercholesterolemia. Statins, which act as HMG-CoA (3-hydroxy-3-methylglutaryl–coenzyme A) reductase (HMGCR) inhibitors, prevent the production of cholesterol by inhibiting the biosynthesis of isoprenoids and sterols in the mevalonate pathway [7]. However, statins are known to have a variety of side effects, including muscle adverse effects, liver damage, cognitive impairment, cancer progression or diabetes mellitus [8–11]. Functional genomics studies of statin-treated cell systems indicate extensive changes of expression levels upon drug treatment (see for instance [12–20]). The detailed analysis of these transcriptional changes should therefore lead to a better understanding of the functions and pleiotropic effects of statins.
In this study, we re-analyzed the time-course expression data from atorvastatin-treated, primary human hepatocytes from six different patients published in a previous study [20]. We focused our analysis on determining the regulation of downstream genes from statin drug targets as defined in STITCH [21]. We were especially interested in addressing the following issues: 1) How do statin targets and differentially regulated genes relate to each other? 2) Which pathways are affected upon statin treatment? 3) How does the dynamics of the neighborhood of specific proteins change after statin treatment?
In order to answer those questions, we have developed two Cytoscape plug-ins that work together: viPEr, the virtual Pathway Explorer, creates focus interaction networks by connecting two or more nodes with each other. It applies user-provided expression data to score paths between two nodes and thus limits the network to functionally relevant paths. The Cytoscape plug-in PEANuT (Pathway Enrichment ANalysis Tool) upgrades interaction networks with pathway information and identifies enriched pathways in focus networks.
We have applied our toolbox to re-analyze the expression data from atorvastatin-treated, primary human hepatocytes and found that the transcription factors FOXA1, 2 and 3 are important regulatory players in atorvastatin response.
viPEr was written in Java as a Cytoscape plug-in. The basis of all functions is a recursive method that iterates through the members (nodes) of all paths emanating from a selected node. The step depth is influenced by two parameters: 1) the maximum number of steps allowed (set by the user), and 2) the numerical values of the nodes. We used the log2-fold expression changes of atorvastatin-treated primary hepatocytes described in [20] as numerical values.
viPEr can be accessed under: http://sourceforge.net/projects/viperplugin/
viPEr has three main search options:
'A to B': 'A to B' connects two selected nodes with each other. We refer to the paths between nodes A and B as cross-talk. Mathematically, we define cross-talk as all paths between two nodes (x1, x2), where a single node in a path can only be passed once. The result is a focus network, which is determined by the maximum number of steps allowed between the start and the target node. The search is stopped when the target node is reached or the maximum number of steps is exceeded. Only if the target has been found, a path is stored, scored and displayed in the results tab. The focus network is created based on all nodes that are present in all stored paths. The connecting edges are taken from the original network. Therefore, all known interactions between the subset of nodes are included in the newly created focus network.
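The following sketch illustrates the idea behind the 'A to B' search (it is not viPEr's Java implementation): a bounded-depth recursion that enumerates all simple paths of at most a user-chosen number of steps between a start and a target node in an interaction network represented as an adjacency dictionary.

def paths_a_to_b(adjacency, start, target, max_steps):
    # Return all simple paths from start to target with at most max_steps edges
    paths = []

    def extend(path):
        node = path[-1]
        if node == target:            # target reached: store the path and stop
            paths.append(list(path))
            return
        if len(path) - 1 == max_steps:  # edge budget exhausted
            return
        for neighbor in adjacency.get(node, ()):
            if neighbor not in path:    # each node may be passed only once
                path.append(neighbor)
                extend(path)
                path.pop()

    extend([start])
    return paths

if __name__ == "__main__":
    toy = {"A": ["X", "Y"], "X": ["A", "B"], "Y": ["A", "B"], "B": ["X", "Y"]}
    for p in paths_a_to_b(toy, "A", "B", max_steps=3):
        print(" -> ".join(p))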
Scoring of paths is done using the following equation:
$$ Score = \frac{\#\,\text{of differentially regulated nodes} \in \text{path}}{(\text{path length})^2} $$
The p-values for discovered paths in focus networks are calculated based on the cumulative probability of the hypergeometric distribution to find k or more differentially expressed genes in a path of length n.
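A compact illustration of the scoring and significance calculation is sketched below. The score follows the equation above, and the p-value is the upper tail of a hypergeometric distribution (here via SciPy); how viPEr defines the background population N and the number of regulated genes K is an assumption in this sketch.

from scipy.stats import hypergeom

def path_score(n_regulated_in_path, path_length):
    # score = (# differentially regulated nodes in path) / (path length)^2
    return n_regulated_in_path / path_length ** 2

def path_p_value(k, n, big_n, big_k):
    # P(X >= k) for X ~ Hypergeom(population big_n, big_k regulated genes, n drawn)
    return hypergeom.sf(k - 1, big_n, big_k, n)

if __name__ == "__main__":
    print("score   =", round(path_score(3, 4), 3))
    print("p-value =", path_p_value(k=3, n=4, big_n=12000, big_k=800))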
'connecting in batch': similarly to the 'A to B' search, two groups of nodes can be connected using the 'connection in batch' function. For every node in the start list A, the recursive search is computed towards every node in the target list B. A results tab with scored paths is not created in this case.
'environment search': The third option is to explore the regulated proximity of a single node using the 'environment search'. Just one starting node is selected in this case. Mathematically, we define the environment search as follows: a network is calculated from all outgoing paths of length l from x1, where every node is allowed to be passed only once per path and all paths with at least two consecutive node scores below threshold t have been removed. The iteration through emanating paths is carried out until the allowed maximum search depth is reached. When exploring the neighborhood of a single node, the numerical data are used to select paths radiating from the selected node. Paths in which at least two consecutive nodes are not differentially expressed are removed from the resulting neighbor focus network. Thus, only paths that contain differentially regulated nodes are considered for the environment search, though single unregulated linker nodes are allowed. The resulting network is referred to as a neighbor focus network.
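The filtering rule of the environment search can be expressed in a few lines, as sketched below: a path is kept only if it contains no run of two consecutive nodes whose scores fall below the regulation threshold. Node scores are assumed to be absolute log2 fold changes, and the gene names in the example are purely illustrative.

def keep_path(path, node_score, threshold):
    # True if the path contains no two consecutive unregulated nodes
    below_prev = False
    for node in path:
        below = abs(node_score.get(node, 0.0)) < threshold
        if below and below_prev:
            return False
        below_prev = below
    return True

if __name__ == "__main__":
    scores = {"FOXA1": 1.2, "LINKER": 0.1, "HMGCR": 2.0, "X": 0.05, "Y": 0.0}
    print(keep_path(["FOXA1", "LINKER", "HMGCR"], scores, threshold=0.58))  # True
    print(keep_path(["FOXA1", "X", "Y"], scores, threshold=0.58))           # False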
Using viPEr
Starting from any existing network supplemented with expression data, the user has to select the attribute field containing the expression information. A slider is automatically set to the respective range of expression values. After adjusting the slider to the desired expression range, different options are available in the workflow (see Fig. 1).
Workflow for creating focus networks. Workflow of viPEr in creating focus networks between two nodes/two groups of nodes, or in exploring the neighborhood of a single node of interest. The user must select two nodes or group of nodes for creating a focus network. A single node is selected when exploring the neighborhood. Numerical data (for instance from an expression screen) must be added to the network for scoring paths of a focus network and for creating a neighbor focus network from a single node. In both cases, the user selects the search depth. After creating the focus network, the network can for instance be explored by using and visualizing GO-terms. PEANuT is used to find and visualize enriched pathways
'A to B'
This function executes the path search algorithm between two selected nodes. The result is a focus network of all identified paths of a certain length between two nodes. The user selects the length (step-size) of the calculated paths. All interconnecting edges are added to the focus network. A result list, which includes every discovered path between the nodes, is located on the right side of the screen. This list shows all paths, their respective members and the assigned score as described above. The score can be used to further reduce the focus network or simply to visualize specific paths.
'connecting in batch'
Two groups of nodes can be connected in the 'connecting in batch' function of viPEr. The same algorithm is used as in the 'A to B' search, except that all paths between all members of a start list and a target list are computed. This algorithm can be applied to detect cross-talk between two pathways, two protein complexes or two hit lists from different experiments. Three buttons have to be used for the 'connecting in batch' search: 1) a start protein list has to be defined by selecting all starting nodes and pressing the 'select start protein list' button; 2) the target protein list has to be selected accordingly and confirmed by pressing the 'select target protein list' button; 3) the button 'start connection in batch' executes the search.
'environment search'
In case only a single protein of interest exists, the algorithm can be used to observe the dynamics of expression in the environment of this protein using the 'environment search'. A single node is selected and the search is executed with the button 'environment search'. All regulated nodes within a certain step size of the selected protein give rise to the neighbor focus network.
PEANuT (Pathway Enrichment ANalysis Tool) is a Cytoscape plug-in designed to annotate protein interaction networks with biological pathway information and to identify enriched pathways in focus networks. The interactome of the organism serves as the background network. In addition to visualizing enriched pathways in the focus networks, the results can be exported as a tab-delimited file.
PEANuT was implemented in Java and can be accessed at: http://sourceforge.net/projects/peanut-cyto.
The user can choose between the three databases ConsensusPathDB (http://consensuspathdb.org/, [22]), Pathway Commons (http://www.pathwaycommons.org/, [23]) and Wikipathways (http://www.wikipathways.org/ [24]) to annotate the network. While ConsensusPathDB requires Entrez gene IDs as input, Pathway Commons and Wikipathways require UniProt accession numbers. Annotation of nodes with these IDs can be done within Cytoscape using for instance the plug-in CyThesaurus [25].
ConsensusPathDB and Pathway Commons contain pathway data collected from publicly available pathway databases (e.g., Reactome [26], KEGG [27]; see the respective homepages for more information). WikiPathways is a database based on the 'wiki principle' and provides an open platform dedicated to collaborative registering, reviewing and curation of biological pathways.
While Pathway Commons and WikiPathways cover a wide variety of organisms, ConsensusPathDB is specialized in human, mouse and yeast pathways. When the user chooses to annotate his network of interest with ConsensusPathDB data, he can additionally import directed interactions from KEGG to increase node connectivity (vertex degrees), enabling more complex path searches using viPEr.
Information from Pathway Commons is accessed over their web service. Flat files from the ConsensusPathDB and WikiPathways webpages are downloaded via the Apache Commons IO library (http://commons.apache.org/proper/commons-io/) and Cytoscape internal downloader classes.
Once downloaded, ConsensusPathDB and WikiPathways can be used offline, while Pathway Commons requires internet access. Network annotation with Pathway Commons is slower, as it depends on the load and availability of the host server, as well as internet connection speed.
The probability value for the pathway enrichment in the focus network is determined using the Apache Commons Math library (http://commons.apache.org/proper/commons-math/) to calculate the cumulative probability of a hypergeometric distribution. Multiple testing correction is achieved by applying either Bonferroni [28] or Benjamini-Hochberg [29] correction.
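As an illustration of this step, the sketch below combines a hypergeometric enrichment test with a Benjamini-Hochberg adjustment. What exactly counts as population, successes and sample in PEANuT is an assumption here, and the helper names are hypothetical rather than taken from the PEANuT source.

import org.apache.commons.math3.distribution.HypergeometricDistribution;
import java.util.Arrays;

// Sketch of a hypergeometric enrichment test plus Benjamini-Hochberg correction.
public class EnrichmentSketch {

    // P(X >= k): k pathway members in a focus network of size n, given a background
    // network of size N that contains K members of the pathway.
    public static double enrichmentPValue(int N, int K, int n, int k) {
        HypergeometricDistribution dist = new HypergeometricDistribution(N, K, n);
        return k == 0 ? 1.0 : 1.0 - dist.cumulativeProbability(k - 1);
    }

    // Benjamini-Hochberg adjustment of a vector of p-values; returns adjusted
    // values in the original order.
    public static double[] benjaminiHochberg(double[] p) {
        int m = p.length;
        Integer[] order = new Integer[m];
        for (int i = 0; i < m; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> Double.compare(p[a], p[b]));   // ascending p-values
        double[] adjusted = new double[m];
        double prev = 1.0;
        for (int rank = m; rank >= 1; rank--) {
            int idx = order[rank - 1];
            double value = Math.min(prev, p[idx] * m / rank);       // step-up adjustment
            adjusted[idx] = value;
            prev = value;
        }
        return adjusted;
    }
}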
PEANuT has three sub-menus:
'find pathways': the find pathways sub-menu annotates the networks in Cytoscape with pathway data. Networks can be labeled using more than one pathway resource by re-using the sub-menu with different pathway selections.
'show pathway statistics': the 'show pathway statistics' sub-menu calculates enriched pathways in a selected focus network. The user has to select the focus network of interest, the background network and choose a p-value cut-off. Enriched pathways can be selected for visualization and downloaded as a tab-delimited file.
'download/update dependencies': this sub-menu is used to download pathway information for network annotation. It needs to be run before using PEANuT the first time and should be run regularly to update pathway information.
Using PEANuT
After installing PEANuT in Cytoscape by placing the plug-in in the Cytoscape plug-in folder, the tool can be accessed via the plug-in menu. The sub-menus are used as follows:
'find pathways'
This sub-menu allows the user to start the software and annotate the network(s) of choice with pathway data. In a simple dialog the user can select between three different databases: ConsensusPathDB, Pathway Commons or WikiPathways. The user can select different options for each database depending on preferences (such as import of directed interactions from KEGG). Annotating the network with more than one database is possible by subsequently re-running this sub-menu with another selected database.
'show pathway statistics'
When the user has finished annotating the network, clicking on the 'show pathway statistics' sub-menu will invoke a table, two combo boxes, two text fields and several buttons. In one of the text fields the user can select the p-value cut-off for the enrichment calculations. The combo boxes permit the user to select the focus and background network. Once the results have been calculated the user can export the results as a tab delimited file or search the table for pathways via the text box. The user can select each pathway and highlight the members of the pathways in the focus network by clicking on 'select'.
'download/update dependencies'
This sub-menu is used to download or update pathway information from ConsensusPathDB and WikiPathways. 'download/update dependencies' needs to be executed when first using the plug-in. It is good practice to refresh the dependencies from time to time by clicking on the 'download all' button.
For more information on how to use PEANuT, see our wiki http://sourceforge.net/p/peanut-cyto/wiki/Home/.
Analysis of expression data
We used preprocessed data from Schröder et al. [20], which had already been normalized and filtered, and repeated the clustering step described in the original publication. The patient data used here were part of a large-scale study originally published in [20] and deposited at the Gene Expression Omnibus [30] resource (accession number GSE29868). Tissues and corresponding blood samples of the original study were taken with written informed consent from donors. The study was furthermore approved by the ethics committees of the medical faculties of the Charité, Humboldt University, and the University of Tübingen. Before clustering, the data were log2 transformed and filtered for probes that were at least 1.7-fold differentially expressed in at least 2 patients. With these 12,554 probes, EDISA clustering was performed using the same parameters as in Schröder et al. [20], with tG of 0.05 and tC of 0.25.
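As a worked example of the filtering criterion, a probe passes if its absolute log2 ratio is at least log2(1.7), roughly 0.77, in two or more patients. A minimal sketch, with the data layout assumed:

// Sketch of the pre-clustering filter: keep a probe if it is at least 1.7-fold
// differentially expressed (|log2 ratio| >= log2(1.7) ~ 0.77) in at least 2 patients.
public class ProbeFilterSketch {

    public static boolean passesFilter(double[] log2RatiosPerPatient) {
        double threshold = Math.log(1.7) / Math.log(2.0);   // ~0.766
        int hits = 0;
        for (double ratio : log2RatiosPerPatient) {
            if (Math.abs(ratio) >= threshold) hits++;
        }
        return hits >= 2;
    }
}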
Creating the PP/TFG Interaction data
The basis for biologically meaningful network analysis is a reliable interaction network with high confidence. To accomplish this goal, we allowed only high confidence interaction data. No predicted interactions or interactions based on co-expression were included. As source for protein-protein interactions, we chose Pathway Commons [23] due to its large number of curated interactions and the simple format, which can be directly loaded in Cytoscape. Likewise, only experimentally verified transcription factor – gene interactions from TRANSFAC® were included in the network. Confidence scores from TRANSFAC® were used to identify reliable regulatory interactions.
GO and pathway enrichment of gene lists and networks
To analyze GO and pathway enrichment of focus networks, the official gene symbols (HGNC) were submitted to DAVID [31]. Additionally, the GO annotation for all nodes was added to the network via Cytoscape's built-in function.
Clustering of patient data using Cluster3
Cluster3 [32] was used to cluster the patient data of the FoxA1 neighbor focus network, as well as the probe ID expression data of all patients at the 24 h time point. Standard parameters and average linkage clustering were chosen. The FoxA1 network was encoded for presence (1) or absence (0) of a node; up-regulated nodes were assigned the value 2 and down-regulated nodes the value −2 (see Additional file 1: Table S12).
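A small sketch of the encoding described above, assuming a simple node-state representation (the actual input format written for Cluster3 is not shown):

// Sketch of the node encoding used for clustering: absent = 0, present but
// unregulated = 1, up-regulated = 2, down-regulated = -2.
public class NodeEncodingSketch {

    public enum State { ABSENT, PRESENT, UP, DOWN }

    public static int encode(State state) {
        switch (state) {
            case UP:      return 2;
            case DOWN:    return -2;
            case PRESENT: return 1;
            default:      return 0;   // ABSENT
        }
    }
}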
Expression analysis of hepatocyte time-course experiment
In a first analysis, we found a total of 12,554 differentially expressed probes in at least two patients, irrespective of time points (Additional file 1: Table S1). We processed these genes with EDISA 3D clustering [33], which resulted in 902 differentially regulated genes (823 non-redundant genes) grouped into 24 clusters (Additional file 1: Tables S2 and S3). Further analysis was performed with the non-redundant version of this gene set.
GO and pathway enrichment
We submitted the processed gene list to DAVID [31] for functional annotation and enrichment analysis (Additional file 1: Tables S4 and S5). DAVID was chosen for its extended functionality beyond the enrichment of Gene Ontology terms. Primarily, basic cellular and metabolic processes were enriched, such as amino acid or nucleic acid metabolism. In addition, several pathways and processes related to cellular cholesterol and lipid metabolism and homeostasis were enriched. Based on these results, however, we could not draw any conclusions on the regulatory or transcriptional network involved in the atorvastatin response.
Focus network analysis
Next, we used focus network analysis to extract more information from the data. We decided to use the interaction network provided by Pathway Commons [23] as the basic protein-protein interaction network, as it contains information from more than one primary source (including BioGRID, IntAct or Reactome). Given the type of data in protein interaction databases, we reasoned that we additionally needed information on gene regulatory relationships: which transcription factors are able to regulate which gene set? We considered this information on transcription factor - gene interactions (TFGI) as essential to identify the gene regulatory networks that control the cellular response to statins at a transcriptional level. We retrieved TFGIs from TRANSFAC® and combined them with the protein-protein interactions (PPIs) from Pathway Commons to a single interaction network, which was used for all further analysis steps (see Fig. 1).
Next, we extracted the top ten high confidence primary drug targets of atorvastatin from the STITCH database [21] (Additional file 1: Table S6), all of which were present in our interaction network. We used the viPEr 'connecting in batch' function to connect the ten atorvastatin targets with the 823 differentially regulated, non-redundant genes we have identified with EDISA 3D clustering, using a step-size of two. This functionality of viPEr creates focus networks by searching for all paths between two proteins or two groups of proteins, up to a user-defined path length. The nodes are supplemented with expression values. These weights on the nodes are used to score all paths in the resulting focus networks. The score of a path is dependent on its length and the number of differentially regulated nodes it contains.
The resulting atorvastatin focus network contained 1107 nodes and 22029 edges (Additional file 2: Figure S1). Of those, 21516 were PP interactions and 513 came from TFG interactions. We next performed GO enrichment of the atorvastatin focus network using DAVID (Additional file 1: Table S7). We observed that with the proteins from the focus network, we found more terms relating to regulatory functions, such as signaling or transcriptional control than with the 823 differentially expressed genes alone.
We searched for enriched pathways in the atorvastatin focus network. To this end, we employed our pathway enrichment tool PEANuT on the focus network, taking the entire network of PPIs and TFGIs as a background. Using ConsensusPathDB [22] as pathway resource, PEANuT identified a total of 926 pathways, many of which were redundant. Notably, again mainly signaling-, as well as transcriptional regulation pathways were identified (Additional file 1: Table S8). We compared results of PEANuT with pathway enrichment results from DAVID (Additional file 1: Table S9). Though different pathway resources are available in DAVID, a similar set of pathways was enriched. Those included growth receptor signaling pathways such as the EGF-receptor-, VEGF-, Insulin- and Ras-signaling pathways, Interleukin signaling pathways, as well as cancer pathways. Due to the availability of the pathway interaction database (PID, [34]) of the ConsensusPathDB resource, however, more transcription factor pathways were identified with PEANuT.
Involvement of the FoxA transcription factors in atorvastatin response
In the list of enriched pathways, our attention was caught by the Forkhead box A transcription factor pathways. In adults, the Forkhead box A transcription factors FoxA1, FoxA2 and FoxA3 are expressed in liver, pancreas and adipose tissue, where they regulate gene expression of metabolic genes [35]. Two direct targets of atorvastatin are also targets of the FoxA transcription factors: FoxA1 and FoxA2 regulate ApoB [36–38], while Cyp3A4 is a target of FoxA3 [39]. Given our experimental set-up of atorvastatin-treatment of primary human hepatocytes and the fact that atorvastatin is primarily acting on cholesterol synthesis in the liver, we further focused our analysis on these two transcriptional pathways. We created a viPEr focus network between the primary atorvastatin target HMGCR and the three transcription factors FoxA1, A2 and A3 (Fig. 2) starting from the atorvastatin focus network with a maximal step size of two, without considering a fold-change.
Focus networks between the primary atorvastatin target HMGCR and the transcription factors FoxA1, A2 and A3 of patient 65. Time course of patient 65 is shown at 6 h (a), 12 h (b), 24 h (c), 48 h (d) and 72 h (e). The focus networks were created using viPEr, with a maximal path length of 2 without considering the log2fold change: this allows direct comparison of time-course data. Up regulated nodes are shown as upward triangles, colored in red. Down regulated nodes are displayed as downward arrows, colored green. The Fox transcription factors are colored orange, as is the HMGCR. Protein-protein interactions have grey edges; edges of transcription factor gene interactions are colored red
We visualized expression changes over time and patients in the HMGCR-FoxA1/2/3 focus network. We first analyzed the expression changes over time from patient 65 in the focus network (Fig. 2). HMGCR itself is up-regulated at the first three time points, returning to normal expression values at the later two time points. Likewise, Cyp3A4 is up regulated at time points 6, 12 and 24, however becomes down regulated at 72 h.
Next, we were interested in whether the expression response to atorvastatin was similar in the six different patients and plotted HMGCR-FoxA1/2/3 focus networks for each patient at 12 h after treatment (Additional file 3: Figure S2). This representation of the data was especially informative, as it illustrated the obvious differences in the response to atorvastatin in the primary hepatocytes from the six patients. We observed that only donors 62, 65 and 79 showed somewhat overlapping regulatory responses at 12 h after treatment. The expression pattern of patient 67 was in many cases opposite to that of the first group (62, 65 and 79), while the hepatocytes from patients 80 and 81 showed milder responses to the drug.
Interpreting the HMGCR-FoxA1/2/3 focus network
The focus network of HMGCR and the FoxA transcription factors can be divided in two parts (see Figs. 2 and 3). A small, tightly connected sub-cluster contains small molecules, as well as metabolic genes, while the larger sub-cluster contains signaling molecules, as well as transcription factors. HMGCR is directly connected to two proteins from this cluster: the cAMP dependent protein kinase alpha, PRKACA and Rho GTPase activating protein 1, ARHGAP1.
Enriched pathways identified by PEANuT on HMGCR/FoxA focus networks. Three pathways were chosen for display: the Integrated Breast Cancer Pathway (a), the Prostate Cancer Pathway (b) and the Metabolism of Lipids and Lipoproteins Pathway (c). Members of the respective pathways are highlighted in yellow. Up regulated nodes are shown as upward triangles, colored in red. Down regulated nodes are displayed as downward arrows, colored green. The Fox transcription factors are shown in orange, as is the HMGCR. Protein-protein interactions have grey edges; edges of transcription factor gene interactions are colored red
FoxA1 in particular is tightly integrated with other transcription factors from the large sub-cluster, including Sp1, Tp53, Fos, Jun and Brca1. It also binds several transcriptional co-activators.
Note that HMGCR expression is not directly regulated by any of the transcription factors in the network. The connections between the FoxA proteins and HMGCR are mostly due to transcriptional targets of the FoxA's interacting with PRKACA and ARHGAP1.
Interestingly, statins have been implicated to have a preventive effect on different cancer types and to be beneficial for the treatment of several cancer types, among which are breast and prostate cancer [40–44].
Bcl2, a direct transcriptional target of FoxA1 [45], is also a direct interactor of the HMGCR interacting kinase PRKACA. Previous studies have shown that Bcl2 is down-regulated in response to statin-induced apoptosis ([46]; for a review on the involvement of Bcl2 in statin-response, see [47]); PRKACA has on the other hand been implicated in statin-resistance of tumors by phosphorylating the pro-apoptotic protein Bad (Bcl2 associated death receptor), thus allowing anti-apoptotic signaling [48].
In conclusion, cross-talk via PRKACA and ARHGAP1 is therefore one possible link between the mevalonate pathway and the FoxA transcriptional network, which could in part explain the effect of statins on apoptosis of cancer cells.
Pathway enrichment analysis on focus networks using the Cytoscape plug-in PEANuT
We performed pathway enrichment analysis using PEANuT on the HMGCR-FoxA1/2/3 focus network (Additional file 1: Table S10). Again, many signaling, as well as metabolic pathways were enriched. In accordance with previous reports on statin-sensitivity of cancer cells [41, 42, 46, 49], several cancer pathways are enriched in our focus network. In Fig. 3, we illustrate the usefulness of PEANuT in visualizing components of enriched pathways in focus networks created with viPEr. We chose to highlight the two cancer pathways 'integrated breast cancer' (Wikipathways) and 'prostate cancer' (Wikipathways), as well as the pathway metabolism of lipids and lipoproteins (Reactome) in the HMGCR-FoxA1/2/3 focus network. Similar pathways were identified using DAVID (Additional file 1: Table S11).
Exploring the molecular environment of a node using viPEr
In the next analysis step, we used viPEr's 'environment search' functionality to explore the neighborhood of a single protein of interest. As a rule, only one linker node without differential expression is allowed in the 'environment search'. The user again sets the search depth by defining the step size. All paths that lack differentially expressed nodes are removed from the resulting neighbor focus network.
We chose one of the Fox transcription factors, FoxA1, as the start node and used a search depth of two. We assumed that an environment search with FoxA1 might shed light on the pathways and genes regulated by this transcription factor during atorvastatin treatment. We chose the 24 h time point for further analysis, as earlier time points showed only a few differentially expressed genes for most of the patient samples, while at later time points a considerable number of genes had changed, possibly due to secondary effects. Data from the six patients at the 24 h time point are shown in Fig. 4.
Neighbor focus networks of FoxA1 in all six patients at 24 h. We chose to explore the environment of FoxA1 with respect to differential expression. FoxA1 neighbor focus networks were created for patients 62 (a), 65 (b), 67 (c), 79 (d), 80 (e) and 81 (f). Up regulated nodes are shown as upward triangles, colored in red. Down regulated nodes are displayed as downward arrows, colored green. The Fox transcription factors are shown in orange, targets of atorvastatin are highlighted in cyan. Protein-protein interactions have grey edges; edges of transcription factor gene interactions are colored red
The striking difference of patient 67 is again visible: the environment search of FoxA1 under the chosen conditions (2 steps, log2 fold change of at least 1.5) produced the largest network of all patients. Most proteins in patient 67's network are up regulated. Notably, the central transcription factor Sp1 is up regulated, which is potentially the cause of the strong effect we observed in gene expression.
Many of the transcription factors that were present in the HMGCR-FoxA1/2/3 focus network are also present in the neighbor focus networks of the six patients. In all networks, Sp1 seems to be a central hub. The atorvastatin targets Ccl2 and Cyp3A4 are present in all but one network (Patient 80), while HMGCR is not found in any network. Note also that in Patient 79, most of the genes are down-regulated, including the transcriptional regulators Tp53 and Fos. Finally, we noticed that most nodes that are shared between all patients are linker nodes and thus not differentially expressed. Exceptions thereof are Sp1, Fos and Tp53, all of which show differential expression in at least one sample.
We decided to cluster the patient data based on the differential regulation and the presence or absence of nodes in the FoxA1 neighbor focus network using Cluster3 [32]. The differential behavior of patient 67 is reflected in the resulting hierarchical tree (Additional file 4: Figure S3a; data available in Additional file 1: Table S12). Furthermore, we observe a close clustering of patients 80 and 81, as well as a sub-group formed by patients 79, 65 and 62. We were interested whether the same groups would also cluster in overall gene expression at the 24 h time point. We therefore used the expression values of the probe IDs at 24 h (Additional file 1: Table S1) for Cluster3 analysis (Additional file 4: Figure S3b). Indeed, the clustering of all expression values is highly comparable; however, at the level of all analyzed genes, patient 62 displays more similarity to patient 79 than to 65 in this small sub-group.
Comparison to previous analysis
We started with the same primary data and the same clustering technique as Schröder et al. [20], resulting in identical clusters of co-regulated genes. Yet, our approach channeled the analysis of the data in a different direction. In the previous study, enrichment analysis was followed by cis-regulatory module detection in combination with network analysis. This yielded a set of transcription factors with a hypothetical function in atorvastatin-induced gene regulation. Factors such as the Krüppel-like factors Klf4 and Klf11, hypoxia-inducible factor 1 (Hif1A), Hnf4, the nuclear receptor RXR in combination with other nuclear receptors, the nuclear receptors PPARA, NR1H2 and NR2C2, Sp3 and Sp1, as well as Tgif1, Smad2 or Elf1 were found in [20]. In our primary atorvastatin focus network, we also found the transcription factors RXRB, Hnf4A, PPARA, NR2C2, Sp1, Sp3 and Klf4. The corresponding pathways of these transcription factors were likewise enriched (see Additional file 1: Tables S8 and S9). In the second step, however, we did not pursue any of the above factors or pathways. We rather focused on the FoxA transcription factors and their potential role in the atorvastatin response. Interestingly, we also found Sp1 as a major transcriptional regulator in our focus networks. Sp1 is known to coordinate expression together with both RXR/RAR (see for instance [50–53]) and the FoxA transcription factors [54–56]. Both Sp1 and the FoxA transcription factors are also known to regulate some direct atorvastatin drug targets [36, 38, 39, 52, 53, 57].
In this study, viPEr was used to point towards less well-known players in the atorvastatin response. In fact, we see the advantage of a plug-in such as viPEr in exploring paths that are difficult to find otherwise. If used with numerical values from experimental studies such as an expression screen, it will open up new avenues for experimental research.
Here we present the Cytoscape plug-ins viPEr (virtual pathway explorer) and PEANuT (Pathway Enrichment ANalysis Tool). viPEr provides the possibility to navigate large interaction networks by linking two nodes or two groups of nodes with each other. It can be used to identify potential links between processes or pathways. viPEr furthermore enables users to explore the neighborhood of a single node with respect to the (numerical) quality of radiating paths. The Cytoscape plug-in PEANuT identifies pathways that are enriched in a focus network compared to a background network. We used viPEr to create focus networks, as well as neighbor focus networks, using time-series data on atorvastatin-treated primary hepatocytes from six donors. The focus network was analyzed using PEANuT. We identified the FoxA1/A2/A3 transcription factors as being involved in the atorvastatin response. Furthermore, PEANuT revealed that the FoxA networks were enriched in cancer as well as metabolic pathways. We found interesting differences between patient samples in the focus and neighbor focus networks, possibly explaining the often-observed individual responses to drug treatment.
While we used viPEr with numerical values from a differential expression study, the plug-in can be used with numbers inferred from any experimental set-up and high-throughput assay. Functions 'A to B' and 'connecting in batch' work also without numerical values to create focus networks. In this case, no scoring of paths is done. The 'environment search', however, requires numerical values for the creation of a neighbor focus network.
Availability and requirements
viPEr and PEANuT are both available via sourceforge.net. Both plug-ins have been tested for versions 2.8.2 and 2.8.3 of Cytoscape. Versions of viPEr and PEANuT are also available for Cytoscape 3.2 and higher (viPEr for Cytoscape version 3 is available via http://sourceforge.net/projects/viperplugin/ and PEANuT for Cytoscape 3.2 and higher is available via http://sourceforge.net/projects/peanutv3/; for both apps, the source code is also available via sourceforge.net and both apps will be released via the Cytoscape 3 app manager). Please note that the current version of PEANuT v3 does not work with Cytoscape versions 3.0 or 3.1. viPEr and PEANuT are fully integrated in both Cytoscape versions and compatible with other Cytoscape plug-ins. viPEr and PEANuT will be maintained to work with future releases of Cytoscape, as well as with updates of the public pathway databases.
The required amount of memory depends mainly on the background network. In general, we advise having at least 8 GB of RAM for the mouse or human interactome. We furthermore advise working with the view of large background networks turned off (using the built-in Cytoscape function 'destroy view'). Cytoscape itself is written in Java and runs on Linux, MacOS and Windows systems. We advise using Oracle Java, as we have not extensively tested and validated our software for OpenJDK.
PPI:
Protein-Protein interaction
TFGI:
Transcription Factor-Gene Interaction
HMGCR:
HMG-CoA (3-hydroxy-3-methylglutaryl–coenzyme A) reductase
Jensen LJ, Kuhn M, Stark M, Chaffron S, Creevey C, Muller J, et al. STRING 8--a global view on proteins and their functional interactions in 630 organisms. Nucleic Acids Res. 2009;37(Database issue):D412–6.
Cerami E, Demir E, Schultz N, Taylor BS, Sander C. Automated network analysis identifies core pathways in glioblastoma. PLoS One. 2010;5:e8918.
Dijkstra EW. A note on two problems in connexion with graphs. Numerische Mathematik. 1959;1:269–71.
Bebek G, Yang J. PathFinder: mining signal transduction pathway segments from protein-protein interaction networks. BMC Bioinformatics. 2007;8:335.
Stein LD. Using the Reactome database. Curr Protoc Bioinformatics. 2004;Chapter 8:Unit 8.7–8.7.16.
Ashburner M, Ball CA, Blake JA, Botstein D, Butler H, Cherry JM, et al. Gene ontology: tool for the unification of biology. The Gene Ontology Consortium. Nat Genet. 2000;25:25–9.
Corsini A, Maggi FM, Catapano AL. Pharmacology of competitive inhibitors of HMG-CoA reductase. Pharmacol Res. 1995;31:9–27.
Mammen AL, Amato AA. Statin myopathy: a review of recent progress. Curr Opin Rheumatol. 2010;22:644–50.
Jukema JW, Cannon CP, de Craen AJM, Westendorp RGJ, Trompet S. The controversies of statin therapy: weighing the evidence. J Am Coll Cardiol. 2012;60:875–81.
Merli M, Bragazzi MC, Giubilo F, Callea F, Attili AF, Alvaro D. Atorvastatin-induced prolonged cholestasis with bile duct damage. Clin Drug Investig. 2010;30:205–9.
King DS, Wilburn AJ, Wofford MR, Harrell TK, Lindley BJ, Jones DW. Cognitive impairment associated with atorvastatin and simvastatin. Pharmacotherapy. 2003;23:1663–7.
Howe K, Sanat F, Thumser AE, Coleman T, Plant N. The statin class of HMG-CoA reductase inhibitors demonstrate differential activation of the nuclear receptors PXR, CAR and FXR, as well as their downstream target genes. Xenobiotica. 2011;41:519–29.
Rodrigues Díez R, Rodrigues-Díez R, Lavoz C, Rayego-Mateos S, Civantos E, Rodríguez-Vita J, et al. Statins inhibit angiotensin II/Smad pathway and related vascular fibrosis, by a TGF-β-independent process. PLoS One. 2010;5:e14145.
Griner LN, McGraw KL, Johnson JO, List AF, Reuther GW. JAK2-V617F-mediated signalling is dependent on lipid rafts and statins inhibit JAK2-V617F-dependent cell growth. Br J Haematol. 2013;160:177–87.
Lee H-Y, Youn S-W, Cho H-J, Kwon Y-W, Lee S-W, Kim S-J, et al. FOXO1 impairs whereas statin protects endothelial function in diabetes through reciprocal regulation of Kruppel-like factor 2. Cardiovasc Res. 2013;97:143–52.
Ekström L, Johansson M, Monostory K, Rundlöf A-K, Arnér ESJ, Björkhem-Bergman L. Simvastatin inhibits the core promoter of the TXNRD1 gene and lowers cellular TrxR activity in HepG2 cells. Biochem Biophys Res Commun. 2013;430:90–4.
Leszczynska A, Gora M, Plochocka D, Hoser G, Szkopinska A, Koblowska M, et al. Different statins produce highly divergent changes in gene expression profiles of human hepatoma cells: a pilot study. Acta Biochim Pol. 2011;58:635–9.
Medina MW, Theusch E, Naidoo D, Bauzon F, Stevens K, Mangravite LM, et al. RHOA is a modulator of the cholesterol-lowering effects of statin. PLoS Genet. 2012;8:e1003058.
Hafner M, Juvan P, Rezen T, Monostory K, Pascussi J-M, Rozman D. The human primary hepatocyte transcriptome reveals novel insights into atorvastatin and rosuvastatin action. Pharmacogenet Genomics. 2011;21:741–50.
Schröder A, Wollnik J, Wrzodek C, Dräger A, Bonin M, Burk O, et al. Inferring statin-induced gene regulatory relationships in primary human hepatocytes. Bioinformatics. 2011;27:2473–7.
Kuhn M, Szklarczyk D, Franceschini A, Campillos M, von Mering C, Jensen LJ, et al. STITCH 2: an interaction network database for small molecules and proteins. Nucleic Acids Res. 2010;38(Database issue):D552–6.
Kamburov A, Wierling C, Lehrach H, Herwig R. ConsensusPathDB--a database for integrating human functional interaction networks. Nucleic Acids Res. 2009;37(Database issue):D623–8.
Cerami EG, Gross BE, Demir E, Rodchenkov I, Babur O, Anwar N, et al. Pathway Commons, a web resource for biological pathway data. Nucleic Acids Res. 2011;39(Database issue):D685–90.
WikiPathways. Pathway editing for the people. PLoS Biol. 2008;6:e184.
van Iersel MP, Pico AR, Kelder T, Gao J, Ho I, Hanspers K, et al. The BridgeDb framework: standardized access to gene, protein and metabolite identifier mapping services. BMC Bioinformatics. 2010;11:5.
Joshi-Tope G, Gillespie M, Vastrik I, D'Eustachio P, Schmidt E, de Bono B, et al. Reactome: a knowledgebase of biological pathways. Nucleic Acids Res. 2005;33(Database issue):D428–32.
Kanehisa M, Goto S, Sato Y, Furumichi M, Tanabe M. KEGG for integration and interpretation of large-scale molecular data sets. Nucleic Acids Res. 2012;40(Database issue):D109–14.
Dunn OJ. Multiple comparisons among means. J Am Stat Assoc. 1961;56:52–64.
Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J Royal Stat Soc Ser B (Methodological). 1995;57:289–300.
NCBI Resource Coordinators. Database resources of the National Center for Biotechnology Information. Nucleic Acids Res. 2014;42(Database issue):D7–17.
Huang DW, Sherman BT, Lempicki RA. Systematic and integrative analysis of large gene lists using DAVID bioinformatics resources. Nat Protoc. 2009;4:44–57.
Eisen MB, Spellman PT, Brown PO, Botstein D. Cluster analysis and display of genome-wide expression patterns. Proc Natl Acad Sci U S A. 1998;95:14863–8.
Supper J, Strauch M, Wanke D, Harter K, Zell A. EDISA: extracting biclusters from multiple time-series of gene expression profiles. BMC Bioinformatics. 2007;8:334.
Schaefer CF, Anthony K, Krupa S, Buchoff J, Day M, Hannay T, et al. PID: the Pathway Interaction Database. Nucleic Acids Res. 2009;37(Database issue):D674–9.
Friedman JR, Kaestner KH. The Foxa family of transcription factors in development and metabolism. Cell Mol Life Sci. 2006;63:2317–28.
Antes TJ, Levy-Wilson B. HNF-3 beta, C/EBP beta, and HNF-4 act in synergy to enhance transcription of the human apolipoprotein B gene in intestinal cells. DNA Cell Biol. 2001;20:67–74.
Moya M, Benet M, Guzmán C, Tolosa L, García-Monzón C, Pareja E, et al. Foxa1 reduces lipid accumulation in human hepatocytes and is down-regulated in nonalcoholic fatty liver. PLoS One. 2012;7:e30014.
Paulweber B, Sandhofer F, Levy-Wilson B. The mechanism by which the human apolipoprotein B gene reducer operates involves blocking of transcriptional activation by hepatocyte nuclear factor 3. Mol Cell Biol. 1993;13:1534–46.
Rodríguez-Antona C, Bort R, Jover R, Tindberg N, Ingelman-Sundberg M, Gómez-Lechón MJ, et al. Transcriptional regulation of human CYP3A4 basal expression by CCAAT enhancer-binding protein alpha and hepatocyte nuclear factor-3 gamma. Mol Pharmacol. 2003;63:1180–9.
Murtola TJ, Visvanathan K, Artama M, Vainio H, Pukkala E. Statin use and breast cancer survival: a nationwide cohort study from Finland. PLoS One. 2014;9:e110231.
Swanson KM, Hohl RJ. Anti-cancer therapy: targeting the mevalonate pathway. Curr Cancer Drug Targets. 2006;6:15–37.
Katz MS. Therapy insight: Potential of statins for cancer chemoprevention and therapy. Nat Clin Pract Oncol. 2005;2:82–9.
Vemana G, Hamilton RJ, Andriole GL, Freedland SJ. Chemoprevention of prostate cancer. Annu Rev Med. 2014;65:111–23.
Demierre M-F, Higgins PDR, Gruber SB, Hawk E, Lippman SM. Statins and cancer prevention. Nat Rev Cancer. 2005;5:930–42.
Song L, Wei X, Zhang B, Luo X, Liu J, Feng Y, et al. Role of Foxa1 in regulation of bcl2 expression during oxidative-stress-induced apoptosis in A549 type II pneumocytes. Cell Stress Chaperones. 2009;14:417–25.
Wong WWL, Dimitroulakos J, Minden MD, Penn LZ. HMG-CoA reductase inhibitors and the malignant cell: the statin family of drugs as triggers of tumor-specific apoptosis. Leukemia. 2002;16:508–19.
Wood WG, Igbavboa U, Muller WE, Eckert GP. Statins, Bcl-2, and apoptosis: cell death or cell protection? Mol Neurobiol. 2013;48:308–14.
Moody SE, Schinzel AC, Singh S, Izzo F, Strickland MR, Luo L, et al. PRKACA mediates resistance to HER2-targeted therapy in breast cancer cells and restores anti-apoptotic signaling. Oncogene. 2015;34(16):2061–71.
Clendening JW, Pandyra A, Li Z, Boutros PC, Martirosyan A, Lehner R, et al. Exploiting the mevalonate pathway to distinguish statin-sensitive multiple myeloma. Blood. 2010;115:4787–97.
Ohoka Y, Yokota-Nakatsuma A, Maeda N, Takeuchi H, Iwata M. Retinoic acid and GM-CSF coordinately induce retinal dehydrogenase 2 (RALDH2) expression through cooperation between the RAR/RXR complex and Sp1 in dendritic cells. PLoS One. 2014;9:e96512.
Cheng Y-H, Yin P, Xue Q, Yilmaz B, Dawson MI, Bulun SE. Retinoic acid (RA) regulates 17beta-hydroxysteroid dehydrogenase type 2 expression in endometrium: interaction of RA receptors with specificity protein (SP) 1/SP3 for estradiol metabolism. J Clin Endocrinol Metab. 2008;93:1915–23.
Shimada J, Suzuki Y, Kim SJ, Wang PC, Matsumura M, Kojima S. Transactivation via RAR/RXR-Sp1 interaction: characterization of binding between Sp1 and GC box motif. Mol Endocrinol. 2001;15:1677–92.
Zannis VI, Kan HY, Kritis A, Zanni E, Kardassis D. Transcriptional regulation of the human apolipoprotein genes. Front Biosci. 2001;6:D456–504.
Convertini P, Infantino V, Bisaccia F, Palmieri F, Iacobazzi V. Role of FOXA and Sp1 in mitochondrial acylcarnitine carrier gene expression in different cell lines. Biochem Biophys Res Commun. 2011;404:376–81.
Tatewaki H, Tsuda H, Kanaji T, Yokoyama K, Hamasaki N. Characterization of the human protein S gene promoter: a possible role of transcription factors Sp1 and HNF3 in liver. Thromb Haemost. 2003;90:1029–39.
Ceelie H, Spaargaren-Van Riel CC, De Jong M, Bertina RM, Vos HL. Functional characterization of transcription factor binding sites for HNF1-alpha, HNF3-beta (FOXA2), HNF4-alpha, Sp1 and Sp3 in the human prothrombin gene enhancer. J Thromb Haemost. 2003;1:1688–98.
Bombail V, Taylor K, Gibson GG, Plant N. Role of Sp1, C/EBP alpha, HNF3, and PXR in the basal- and xenobiotic-mediated regulation of the CYP3A4 gene. Drug Metab Dispos. 2004;32:525–35.
The authors thank Assa Yeroslaviz and Caroline Merhag for critical input to viPEr, Michael Volkmer and Jose Villaveces for help in programming, and Frank Schnorrer for critical reading of the manuscript. MG was funded by BMBF grant 315737 the Virtual Liver Network (http://www.virtual-liver.de/); FH was funded by BMBF grant 0315893B, SyBACol (http://www.sybacol.org/). MT and BAK were supported by BMBF grant 315755 the Virtual Liver Network (http://www.virtual-liver.de/) and by the Robert Bosch Foundation, Stuttgart, Germany. We thank the Max Planck Society for supporting this work.
CECAD Research Center, Joseph-Stelzmann-Str. 26, 50931, Cologne, Germany
Marius Garmhausen
Gregor Mendel Institute of Molecular Plant Biology, Austrian Academy of Sciences, Vienna Biocenter (VBC), Dr. Bohr-Gasse 3, 1030, Vienna, Austria
Falko Hofmann
Dr. Margarete Fischer-Bosch Institute of Clinical Pharmacology and University of Tübingen, Auerbachstr. 112, 70376, Stuttgart, Germany
Maria Thomas & Benjamin A. Kandel
Research Group Computational Biology, Max Planck Institute of Biochemistry, Am Klopferspitz 18, 82152, Martinsried, Germany
Viktor Senderov & Bianca Hermine Habermann
Present address: Pensoft Publisher, 1700, Sofia, Bulgaria
Viktor Senderov
Present address: Hain Lifescience GmbH, Hardwiesenstr. 1, 72147, Nehren, Germany
Benjamin A. Kandel
Maria Thomas
Bianca Hermine Habermann
Correspondence to Bianca Hermine Habermann.
MG conceived and implemented the software viPEr, constructed the interaction networks, analyzed the data and helped in writing this manuscript. FH conceived and implemented PEANuT and helped in writing this manuscript. VS ported PEANuT to Cytoscape Version 3, tested PEANuT for Cytoscape versions 2 and 3, and helped with coding the Cytoscape 3 version of viPEr. MT and BAK participated in data evaluation and helped with strategy discussion; BHH conceived and supervised this study, assisted in data analysis and wrote this manuscript. All authors read and approved the final manuscript.
Additional file 1: Table S1. log2 fold change of six selected patients at time points 6 h, 12 h, 24 h, 48 h and 72 h. Data were taken from [20], Affymetrix probe IDs are shown. Table S2. Time-course and patient-dependent clustering of probe IDs that showed differential expression in at least one time point or patient. 24 clusters were created using EDISA 3D clustering, containing 823 non-redundant, differentially expressed genes. Table S3. Annotation of the EDISA 3D-selected, differentially expressed set of Affymetrix probe IDs. Table S4. GO enrichment of EDISA 3D-selected, differentially expressed set of Affymetrix probe IDs. Table S5. Pathway enrichment of EDISA 3D-selected, differentially expressed set of Affymetrix probe IDs. Table S6. Targets of atorvastatin as identified by STITCH. Table S7. GO enrichment of the atorvastatin focus network. Table S8. Pathway enrichment of atorvastatin focus network using PEANuT. Table S9. Pathway enrichment of the atorvastatin focus network using DAVID. Table S10. Pathway enrichment of the FoxA1/A2/A3 – HMGCR neighbor focus network using PEANuT. Table S11. Pathway enrichment of the FoxA1/A2/A3 – HMGCR neighbor focus network using DAVID. Table S12. Node scores for FoxA1 neighbor focus networks of the six patients used for clustering. (XLSX 5773 kb)
Additional file 2: Figure S1. Focus network of all atorvastatin targets and differentially expressed genes after EDISA 3D clustering (see text). Up regulated nodes are shown as upward triangles, colored in red. Down regulated nodes are displayed as downward arrows, colored green. Protein-protein interactions have grey edges; edges of transcription factor gene interactions are colored red. (PDF 489 kb)
Additional file 3: Figure S2. Focus networks of HMGCR and FoxA1/A2/A3 of the six patients. Differential expression of nodes at 12 h is highlighted. Data are shown for patients 62 (a), 65 (b), 67 (c), 79 (d), 80 (e) and 81 (f). Up regulated nodes are shown as upward triangles, colored in red. Down regulated nodes are displayed as downward arrows, colored green. The Fox transcription factors are shown in orange, as is the HMGCR. Protein-protein interactions have grey edges; edges of transcription factor gene interactions are colored red. (PDF 160 kb)
Additional file 4: Figure S3.
(a) Clustering of patients based on the FoxA1 neighbor focus network shown in Fig. 4, as well as the probe ID data from all patients at 24 h (b), data were taken from Additional file 1: Table S1. (PDF 75 kb)
Garmhausen, M., Hofmann, F., Senderov, V. et al. Virtual pathway explorer (viPEr) and pathway enrichment analysis tool (PEANuT): creating and analyzing focus networks to identify cross-talk between molecules and pathways. BMC Genomics 16, 790 (2015). https://doi.org/10.1186/s12864-015-2017-z
Focus network
Disease state
Shortest path algorithm
Node neighborhood
Pathway enrichment
Initial claims and other COVID-19 shocks
Back in June of 2020, I posted an estimate of the future path of initial claims [1] on Twitter:
While the rate of improvement was overestimated, it captured the qualitative behavior quite well:
Being able to predict the qualitative behavior of the time series in the future is pretty good confirming evidence for a hypothesis, not least because there's no way you could have had access to data in the future without travelling through time. The underlying concept was that the rate of improvement after the initial spike would gradually fall back to the long term equilibrium (logarithmic rate) of about −0.1/y (which shows up as the line that is almost at zero):
The hypothesis is that while the initial part of the non-equilibrium shock was a sharp spike, there is an underlying component that is a more typical, more gradual, shock. One way to visualize it is in the unemployment rate via "core" unemployment (per Jed Kolko):
Here's a cartoon version. In the current recession, we're seeing something that hasn't been that apparent (or at least as rapid) in the data [2]. There's the normal recession (solid line) as well as a sharp spike (dashed):
Instead of the usual derivative that's a single (approximately Gaussian) shock (solid line), we have a more complex structure with a smoothly falling return to the usual dynamic equilibrium (here exaggerated to −0.2/y so it looks different from zero):
Zooming in on the box in the previous graph, we get the cartoon version of the data above (dashed curve) that eventually asymptotes to the long run dynamic equilibrium rate:
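In equation form, a minimal sketch of this kind of model, assuming logistic shocks added to the log of the series with an equilibrium rate of roughly −0.1/y, is:

$$ \frac{d}{dt}\log y(t) \approx \alpha + \sum_i \frac{a_i}{4 b_i}\,\operatorname{sech}^2\!\left(\frac{t-t_i}{2 b_i}\right), \qquad \alpha \approx -0.1\,\mathrm{y}^{-1} $$

Each term in the sum is the derivative of a logistic step of size a_i and width b_i centered at t_i; the COVID-19 spike corresponds to a very narrow shock sitting on top of a broader, more typical recession shock.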
Since we haven't had a shock of this type before in the available data with mass temporary layoffs, it's at least not entirely problematic to suggest an ad hoc model like this one. The underlying "evaporation" of the temporary shock information is based on the entropic shocks that appear in the stock market (including for this exact same COVID-19 event as well as the December 2018 Fed rate hike):
[1] This is not what would be the technically correct model in terms of dynamic equilibrium, but over this short time scale the civilian labor force has been roughly constant since June. It doesn't really change the shape except for the initial slope which is lower because it is undersampled using only monthly CLF measurements instead of weekly ICSA measurements:
The "real" model isn't that different:
[2] It's possible the "step response" in the unemployment rate in the 1950s and 60s is a similar effect, but nowhere near as rapid.
Posted by Jason Smith at 1:33 PM
Qualitative economics done right, part N [1]
I seem to have involved myself in a Twitter dispute with economist Roger Farmer about what it means to make macro models — or more broadly "the nature of the scientific enterprise", as Roger the economist kindly tried to explain to me, a physicist. Unfortunately, due to his prolific use of the quote tweet the argument is likely impossible to follow. You can see the various threads via this search.
This started when I noted that Roger Farmer's claims about unemployment, in particular in the supporting papers he cites, Farmer (2011) and Farmer (2015), are inconsistent with the long run qualitative behavior of the unemployment rate data. That is to say, the models are not consistent with the empirical fact that the unemployment rate between recessions falls at a logarithmic rate of about −0.09/y in the US.
Let me say right off that I actually appreciate Roger Farmer's work — he does seem to think outside the box compared to the DSGE approach to macro that has taken over the field.
I am going to structure this summary in a series of claims that I am not making because it seems many people have confused requiring qualitative agreement with data with precise measurements of the electron magnetic moment.
I am not saying models with RMS error ε ≥ x must be rejected.
The funniest part about this is that the figure I use to show the lack of qualitative agreement of Roger's model actually shows the DIEM has worse RMS error over the range of the data Roger shows in his graph [2].
The thing is it's easy to get low RMS error on past data simply by adding parameters to a fit. This does not necessarily work with projected data, but in general more parameters often yield a better fit to past data and sometimes a better short run projection.
However, my original claim that started this off was that his model based on shocks to the stock market in the supporting papers was "disconnected from the long run empirical behavior of the unemployment rate". It's true that if you take the shocks to the unemployment rate and add the dynamic equilibrium of the S&P 500 model, you get a short run correlation that lasts from 1998 to about 2010:
This correlation around the 2008 recession is pointed out in Farmer (2011) Figure 2. However, you only have to go back to the early 90s recession to get a counterexample to the idea that shocks to the S&P 500 match up with shocks to the unemployment rate.
Second, there is also no particular empirical evidence that the unemployment rate will flatten out at any particular level (be it the natural rate in neoclassical models, or, in Farmer's models, a rate based on asset prices). Third, Farmer's models do not show log-linear decline between recession shocks.
It is these three basic empirical facts about the unemployment rate that I was referencing when I made my claim in that initial tweet. Even if the RMS error is bad, a model of the unemployment rate is at least qualitatively consistent with the data if 1) the shocks are not entirely dependent on the stock market, 2) the rate does not flatten out at any level except possibly u = 0, or 3) shows an average log-linear decline of −0.09/y between recessions (a fact that was called out in a recent NBER paper, BYDHTTMWFI).
What I am saying is that Roger's models are not qualitatively consistent with the data — think a model of gravity where things fall up — and should be rejected on those grounds. The unemployment rate literally levitates in his models. Additionally there exist models with lower RMS error and qualitative agreement with the data; the existence of those models should give us pause when considering Roger's models.
I am not calling for Roger Farmer to stop working on his models.
It's fine by me if he wants to give talks, write blog posts about his model, or think about improving it in the privacy of his own research notebook. I would prefer that he grapple with the fact that the models are not qualitatively consistent with the data instead of getting defensive and saying that they don't have to pass that low bar. I believe models that are not qualitatively consistent with the data should not be used for policy, though — and that is one of Roger's aims.
It's true that a lot of ideas start out kind of wrong — it's unrealistic to expect a model to match the data exactly right out of the gate. And that's fine! I've had a ton of bad ideas myself! But there is no reason we should expect half-baked ideas lacking qualitative agreement with the data to be taken seriously in the larger marketplace of ideas.
So many comments on the feed were about working towards an insight or the models being just an initial idea that could be improved. Most of us don't get a chance to put even really good ideas in front of a lot of people, so why should we accept something that's apparently not ready for prime time just because it's from a tenured professor? I have a PhD and a lot of garbage models of economic systems that aren't even qualitatively accurate in my Mathematica notebook directory — should we consider all of those? In any case, "it may lead to future progress" is not a reason to say "oh, fine then" to models that aren't qualitatively consistent with empirical data.
What I am saying is that we should set the bar higher for what we consider useful models in macro than "it might qualitatively agree with data one day". We can leave discussion of those models out of journals and policy recommendations.
I am not saying we should apply the standards of physics to economics.
This goes along with people saying I shouldn't be applying "Popperian rejection" to economic models. First off, this misconstrues Popper who was talking about falsifiability as a condition for scientific theories as opposed to pseudoscience. Roger's models are falsifiable — I don't think they are pseudoscience. However, Popper didn't really say much about models being falsified despite the fact that lots of people think he did.
General Relativity is a better model than Newtonian gravity, but both models are falsifiable. We consider Newtonian gravity to be incorrect for strong gravitational fields, precise enough measurements in weak fields, or velocities close to the speed of light. We still use good old Newton all the time — I did just the other day for an orbital dynamics question at work. I fully understand the difference between a model that is an approximation and one that is supposed to be a precise representation of reality.
Popper, however, did not say anything about models that don't qualitatively agree with the data. That's because in most of science, such models are thrown out before they are ever published. Economics, especially macro, operates in a different mode where I guess they consider models that look nothing like the data. Ok, I know the time series data is an exponentially increasing amplitude sine wave and this model says it's a straight line, but hear me out!
If the standards for agreement with the data are below qualitative agreement with the data, then there's really no reason to throw out Steve Keen's models [3]. But that's the problem — there are models that agree with the data! David Andolfatto's simple model matches the data fairly well qualitatively! (It gets points 1 and 3 above and could be set to u* = 0 to get 2.) The existence of those models should set the bar for the level of empirical accuracy we should accept in macro models.
What I am saying is that there are existing models that more precisely match the data — and that is the standard I am using. It's not physics, but rather the performance other economic models. If you have a model that has worse RMS error, but has better qualitative agreement with the data, then that's ok to bring to the table. Overall, there seems to be far too much garbage that is allowed in macro because, well, there apparently wouldn't be any macro papers at all if some basic standards were enforced. When I say these models that aren't even qualitatively consistent with the data should be thrown out, I'm not talking about Popperian rejection, I am talking about desk rejection.
One last point ... what is the use of a model that doesn't qualitatively agree with data?
I didn't have a way to phrase this one as something I'm not saying. I literally cannot fathom how you can extract anything useful from a model that does not qualitatively agree with the data. This is lowest bar I can think of.
Yes this model looks nothing like the data but it's useful because I can use it to understand things based on ...
That ellipsis is where I cannot complete the sentence. Based on gut feelings? Based on divine revelation? If the model looks nothing like the data, what is anything derived from it derived from? The pure mathematical beauty of its construction?
It's like someone saying "Here's my model of a car!" and they show you a cat. Yes, this cat isn't qualitatively consistent with a car, but it's a useful first step in understanding a car. The cat gives me insights into how the car works. And you really shouldn't be using Popperian rejection of the cat model of a car because automobile engineering is not the same as physics. Making a detailed car model is unnecessary for figuring out how it works — a cat is perfectly acceptable. Eventually, this cat model will be improved and will get to a point where it matches car data well. The cat model also allows me to make repair recommendations for my car. You see the cat has a front and a back end, where the front has two things that match up with the car headlights, and yes the fuel goes in the front of the cat while it goes in the side of a car but that's at least qualitatively similar ...
Update 7 December 2020
Also realized Roger has made a major stats error here:
Jason Jason. @infotranecon I really don't know where to start. 1. The unemployment rate is I(1) to a first approximation. 2. The S&P measured in real units is I(1) to a first approximation. The two series are cointegrated. The S&P Granger causes the unemployment rate.
Here's Dave Giles, econometrician emeritus extraordinaire:
If two time series, X and Y, are cointegrated, there must exist Granger causality either from X to Y, or from Y to X, or in both directions.
[1] The title is a reference to my old series that led, among other places, to realizing Wynne Godley has been maligned by people who ostensibly support him, and that Dirk Bezemer fabricated quotes in his widely cited paper.
[2] I do find it problematic that Roger not only cuts off the data early compared to data that was available at the time Farmer (2015) was published, but also cuts off data that was available at the time that would appear in the domain of his graph, data that emphasizes that the model does not qualitatively match the data. He also uses quarterly unemployment data, which further reduces the disagreement.
[3] I mean c'mon!
Posted by Jason Smith at 10:51 PM
The four failure modes of Enlightenment values
I don't write about process as much these days — in part because I'm no longer working my previous project that had me effectively commuting across the country every month to the middle of nowhere, and in part because I'm now working a much bigger project that barely leaves me enough time to update even the existing dynamic information equilibrium model forecasts. But recently there seems to be an upswing in calls for civility, declarations of incivility, and long sighs about how to criticize the "correct" way. I saw George Mason economist Peter Boettke tweet this out the other day; it includes a list of "rules" for how to criticize:
How to compose a successful critical commentary:
1. You should attempt to re-express your target's position so clearly, vividly, and fairly that your target says, "Thanks, I wish I'd thought of putting it that way."
2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).
3. You should mention anything you have learned from your target.
4. Only then are you permitted to say so much as a word of rebuttal or criticism.
It seems fitting that Boettke would tweet this out given his defense of the racist economist/public choice theorist James Buchanan. It's pure "Enlightenment" rationalism — the same Enlightenment that gave us many advances in science, but also racism and eugenics. These rules are in general a great way to go about criticism — but if and only if certain norms are maintained. If these norms aren't maintained, these rules inculcate us with a vulnerability to what I've called viruses of the Enlightenment. To put it in the terms of my job: this process has not been subjected to failure mode and effects analysis (FMEA) and risk management.
This isn't intended to be a historical analysis of what the "Enlightenment" was, how it came to be, or its purpose, but rather how the rational argument process aspect is used — and misused — in discourse today. I've identified a few failure modes — the vulnerabilities of "Enlightenment" values.
Failure mode 1: Morally repugnant positions
I'm under the impression that like bioethics, medical ethics, or scientific ethics, someone needs to establish an interdisciplinary ethics of rational thought. There are still occasions when science seems to think the pursuit of knowledge is an aim higher than any human ethics, and failures run the gamut from the recent protests against building another telescope on Mauna Kea (part of a longer series of protests) to unethical human experiments.
Rationalism seems to continue to hold this view — that anything should be up for discussion. But we've long since discovered that science can't just experiment on people without considering the ethics, so why should we believe rationalism can just say whatever it wants?
Unfortunately, since we are humans and not rational robots, the discussion of some ideas might itself spread or exacerbate morally repugnant beliefs. This is contrary to the stated purpose of "Enlightenment values" — open discussion that leads to the "best" ideas winning out in the "marketplace of ideas". And if that direct causality breaks (open discussion → better ideas), the rationale for open discussion is weakened [0]. Simply repeating a lie or conspiracy theory is known to strengthen the belief in it — in part from the familiarity heuristic. And we know that simply changing the framing of a question on polls can change people's agreement or disagreement. Right wing publications try to launder their ideas by simply getting mainstream publications to acknowledge them, pulling them out of the "conservative ecosystem" — as Steve Bannon has specifically talked about (see here).
Rule #1 fails to acknowledge our humanity. Simply repeating a morally repugnant idea can help spread it, and at the very least it requires the critic to carry water for a morally repugnant idea. I cannot be required to restate someone's position that favors racism because that requires giving racism my voice, and immorally helping the cause of racism.
For example, Boettke's defense of Buchanan requires him to carry water for Buchanan. If we consider the possibility that Nancy MacLean's claims of a right-wing conspiracy to undermine democracy and promote segregation are true (I am not saying they are, and people I respect — e.g. Henry Farrell — strongly disagree with that interpretation of the evidence), then carrying that water should be held to a level of ethical scrutiny a bit higher than, say, discussing the differences between Bayesian and frequentist interpretations of probability.
This is not to say we shouldn't talk about Buchanan or racism. It's not like we don't experiment with human subjects (e.g. clinical trials). It's just that when we do, there are various ethical questions that need to be formally addressed from informed consent to what we plan to learn from that experiment. A human experiment where we ask the question about whether humans feel pain from being punched in the face is not ethical even if we have consent from the subjects because the likelihood of learning something from it is almost zero. "I'm just asking questions" here is not a persuasive ethical argument.
This is in part why I think shutting down racists from speaking on college campuses isn't problematic in any way. Would we authorize a human experiment where we engage in a campaign of intimidation of minorities just to measure the effects? We already know about racist thought — it's not like these are new ideas. They're already widely discussed — that's how students on campuses know what to protest. And in terms of ethical controls, we might well consider that the moral risk managed solution consistent with intellectual discourse is to have these "speakers" write their "ideas" down, have the forum led by someone who is not a famous racist, or possibly is even opposed to the "ideas" [3].
Failure mode 2: Over-representation of the elite
I criticized Roger Farmer's acceptance of Hayek's interpretation that prices contain information on Twitter a year or so ago (for more detail on my take, you can check out my Evonomics article). Farmer subsequently unfollowed me on Twitter which likely decreases the engagement I get through Twitter's algorithms.
Now my point here is not that one is obligated to listen to every crackpot (such as myself) and engage with their "ideas". It's that we cannot feasibly exist in a world where all expression is heard and responded to — regardless of how misguided or uninformed. And who would want that?
But it does mean participation via the (purportedly) egalitarian Enlightenment ideals of "free speech" and "free expression" in the marketplace of ideas is already limited. And the presumption of "equals" engaging in mutual criticism behind Boettke's "rules" artificially limits the bounds of criticism further. Already elites pick and choose the criticism they engage with — giving them an additional power of "permission" distorts the power balance even more.
Unfortunately, public speech and public attention end up being rationed the same way most scarce resources are rationed — by money. The elite gatekeepers at major publications push the opinions and findings of their elite comrades through the soda straw of public attention. We hear the opinions of millionaires and billionaires as well as people who find themselves in circles where they occasionally encounter billionaires far more often than is academically efficient. Bloomberg and Pinker talking about free speech. MMT. Charles Murray.
Bloomberg writing at bloomberg.com is a particularly egregious example of breaking the egalitarian norm. Bloomberg's undergraduate education is in electrical engineering from the 1960s and he has a business degree from the same era. He has no particular qualifications to judge the quality of discourse, the merits of the freedom of speech, or who should be forced to tolerate right wing intimidation on college campuses. He is in the position he is in because he made a great deal of money which enabled him to take a chance on running for office and becoming mayor of New York.
That said, I don't have particular expertise in this area — but then I don't get to write at bloomberg.com.
As such, "cancelling" the speech of these members of the elite mitigates this bias almost regardless of the actual reason for the cancellation simply because they're over-represented.
More market-oriented people might say having billions of dollars must mean you've done at least something right and therefore could result in being over-represented in the marketplace of ideas. That's an opinion you can argue — in the marketplace of ideas — not implement by fiat. Now this is just my own opinion, but I think having too much money seems to make people less intelligent. Maybe life gets too easy. Maybe you lose people around you that disagree with you because they're dependent on your largess. Lack of intellectual challenge seems to turn your brain to mush in the same way lack of physical activity turns your body to mush. You might have started out pretty sharp, but — whatever the reason — once the cash piles up it seems to take a toll. I mean, have you listened to Elon Musk lately? However, even if you believe having billions of dollars means you have something worthwhile to say, that is not the Enlightenment's egalitarian ethos. King George III had a lot more money than any of the founders of the United States, but it's not like they felt compelled to invite him or his representatives to speak at the signing of the Declaration of Independence.
While everyone has a right to say what they want, that right does not grant everyone a platform. The "illiberal suppression" of speech can be a practical prioritization of speech. "Cancelling" can mitigate systemic biases, enabling a less biased, more genuine discourse. Why should we have to listen to the same garbage arguments over and over again? Even if they aren't garbage, why the repetition? And even if the repetition is valid, why must we have the same people doing the repeating? [1] An objective function optimized for academic discussion should prioritize novel ideas, not the same people rehashing racism, sexism, or even "enlightenment" values for 30 years.
It's true that novelty for novelty's sake creates its own bias in academia — journals are biased towards novel results rather than confirmation of last year's ideas, creating a whole new set of problems. In addition to novel ideas, verifiability and empirical accuracy would also be good heuristics. Expertise or credentials in a particular subject is often a good heuristic for priority, but like the other heuristics it is just that — a heuristic. Knowing when to break with a heuristic is just as valuable as the heuristic itself.
In any case, just assuming elites and experts should be free from criticism unless it meets particular forms of "civility", or that their "ideas" should be granted a platform free from being "cancelled", does not further the spirit of the Enlightenment values that most of us agree on — that what's true or optimal ought to win out in the marketplace of ideas.
Failure mode 3: Rational thought and academic research is not free speech
Something obvious in the norms in Boettke's list is that he appears to recognize rational argument differs from free speech. "Free speech" does not require you to speak in some prescribed manner — that would ipso facto fail to be free speech.
However, the ordinary process by which old ideas die off through rational argument seems to be conflated with suppressing free speech these days. Having your paper on race and IQ rejected for publication because it rehashes the old mistakes and poor data sets is normal rational progress, not the suppression of free speech. "Just asking questions" needs to come to grips with the fact that lots of those questions have been asked before and have lots of answers. Just as we don't need to continuously rehash 19th century aether theory, we don't need to continuously rehash 19th century race science [2].
When shouts of "free speech" are used as a cudgel to force academic discussion of degenerative research programs in Lakatos' sense, it represents a failure mode of "Enlightenment" values and science in general. In order for science and the academy to function, it needs to rid itself of these degenerative research programs regardless of whether rural white people in the United States continue to support them. If these research programs turn out to not be degenerative — well, there's a pretty direct avenue back into being discussed via those new results showing exactly that. Assuming they follow ethical research practices, of course.
Failure mode 4: People don't follow the spirit of the rules
Failure to follow the spirit of these rules tends to be rampant in any "school of thought" that claims to challenge orthodoxy, from race science to Austrian economics. Feynman's famous "cargo cult science" commencement address is a paean to the spirit of the rules of science (and "Enlightenment" values generally), but unlike Boettke's rules for others, Feynman asks fledgling scientists to direct the rules inward — "The first principle is that you must not fool yourself — and you are the easiest person to fool."
This failure mode is far less intense than discussing racism, unethical human experiments or plutocracy, but is far more common. Certainly, the "straw man" application of Rule #1 falls into this. But one of the most frustrating is the one many of us feel when engaging with e.g. MMT acolytes — never acknowledging that you have "re-express[ed] your target's position ... clearly, vividly, and fairly."
Randall Wray or William Mitchell (e.g.) simply never acknowledge any criticism is valid or accurate. Criticism is dismissed as ad hominem attacks instead of being acknowledged. If "successful" critical commentary (per the "rules") requires the subjects to grant you permission, any criticism can be shut down by a claim that the critic doesn't know what they are talking about.
This failure to follow the spirit of the rules appears in numerous ways, from claims that simply expressing a counterargument isn't civil discourse, to the failure of someone espousing racist views to admit that those views are actually racist [4], to general hypocrisy. However, the end effect is that failure to follow the spirit of the rules is an attempt to give the speaker the power to decide which facts or counterarguments are allowed and which aren't. That's not really how "Enlightenment values" are supposed to work.
Being granted permission by the subject of criticism is also generally unnecessary to actual progress. Humans — especially established public figures — rarely listen to criticism. Upton Sinclair, Bertrand Russell, and Max Planck captured different dimensions of this (a rationale, a mechanism, and a real course of progress) in pithy quotes (respectively):
It is difficult to get a man to understand something, when his salary depends upon his not understanding it!
If a man is offered a fact which goes against his instincts, he will scrutinize it closely, and unless the evidence is overwhelming, he will refuse to believe it. If, on the other hand, he is offered something which affords a reason for acting in accordance to his instincts, he will accept it even on the slightest evidence.
A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.
This is how the world has always been. Your audience for your criticism is never the subjects of the criticism, but rather the next generation. Explaining your subject's position before criticizing it is done as part of Feynman's "leaning over backward" — for yourself — not legitimacy.
Other failure modes
I wanted to collect my thoughts on free speech, "cancelling", and the terrible state of "the discourse" in one essay. This list is not meant to be exhaustive, and I may expand it in the future when I have new examples that don't fit in the previous four categories. For example, you might think that academic journals are a form of intellectual gatekeeping — and I'd agree — but I believe that falls under failure mode 2: the over-representation of the elite, not a separate category. There are also genuine workarounds in that case that everyone uses (arXiv, SSRN). You may also disagree with the particular choice of basis — and I'm certain another orthonormal set of failure modes could span the same failure effect space.
Also, because I talk about MMT along with Public Choice and racism, it doesn't mean I equate them. There are similarities (both get a leg up through the support of billionaires), but I am trying to find examples from across a broad spectrum of politics and political economy. There are major failures and minor. However, I think the examples I've chosen most clearly illustrate these failure modes.
I have been sitting on this essay for nearly a year. I was motivated to action by a tweet from Martin Kulldorff, a professor at the Harvard Medical School, about how Scott Atlas was "censored" [5] for spreading misinformation about the efficacy of various coronavirus mitigations (from masks to lockdowns). Atlas is on the current administration's "Coronavirus Task Force" and a fellow at the Hoover Institution — a front for right wing views funded by billionaires. There is literally no universe in which this is a true egalitarian "Enlightenment" discussion — from the elite over-representation with Harvard and the billionaires at Hoover to the lack of disclosure of conflicts of interest (failure modes 2 and 4, respectively). That far too many people think Atlas being "censored" is against the spirit of the Enlightenment is exactly how it can fail.
[0] This is similar to the argument against markets as mechanisms for knowledge discovery — information leakage in the causal mechanism breaks it.
[1] More on this here. Why do we have to hear specifically Charles Murray talk about race and IQ? (TL;DR because it's not about ideas, but rather signalling and authority.)
[2] Personally, I think IQ tests should include a true/false question that asks if you think there's nothing wrong with believing the racial or ethnic group to which you belong has on average a higher IQ than others. Answering "true" would indicate you're probably bad at understanding self-bias that is critical to scientific inquiry and should reduce your score by at least 1/2. As George Bernard Shaw said, "Patriotism is your conviction that this country is superior to all other countries because you were born in it." Racism is at its heart your conviction that your race is superior to all other races because you were born into it — the rest is confirmation bias.
[3] In Star Trek: The Next Generation "Measure of a Man" (S:2 E:9), Commander Riker is tasked with prosecuting the idea that the android Lt. Commander Data is not a person, but rather Federation property — something with which Riker personally disagrees.
[4] I have never really understood this. Unless you're hopelessly obtuse, you must know if you have racist views. Why would you be upset about other people identifying them as such? The typical argument being supported by racist views is that racism is correct and right! A racist (who happens to be white by pure coincidence) who believes that other non-white people have lower IQs through some genetic effect is trying to support racism. I have so much more respect for racists, like a pudgy white British man who appears in the beginning of The Filth and the Fury (2000) who openly admits he is racist. That's the Enlightenment!
[5] In no way is this censorship and calling it that is risible idiocy. The tweets were removed on Twitter, a private company, not by the US government. And Atlas still has access to multiple platforms — including amplification by elite Harvard professors, which is what is actually happening.
Dynamic information equilibrium and COVID-19
Since I've gotten questions, I thought I'd put together a brief explainer on the Dynamic Information Equilibrium Model (DIEM) and its application to the path of COVID-19.
I wrote a preprint on the DIEM a couple years ago (posted at SSRN), and gave a talk about the approach at the UW economics department (see here). The primary application was to labor markets, specifically the unemployment rate. However, the model has many other applications in economics (and the original information equilibrium approach has applications to physics). So how did I end up applying this model to COVID-19? It started from laziness.
Back in April, I was looking at the various models of COVID-19 out there, in particular the IHME model. I wanted to compare the performance to the data, but instead of coding it up myself I took a screenshot and digitized the data. Digitizing adds error and digitizing exponentially falling functions creates all kinds of problems, so I instead fit the IHME forecasts with a DIEM model since I had the code readily available.
It turned out to do a decent job of describing the IHME models, but additionally when there were discrepancies with the observed data it turned out the DIEM worked better. Thinking about the foundations of the DIEM, the reason it worked became clear.
The DIEM is an application of "information equilibrium" — the idea that one process $A$ can be the source of information for another process $B$ such that it takes the same number of bits of (information theory) information to specify $A$ as it does to specify $B$. In a sense, if $A$ is in information equilibrium with $B$ then the two are informationally equivalent. Information equilibrium constrains what a process that matches e.g. $A$ with $B$ can look like.
That's all very abstract, but in economics we have demand for a good being matched with supply (creating a transaction) or job openings being matched with unemployed people (creating a hire) — in equilibrium. In the case of COVID-19, we have virus + healthy person $\rightarrow$ sick person.
Like any communication channel transferring information, these matches can fail to happen. Voices are garbled on a cell phone call, causing a failure of the information specifying the sound waves going into the speaker's phone to be transferred completely to the sound waves coming out of the listener's phone. Information equilibrium is something of an idealized state that can be interrupted by non-equilibrium. It may seem vacuous to say sometimes you have equilibrium and sometimes you have non-equilibrium, but the information theory underlying it gives us some useful handles (e.g. failures to fully sample the underlying space, correlations, or other changes in information entropy).
Dynamic information equilibrium asks what information equilibrium can tell us when the processes $A$ and $B$ are growth processes.
$$
\begin{aligned}
A &\sim e^{a t}\\
B &\sim e^{b t}
\end{aligned}
$$
Just because they are "growth" processes, that doesn't mean they are growing — they could be shrinking or $A$ could be growing and $B$ could be shrinking.
If you go to the paper you can get the details of the mathematics (including how this generalizes to ensembles of processes), but the key result is that information equilibrium requires
$$
\frac{d}{dt} \log \frac{A}{B} \simeq (k - 1)\, b \equiv \alpha
$$
where $k$ measures the relative information content of events in process $A$ versus events in process $B$. What this says is that if you look at the data on a log plot versus time, it will consist mostly of stretches where the rate of growth or decline of the data is constant, so the data follows a straight line on that plot (i.e. exponential growth or decay with a constant log-linear slope).
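As a quick check on that result (my own two-line restatement, using the standard information equilibrium condition $dA/dB = k\, A/B$, which integrates to $A \sim B^{k}$ and hence gives $a = k b$ for the growth processes above):

$$
\log \frac{A}{B} \sim (a - b)\, t
\qquad\Rightarrow\qquad
\frac{d}{dt} \log \frac{A}{B} = a - b = (k - 1)\, b \equiv \alpha
$$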
Mostly. What makes this DIEM a model and not a theory is that there's an assumption about what happens in non-equilibrium. In the original application of the model to the unemployment rate, there was an assumption that the straight line isn't interrupted by non-equilibrium too much — that non-equilibrium events are sparse in the time series data. If this wasn't true, then it'd be impossible to measure that $\alpha$ and your model of non-equilibrium would be everything. In labor markets, recessions are the sparse non-equilibrium events in the unemployment rate and the recovery is the equilibrium:
Adding in a logistic step function to handle the recession shocks gives us a description of the unemployment rate (and other economic variables) over time:
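In case it helps to see the functional form behind that description, here is a minimal sketch in Python. It is my own paraphrase of the ansatz with made-up parameter values, not a fit to the actual unemployment data.

```python
import numpy as np

def diem_log_path(t, alpha, shocks, log_y0=0.0):
    """Dynamic information equilibrium ansatz: log y(t) is a constant-slope
    line (slope alpha) plus one logistic step per non-equilibrium shock.
    Each shock is a tuple (a0, t0, w): amplitude, center time, width."""
    log_y = log_y0 + alpha * t
    for a0, t0, w in shocks:
        log_y += a0 / (1.0 + np.exp(-(t - t0) / w))
    return log_y

# Toy unemployment-rate-like path: ~8%/yr equilibrium decline interrupted
# by two recession-like shocks (all numbers invented for illustration).
t = np.arange(0, 240)                                  # months
u = 4.0 * np.exp(diem_log_path(t, -0.08 / 12,
                               shocks=[(0.6, 60.0, 3.0), (0.8, 150.0, 2.0)]))
```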
It turns out that the DIEM is a really good model of the data for COVID-19 cases and deaths, and the forecast from April for the path of the outbreak in the US was remarkably accurate — at least until the 2nd surge in the most recent data (i.e. a non-equilibrium event):
The model works well for most countries, for example here are Italy and the UK (click to enlarge):
The fact that we can't really see that 2nd surge until it starts is due to the model being too simple to predict non-equilibrium events. It can, however, be used to see when a non-equilibrium event is getting started and then monitor its progress. For example, back on May 20th I was predicting the beginning of a 2nd surge in Florida based on the DIEM model of cases there (and I later added a 2nd non-equilibrium shock, which can be handled using e.g. this algorithm):
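The linked algorithm is more careful than this, but the basic monitoring idea can be sketched as follows: fit the equilibrium slope to data before a candidate window, then flag a possible new shock when recent residuals sit persistently outside the fit's noise band. The function and threshold names here are mine.

```python
import numpy as np

def possible_new_shock(t, y, split, z_thresh=3.0, run=5):
    """Fit log y on t[:split] with a straight line (the dynamic equilibrium)
    and return True if the last `run` points after `split` all deviate from
    that line by more than z_thresh standard deviations of the fit residuals."""
    t = np.asarray(t, dtype=float)
    log_y = np.log(np.asarray(y, dtype=float))
    slope, intercept = np.polyfit(t[:split], log_y[:split], 1)
    resid = log_y - (slope * t + intercept)
    sigma = resid[:split].std()
    return bool(np.all(np.abs(resid[split:][-run:]) > z_thresh * sigma))
```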
Another limitation of the model is that it has explicit assumptions that the number of events $n$ you're seeing is large $n \gg 1$. This means the model does not work well when there are just a few cases or deaths and for the initial onset of the outbreak. For example, here is South Korea:
Related to the $n \gg 1$ assumption, we basically start an outbreak at $t_{0}$ in the midst of a non-equilibrium shock with dynamic equilibrium valid for $t \gt t_{0}$. This is effectively treated in the model as if a previous outbreak had recently ended (so that dynamic equilibrium is also valid for $t \lt t_{0}$). The model that would deal with the initial outbreak would almost certainly have to incorporate specifics of the individual virus and the networks it travels in that is beyond the scope of information equilibrium — itself a "shortcut" in describing complex systems.
Other observations
One of the things the model predicts is that after a 2nd (or 3rd) surge, the data should return to the previous log-linear path unless something has changed. This appears to be happening for several regions — Germany and King County, WA for example:
It remains to be seen if this holds up. In Sweden, the rate of decline after the 2nd surge in cases seems to have improved and is now comparable to Germany's.
Previously, Sweden's rate of decline in cases of $\alpha \simeq$ 2% per day was approximately the same as most of the US — about half the rate of 4-5% apparent in most of Europe as well as in NY state (dominated by counts from NYC). Did people in Sweden change behavior in the face of that 2nd surge? It's an open question. [See update 25 July 2020 below.]
Another thing we need to keep in mind is that these are reported cases and deaths. With testing increasing in many countries, more and more cases are discovered. This results in an obvious difference between the rate of decline for cases in the US versus that for deaths:
Other countries have much more similar rates of decline for the two measures. For the US, this means the rate of decline for cases is somewhat lower than it would be if testing were widely available. That is to say observed $\alpha_{US} \simeq \alpha_{US}^{\text{cases}} + \alpha_{US}^{\text{testing}}$. It also means the observed rate of decline for cases must decrease at some point in the future (e.g. once testing far outpaces transmission). As it is, the "case fatality rate" (CFR) appears to be heading to zero:
This theoretically should flatten out at some point at the true population CFR (although it's complicated since more deaths can occur during a surge because hospitals are at capacity). Estimated CFRs are in the 0.1% order of magnitude so this point is likely far in the future for the US.
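To spell out the arithmetic behind that declining ratio (my notation, not the post's): if reported deaths and cases each follow their own dynamic equilibrium decline rate, the measured CFR is their ratio and is itself log-linear,

$$
\text{CFR}(t) = \frac{D(t)}{C(t)} \sim \frac{e^{-\alpha_{D} t}}{e^{-\alpha_{C} t}} = e^{-(\alpha_{D} - \alpha_{C})\, t},
$$

so as long as deaths decline faster than reported cases ($\alpha_{D} > \alpha_{C}$, with testing propping up the case counts), the measured CFR keeps falling until one of those rates changes.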
The DIEM is an incredibly simple model. In the senses above — too simple. However, it has also proven useful for estimating the long run path of COVID-19 in several regions. In the places it applies, a given pandemic can be seen as an instance of a universal process with its specific parameters aggregating the effects of multiple aspects of society from policy to social networks to details of the specific virus.
Overall, we should keep in mind that the combination of policy, epidemiology, and social behavior is a social system. There might be empirical regularities from time to time, but humans can always change their behavior and thus change outcomes.
Update 21 July 2020
Minor edits and updated Sweden, Germany and US ratio graphs with more recent data.
The assumption of sparseness mentioned above may have failed us in the estimation of the dynamic equilibrium rate for Sweden — the first and second surges were too close together to properly measure it. It would resolve some inconsistencies (i.e. Sweden seeming to have a higher rate than the rest of Europe before the 2nd surge, Sweden oddly shifting to a rate more consistent with the rest of Europe after the 2nd surge). Here is the model using the most recent data (as of 11am PDT) to estimate the dynamic equilibrium $\alpha$ compared to the original fit (click or tap to enlarge):
Another way to visualize multiple DIEMs is via what I call "seismograms", which display the temporal information about the parameters (the shock width and the shock timing) on a timeline like this for several US states (click or tap to enlarge — the blue is only to differentiate the US aggregate, not direction of shock as in other uses):
The translation is fairly straightforward — a longer shock is represented by a wider band placed at the center (in time) of a non-equilibrium shock (above red-ish, below in gray). In the link above, you can add amplitude/magnitude information by scaling the color but this version just emphasizes time. Here's a graphical version of how these translate from my book:
Update 9 September 2020
The "return to equilibrium" has turned out to be remarkably accurate for the US:
A 3rd surge may be getting started in the US (associated with schools opening for the new year) — zoomed in on the gray box in the previous graph:
In Sweden, there is a 3rd surge ending ...
Also, the predicted path of deaths in the US using cases turned out to be fairly accurate with only the lag being uncertain in advance:
The ratio of deaths to cases for the US has returned to the "equilibrium" of a decline due to a likely combination of effects from demographic to increasing testing (the latter seeming like the primary contribution):
Update 2 October 2020
Another predictive success of the DIEM for COVID-19 — calling a 3rd surge in Florida on 9/13:
And its subsequent appearance:
International data from European CDC
https://www.ecdc.europa.eu/en/geographical-distribution-2019-ncov-cases
US state data from the COVID Tracking Project
https://covidtracking.com/
Economic data from FRED and Atlanta Fed Wage Growth tracker
https://www.frbatlanta.org/chcs/wage-growth-tracker.aspx?panel=1
That which we call a model by any other name would describe as well ... or not
Shakespeare, I think.
I'm in the process of trying to distract myself from obsessively modeling the COVID-19 outbreak, so I thought I'd write a bit about language in technical fields.
David Andolfatto didn't think this twitter thread was very illuminating, but at its heart is something that's a problem in economics in general — and not just macroeconomics. It's certainly a problem in economics communication, but I also believe it's a kind of professional economics version of "grade inflation" where "hypotheses" are inflated into "theorems" and "ideas" [1] are inflated into "models".
Now every economist I've ever met or interacted with is super smart, so I don't mean "grade inflation" in the sense that economists aren't actually good enough. I mean it in the sense that I think economics as a field feels that it's made up of smart people so it should have a few "theorems" and "models" in the bag instead of only "hypotheses" and "ideas" — like how students who got into Harvard feel like they deserve A's because they got into Harvard. Economics has been around for centuries, so shouldn't there be some hard won truths worthy of the term "theorem"?
This was triggered by his claim that Ricardian equivalence is a theorem (made again here). And I guess it is — in economics. He actually asked what definitions were being used for "model" and "theorem" at one point, and I responded (in the manner of an undergrad starting a philosophy essay [2]):
the·o·rem
a general proposition not self-evident but proved by a chain of reasoning; a truth established by means of accepted truths
mod·el
a system of postulates, data, and inferences presented as a mathematical description of an entity or state of affairs
I emphasized those last clauses with asterisks in the original tweet (bolded them here) because they are important aspects that economics seems to either leave off or claim very loosely. No other field (as far as I know) uses "model" and "theorem" as loosely as economics does.
The Pythagorean theorem is established from Euclid's axioms (including the parallels axiom, which is why it's only valid in Euclidean space) that include things like "all right angles are equal to each other". Ricardian equivalence (per e.g. Barro) is instead based on axioms (assumptions) like "people will save in anticipation of a hypothetical future tax increase". This is not an accepted truth, therefore Ricardian equivalence so proven is not a theorem. It's a hypothesis.
You might argue that Ricardian equivalence as shown by Barro (1974) is a logical mathematical deduction from a series of axioms — just like the Pythagorean theorem — making it also a theorem. And I might be able to meet you halfway on that if Barro had just written e.g.:
$$
A_{1}^{y} + A_{0}^{o} = c_{1}^{o} + (1 - r) A_{1}^{o}
$$
and proceeded to make a bunch of mathematical manipulations and definitions — calling it "an algebraic theorem". But he didn't. He also wrote:
Using the letter $c$ to denote consumption, and assuming that consumption and receipt of interest income both occur at the start of the period, the budget equation for a member of generation 1, who is currently old, is [the equation above]. The total resources available are the assets held while young, $A_{1}^{y}$, plus the bequest from the previous generation, $A_{0}^{o}$. The total expenditure is consumption while old, $c_{1}^{o}$, plus the bequest provision, $A_{1}^{o}$, which goes to a member of generation 2, less interest earnings at rate $r$ on this asset holding.
It is this mapping from these real world concepts to the variable names that makes this a Ricardian Equivalence hypothesis, not a theorem, even if that equation was an accepted truth (it is not).
In the Pythagorean theorem, $a$, $b$, and $c$ aren't just nonspecific variables, but are lengths of the sides of a triangle in Euclidean space. I can't just call them apples, bananas, and cantaloupes and say I've derived a relationship between fruit such that apples² + bananas² = cantaloupes² called the Smith-Pythagoras Fruit Euclidean Metric Theorem.
There are real theorems that exist in the real world in the sense I am making — the CPT theorem comes to mind as well as the noisy channel coding theorem. That's what I mean by economists engaging in a little "grade inflation". I seriously doubt any theorems exist in social sciences at all.
The last clause is also important for the definition of "model" — a model describes the real world in some way. The Hodgkin-Huxley model of a neuron firing is an ideal example here. It's not perfect, but it's a) based on a system of postulates (in this case, an approximate electrical circuit equivalent), and b) presented as a mathematical description of a real entity.
Reproduced from Hodgkin and Huxley (1952)
The easiest way to do part b) is to compare with data but you can also compare with pseudo-data [3] or moments (while its performance is lackluster, a DSGE model meets this low bar of being a real "model" as I talk about here and here). *Ahem* — there's also this.
Moment matching itself gets the benefit of "grade inflation" in macro terminology. I'm not saying it's necessarily wrong or problematic — I'm saying a model that matches a few moments is too often inflated to being called "empirically accurate" when it really just means the model has "qualitatively similar statistics".
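To make concrete how low a bar "matches a few moments" is, here is a toy sketch (all names and numbers are mine): two quite different processes can agree on the first two moments while having visibly different dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# Process 1: white noise. Process 2: a persistent AR(1) rescaled to have
# the same unconditional mean and standard deviation.
white = rng.normal(0.0, 1.0, size=n)
ar1 = np.empty(n)
ar1[0] = 0.0
for i in range(1, n):
    ar1[i] = 0.9 * ar1[i - 1] + rng.normal()
ar1 = (ar1 - ar1.mean()) / ar1.std()      # force-match the first two moments

for name, x in [("white noise", white), ("rescaled AR(1)", ar1)]:
    print(f"{name:>15}: mean={x.mean():+.2f}, std={x.std():.2f}")
# Nearly identical means and standard deviations, but the AR(1) has strong
# lag-1 autocorrelation and the white noise has none, so "matches two
# moments" says very little about whether the dynamics are right.
```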
One of the problems with a lack of concern with describing a real state of affairs is that you can end up with what Paul Pfleiderer called chameleon models — models that are proffered for use in policy, but when someone questions the reality of the assumptions the proponent changes the representation (like a chameleon) to being more of a hypothesis or plausibility argument. You may think using a so-called "model" that isn't ready for prime time can be useful when policy makers need to make decisions, but Pfleiderer put it well in a chart:
But what about toy models? Don't we need those? Sure! But I'm going to say something you're probably going to disagree with — toy models should come after empirically successful theory. I am not referring to a model that matches data to 10-50% accuracy or even just gets the direction of effects right as a toy model — that's a qualitative model. A toy model is something different.
I didn't realize it until writing this, but apparently "toy model" on Wikipedia is a physics-only term. The first line is pretty good:
In the modeling of physics, a toy model is a deliberately simplistic model with many details removed so that it can be used to explain a mechanism concisely.
In grad school, the first discussion of renormalization in my quantum field theory class used a scalar (spin-0) field. At the time, there were no empirically known "fundamental" scalar fields (the Higgs boson was still theoretical) and the only empirically successful uses of renormalization were QED and QCD — both theories with spin-1 gauge bosons (photons or gluons) and spin-½ fermions (electrons or quarks). Those details complicate renormalization (e.g. you need a whole different quantization process to handle non-Abelian QCD). The scalar field theory was a toy model of renormalization of QED — used in a class to teach renormalization to students about to learn QED that had already been shown to be empirically accurate to 10s of decimal places.
The scalar field theory would be horribly inaccurate if you tried to use it to describe the interactions of electrons and photons.
The problem is not that many economic "toy models" are horribly inaccurate, but rather that they don't derive from even qualitatively accurate non-toy models. Often it seems no one even bothers to compare the models (toy or not) to data. It's like that amazing car your friend has been working on for years but never seems to drive — does it run? Does he even know how to fix it?
At this stage, I'm often subjected to all kinds of defenses — economics is social science, economics is too complex, there's too much uncertainty. The first and last of those would be arguments against using mathematical models or deriving theorems at all, which a fortiori makes my point that the words "model" and "theorem" are inflated from their common definition in most technical fields.
David's defense is (as many economists have said) that models and theorems "organize [his] thinking". In the past, my snarky comment on this has been that economists must have really disorganized minds if they need to be organizing their thinking all the time with models. Zing!
But the thing is we have a word for organized thought — idea [4]:
i·de·a
a formulated thought or opinion
But what's in a name? Does it matter if economists call Ricardian equivalence a theorem, a hypothesis, or an idea? Yes — because most humans' exposure to a "theorem" (if any) is the Pythagorean Theorem. People will think that the same import applies to Ricardian Equivalence, but that is false equivalence.
Ricardian Equivalence is nowhere near as useful as the Pythagorean Theorem, to say nothing about how true it is. Ricardian Equivalence may be true in Barro's model — one that has never been compared to actual data or shown to represent any entity or state of affairs. In contrast, you could right now with a ruler, paper, and pencil draw a right triangle with sides of length 3, 4, and 5 inches [5].
I hear the final defense now: But fields should be allowed their own jargon — and not policed by other fields! Who are you fooling?
Well, it turns out economists are fooling people — scientists who take the pronouncements of economics at face value. I write about this in my book (using two examples of E. coli and capuchin monkeys):
We have trusting scientists going along with rational agent descriptions put out there by economists when these rational agent descriptions have little to no empirical evidence in their favor — and even fewer accurate descriptions of a genuine state of affairs. In fact, economics might do well to borrow the evolutionary idea of an ecosystem being the emergent result of agents randomly exploring the state space.
My "to be fair" items so that I'm not just "calling out economics" are "information" in information theory and "theory" in physics. The former is really unhelpful — I know it's information entropy, but people who know that often shorten it to just information and people who don't think information is like knowledge despite the fact that information entropy is maximized for e.g. random strings.
In physics, any quantum field theory Lagrangian is called a "theory" even if it doesn't describe anything in the real world. It is true that the completely made up ones don't get names like quantum electrodynamics but rather "φ⁴ theory". If it were economics, that scalar field φ would get a name like "savings" or "consumption".
[1] I had a hard time coming up with the word here — my first choice was actually "scratch work". Also "concepts" or "musings".
[2] ... at 2am in a 24 hour coffee shop on the Drag in Austin.
[3] "Lattice data" (for QCD) or data generated with VAR models (in the case of DGSE) are examples of pseudo-data.
[4] Per [1], this is also why I thought "concept" would work here:
con·cept
something conceived in the mind
[5] This is actually how ancient Egyptians used to measure right angles — by creating 3-4-5 unit triangles [pdf].
|
CommonCrawl
|
FSBC: fast string-based clustering for HT-SELEX data
Shintaro Kato ORCID: orcid.org/0000-0001-6240-13261,2,
Takayoshi Ono2,
Hirotaka Minagawa1,
Katsunori Horii1,
Ikuo Shiratori1,
Iwao Waga1,
Koichi Ito2 &
Takafumi Aoki2
The combination of systematic evolution of ligands by exponential enrichment (SELEX) and deep sequencing is termed high-throughput (HT)-SELEX, which enables searching for aptamer candidates among a massive number of oligonucleotide sequences. Clustering is an important procedure for identifying sequence groups that contain aptamer candidates for subsequent experimental evaluation. In general, an aptamer includes a specific target binding region, which is necessary for binding to the target molecules. The length of this region varies depending on the target molecules and/or binding styles. Currently available clustering methods for HT-SELEX estimate clusters based only on the similarity of full-length sequences or on motifs of limited length as target binding regions. Hence, a clustering method that considers target binding regions of different lengths is required. Moreover, to handle such huge datasets and to save sequencing cost, a clustering method that computes quickly from a single round of HT-SELEX data, rather than multiple rounds, is also preferred.
We developed fast string-based clustering (FSBC) for HT-SELEX data. FSBC estimates clusters by searching for over-represented strings of various lengths as target binding regions. FSBC was also designed for fast calculation, using search space reduction, from a single round of HT-SELEX data, typically the final round, while accounting for the imbalanced nucleobase ratios produced by the aptamer selection process. The calculation time and clustering accuracy of FSBC were compared with those of four conventional clustering methods, FASTAptamer, AptaCluster, APTANI, and AptaTRACE, using HT-SELEX data (>15 million oligonucleotide sequences). FSBC, AptaCluster, and AptaTRACE could complete the clustering for all sequence data, and FSBC and AptaTRACE achieved higher clustering accuracy. Overall, FSBC showed the highest clustering accuracy and the second fastest calculation speed among the compared methods.
FSBC is applicable to a large HT-SELEX dataset, which can facilitate the accurate identification of groups including aptamer candidates.
FSBC is available at http://www.aoki.ecei.tohoku.ac.jp/fsbc/.
Systematic evolution of ligands by exponential enrichment (SELEX) is an experimental method for identifying aptamers, which bind to specific target molecules with high affinity and specificity [1, 2]. SELEX is an iterative method with multiple rounds for the enrichment of aptamers from an initial random oligonucleotide library. Each round consists of selection with target molecules and amplification with polymerase chain reaction (PCR). Aptamers are RNA or short single-stranded DNA molecules that fold into a three-dimensional structure and bind different types of target molecules such as proteins [3], small molecules [4], toxins [5], ions [6], and cell surfaces [7]. Owing to the wide variety of possible target molecules, aptamers are commonly used for therapeutics [8], clinical diagnostics [9], high-throughput multi-protein measurement [10], imaging [11], and biosensors [12].
Next-generation sequencing (NGS), originally developed for whole-genome sequencing, can be applied to the large oligonucleotide pools obtained by SELEX to acquire an enormous sequence dataset for predicting aptamer candidates. This combined use of SELEX and NGS is referred to as high-throughput SELEX (HT-SELEX). It is not feasible to evaluate the binding affinity of all sequences observed by NGS. In general, given the cost and time required for experimental analysis, only dozens of candidate aptamers are selected from the HT-SELEX data for evaluation; in other words, a short list of candidate aptamers must be derived from the HT-SELEX data. Clustering of HT-SELEX data is an effective way to identify sequence groups that correspond to aptamer candidates or to noise sequences such as non-specific binders, bead binders, and PCR-biased sequences that are preferentially amplified by PCR. Clustering is also useful for identifying different types of aptamers, for example those binding different epitopes, and for understanding the diversity and enrichment of oligonucleotide sequence pools. Figure 1 describes the typical procedure of selecting different types of aptamer candidates from the clustering results for binding verification with experimental analysis.
Procedure from obtaining clustering results to experimental analysis. HT-SELEX sequence data are grouped into different clusters according to cluster ranking. Sequences with a high frequency from high-ranked clusters are synthesized and evaluated for binding affinity with experimental analysis
Several clustering methods have been developed for HT-SELEX data to date, including FASTAptamer [13], AptaCluster [14, 15], APTANI [16], and AptaTRACE [17]. FASTAptamer generates clusters based on the Levenshtein distance (LD), which measures full-length sequence similarity to highly ranked sequences. AptaCluster first roughly groups sequences with locality-sensitive hashing (LSH) and then generates clusters based on short k-mer sequence similarity. APTANI and AptaTRACE identify clusters using short motifs while considering the nucleic acid secondary structure. APTANI estimates motifs from a single round of SELEX data, whereas AptaTRACE estimates motifs by tracing changes in frequency across multiple rounds.
It is often observed that the most enriched sequence shows no binding affinity to the target molecules. Such noise sequences are likely generated by PCR bias (some oligonucleotide molecules are preferentially amplified by PCR) or by non-specific binding to other materials, such as charged beads. Typically, aptamers harbor a specific sequence region that is necessary for binding to the target molecules, whereas noise sequences generally do not include such a target binding region. Hence, identifying the sequence clusters that carry such a target binding region could be an effective approach for choosing aptamer candidates. The length of the target binding region varies according to the target molecules, epitopes, and/or binding styles; thus, target binding regions of different lengths must be estimated. Although AptaTRACE was designed to detect candidate motifs as target binding regions, it limits the length of motifs and requires multiple rounds of SELEX data, which increases the sequencing cost.
To overcome these limitations, we developed the fast string-based clustering (FSBC) method. FSBC estimates clusters by considering over-represented strings of different lengths as target binding regions. FSBC was also designed for fast calculation, reducing the search space of over-represented strings, using only a single round of HT-SELEX data, typically the final round, while accounting for the nucleobase imbalance produced by the aptamer selection process. FSBC, implemented in R [18], is available at http://www.aoki.ecei.tohoku.ac.jp/fsbc/.
Overview of the clustering algorithm
FSBC is composed of two parts: selection of over-represented strings with different lengths and sequence clustering based on the selected over-represented strings. For over-represented string selection, we propose a new score calculation method that considers the imbalanced ratios of nucleobases due to the selection process of SELEX. Figure 2 shows the outline of the FSBC algorithm.
Outline of fast string-based clustering (FSBC). The algorithm includes over-represented string selection and clustering based on selected strings. The upper panel shows the selection of over-represented strings after minimizing the search space and comparing string scores (Z-scores) of pre- and post- extended strings. The lower panel shows the clustering based on selected strings ranked according to the Z∗-score, which is normalized Z-score for strings of different lengths
String score definition
For a set of nucleobases Ω={A,C,G,T(U)}, representing adenine, cytosine, guanine, and thymine/uracil, respectively, let the probability of each nucleobase be \(p_{j}\), \((j \in \Omega)\), let s be a string with length |s|, and let \(n_{s,j}\) be the number of occurrences of nucleobase j in s. The probability \(P_{s,L}\) that an L-mer oligonucleotide includes string s is then described by the following recurrence:
$$ \begin{aligned} P_{s, L} = P_{s, L - 1} + Q \left (1 - P_{s, L - |s|} - \sum\limits_{t \in \mathcal T} q^{-1} \left(P_{s, L - |s| + |t|} - P_{s, L - |s| + |t| - 1}\right) \right), \\ \end{aligned} $$
$$ Q = \prod\limits_{j \in \Omega} p_{j}^{n_{s, j}}, \quad q = \prod\limits_{j \in \Omega} p_{j}^{n_{t, j}}, \quad L \ge l, \notag $$
where \(\mathcal T\) is the set of self-overlapping regions of s, and \(n_{t, j}, (t \in \mathcal T)\) is the number of occurrences of nucleobase j in a self-overlapping region. For example, if string s is "ATATA", the set of self-overlapping regions \(\mathcal T_{\text {ATATA}}\) is {A, ATA}. If \(L<|s|\), then \(P_{s,L}=0\). The terms \(P_{s,L-1}\), \(Q\), \(Q P_{s,L-|s|}\), and \(Q \sum _{t \in \mathcal T} q^{-1} \left (P_{s, L - |s| + |t|} - P_{s, L - |s| + |t| - 1}\right)\) represent, respectively, the probability that a sequence contains the string within positions 1 to L−|s|−1, that it contains the string at position L−|s|, that it contains the string both within positions 1 to L−|s|−1 and at position L−|s|, and that it contains the string at a self-overlapping position. Figure S1 shows a graphical representation of Eq. (1). In stringology, this probability calculation follows the same approach as "missing words in random text" [19], and the self-overlapping region corresponds to "string overlaps" [20].
The lengths of observed sequences obtained using NGS vary owing to insertions and/or deletions during the SELEX process. Stoltenburg and Strehlitz reported that around 78% of sequences had the expected length of the random region, while the remaining 22% deviated from the original random-region length [21]. Therefore, the probability \(P_{s,L}\) was adjusted for the different sequence lengths using the following equation:
$$ P_{s} = \frac{1}{N} \sum\limits_{i=1}^{N} P_{s, L_{i}}, $$
where N is the number of observed sequences and Li is the length of the i-th sequence.
Let \(F_{s}\) be the number of observed sequences that include string s; \(F_{s}\) follows a binomial distribution with success probability \(P_{s}\). If N is sufficiently large, the difference between the observed fraction \(F_{s}/N\) and \(P_{s}\), normalized by the standard deviation of the binomial distribution, approximately follows a standard Gaussian distribution. Hence, the Z-score for string s is derived according to the following equation:
$$ Z_{s} = \frac{\frac{F_{s}}{N} - P_{s}}{\sqrt{\frac{P_{s} (1 - P_{s}) }{N}}}. $$
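As an illustration of Eqs. (1)–(3), the following minimal Python sketch computes \(P_{s,L}\) by the recurrence and then the Z-score of a string against a set of observed sequences. It is not part of the FSBC package: the function names and the assumption that nucleobase probabilities are supplied as a plain dictionary are ours.

```python
def contain_prob(s, p, L_max):
    """P[L]: probability that a random L-mer with i.i.d. nucleobase
    probabilities p (e.g. {'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25})
    contains string s at least once, via the recurrence of Eq. (1)."""
    Q = 1.0
    for c in s:
        Q *= p[c]
    # proper self-overlapping regions (borders) of s, e.g. "ATATA" -> ["A", "ATA"]
    borders = [s[:k] for k in range(1, len(s)) if s[:k] == s[-k:]]
    P = [0.0] * (L_max + 1)            # P[L] = 0 for L < |s|
    for L in range(len(s), L_max + 1):
        overlap = 0.0
        for t in borders:
            q_t = 1.0
            for c in t:
                q_t *= p[c]
            i = L - len(s) + len(t)
            overlap += (P[i] - P[i - 1]) / q_t
        P[L] = P[L - 1] + Q * (1.0 - P[L - len(s)] - overlap)
    return P

def z_score(s, seqs, p):
    """Z-score of string s (Eq. 3), averaging P_{s,L} over the observed
    sequence lengths as in Eq. (2)."""
    N = len(seqs)
    P_by_len = contain_prob(s, p, max(len(x) for x in seqs))
    P_s = sum(P_by_len[len(x)] for x in seqs) / N
    F_s = sum(1 for x in seqs if s in x)
    return (F_s / N - P_s) / (P_s * (1 - P_s) / N) ** 0.5
```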
Selection of over-represented strings
Before selection of the over-represented strings, the probability of each nucleobase, \(\hat p_{j}\), is estimated with the following equation:
$$ \hat p_{j} = \frac{n_{j}}{\sum_{i = 1}^{N} L_{i}}, \quad j \in \Omega, $$
where nj is the number of observed nucleobases. These estimated probabilities are then used for calculation of the Z-scores. Since the ratios of nucleobases in the SELEX pool can change owing to the systematic selection bias of SELEX, the Z-score is calculated based on the balance of nucleobases using Eqs. (1) – (4).
Over-represented strings with lengths ranging from lmin to lmax are selected while reducing the search space from all possible combinations by comparing Z-scores. Selection of over-represented strings is then conducted according to the following process (a minimal code sketch follows the numbered steps below):
1. Enumerate all lmin-length strings and calculate their Z-scores. Exclude strings whose Z-scores are less than 0.
2. Substitute l←lmin.
3. Enumerate extended strings by adding a nucleobase and calculate their Z-scores. Exclude extended strings whose Z-scores are less than those of the pre-extended strings.
4. If l+1>lmax, then finish the selection of over-represented strings.
5. Substitute l←l+1, and go to step 3.
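The Python sketch below illustrates this selection loop, reusing the z_score function from the earlier sketch. It is illustrative only: we assume strings are extended at one end only and that the alphabet is {A, C, G, T}; the actual FSBC implementation in R may differ in such details.

```python
from itertools import product

def select_strings(seqs, p, l_min=5, l_max=10):
    """Greedy selection of over-represented strings: an l_min-mer survives only
    if its Z-score is non-negative, and an extension survives only if its
    Z-score is not lower than that of the string it extends (pruning)."""
    alphabet = "ACGT"
    selected = {}
    # step 1: all l_min-mers with Z >= 0
    frontier = {}
    for letters in product(alphabet, repeat=l_min):
        s = "".join(letters)
        z = z_score(s, seqs, p)
        if z >= 0:
            frontier[s] = z
    selected.update(frontier)
    # steps 2-5: extend by one nucleobase at a time up to l_max
    for l in range(l_min, l_max):
        new_frontier = {}
        for s, z_parent in frontier.items():
            for c in alphabet:
                ext = s + c
                z = z_score(ext, seqs, p)
                if z >= z_parent:          # keep only non-decreasing extensions
                    new_frontier[ext] = z
        selected.update(new_frontier)
        frontier = new_frontier
    return selected                         # {string: Z-score}
```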
The algorithm for estimating over-represented strings reduces the search space by comparing of Z-scores between the post-extended and pre-extended strings. Thus, the number of selected strings, m, is much smaller than the exhaustive enumeration of all strings: \(m \ll \sum _{l = l_{min}}^{l_{max}} |\Omega |^{l}\). This search space minimization provides a huge reduction in the calculation time for an HT-SELEX dataset.
Clustering with selected over-represented strings
While extending the string length, only strings with higher Z-scores are retained to reduce the search space. To evaluate strings of different lengths on an equal footing, the Z-score is normalized. The normalized Z-score for string s, referred to as \(Z_{s}^{*}\), is calculated with the following equation:
$$ Z_{s}^{*} = \frac{Z_{s} - \hat \mu_{|s|}}{\hat \sigma_{|s|}}, $$
where \(\hat \mu _{|s|}\) and \(\hat \sigma _{|s|}\) are the mean and standard deviation, respectively, of the Z-scores of selected strings with length |s|. The strings are then ordered by Z∗. Because \(\hat \mu _{|s|}\) and \(\hat \sigma _{|s|}\) are estimated from the selected strings only, Z∗ is not guaranteed to follow a Gaussian distribution. The clustering is then achieved according to the following process (a minimal code sketch follows the numbered steps below):
1. Substitute i←1.
2. Extract the sequences that include the i-th string from the sequence dataset; the set of extracted sequences is referred to as the i-th cluster. Remove the extracted sequences from the sequence dataset.
3. If no sequences remain, finish the clustering.
4. Substitute i←i+1, and go to step 2.
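The following minimal Python sketch combines the Z∗ normalization of Eq. (5) with this greedy assignment. It is illustrative only; the dictionary of selected strings with their Z-scores is assumed to come from the selection sketch above, and the function names are ours.

```python
from statistics import mean, pstdev

def cluster_sequences(seqs, selected):
    """Rank strings by normalized Z*-score (Eq. 5); each string in turn claims
    all remaining sequences that contain it, forming one ranked cluster."""
    # normalize Z within each string length
    by_len = {}
    for s, z in selected.items():
        by_len.setdefault(len(s), []).append(z)
    stats = {l: (mean(zs), pstdev(zs) or 1.0) for l, zs in by_len.items()}
    z_star = {s: (z - stats[len(s)][0]) / stats[len(s)][1]
              for s, z in selected.items()}
    ranked = sorted(z_star, key=z_star.get, reverse=True)
    # assign each sequence to the cluster of the highest-ranked string it contains
    remaining = set(seqs)
    clusters = []
    for s in ranked:
        members = {x for x in remaining if s in x}
        if members:
            clusters.append((s, members))
            remaining -= members
        if not remaining:
            break
    return clusters          # list of (cluster string, member sequences), ranked
```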
The publicly available whole-cell SELEX dataset of human embryonic stem cells [22] was used for comparing calculation speed and clustering accuracy. The SELEX was finished at the fifth round, and nineteen sequences were evaluated for binding affinity with flow cytometry. According to this binding evaluation, eight of the nineteen sequences showed higher fluorescence intensity and were defined as target-binding sequences.
Calculation time
The sequence data were filtered with different frequency cut-offs (1, 10, and 100) to vary the size of the dataset. The numbers of sequences retained with frequency cut-offs of 1, 10, and 100 were 15,327,604 (4,381,160), 8,799,219 (156,587), and 4,947,522 (6,193), respectively; the numbers of non-redundant sequences are indicated in parentheses.
The five different algorithms, namely FASTAptamer, AptaCluster, APTANI, AptaTRACE, and FSBC, were compared with respect to calculation time. The fifth round HT-SELEX data, which was the last round of SELEX, were used for FASTAptamer, AptaCluster, APTANI, and FSBC. The fourth and fifth round HT-SELEX data were used for AptaTRACE because AptaTRACE requires multiple rounds of HT-SELEX data.
FASTAptamer was performed with an edit distance option of 7 (according to the user guide), and the maximum cluster number was set to 100 to reduce the calculation time. AptaCluster was performed with the default options. The options for APTANI were no-filtering of frequency, fixed length for HT-SELEX data, and primer information for estimation of the secondary structure. There are no further options for reducing the calculation time except for frequency filtering; thus, we did not change any options for APTANI. AptaTRACE was performed with the background sequence option as 1,000 because AptaTRACE demonstrated the best accuracy with that parameter. The options of FSBC were lmin=5 and lmax=10.
FSBC was written in R [18] version 3.6.2 with Bioconductor packages [23], and other programs are provided with scripts and executable files. The computer specifications were as follows: OS Ubuntu 16.04 (Xenial Xerus) 64bit, Intel(R) Xeon(R) CPU [email protected], and 64 GB memory.
Clustering accuracy
Filtered data (frequency ≥10) from the fifth round, the final round of SELEX, were used for comparing the accuracy of the clustering methods because FASTAptamer and APTANI did not complete the clustering with the entire sequence dataset. The same parameters indicated in the previous subsection for AptaCluster and APTANI were applied for evaluating the clustering accuracy. The maximum-number-of-clusters option for FASTAptamer was not used for the evaluation of clustering accuracy. Changing the LD and motif length parameters did not improve the accuracy of FASTAptamer and AptaCluster, respectively. For AptaTRACE, the background sequence option was set to 1,000 because AptaTRACE showed the highest accuracy with that option. The options for FSBC were lmin={3,4,5} and lmax=10. FSBC was also applied to the entire sequence dataset and to the filtered data (frequency ≥100) to evaluate the potential bias of frequency filtering and whether aptamer sequences were missed because of it.
The sequences with binding/non-binding information were sorted by cluster ranking for each method. To evaluate how well each cluster ranking separated binding from non-binding sequences, receiver operating characteristic (ROC) curves were generated from the order of the cluster ranking together with the binding information, and area under the curve (AUC) values were calculated from these ROC curves. FSBC was also applied to all of the sequence data from the third and fourth rounds of SELEX to evaluate whether aptamers could be detected in earlier rounds.
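One way to reproduce this ranking-based evaluation is sketched below; it assumes scikit-learn is available and that a lower cluster rank should count as a higher "binding" score, which is our reading of the procedure rather than the authors' exact implementation.

```python
from sklearn.metrics import roc_auc_score

def cluster_rank_auc(cluster_rank, is_binder):
    """AUC of a cluster ranking: cluster_rank[i] is the rank (1 = best) of the
    cluster containing sequence i; is_binder[i] is True for verified binders.
    A lower rank should mean a higher chance of binding, so score = -rank."""
    scores = [-r for r in cluster_rank]
    return roc_auc_score([int(b) for b in is_binder], scores)

# example: cluster_rank_auc([1, 1, 5, 6, 7], [True, True, True, False, False])
```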
Comparison with exhaustive enumeration of strings
Owing to the search space reduction, there is no guarantee that the top-ranked strings from exhaustive enumeration are included in the selected strings. Hence, we verified whether the top-ranked string from exhaustive enumeration was included in the selected strings for each length. The missing rate of the top ten ranked strings from exhaustive enumeration was also evaluated for each length.
Table 1 shows the calculation time for each method and the dataset size. The first column shows the clustering methods, and the second to seventh columns represent the actual and CPU time for each size of dataset. Note that the calculation time of FASTAptamer includes the pre-processing time, which involves counting the frequency of sequences, before clustering.
Table 1 Clustering calculation time for each method with datasets of different sizes. Sequences (≥ 10) and sequences (≥ 100) represent filtered data with frequency cutoff. DNF indicates did not finish. DNF 1: FASTAptamer did not complete the calculation for the entire sequence dataset in 7 days. DNF 2: APTANI showed a calculation error after the prediction of the secondary structure, which took 25 h
AptaCluster showed the fastest calculation time for clustering, followed by FSBC. However, FASTAptamer was the slowest of the five methods and did not complete the clustering of the entire dataset in 7 days, even when the cluster number option was set to "-c 100" to reduce the calculation time. APTANI also could not complete the calculation for the entire dataset owing to an error after the secondary structure prediction, which itself required 25 h. AptaTRACE performed the clustering with parallel computing; hence, its real time was much smaller than its CPU time.
The clustering result for each algorithm is shown in Table 2. The columns indicate the oligonucleotide sequence excluding the primers at both ends, the sequence ID, the frequency ranking, the frequency, the binding information, and the cluster ranking for each method. AptaCluster has a two-ranking system for clustering, frequency and diversity, corresponding to the frequency of sequences in the cluster and the number of non-redundant sequences in the cluster, respectively. APTANI does not include any function for ordering clusters; thus, we used frequency and diversity for this purpose, as done for AptaCluster. The binding information was previously established by experimental verification using flow cytometry [22]. Sequence IDs seq1 to seq8 are defined as binding sequences, whereas sequence IDs seq9 to seq19 are non-binding sequences. Sequence ID seq8 was not included because it was filtered out by the frequency cut-off before clustering. The strings selected by FSBC are underlined and shown in uppercase in the table. The order of sequences in Table 2 is based on the frequency ranking within the binding and non-binding sequences. FASTAptamer, AptaCluster (frequency/diversity), APTANI (frequency/diversity), AptaTRACE, and FSBC estimated 2,380, 136,350, 2,348, 13, and 155 clusters, respectively.
Table 2 Cluster ranking. AptaCluster (Freq.), AptaCluster (Div.), APTANI (Freq.), and APTANI (Div.) represent the cluster rankings by frequency and diversity (the number of non-redundant sequences) for AptaCluster and APTANI, respectively. Sequences with a frequency of less than 10 were excluded before the clustering analysis because FASTAptamer and APTANI did not finish with all sequence data. *: This sequence was filtered out because its frequency is less than 10. **: The cluster rankings are tied; however, the sequences are not grouped in the same cluster. ***: These sequences did not include any motifs estimated by AptaTRACE; thus, they are not grouped into any cluster
Among the five methods, only FSBC and AptaTRACE provided a top-ranked cluster that included binding sequences. By contrast, the top-ranked clusters obtained with FASTAptamer, AptaCluster (frequency), and APTANI, and the second top-ranked cluster obtained by AptaCluster (diversity), included the most frequent sequence, which did not show binding ability. Similarly, APTANI (diversity) yielded a top-ranked cluster including sequence ID seq17, which also did not bind to the target molecules. The highest-ranked clusters that included binding sequences were ranked 6, 7, 5, 7, and 290 for FASTAptamer, AptaCluster (frequency), AptaCluster (diversity), APTANI (frequency), and APTANI (diversity), respectively, and in each case clusters containing non-binding sequences were ranked higher. FSBC and AptaTRACE grouped all binding sequences from sequence ID seq1 to seq7 into two clusters with cluster ranks 1 and 5. However, AptaTRACE missed sequence ID seq6, and sequence ID seq7 was grouped with sequence ID seq17, which did not show binding affinity. FASTAptamer grouped sequence IDs seq2 and seq4 into the same cluster, which was ranked fifteenth. APTANI (diversity) showed the same cluster ranking for sequence IDs seq5 to seq8; however, these ranks were merely tied, and the sequences were not grouped in the same cluster. AptaCluster (frequency/diversity) and APTANI (frequency) did not group any binding sequences into the same cluster. Table S1 shows the corresponding FSBC results for all sequences (no frequency filtering) and for the filtered data (frequency ≥100) under the option lmin=5. As in Table 2, all binding sequences were in higher-ranked clusters than the clusters containing non-binding sequences.
FSBC selected a total of 1,003 strings, and the top 24 strings are shown in Table S2. The selected over-represented strings "ATGGACTTCGG" and "GACTT", ranked 1 and 12, respectively, correspond to clusters 1 and 5 in Table 2. The selected string "GACTT" is a substring of "ATGGACTTCGG". The distributions of the Z-scores and Z∗-scores of the selected strings for each string length are shown in Figure S2.
The relation between cluster ranking and frequency of oligonucleotide sequences with each method is displayed in Fig. 3, in which the red, blue, and gray dots represent binding, non-binding, and non-evaluated sequences to the target molecules, respectively. The top-ranked clusters obtained by FASTAptamer, AptaCluster (frequency), and APTANI (frequency) included the non-binding sequence of the highest frequency. AptaCluster (diversity) and APTANI (diversity) included the non-binding sequence of the highest frequency in higher ranked cluster than those including binding sequences. By contrast, FSBC and AptaTRACE grouped the binding sequences with lower frequencies in the top-ranked cluster.
Relation between cluster ranking and frequency of sequences. Binding, non-binding, and non-evaluated sequences are shown as red, blue, and gray dots, respectively
The ROC curve and AUC value for each clustering method are displayed in Fig. 4. FSBC with the options lmin=4 and lmin=5 clearly distinguished binding from non-binding sequences, i.e., the AUC value was 1. The AUC value was slightly lower (0.96) when the FSBC option lmin=3 was applied, because some non-binding sequences were grouped into the same cluster as binding sequences. AptaTRACE also showed a high AUC value because it detected the target binding regions in the higher-ranked clusters. The other clustering methods, however, resulted in lower AUCs because non-binding sequences with high frequency were included in the higher-ranked clusters. FSBC with the option lmin=5 also achieved an AUC value of 1 for all sequence data and for the filtered data (frequency ≥ 100) in Table S1. The clustering results for third- and fourth-round data are summarized in Table S3 and Table S4, respectively. FSBC identified aptamer sequences in the third and first clusters from the third- and fourth-round data, with AUC values of 0.89 and 1, respectively.
Receiver operating characteristic (ROC) curves of different clustering methods. "Freq." and "Div." in the parentheses (after AptaCluster and APTANI) indicate the cluster ranking with frequency and diversity (the number of non-redundant sequences) in the cluster for the respective method. AUC indicates the area under the curve
For each length, the top-ranked over-represented string from exhaustive enumeration was included in the selected strings. The missing rate of the top 10 ranked strings from exhaustive enumeration for each length is shown in Table S5. The top 10 ranked 10-mer strings from exhaustive enumeration included 6 of the selected strings; thus, the missing rate for 10-mer strings was 0.4.
Our newly developed clustering algorithm, FSBC, showed the second fastest calculation speed with HT-SELEX data; AptaCluster displayed by far the fastest calculation time. FASTAptamer and APTANI could not complete the clustering for all of the sequence data; hence, only FSBC, AptaCluster, and AptaTRACE are practical for a real HT-SELEX dataset of this size. FSBC selected a total of 1,003 strings, which is much smaller than the exhaustive enumeration of all strings: \(\sum _{i = 5}^{10} 4^{i} = 1,397,760\). The ratio of the number of selected strings to all combinations is 1,003/1,397,760=0.0007175767. Hence, minimization of the search space was an effective strategy for finding over-represented strings of longer lengths, such as 10-mers. FSBC was designed to handle sequence data from a single round of SELEX. This approach could also help reduce the sequencing cost and the calculation time compared with methods such as MPBind [22] and AptaTRACE [17], which require multiple rounds of sequence data from SELEX pools.
Importantly, FSBC and AptaTRACE placed binding sequences in high-ranked clusters, whereas the other clustering methods placed high-frequency non-binding sequences in high-ranked clusters. This demonstrates that FASTAptamer, AptaCluster, and APTANI are more sensitive to sequence frequency than to the enrichment of over-represented strings. Thus, if the SELEX pool contains numerous non-binding sequences due to PCR bias, FASTAptamer, AptaCluster, and APTANI might place these PCR-biased sequences in high-ranked clusters. The sequencing data used in the current study include enriched strings among the binding sequences, and FSBC and AptaTRACE could accurately detect these strings as the estimated target binding region. AptaTRACE detected binding sequences in higher-ranked clusters; however, its cluster of rank 5 included both binding and non-binding sequences. Consequently, FSBC showed the better cluster ranking result in this study.
The proposed string score calculation can be extended to other outcome sets. In this study, we defined the outcome set according to nucleobases: Ωnucleobase={A,C,G,T(U)}. However, other outcome sets can also be defined, such as the oligonucleotide secondary structure: Ωstructure={H,B,S,M,E,I,G}, whose elements represent the hairpin loop, bulge, stem, multi-loop, external loop, internal loop, and G-quadruplex, respectively. The outcome set can then be extended as Ω=Ωnucleobase×Ωstructure. If the set is extended to include the secondary structure, FSBC can search for over-represented strings with a specific secondary structure. However, the calculation time also increases with the number of elements of Ω. Hence, to obtain the fastest calculation with FSBC, Ω=Ωnucleobase is the reasonable choice. This string scoring method can also be used for other types of sequence analysis, such as amino acid sequences: if Ω is defined over amino acids, Eq. (1) can be used for finding over-represented strings among amino acid sequences.
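As a small illustration of the extended outcome set described above (not part of FSBC itself), the combined alphabet can be enumerated as follows:

```python
from itertools import product

nucleobases = ["A", "C", "G", "T"]
structures = ["H", "B", "S", "M", "E", "I", "G"]  # hairpin, bulge, stem, multi-loop, external, internal, G-quadruplex

# combined outcome set: 4 x 7 = 28 symbols, e.g. ("A", "H") for an adenine in a hairpin loop
omega = list(product(nucleobases, structures))
print(len(omega))  # 28
```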
FSBC does not consider insertions/deletions or degenerate nucleobases, because the method was designed to reduce the calculation time and thereby enable the estimation of longer over-represented strings in a huge dataset. Since the size of each cluster is much smaller than that of the entire sequence dataset, other motif-estimation methods such as MEME [24] can be applied afterwards for more accurate estimation of candidate motifs.
Owing to the lack of publicly accessible HT-SELEX data with binding information, only one HT-SELEX dataset was used. Sequence data can differ depending on the target molecules, the SELEX method, and the initial bias of SELEX. Hence, evaluation with other HT-SELEX datasets should be performed, and once enough HT-SELEX datasets with binding information become publicly available, the clustering methods should be benchmarked comprehensively. Moreover, a single clustering method cannot cover all types of SELEX datasets; thus, the most suitable approach is to compare and summarize the results of different clustering methods.
We proposed a new and rapid string-based clustering method for HT-SELEX data. Our clustering method could complete the calculation from a huge dataset in a reasonable time, even though the method is designed to estimate longer over-represented strings such as 10-mer. Importantly, our clustering method could identify enriched strings that were included in binding sequences estimated as the target binding region of the aptamer. Overall, FSBC could be a helpful method to effectively identify aptamers with HT-SELEX data.
FSBC was implemented with R version 3.6.2 and is available at http://www.aoki.ecei.tohoku.ac.jp/fsbc/.
FSBC:
Fast string-based clustering
SELEX:
Systematic evolution of ligands by exponential enrichment
HT-SELEX:
High-throughput systematic evolution of ligands by exponential enrichment
LD:
Levenshtein distance
LSH:
Locality sensitive hashing
AUC:
Area under the curve
Ellington AD, Szostak JW. In vitro selection of RNA molecules that bind specific ligands. Nature. 1990; 346(6287):818.
Tuerk C, Gold L. Systematic evolution of ligands by exponential enrichment: RNA ligands to bacteriophage T4 DNA polymerase. Science. 1990; 249(4968):505–10.
Bock LC, Griffin LC, Latham JA, Vermaas EH, Toole JJ. Selection of single-stranded DNA molecules that bind and inhibit human thrombin. Nature. 1992; 355(6360):564.
Zimmermann GR, Wick CL, Shields TP, Jenison RD, Pardi A. Molecular interactions and metal binding in the theophylline-binding core of an RNA aptamer. RNA. 2000; 6(5):659–67.
Cunha I, Biltes R, Sales M, Vasconcelos V. Aptamer-based biosensors to detect aquatic phycotoxins and cyanotoxins. Sensors. 2018; 18(7):2367.
Qu H, Csordas AT, Wang J, Oh SS, Eisenstein MS, Soh HT. Rapid and label-free strategy to isolate aptamers for metal ions. ACS nano. 2016; 10(8):7558–65.
Marton S, Cleto F, Krieger MA, Cardoso J. Isolation of an aptamer that binds specifically to E. coli. PLoS ONE. 2016; 11(4):0153637.
Ng EW, Shima DT, Calias P, Cunningham Jr ET, Guyer DR, Adamis AP. Pegaptanib, a targeted anti-vegf aptamer for ocular vascular disease. Nat Rev Drug Discov. 2006; 5(2):123.
Ruiz Ciancio D, Vargas M, Thiel W, Bruno M, Giangrande P, Mestre M. Aptamers as diagnostic tools in cancer. Pharmaceuticals. 2018; 11(3):86.
Gold L, Ayers D, Bertino J, Bock C, Bock A, Brody EN, Carter J, Dalby AB, Eaton BE, Fitzwater T, et al. Aptamer-based multiplexed proteomic technology for biomarker discovery. PloS ONE. 2010; 5(12):15004.
Röthlisberger P, Gasse C, Hollenstein M. Nucleic acid aptamers: Emerging applications in medical imaging, nanotechnology, neurosciences, and drug delivery. Int J Mol Sci. 2017; 18(11):2430.
Kaneko N, Horii K, Akitomi J, Kato S, Shiratori I, Waga I. An aptamer-based biosensor for direct, label-free detection of melamine in raw milk. Sensors. 2018; 18(10):3227.
Alam KK, Chang JL, Burke DH. FASTAptamer: a bioinformatic toolkit for high-throughput sequence analysis of combinatorial selections. Mol Ther Nucleic Acids. 2015; 4:230.
Hoinka J, Berezhnoy A, Sauna ZE, Gilboa E, Przytycka TM. AptaCluster - A Method to Cluster HT-SELEX Aptamer Pools and Lessons from its Application. Res Comput Mol Biol. 2014; 8394:115–28. https://doi.org/10.1007/978-3-319-05269-4_9.
Hoinka J, Berezhnoy A, Dao P, Sauna ZE, Gilboa E, Przytycka TM. Large scale analysis of the mutational landscape in ht-selex improves aptamer discovery. Nucleic Acids Res. 2015; 43(12):5699–707.
Caroli J, Taccioli C, De La Fuente A, Serafini P, Bicciato S. APTANI: a computational tool to select aptamers through sequence-structure motif analysis of HT-SELEX data. Bioinformatics. 2015; 32(2):161–4.
Dao P, Hoinka J, Takahashi M, Zhou J, Ho M, Wang Y, Costa F, Rossi JJ, Backofen R, Burnett J, et al. AptaTRACE elucidates RNA sequence-structure motifs from selection trends in HT-SELEX experiments. Cell Syst. 2016; 3(1):62–70.
R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2013. http://www.R-project.org/, R Foundation for Statistical Computing.
Rahmann S, Rivals E. On the distribution of the number of missing words in random texts. Comb Probab Comput. 2003; 12(1):73–87.
Guibas LJ, Odlyzko AM. String overlaps, pattern matching, and nontransitive games. J Comb Theory Ser A. 1981; 30(2):183–208.
Stoltenburg R, Strehlitz B. Refining the results of a classical selex experiment by expanding the sequence data set of an aptamer pool selected for protein a. Int J Mol Sci. 2018; 19(2):642.
Jiang P, Meyer S, Hou Z, Propson NE, Soh HT, Thomson JA, Stewart R. MPBind: a meta-motif-based statistical framework and pipeline to predict binding potential of SELEX-derived aptamers. Bioinformatics. 2014; 30(18):2665–7.
Gentleman RC, Carey VJ, Bates DM, Bolstad B, Dettling M, Dudoit S, Ellis B, Gautier L, Ge Y, Gentry J, et al. Bioconductor: open software development for computational biology and bioinformatics. Genome Biol. 2004; 5(10):80.
Bailey TL, Williams N, Misleh C, Li WW. MEME: discovering and analyzing DNA and protein sequence motifs. Nucleic Acids Res. 2006; 34:369–73.
The authors are grateful to the researchers of Innovation Laboratories of NEC Solution Innovators and members of Computer Structures Laboratory of Graduate School of Information Sciences, Tohoku University, for their helpful discussions.
A part of this work was supported by JSPS KAKENHI Grant Number 18H03253.
NEC Solution Innovators, Ltd, 1-18-7 Shinkiba, Koto-ku, Tokyo, 136-8627, Japan
Shintaro Kato, Hirotaka Minagawa, Katsunori Horii, Ikuo Shiratori & Iwao Waga
Graduate School of Information Sciences, Tohoku University, 6-6-05 Aramaki Aza Aoba, Aoba-ku, Sendai-shi, Miyagi, 980-8579, Japan
Shintaro Kato, Takayoshi Ono, Koichi Ito & Takafumi Aoki
Shintaro Kato
Takayoshi Ono
Hirotaka Minagawa
Katsunori Horii
Ikuo Shiratori
Iwao Waga
Koichi Ito
Takafumi Aoki
SK, TO, HM, KH, IS, IW, KI, and TA conceived and designed the study. SK, TO, KI, and TA developed the method. SK and TO implemented the programs and analyzed the data. SK, TO, and KI drafted the manuscript. All authors have read and approved the manuscript.
Correspondence to Shintaro Kato.
SK, HM, KH, IS, and IW are employees of NEC Solution Innovators, Ltd. The company did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. All other authors declare that they have no competing interests.
The supplementary document includes supplementary tables (Tables S1 to S5) and figures (Figures S1 and S2).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Kato, S., Ono, T., Minagawa, H. et al. FSBC: fast string-based clustering for HT-SELEX data. BMC Bioinformatics 21, 263 (2020). https://doi.org/10.1186/s12859-020-03607-1
Received: 02 July 2019
Aptamer
|
CommonCrawl
|
Research | Open | Published: 10 July 2019
CELF significantly reduces milling requirements and improves soaking effectiveness for maximum sugar recovery of Alamo switchgrass over dilute sulfuric acid pretreatment
Abhishek S. Patri1,2,3,4,
Laura McAlister1,3,
Charles M. Cai1,2,3,4,
Rajeev Kumar2,3,4 &
Charles E. Wyman ORCID: orcid.org/0000-0002-7985-28411,2,3,4
Biotechnology for Biofuels volume 12, Article number: 177 (2019)
Pretreatment is effective in reducing the natural recalcitrance of plant biomass so polysaccharides in cell walls can be accessed for conversion to sugars. Furthermore, lignocellulosic biomass must typically be reduced in size to increase the pretreatment effectiveness and realize high sugar yields. However, biomass size reduction is a very energy-intensive operation and contributes significantly to the overall capital cost.
In this study, the effect of particle size reduction and biomass presoaking on the deconstruction of Alamo switchgrass was examined prior to pretreatment by dilute sulfuric acid (DSA) and Co-solvent Enhanced Lignocellulosic Fractionation (CELF) at pretreatment conditions optimized for maximum sugar release by each pretreatment coupled with subsequent enzymatic hydrolysis. Sugar yields by enzymatic hydrolysis were measured over a range of enzyme loadings. In general, DSA successfully solubilized hemicellulose, while CELF removed nearly 80% of Klason lignin from switchgrass in addition to the majority of hemicellulose. Presoaking and particle size reduction did not have a significant impact on biomass compositions after pretreatment for both DSA and CELF. However, presoaking for 4 h slightly increased sugar yields by enzymatic hydrolysis of DSA-pretreated switchgrass compared to unsoaked samples, whereas sugar yields from enzymatic hydrolysis of CELF solids continued to increase substantially for up to 18 h of presoaking time. Of particular importance, DSA required particle size reduction by knife milling to < 2 mm in order to achieve adequate sugar yields by subsequent enzymatic hydrolysis. CELF solids, on the other hand, realized nearly identical sugar yields from unmilled and milled switchgrass even at very low enzyme loadings.
CELF was capable of achieving nearly theoretical sugar yields from enzymatic hydrolysis of pretreated switchgrass solids without size reduction, unlike DSA. These results indicate that CELF may be able to eliminate particle size reduction prior to pretreatment and thereby reduce overall costs of biological processing of biomass to fuels. In addition, presoaking proved much more effective for CELF than for DSA, particularly at low enzyme loadings.
Biofuels derived from lignocellulosic biomass have the potential to substantially reduce greenhouse emissions and dependence on vulnerable and depletable fossil fuel resources [1, 2]. Switchgrass (Panicum virgatum) is a leading candidate as an effective bioenergy feedstock due to its perennial nature, high productivity, and soil restoration properties [3,4,5]. Switchgrass is mostly composed of carbohydrates and lesser amounts of lignin, with minor contributions from ash, extractives, and protein [6,7,8]. Cellulose and hemicellulose are the carbohydrates of primary interest for biological production of biofuels, as they can be broken down into five and six carbon sugars that microorganisms can ferment to ethanol with high yields. However, due to the complex nature of plant cell walls, pretreatment is typically required prior to enzymatic and biological conversion to expose carbohydrates from the lignin shield [9]. Various pretreatments that can be broadly categorized as mechanical, thermal, chemical, or their combination have been developed over the years to overcome this recalcitrance to sugar release [10, 11]. Mechanical methods typically involve particle size reduction by milling to increase enzyme access to cell wall carbohydrates [12, 13]. Thermochemical pretreatments utilize chemical reagents, such as acids, bases, or solvents, at elevated temperatures to disrupt the cell wall structure and achieve greater access to carbohydrates [14]. Dilute sulfuric acid (DSA) pretreatment is a research and commercial benchmark that solubilizes hemicellulose to sugars with high yields and increases digestibility of pretreated biomass, although high enzyme loadings are required for the latter to achieve satisfactory sugar yields [15]. Recent advanced pretreatment technologies, such as ammonia fiber expansion (AFEX), organosolv, ionic liquid, and sulfite pretreatments, have made strides in improving cellulose digestibility and increasing enzymatic sugar yields from pretreated biomass [16,17,18,19]. Co-solvent Enhanced Lignocellulosic Fractionation (CELF) is a recently developed advanced pretreatment that utilizes dilute acid in a miscible mixture of tetrahydrofuran (THF) and water to recover about 80–90% of the lignin and > 95% hemicellulose sugars in solution and achieve nearly theoretical sugar yields from the glucan and hemicellulose left in the resulting carbohydrate-rich solids at low enzyme loadings [20].
Several challenges are yet to be addressed before biomass-derived fuels can be considered cost competitive [21]. For one, because pretreatment is one of the most expensive single unit operations in a biomass processing plant [22], pretreatment cost reductions would be a significant step to lowering the cost of cellulosic biofuels. Lignocellulosic biomass particle size reduction and presoaking prior to pretreatment are typically needed to increase biomass surface area and effectively distribute acid (or other catalyst) throughout the biomass solids, respectively [23, 24] so that high sugar yields can be achieved from pretreatment combined with subsequent enzymatic hydrolysis [25]. Presoaking of biomass with the reaction ingredients at ambient temperatures prior to thermochemical pretreatment has also been shown to increase biomass wetting and improve inter-particle diffusion of acid catalysts [26,27,28]. However, these additional steps increase overall capital and operating costs for making biomass-derived fuels. Particle size reduction, in particular, can require intensive energy inputs [29,30,31,32], and reducing milling or eliminating it altogether has been proposed to lower pretreatment costs. In this study, DSA and CELF pretreatment temperatures and times were varied to maximize overall release of glucan and xylan from each pretreatment coupled with subsequent enzymatic hydrolysis of milled and presoaked Alamo switchgrass. At these maximum sugar release conditions, the impacts of biomass presoaking and particle size reduction by knife milling were assessed for both DSA and CELF pretreatment of Alamo switchgrass. Solids after both pretreatments were analyzed for compositional differences at varying presoaking times and particle sizes. Furthermore, sugar yields from enzymatic hydrolysis of pretreated solids were compared over a range of enzyme loadings to determine the impact of presoaking and knife milling on biomass sugar release following DSA and CELF pretreatments.
Maximizing overall glucose and xylose sugar yields from switchgrass by DSA and CELF pretreatments followed by enzymatic hydrolysis
Switchgrass was subjected to DSA and CELF pretreatments to identify conditions that maximized glucan plus xylan yields from each pretreatment coupled with subsequent enzymatic hydrolysis of the pretreated solids. DSA pretreatments were performed at 150 and 160 °C, and CELF pretreatments were at 140 and 150 °C, as these temperature ranges have previously been shown to be optimum for switchgrass [33] and corn stover [34], respectively. The reactions at each temperature were carried out over a range of times to be sure that differences in biomass sources did not alter the time to achieve the highest sugar yields. The liquid hydrolyzates from both pretreatments were analyzed for total dissolved glucose and xylose including gluco- and xylo-oligomers. The pretreatment step alone was termed Stage 1, and the sugars released were expressed in terms of equivalent glucan and xylan by taking into account the water added during hydrolysis (see Additional file 1: Figure S1). The pretreated solids were then subjected to enzymatic hydrolysis at a high cellulase loading of 65 mg protein/g glucan in unpretreated switchgrass to determine the maximum possible sugar release at each pretreatment condition. The enzymatic hydrolysis step was termed Stage 2 (see Additional file 1: Figure S1). Figures 1 and 2 summarize the trends in glucan and xylan released in Stages 1 and 2 alone, as well as the combined glucan and xylan yields from Stage 1 + 2 together.
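To make the conversion "in terms of equivalent glucan and xylan by taking into account the water added during hydrolysis" concrete, the following is a minimal sketch of the standard anhydro-sugar correction; the conversion factors follow from the monomer and anhydro-unit molar masses (glucose 180.16 vs. anhydroglucose 162.14 g/mol; xylose 150.13 vs. anhydroxylose 132.12 g/mol), and the function names are illustrative rather than the authors' own calculation code.

```python
# Convert dissolved monomeric sugars back to "equivalent polysaccharide" mass
# by removing the water added during hydrolysis (illustrative sketch only).
ANHYDRO_FACTOR = {
    "glucose": 162.14 / 180.16,   # g glucan equivalent per g glucose (~0.90)
    "xylose":  132.12 / 150.13,   # g xylan equivalent per g xylose  (~0.88)
}

def equivalent_polysaccharide(grams_sugar, sugar):
    """Mass of glucan/xylan equivalent to a given mass of dissolved monomer."""
    return grams_sugar * ANHYDRO_FACTOR[sugar]

def stage1_yield(grams_sugar, sugar, grams_polysaccharide_in_feed):
    """Stage 1 yield (%) = equivalent polysaccharide released / polysaccharide in raw biomass."""
    return 100.0 * equivalent_polysaccharide(grams_sugar, sugar) / grams_polysaccharide_in_feed
```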
Effect of pretreatment time at 150 °C and 160 °C on glucan, xylan, and total glucan plus xylan yields from dilute sulfuric acid (DSA) pretreatment (Stage 1) of switchgrass, enzymatic hydrolysis of the pretreated solids (Stage 2), and the two stages combined. Stage 1 reaction conditions: solids loading of 7.5 wt% with an acid loading of 0.5 wt%. Stage 2 enzymatic hydrolysis was performed on pretreated solids at a 10 g/L glucan loading with 65 mg of Accellerase® 1500 protein/g glucan in unpretreated switchgrass
Effect of pretreatment time at 140 °C and 150 °C on glucan, xylan, and total glucan plus xylan yields from CELF pretreatment (Stage 1) of switchgrass, enzymatic hydrolysis of the pretreated solids (Stage 2), and the two stages combined. Stage 1 reaction conditions: solids loading of 7.5 wt% with an acid loading of 0.5 wt % based on liquid weight, THF/water mass ratio-0.889:1. Stage 2 enzymatic hydrolysis was performed on pretreated solids at a 10 g/L glucan loading with 65 mg of Accellerase® 1500 protein/g glucan in unpretreated switchgrass
As expected, increasing time during both pretreatments initially increased Stage 1 xylan release as more xylan from the hemicellulose fraction was solubilized by the acid catalyst. However, at pretreatment times > 30 min, significant amounts of xylose were dehydrated to furfural, which reduced the maximum possible xylose yield. Concentrations of the sugar dehydration products furfural and 5-hydroxymethylfurfural (HMF) were quantified for both pretreatments using HPLC (Additional file 1: Table S1). At maximum sugar release conditions for both DSA and CELF, furfural and HMF concentrations in pretreatment hydrolyzates were below the HPLC detection limit (< 0.1 g/L). Glucan, on the other hand, was largely conserved in the solid in Stage 1 for both pretreatments, as the pretreatment conditions were not harsh enough to solubilize significant amounts of crystalline cellulose or degrade much of the dissolved glucan. The small amount of glucan released during pretreatment, termed Stage 1 glucan, was likely mostly from hemicellulose and the amorphous portion of cellulose and was robust enough to suffer little degradation at the pretreatment conditions applied [35]. Increasing pretreatment time made biomass more susceptible to enzymatic breakdown by hydrolysis of pretreated solids, as illustrated by the increase in Stage 2 glucan release with increasing pretreatment time. Since the Accellerase® 1500 cellulase cocktail contains some hemicellulases as well as auxiliary enzymes [36], residual xylan in pretreated solids was also solubilized during enzymatic hydrolysis and reported as Stage 2 xylan. The conditions that maximized sugar release for DSA and CELF pretreatments were 160 °C, 20 min and 150 °C, 25 min, respectively, demonstrating that CELF reduced the temperature needed to achieve maximum sugar yields by 10 °C from that needed for DSA (Figs. 1 and 2). The sugars solubilized during pretreatment at these conditions yielded liquids containing 2.4 g/L glucose, 20.2 g/L xylose, and 2.5 g/L arabinose in DSA hydrolyzate and 3.1 g/L glucose, 21 g/L xylose, and 2.0 g/L arabinose in CELF hydrolyzate.
The compositions of pretreated solids prepared at all pretreatment conditions were analyzed to determine the fate of components in the solids left by pretreatment. The mass of each component in solids produced by application of the maximum sugar recovery pretreatment conditions for both DSA and CELF pretreatments was then adjusted to a basis of 100 g of unpretreated switchgrass, as shown in Fig. 3. For both pretreatments at maximum sugar recovery conditions, most of the hemicellulose sugars (mostly xylan) were solubilized during pretreatment, in agreement with the previous results for both of these pretreatments [34]. Glucan was largely conserved in both pretreatments as expected.
Tracking mass of glucan, xylan, and lignin left in the solids produced by DSA and CELF pretreatments at conditions optimized to maximize total overall glucan plus xylan yields. The values shown are based on the content of each component in 100 g of switchgrass before pretreatment. Reaction conditions: DSA at 160 °C for 20 min with 0.5 wt% sulfuric acid, and CELF at 150 °C for 25 min with 0.5 wt % sulfuric acid and a 0.889:1 THF/water mass ratio
The major difference between DSA and CELF-pretreated solids was the amount of lignin left in pretreated solids. While DSA removed roughly 14 wt% of Klason lignin (K-lignin), CELF removed 77 wt% of K-lignin at optimized conditions. This greater degree of delignification is encouraging as lignin has been shown to be a major contributor to biomass recalcitrance [37]. At conditions optimized for maximum sugar recovery, the solids produced by DSA pretreatment contained 65% glucan, 4% xylan, and 32% K-lignin. CELF-pretreated solids, on the other hand, contained 86% glucan, 4% xylan, and 11% K-lignin at optimized conditions, consistent with enhanced lignin removal by CELF. Compositional analyses on solids resulting from more severe CELF pretreatments revealed that more lignin was removed at higher severities. However, a drawback was that more xylan was lost to dehydration products.
Effects of presoaking and particle size reduction on compositions of Alamo switchgrass solids pretreated by DSA and CELF
To understand the effect of presoaking on DSA and CELF at the previously optimized pretreatment conditions, unpretreated switchgrass that was knife milled to < 1 mm was soaked for 4 and 18 h at 4 °C prior to pretreatment. These solids were compared to samples that were not soaked prior to pretreatment. The effect of particle size reduction on switchgrass was determined by presoaking unmilled and knife-milled biomass for 18 h at 4 °C before pretreatment. Additional file 1: Figure S2 shows images of unmilled switchgrass and switchgrass knife milled through sieve sizes of 2 mm and 1 mm.
The masses of major components of Alamo switchgrass solids after DSA and CELF pretreatments at varying presoaking times and particle sizes are listed in Table 1. While expected results of xylan removal by both pretreatments and delignification during CELF were observed, minimal differences were observed in the masses of major components as a function of presoaking times for both pretreatments. However, unsoaked DSA-pretreated switchgrass contained slightly more glucan in the solids compared to soaked samples, implying that pretreatment acid was not able to fully reach cellulose without soaking, thus resulting in less glucan removal. On the other hand, CELF-pretreated switchgrass that was soaked for 4 and 18 h prior to pretreatment was slightly more delignified than samples that were not soaked prior to CELF. No major compositional differences were observed in solids produced by pretreatment of the range of particle sizes. As expected, both pretreatments removed hemicellulose from the solids, and CELF removed the majority of lignin from solid biomass. These results show that CELF removed nearly 80% of the lignin from switchgrass even without presoaking or particle size reduction prior to pretreatment.
Table 1 Masses of glucan, xylan, and lignin in solids for unpretreated switchgrass and following DSA and CELF pretreatments of switchgrass for varying presoaking times and particle sizes
Effect of presoaking on enzymatic hydrolysis of DSA- and CELF-pretreated Alamo switchgrass solids
Solids left after DSA and CELF pretreatments with presoaking for 4 and 18 h at 4 °C and without presoaking were hydrolyzed with Accellerase® 1500 cellulase at loadings of 65, 15, 5, and 2 mg protein/g glucan in unpretreated solids. At the highest enzyme loading of 65 mg protein/g glucan, Fig. 4(i) shows that for DSA, presoaking for 4 h increased glucose yields by 3% (± 0.04%) at the end of 7 days of enzymatic hydrolysis compared to solids that were not presoaked. However, increasing presoaking to 18 h did not affect glucose yields. Similar trends were observed at the lower enzyme loadings, with the minor differences in yields between presoaking times of 4 and 18 h indicating that 4 h of presoaking prior to DSA pretreatment was sufficient to realize virtually maximum glucan release.
Effect of DSA presoaking time on glucose yields from hydrolysis of pretreated solids by Accellerase 1500 cellulase at loadings of i 65 mg, ii 15 mg, iii 5 mg, and iv 2 mg cellulase protein/g glucan in unpretreated switchgrass. All DSA pretreatments were performed at 160 °C for 20 min with 0.5 wt% sulfuric acid to 7.5 wt% solid loadings of switchgrass knife milled to < 1 mm
As previously shown [34], CELF produced highly digestible solids that were virtually completely hydrolyzed to glucose in 48 h at enzyme loadings of 65 and 15 mg protein/g glucan. Thus, enzyme loadings of 5 and 2 mg protein/g glucan were applied to more clearly show the effect of presoaking times on enzymatic hydrolysis of CELF switchgrass. Figure 5(i) shows that presoaking of switchgrass for 4 h prior to CELF increased glucose yields by 3% (± 0.53%) for hydrolysis over 2 weeks at an enzyme loading of 5 mg protein/g glucan compared to unsoaked switchgrass. Furthermore, presoaking switchgrass for 18 h increased glucose yields an additional 4% (± 0.72%) to reach 100% in 2 weeks, as also shown in Fig. 5(i). For the enzyme loading of 2 mg protein/g glucan in Fig. 5(ii), 18 h of presoaking prior to CELF increased glucose yields from 2 weeks of enzymatic hydrolysis by 14% (± 0.70%) compared to unsoaked switchgrass. However, presoaking for more than 18 h produced no change in glucose yields (data not shown).
Effect of presoaking time on glucose yields from enzymatic hydrolysis of CELF-pretreated solids by Accellerase 1500 cellulase at loadings of i 5 mg protein/g glucan in unpretreated switchgrass and ii 2 mg protein/g glucan in unpretreated switchgrass. All CELF pretreatments were performed at 150 °C for 25 min with 0.5 wt% sulfuric acid and a 0.889:1 THF/water mass ratio to 7.5 wt% solid loadings of switchgrass knife milled to < 1 mm
The improved sugar yield from CELF switchgrass upon increasing presoaking time indicated better presoaking effectiveness of the CELF mixture. It is hypothesized that THF penetrated biomass more effectively than water alone could, improving solvent contact possibly due to its significantly lower surface tension compared to pure water, thus increasing sugar yields with increasing presoaking time [38, 39]. The results of a simple experiment to test this hypothesis show that THF required far less time to wet 0.25 g of milled switchgrass than DI water in a stemmed funnel. THF required 20 min, whereas DI water required 8 h. Furthermore, the solution of THF and water in the CELF mixture (0.889:1 THF/water by mass) took 3.5 h, which was half the time of DI water alone to wet switchgrass, suggesting the ability of the CELF mixture to better soak switchgrass and increase the effectiveness of presoaking prior to pretreatment.
Effect of particle size reduction prior to DSA and CELF pretreatments on enzymatic hydrolysis of Alamo switchgrass
Solids produced by CELF and DSA pretreatments of switchgrass that had been presoaked for 18 h with and without prior milling were hydrolyzed by Accellerase® 1500 over a range of enzyme loadings. Figure 6 shows that milling significantly improved sugar yields from enzymatic hydrolysis of DSA-pretreated switchgrass solids. For example, glucose yields from DSA pretreatment of unmilled switchgrass were 14% (± 1.15%) lower than those from milled switchgrass even at a very high enzyme loading of 65 mg protein/g glucan (Fig. 6(i)). Furthermore, yields from enzymatic hydrolysis of solids produced by DSA pretreatment of unmilled switchgrass were lower at enzyme loadings of 15, 5, and 2 mg protein/g glucan compared to those from DSA on milled switchgrass. The sieve size used during milling, however, only had a slight effect on glucose yields from enzymatic hydrolysis of DSA-pretreated switchgrass.
Effect of milling (particle size) on enzymatic glucose yields for pretreated solids prepared by DSA at Accellerase 1500 cellulase loadings of i 65 mg, ii 15 mg, iii 5 mg, and iv 2 mg cellulase protein/g glucan in unpretreated switchgrass. All DSA pretreatments were performed at 160 °C for 20 min with 0.5 wt% sulfuric acid on switchgrass that had first been presoaked for 18 h at 4 °C at a 7.5 wt% solids loading
Because CELF pretreatment achieved nearly theoretical yields from enzymatic hydrolysis at high enzyme loadings, only low loadings of 5 and 2 mg protein/g glucan were applied so the effects of particle size could be distinguished. As shown in Fig. 7, all samples were highly digestible after 8 days of hydrolysis even at these very low enzyme loadings. Furthermore, glucose yields from enzymatic hydrolysis of CELF-pretreated switchgrass were within standard deviation (± 0.83%) regardless of whether the switchgrass was milled or not, and the particle size from milling did not affect glucose yields. These results suggest that CELF is capable of achieving high sugar yields from switchgrass even without prior particle size reduction.
Effect of milling (particle size) on glucose yields from CELF-pretreated solids for enzymatic hydrolysis with Accellerase 1500 at cellulase loadings of i 5 mg and ii 2 mg cellulase protein/g glucan in unpretreated switchgrass. All CELF pretreatments were performed at 150 °C for 25 min with 0.5 wt% sulfuric acid and a 0.889:1 THF/water mass ratio for switchgrass that had first been presoaked for 18 h at 4 °C and a 7.5 wt% solids loading
These results demonstrated that CELF pretreatment can remove a large portion of the lignin and hemicellulose without particle size reduction by knife milling. They also showed that milling has very little effect on glucose yields from enzymatic hydrolysis of CELF-pretreated solids. These outcomes are in stark contrast to those for DSA pretreatment for which particle size reduction to at least < 2 mm was required to achieve comparable sugar yields to CELF albeit at much greater loadings of expensive enzymes. Milling prior to pretreatment had a minor effect on the composition of solids produced by DSA pretreatment of switchgrass, implying that the increase in enzymatic hydrolysis yields with milling of DSA switchgrass resulted from enhanced micro-accessibility of cellulose [40] that can be improved by reducing cellulose crystallinity or degree of polymerization [41]. Because DSA solubilizes hemicellulose and increases cellulose accessibility without physically removing much of the lignin from biomass, it is likely that acid for DSA pretreatment does not effectively diffuse through the entire particle to contact all of the cellulose microfibrils and make them more micro-accessible to cellulolytic enzymes. For CELF pretreatment, on the other hand, the THF/water co-solvent solubilizes a large fraction of the lignin as well as hemicellulose to thus increase the glucan content in the pretreated solids. As lignin in plants coats cell wall polysaccharides and thereby impairs water access [42], removal of most of the lignin in addition to hemicellulose by CELF pretreatment of unmilled switchgrass could disintegrate the cell wall structure and allow acid catalyst to freely contact cellulose fibers.
CELF pretreatment has previously been demonstrated to reduce the amount of enzyme required to achieve high glucose yields and high ethanol titers from corn stover [20, 43]. This study showed that those findings apply to CELF pretreatment of switchgrass with 100% glucose yields achieved at enzyme loadings as low as 5 mg protein/g glucan. Because enzymes are a major contributor to the cost of cellulosic fuel production [44], realizing high sugar yields with much less enzyme can have a major impact on process economics [29, 30, 32]. In addition, the results presented here suggest that energy-intensive milling can be eliminated for CELF pretreatment of switchgrass but not for DSA pretreatment. Although elimination of particle size reduction could have significant commercial implications, the effect of higher solids loadings on the performance of CELF pretreatment and subsequent enzymatic hydrolysis must still be ascertained.
Most biological operations for biomass conversion require particle size reduction prior to pretreatment to realize high sugar yields by subsequent enzymatic hydrolysis. Biomass is also presoaked prior to most pretreatments to provide adequate reactant contact. Thus, both particle size reduction and presoaking can increase reactant diffusion into the biomass particle. However, the high milling energy required to reduce particle size sufficiently to realize high yields is a significant contributor to processing costs, and extended presoaking increases processing times. In this study, the effect of presoaking times and particle size reduction by knife milling on DSA and CELF pretreatment of Alamo switchgrass was investigated. Biomass presoaking slightly increased glucose yields from enzymatic hydrolysis for DSA and CELF solids. Significantly, CELF pretreatment is shown to be capable of achieving high glucose yields from subsequent enzymatic hydrolysis of solids from CELF pretreatment of switchgrass without size reduction even at low enzyme loadings, in definite contrast to DSA. The latter results indicate that particle size reduction of biomass could be eliminated prior to CELF pretreatment without a reduction in sugar yields, thus potentially reducing processing costs for biofuels production.
Chopped senescent Alamo switchgrass (diameter = 0.35 ± 0.1 cm) was provided by Genera Energy Inc. (Vonore, TN). DuPont Industrial Biosciences (Palo Alto, CA) provided the Accellerase® 1500 fungal cellulolytic enzyme cocktail used for enzymatic hydrolysis. The protein concentration was measured as 82 mg/ml by following the standard BCA method with bovine serum albumin as a standard [45]. Tetrahydrofuran (THF) was purchased from Fisher Scientific (Pittsburgh, PA).
Milling and soaking
Knife milling was performed using a Wiley Mill (Model 4, Arthur H. Thomas Company, Philadelphia, PA) with a 1 mm or 2 mm particle size interior sieve. Prior to pretreatment, milled and unmilled switchgrass solids were soaked for times varying from 0 to 18 h in appropriate reaction ingredients (see "Pretreatment" section) in the pretreatment reactor at 4 °C in a refrigerator to minimize reaction during presoaking and minimize solvent evaporation.
To test the wetting properties of THF and water to switchgrass, 0.25 g of milled Alamo switchgrass (< 1 mm) was packed into a stemmed glass funnel, and the bottom was plugged with cheesecloth. The plugged end was then inserted into a conical tube containing 10 mL of DI water, pure THF, or a 0.889:1 mixture of THF/water by mass. A timer was started when the funnel stem first made contact with the liquid, and the timer was stopped when the biomass was completely soaked. The timer reading was recorded as the time taken to wet 0.25 g of milled Alamo switchgrass.
Pretreatments were performed in a 1 L Parr® Hastelloy autoclave reactor (236HC Series, Parr Instruments Co., Moline, IL) equipped with a double-stacked pitch blade impeller rotating at 200 rpm. For DSA reactions, solutions were loaded with 0.5 wt% (based on liquid mass) sulfuric acid (Ricca Chemical Company, Arlington, TX), while for CELF reactions, THF (> 99% purity, Fisher Scientific, Pittsburgh, PA) was added to a 0.5 wt% sulfuric acid solution in water at a 0.889:1 THF to acidic water mass ratio. All reactions were maintained at reaction temperature (± 1 °C) by convective heating with a 4 kW fluidized sand bath (Model SBL-2D, Techne, Princeton, NJ). Sand bath temperature was maintained at twice the target pretreatment temperature to minimize heat up time. Heat up times were kept under 2 min for all pretreatments. The reaction temperature was directly measured by an in-line K-type thermocouple (Omega Engineering Inc., Stamford, Connecticut). Following pretreatment, solids were separated from the liquid by vacuum filtration at room temperature through glass fiber filter paper (Fisher Scientific, Pittsburgh, PA) and washed with room temperature deionized water until the filtrate pH reached neutral. The solids were carefully transferred to ziplock bags and weighed. Moisture content of the solids was determined by a halogen moisture analyzer (Model HB43, Mettler Toledo, Columbus, OH). Pretreatment hydrolyzate was collected for HPLC analysis of sugars and sugar dehydration products. Density of hydrolyzate was determined by measuring the mass of hydrolyzate contained in a 25-mL volumetric flask.
Enzymatic hydrolysis
Enzymatic hydrolysis was performed per an NREL protocol [37] in triplicate in 125-mL Erlenmeyer flasks with a 50 g total working mass that contained 50 mM sodium citrate buffer (pH 4.9) to maintain the hydrolysis pH and 0.02% sodium azide to prevent microbial contamination together with enough pretreated solids to result in approximately 1 wt% glucan. Accellerase® 1500 cellulase loadings for enzymatic hydrolysis were varied from 2 to 65 mg protein/g glucan in unpretreated biomass [31]. Enzyme loadings were based on unpretreated switchgrass so as not to penalize a pretreatment if it released more glucose in the pretreatment step. Enzymatic hydrolysis flasks were placed in a Multitron orbital shaker (Infors HT, Laurel, MD) set at 150 rpm and 50 °C and allowed to equilibrate for 1 h before enzyme addition. Homogeneous samples of approximately 500 μL were collected at 4 h, 24 h, and every 24 h thereafter, into 2 mL centrifuge tubes (Fisher Scientific, Pittsburgh, PA), and then centrifuged at 15,000 rpm for 10 min before analysis of the supernatant by HPLC.
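To make the dosing arithmetic behind these loadings concrete, the short sketch below converts a target protein loading into a volume of enzyme stock, assuming the 82 mg protein/mL stock concentration and roughly 1 wt% glucan in a 50 g working mass stated above. It is illustrative only: the function and variable names are invented, and it ignores the adjustment of loadings to glucan in unpretreated biomass described by the formulae in the next section.

```python
# Illustrative dosing calculation; names and values below are assumptions,
# not a protocol from the paper.

def enzyme_volume_ml(loading_mg_per_g_glucan: float,
                     glucan_mass_g: float,
                     protein_conc_mg_per_ml: float = 82.0) -> float:
    """Volume of cellulase stock needed to reach a target protein loading."""
    protein_needed_mg = loading_mg_per_g_glucan * glucan_mass_g
    return protein_needed_mg / protein_conc_mg_per_ml

working_mass_g = 50.0      # total working mass per flask
glucan_fraction = 0.01     # approximately 1 wt% glucan
glucan_g = working_mass_g * glucan_fraction

for loading in (65, 15, 5, 2):  # mg protein / g glucan
    vol_ml = enzyme_volume_ml(loading, glucan_g)
    print(f"{loading:>2} mg/g glucan -> {vol_ml * 1000:.0f} uL of stock per flask")
```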
All chemical analyses followed Laboratory Analytical Procedures (LAPs) documented by the National Renewable Energy Laboratory (NREL, Golden, CO). Compositional analyses of unpretreated and pretreated switchgrass were performed in triplicate according to the NREL protocol [46]. Residual mass after quantification of carbohydrates and lignin, which includes ash, proteins, etc., was expressed as "Other." Liquid samples along with appropriate calibration standards were analyzed on an HPLC (Waters Alliance e2695) equipped with a Bio-Rad Aminex® HPX-87H column and RI detector (Waters 2414) with an eluent (5 mM sulfuric acid) flow rate of 0.6 mL/min. The chromatograms were integrated using an Empower® 2 software package (Waters Co., Milford, MA).
After HPLC quantification, the following formulae were applied to calculate mass, volumes, enzyme loadings, and yields:
$${\text{Mass of sugar released in pretreatment hydrolyzate}} = {\text{Sugar concentration from HPLC}}*{\text{Volume of pretreatment hydrolyzate}}$$
$${\text{Volume of pretreatment hydrolyzate}} = \left( {{\text{Total reaction mass}} -\, \left( {{\text{Mass of wet pretreated solids}}*{\text{Moisture content}}} \right)} \right)/{\text{Hydrolyzate density}}$$
$${\text{Glucan yield after pretreatment}} = \left( {{\text{Mass of wet pretreated solids}}*\left( { 100 - {\text{Moisture content}}} \right)*\% \;{\text{of glucan in pretreated solids}}} \right)/\left( {{\text{Dry mass of unpretreated solids}}* \% \;{\text{of glucan in unpretreated solids}}} \right)$$
$${\text{Enzyme loading}} = {\text{mg of protein per gram of glucan in enzymatic hydrolysis flask}}/{\text{glucan yield after pretreatment}}$$
The mass of anhydrous sugar in enzymatic hydrolysis substrates was converted to the mass of the corresponding hydrous form by dividing cellobiose values by 0.95, glucan values by 0.90, and xylan values by 0.88 to compensate for the mass of water added during hydrolysis.
$${\text{Enzymatic}}\left( {\text{Stage 2}} \right){\text{glucose yield}},\% = 100*\left( {{\text{Concentration of monomeric sugar measured by HPLC}}*{\text{total reaction volume of enzymatic hydrolysis flask}}} \right)/\left( {{\text{Mass of glucan in enzymatic hydrolysis flask}}/{\text{anhydrous correction factor}}} \right)$$
For enzymatic hydrolysis samples, average sugar yield was calculated using values for triplicates. Standard deviation values were calculated using the following formula:
$${\text{Standard deviation}} = \sqrt {\frac{{\sum {\left( {x - \overline{x} } \right)^{2} } }}{{\left( {n - 1} \right)}}}$$
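To show how these formulae fit together in practice, the sketch below computes a Stage 2 glucose yield with the anhydrous correction for glucan and the sample standard deviation over triplicates. All numbers in the example are invented for illustration and do not come from the study, and the function names are not from the paper.

```python
import math

def stage2_glucose_yield(sugar_conc_g_per_l: float,
                         reaction_volume_l: float,
                         glucan_mass_g: float,
                         anhydrous_factor: float = 0.90) -> float:
    """Enzymatic (Stage 2) glucose yield in percent.

    Glucan mass is converted to its hydrous glucose equivalent by dividing
    by the anhydrous correction factor (0.90 for glucan, per the text)."""
    glucose_potential_g = glucan_mass_g / anhydrous_factor
    glucose_released_g = sugar_conc_g_per_l * reaction_volume_l
    return 100.0 * glucose_released_g / glucose_potential_g

def sample_std(values):
    """Sample standard deviation with the (n - 1) denominator shown above."""
    n = len(values)
    mean = sum(values) / n
    return math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))

# Example: triplicate flasks with 0.05 L working volume and 0.5 g glucan each.
yields = [stage2_glucose_yield(c, 0.05, 0.5) for c in (9.8, 10.1, 10.0)]
print(f"mean yield = {sum(yields) / len(yields):.1f}%, std = {sample_std(yields):.2f}%")
```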
The datasets supporting the conclusions of this article are included within the article and its additional file.
DSA: dilute sulfuric acid
CELF: Co-solvent Enhanced Lignocellulosic Fractionation
THF: tetrahydrofuran
HPLC: high-performance liquid chromatography
Lynd LR, et al. Fuel ethanol from cellulosic biomass. Science. 1991;251(4999):1318–23.
Lynd LR, et al. Cellulosic ethanol: status and innovation. Curr Opin Biotechnol. 2017;45:202–11.
Fike JH, et al. Long-term yield potential of switchgrass-for-biofuel systems. Biomass Bioenergy. 2006;30(3):198–206.
Brown RA, et al. Potential production and environmental effects of switchgrass and traditional crops under current and greenhouse-altered climate in the central United States: a simulation study. Agr Ecosyst Environ. 2000;78(1):31–47.
Samson RA, Omielan JA. Switchgrass—a potential biomass energy crop for ethanol production. In: Proceedings of the thirteenth north American Prairie conference: spirit of the land, our prairie legacy. 1994. p. 253–8.
Dien BS, et al. Chemical composition and response to dilute-acid pretreatment and enzymatic saccharification of alfalfa, reed canarygrass, and switchgrass. Biomass Bioenergy. 2006;30(10):880–91.
Wyman CE, et al. Comparative data on effects of leading pretreatments and enzyme loadings and formulations on sugar yields from different switchgrass sources. Bioresour Technol. 2011;102:11052–62.
Kim Y, et al. Comparative study on enzymatic digestibility of switchgrass varieties and harvests processed by leading pretreatment technologies. Bioresour Technol. 2011;102:11089–96.
Yang B, Wyman CE. Pretreatment: the key to unlocking low-cost cellulosic ethanol. Biofuels Bioprod Biorefin. 2008;2(1):26–40.
Wyman CE, et al. Coordinated development of leading biomass pretreatment technologies. Bioresour Technol. 2005;96(18):1959–66.
Karimi K, et al. Progress in physical and chemical pretreatment of lignocellulosic biomass. In: Gupta VK, Tuohy MG, editors. Biofuel technologies. Berlin: Springer; 2013. p. 53–96.
Bridgeman TG, et al. Influence of particle size on the analytical and chemical properties of two energy crops. Fuel. 2007;86(1–2):60–72.
Zhu L, et al. Structural features affecting biomass enzymatic digestibility. Bioresour Technol. 2008;99(9):3817–28.
Mosier N, et al. Features of promising technologies for pretreatment of lignocellulosic biomass. Bioresour Technol. 2005;96(6):673–86.
Lloyd TA, Wyman CE. Combined sugar yields for dilute sulfuric acid pretreatment of corn stover followed by enzymatic hydrolysis of the remaining solids. Bioresour Technol. 2005;96(18):1967–77.
Alizadeh H, et al. Pretreatment of switchgrass by ammonia fiber explosion (AFEX). Appl Biochem Biotechnol. 2005;121:1133–41.
Zhang ZY, et al. Organosolv pretreatment of plant biomass for enhanced enzymatic saccharification. Green Chem. 2016;18(2):360–81.
Xu F, et al. Transforming biomass conversion with ionic liquids: process intensification and the development of a high-gravity, one-pot process for the production of cellulosic ethanol. Energy Environ Sci. 2016;9(3):1042–9.
Zhu JY, et al. Sulfite pretreatment (SPORL) for robust enzymatic saccharification of spruce and red pine. Bioresour Technol. 2009;100(8):2411–8.
Nguyen TY, et al. Co-solvent pretreatment reduces costly enzyme requirements for high sugar and ethanol yields from lignocellulosic biomass. ChemSusChem. 2015;8:1716–25.
Himmel ME, et al. Biomass recalcitrance: engineering plants and enzymes for biofuels production. Science. 2007;315(5813):804–7.
Wyman CE. What is (and is not) vital to advancing cellulosic ethanol. Trends Biotechnol. 2007;25(4):153–7.
Cadoche L, Lopez GD. Assessment of size-reduction as a preliminary step in the production of ethanol from lignocellulosic wastes. Biol Wastes. 1989;30(2):153–7.
Athmanathan A, Trupia S. Examining the role of particle size on ammonia-based bioprocessing of maize stover. Biotechnol Prog. 2016;32(1):134–40.
Tillman LM, Lee YY, Torget R. Effect of transient acid diffusion on pretreatment hydrolysis of hardwood hemicellulose. Appl Biochem Biotechnol. 1990;24–5:103–13.
Kim SB, Lee YY. Diffusion of sulfuric acid within lignocellulosic biomass particles and its impact on dilute-acid pretreatment. Bioresour Technol. 2002;83(2):165–71.
Ewanick S, Bura R. The effect of biomass moisture content on bioethanol yields from steam pretreated switchgrass and sugarcane bagasse. Bioresour Technol. 2011;102(3):2651–8.
Ghose TK, Pannirselvam PV, Ghosh P. Catalytic solvent delignification of agricultural residues: organic catalysts. Biotechnol Bioeng. 1983;25(11):2577–90.
Jannasch R, Quan Y, Samson R. A process and energy analysis of pelletizing switchgrass. Prepared by REAP-Canada (http://www.reap-canada.com) for Natural Resources Canada, 2001.
Schell DJ, Harwood C. Milling of lignocellulosic biomass—results of pilot-scale testing. Appl Biochem Biotechnol. 1994;45–6:159–68.
Hinman ND, et al. Preliminary estimate of the cost of ethanol-production for SSF technology. Appl Biochem Biotechnol. 1992;34–5:639–49.
Mani S, Tabil LG, Sokhansanj S. Grinding performance and physical properties of wheat and barley straws, corn stover and switchgrass. Biomass Bioenergy. 2004;27(4):339–52.
Shi J, Ebrik MA, Wyman CE. Sugar yields from dilute sulfuric acid and sulfur dioxide pretreatments and subsequent enzymatic hydrolysis of switchgrass. Bioresour Technol. 2011;102(19):8930–8.
Nguyen T, et al. Co-solvent pretreatment reduces costly enzyme requirements for high sugar and ethanol yields from lignocellulosic biomass. ChemSusChem. 2015;8(10):1716–25.
Samuel R, et al. Solid-state NMR characterization of switchgrass cellulose after dilute acid pretreatment. Biofuels. 2010;1(1):85–90.
Chundawat SPS, et al. Proteomics-based compositional analysis of complex cellulase-hemicellulase mixtures. J Proteome Res. 2011;10(10):4365–72.
Selig M, Weiss N, Ji Y. Enzymatic saccharification of lignocellulosic biomass: laboratory analytical procedure (LAP): issue date, 3/21/2008. National Renewable Energy Laboratory. 2008.
Zeng YN, et al. Lignin plays a negative role in the biochemical process for producing lignocellulosic biofuels. Curr Opin Biotechnol. 2014;27:38–45.
Cheong WJ, Carr PW. The surface tension of mixtures of methanol, acetonitrile, tetrahydrofuran, isopropanol, tertiary butanol and dimethyl-sulfoxide with water at 25 °C. J Liq Chromatogr. 2006;10(4):561–81.
Boewer L, et al. Concentration-induced wetting transition in water–tetrahydrofuran–isobutane systems. J Phys Chem C. 2011;115(37):18235–8.
Kumar R, Wyman CE. Physical and chemical features of pretreated biomass that influence macro-/micro-accessibility and biological processing. In: Aqueous pretreatment of plant biomass for biological and chemical conversion to fuels and chemicals. 2013. p. 281–310.
Pu YQ, et al. Assessing the molecular structure basis for biomass recalcitrance during dilute acid and hydrothermal pretreatments. Biotechnol Biofuels. 2013;6:15.
Iiyama K, Lam TBT, Stone BA. Covalent cross-links in the cell-wall. Plant Physiol. 1994;104(2):315–20.
Nguyen TY, et al. Overcoming factors limiting high-solids fermentation of lignocellulosic biomass to ethanol. Proc Natl Acad Sci USA. 2017;114(44):11673–8.
Smith PK, et al. Measurement of protein using bicinchoninic acid. Anal Biochem. 1985;150(1):76–85.
Sluiter A, et al. Determination of structural carbohydrates and lignin in biomass. Lab Anal Proced. 2008;1617:1–16.
We acknowledge the Ford Motor Company for funding the Chair in Environmental Engineering that facilitates projects such as this one and the Center for Environmental Research and Technology (CE-CERT) of the Bourns College of Engineering for providing facilities. Finally, we would like to thank fellow group members Hamzah Alshatari, Christian Alcaraz, Priyanka Singh, Juana Sanchez, Raffay Ahmed, and Crystal Pargas for assistance in measuring time taken for solvents to wet switchgrass.
Department of Chemical and Environmental Engineering, Bourns College of Engineering, University of California, Riverside, 900 University Ave, Riverside, CA, 92521, USA
Abhishek S. Patri, Laura McAlister, Charles M. Cai & Charles E. Wyman
BioEnergy Science Center (BESC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN, 37831, USA
Rajeev Kumar
Center for Environmental Research and Technology (CE-CERT), Bourns College of Engineering, University of California, Riverside, 1084 Columbia Ave, Riverside, CA, 92507, USA
Center for Bioenergy Innovation (CBI), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN, 37831, USA
ASP, RK, CMC, and CEW designed the study. ASP and LM carried out the experiments. ASP, RK, CMC, and CEW analyzed the data. ASP, RK, CMC, and CEW wrote and edited the manuscript. All authors read and approved the final manuscript.
Support by the Office of Biological and Environmental Research in the US Department of Energy (DOE) Office of Science through the BioEnergy Science Center (BESC) and the Center for Bioenergy Innovation (CBI), both managed by Oak Ridge National Laboratory, made this research possible. The award of a fellowship to the lead author by the National Center for Sustainable Transportation aided his participation.
Correspondence to Charles E. Wyman.
CEW is founding Editor in Chief of the Journal Biotechnology for Biofuels. CEW was a co-founder of Mascoma Corporation and former chair of their Scientific Advisory Board. He is also a co-founder, president, and CEO of Vertimass LLC, a start-up focused on catalytic conversion of ethanol to fungible hydrocarbon fuels. CMC is the co-founder and CTO of MG Fuels LLC, a start-up focused on electricity and gas production from biomass. The other authors declare that they have no competing interests.
Additional file 1: Table S1. Furfural concentrations in liquid hydrolyzates after DSA and CELF pretreatment of Alamo switchgrass at varying conditions. Figure S1. Flow diagram of pretreatment and enzymatic hydrolysis of switchgrass visualizing Stage 1 and Stage 2. Figure S2. Alamo switchgrass (i) before knife milling, (ii) after milling to < 2 mm, (iii) and after milling to < 1 mm.
|
CommonCrawl
|
Definition:Inverse Secant/Real
Let $x \in \R$ be a real number such that $x \le -1$ or $x \ge 1$.
The inverse secant of $x$ is the multifunction defined as:
$\sec^{-1} \left({x}\right) := \left\{{y \in \R: \sec \left({y}\right) = x}\right\}$
where $\sec \left({y}\right)$ is the secant of $y$.
Arcsecant
Arcsecant Function
From Shape of Secant Function, we have that $\sec x$ is continuous and strictly increasing on the intervals $\left[{0 \,.\,.\, \dfrac \pi 2}\right)$ and $\left({\dfrac \pi 2 \,.\,.\, \pi}\right]$.
From the same source, we also have that:
$\sec x \to + \infty$ as $x \to \dfrac \pi 2^-$
$\sec x \to - \infty$ as $x \to \dfrac \pi 2^+$
Let $g: \left[{0 \,.\,.\, \dfrac \pi 2}\right) \to \left[{1 \,.\,.\, \infty}\right)$ be the restriction of $\sec x$ to $\left[{0 \,.\,.\, \dfrac \pi 2}\right)$.
Let $h: \left({\dfrac \pi 2 \,.\,.\, \pi}\right] \to \left({-\infty \,.\,.\, -1}\right]$ be the restriction of $\sec x$ to $\left({\dfrac \pi 2 \,.\,.\, \pi}\right]$.
Let $f: \left[{0 \,.\,.\, \pi}\right] \setminus \dfrac \pi 2 \to \R \setminus \left({-1 \,.\,.\, 1}\right)$:
$f\left({x}\right) = \begin{cases} g\left({x}\right) & : 0 \le x < \dfrac \pi 2 \\ h\left({x}\right) & : \dfrac \pi 2 < x \le \pi \end{cases}$
From Inverse of Strictly Monotone Function, $g \left({x}\right)$ admits an inverse function, which will be continuous and strictly increasing on $\left[{1 \,.\,.\, \infty}\right)$.
From Inverse of Strictly Monotone Function, $h \left({x}\right)$ admits an inverse function, which will be continuous and strictly increasing on $\left({-\infty \,.\,.\, -1}\right]$.
As the domains of $g$ and $h$ are disjoint, and likewise their ranges, it follows that:
$f^{-1}\left({x}\right) = \begin{cases} g^{-1}\left({x}\right) & : x \ge 1 \\ h^{-1}\left({x}\right) & : x \le -1 \end{cases}$
This function $f^{-1} \left({x}\right)$ is called arcsecant of $x$ and is written $\operatorname{arcsec} x$.
The domain of $\operatorname{arcsec} x$ is $\R \setminus \left({-1 \,.\,.\, 1}\right)$
The image of $\operatorname{arcsec} x$ is $\left[{0 \,.\,.\, \pi}\right] \setminus \dfrac \pi 2$.
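As a quick numerical check of this construction (an addition here, not part of the original page), the sketch below uses the identity $\operatorname{arcsec} x = \arccos \left({\dfrac 1 x}\right)$, whose values lie in $\left[{0 \,.\,.\, \dfrac \pi 2}\right)$ for $x \ge 1$ and in $\left({\dfrac \pi 2 \,.\,.\, \pi}\right]$ for $x \le -1$, matching the piecewise definition above; the Python function name and test points are arbitrary.

```python
import math

def arcsec(x: float) -> float:
    """Arcsecant on |x| >= 1, valued in [0, pi] excluding pi/2.

    Uses arcsec(x) = arccos(1/x), which agrees with the piecewise
    definition: [0, pi/2) for x >= 1 and (pi/2, pi] for x <= -1."""
    if abs(x) < 1:
        raise ValueError("arcsec is only defined for |x| >= 1")
    return math.acos(1.0 / x)

for x in (1.0, 2.0, 10.0, -1.0, -2.0, -10.0):
    y = arcsec(x)
    # sec(y) = 1 / cos(y) should recover x
    assert math.isclose(1.0 / math.cos(y), x, rel_tol=1e-12)
    print(f"arcsec({x:+.1f}) = {y:.6f} rad")
```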
|
CommonCrawl
|
December 2021, 3(4): 701-727. doi: 10.3934/fods.2021020
Learning landmark geodesics using the ensemble Kalman filter
Andreas Bock and Colin J. Cotter
Department of Mathematics, Imperial College London, South Kensington Campus, London SW7 2AZ, UK
* Corresponding author: Andreas Bock
Received: March 2021. Revised: July 2021. Published: December 2021 (early access August 2021).
We study the problem of diffeomorphometric geodesic landmark matching where the objective is to find a diffeomorphism that, via its group action, maps between two sets of landmarks. It is well-known that the motion of the landmarks, and thereby the diffeomorphism, can be encoded by an initial momentum leading to a formulation where the landmark matching problem can be solved as an optimisation problem over such momenta. The novelty of our work lies in the application of a derivative-free Bayesian inverse method for learning the optimal momentum encoding the diffeomorphic mapping between the template and the target. The method we apply is the ensemble Kalman filter, an extension of the Kalman filter to nonlinear operators. We describe an efficient implementation of the algorithm and show several numerical results for various target shapes.
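For orientation, the sketch below shows a generic ensemble Kalman inversion update of the kind the abstract refers to. It is a schematic illustration, not the authors' implementation: the forward map G, the noise covariance, the ensemble size, and all variable names are placeholders.

```python
import numpy as np

# Schematic ensemble Kalman inversion (EKI); G stands in for the map from
# initial momenta to observed landmark positions and is arbitrary here.

def G(u):
    return np.array([u[0] + 0.1 * u[1] ** 2, np.sin(u[1])])

rng = np.random.default_rng(0)
y = np.array([1.0, 0.5])            # observed data (placeholder)
Gamma = 0.01 * np.eye(2)            # observation noise covariance
N = 50                              # ensemble size
U = rng.normal(size=(N, 2))         # initial ensemble of unknowns

for _ in range(20):
    GU = np.array([G(u) for u in U])
    u_bar, g_bar = U.mean(axis=0), GU.mean(axis=0)
    C_uG = (U - u_bar).T @ (GU - g_bar) / N     # cross-covariance
    C_GG = (GU - g_bar).T @ (GU - g_bar) / N    # data covariance
    K = C_uG @ np.linalg.inv(C_GG + Gamma)      # Kalman gain
    noise = rng.multivariate_normal(np.zeros(2), Gamma, size=N)
    U = U + (y + noise - GU) @ K.T              # perturbed-observation update

print("ensemble mean estimate:", U.mean(axis=0))
```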
Keywords: Ensemble Kalman filter, large deformation diffeomorphic metric mapping, Bayesian inverse problem, landmark matching, derivative-free optimisation.
Mathematics Subject Classification: Primary: 62F15, 65C05; Secondary: 34A55.
Citation: Andreas Bock, Colin J. Cotter. Learning landmark geodesics using the ensemble Kalman filter. Foundations of Data Science, 2021, 3 (4) : 701-727. doi: 10.3934/fods.2021020
Figure 1. A matching between landmarks where the geodesics are shown
Figure 2. Template-target configurations for different values of $ M $. Left to right: 10, 50, 150. Linear interpolation has been used between the landmarks to improve the visualisation
Figure 3. Log data misfits for $ M = N_E = 50 $ for different values of $ \xi $ using three different targets
Figure 4. Progression of Algorithm 1 for various targets using $ M = 10 $ and $ N_E = 10 $. Computation times for 50 iterations: 6s for each configuration
Figure 5. Progression of Algorithm 1 for various targets using $M = 50$ and $N_E = 50$. Computation times for 50 iterations (top to bottom): 2m8s, 2m9s, 1m29s
Figure 6. Progression of Algorithm 1 for various targets using $ M = 150 $ and $ N_E = 100 $. Computation times for 50 iterations (top to bottom): 5m22s, 5m23s, 5m23s
Figure 7. Convergence of $ E^k $ where $ M = 10 $
Figure 9. Convergence of $ E^k $ where $ M = 150 $
Figure 10. Evolution of the relative error $ \mathcal{R}^k $ corresponding to the misfits in Figure 7 where $ M = 10 $
Figure 11. Evolution of the relative error $ \mathcal{R}^k $ corresponding to the misfits in Figure 8 where $ M = 150 $
Table 1. Global parameters used for Algorithm 1
Variable = value: description
$ n $ = 50: Kalman iterations
$ T $ = 15: time steps
$ \tau $ = 1: landmark size (cf. (2))
$ \epsilon $ = 1e-05: absolute error tolerance
Table 2. Relative error at the last iteration of algorithm 1 for different values of $ N_E $ for fixed $ M = 10 $. The rows correspond to the configurations in Figure 4
Table 4. Relative error at the last iteration of algorithm 1 for different values of $ N_E $ for fixed $ M = 150 $. The rows correspond to the configurations in Figure 6
|
CommonCrawl
|
Turkish Journal of Mathematics
The Bochner-convolution integral for generalized functional-valued functions of discrete-time normal martingales
CHEN JINSHU
10.3906/mat-1912-21
Let $M$ be a discrete-time normal martingale satisfying some mild conditions, and let $\mathcal{S}(M)\subset L^2(M)\subset\mathcal{S}^*(M)$ be the Gel'fand triple constructed from the functionals of $M$. As is known, there is no usual multiplication in $\mathcal{S}^*(M)$ since its elements are continuous linear functionals on $\mathcal{S}(M)$. However, by using the Fock transform, one can introduce convolution in $\mathcal{S}^*(M)$, which suggests that one can try to introduce a type of integral of an $\mathcal{S}^*(M)$-valued function with respect to an $\mathcal{S}^*(M)$-valued measure in the sense of convolution. In this paper, we define precisely such an integral. First, we introduce a class of $\mathcal{S}^*(M)$-valued measures and examine their basic properties. Then, we define an integral of an $\mathcal{S}^*(M)$-valued function with respect to an $\mathcal{S}^*(M)$-valued measure and, among others, we establish a dominated convergence theorem for this integral. Finally, we also prove a Fubini type theorem for this integral.
JINSHU, CHEN (2020) "The Bochner-convolution integral for generalized functional-valued functions of discrete-time normal martingales," Turkish Journal of Mathematics: Vol. 44: No. 3, Article 6. https://doi.org/10.3906/mat-1912-21
Available at: https://journals.tubitak.gov.tr/math/vol44/iss3/6
|
CommonCrawl
|
How do teachers respond to tenure?
Michael D. Jones
I use the 2007 Schools and Staffing Survey to estimate the effect of tenure on K-12 teacher behavior. Estimates are obtained by exploiting the cross-state variation in the probationary period length of novice teachers. I find that in the year that teachers are evaluated for tenure, they spend significantly more of their own money on classroom materials. The teachers also participate more in school committees and extracurricular activities during the evaluation year. After increased activity during the tenure evaluation year, behavior appears to return to the baseline established prior to evaluation.
JEL Classifications: I21; I28; J22; M5
In most public school districts, teacher tenure is a time-honored element of teacher employment contracts. However, several states across the United States have recently introduced legislation to modify or eliminate teacher tenure. In 2011, the state of Florida passed a bill that any new teacher hired would receive a year-to-year contract, effectively eliminating tenure. In 2009, Ohio extended the probationary period before a teacher is eligible for tenure from three years to seven years. Proponents of tenure argue that once teachers demonstrate competency during a probationary time period, they should be protected from arbitrary dismissal. Opponents of tenure argue that the process of firing poor performing teachers is too time-consuming and expensive. Once a teacher receives tenure, school districts must follow a detailed and costly sequence of steps to fire a poor performing tenured teacher. As a consequence, few tenured teachers are fired for poor performance in the United States. According to The Widget Effect, from 2004–2008, Chicago Public Schools only formally dismissed 9 tenured teachers, or 0.01 percent of its workforce. Prior to receiving tenure, school districts can fire, or fail to renew the contract, of a probationary teacher for almost any reason – with the exception of discriminatory or other illegal reasons. Because tenure status increases a teacher's job security by reducing the likelihood of being fired, I investigate how teachers anticipate and respond to receiving tenure.
In this paper, I look at the change in a teacher's spending on classroom materials and explore whether teachers change their time allocation in activities outside of the classroom (e.g., club sponsorship, coaching, serving on school committees, etc.). Research has consistently found that teacher quality is one of the most important school-level variables affecting student performance (Rivkin et al. 2005; Aaronson et al. 2007; Chetty et al. 2011). Changes in teacher behavior during the tenure evaluation year may affect student performance. I also investigate how teacher work hours change once a teacher is granted tenure – where the number of work hours is measured as the total time a teacher spends on school-related activities during a given week. Relative to the year that teachers are evaluated for tenure, I examine how these measures change immediately before receiving tenure and in the years following tenure. To answer these questions, I use data from the 2007–2008 restricted use version of the Schools and Staffing Survey (SASS) and exploit the cross-state variation in the probationary period length of novice teachers. The majority of states require that teachers serve for three years in a district before tenure is granted. However, several states have shorter probationary periods of only one or two years, while others have longer periods of four or five years.
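To make this identification strategy concrete, the schematic below constructs a "years relative to the tenure evaluation year" variable from a teacher's years in the district and the state probationary period. It is purely illustrative: the data frame, column names, and probationary values are placeholders, not actual SASS variables or the entries of Table 1, and the convention that event time 0 marks the final probationary year is an assumption.

```python
import pandas as pd

# Illustrative event-time construction; all values and column names are
# invented and are not SASS variables or Table 1 entries.
probation_years = {"A": 2, "B": 3, "C": 5}   # hypothetical state policies

teachers = pd.DataFrame({
    "state": ["A", "B", "B", "C"],
    "years_in_district": [2, 2, 4, 5],
    "own_money_spent": [350, 410, 275, 300],  # made-up outcome measure
})

teachers["probation"] = teachers["state"].map(probation_years)
# 0 = the tenure evaluation year (assumed to be the final probationary year);
# negative values are earlier probationary years, positive values post-tenure.
teachers["event_time"] = teachers["years_in_district"] - teachers["probation"]
print(teachers[["state", "years_in_district", "probation", "event_time"]])
```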
Teacher tenure is a specific application of employment protection legislation (EPL) which consists of the laws and regulations that govern the hiring and firing of workers. Once a teacher is granted tenure, dismissal or firing costs increase considerably. There is a sizable economics literature on the effects of EPL on various outcomes of interest. Autor et al. (2007) found that the adoption of wrongful discharge protection laws in the United States altered firms' production choices, causing employers to retain unproductive workers. Blanchard and Portugal (2001) found that the strict employment protection in Portugal profoundly affected the labor market relative to the United States and led to an increased duration of unemployment. Heckman and Pages (2000) showed that job security legislation in Latin America reduced employment and increased wage inequality across workers. Several other papers also found that EPL affects worker employment (Lazear 1990; Miles 2000; Kugler and Saint Paul 2004; Martins 2009).
There are also papers that investigate the impact of EPL on individual worker behavior. Ichino and Riphahn (2005) used data from a large Italian bank and found that employee absenteeism increased significantly once employees were no longer under a probationary period. Scoppa (2010) used the 1990 EPL reform act in Italy to investigate the effect on worker absenteeism in that country. Using a difference-in-difference approach, the author exploited the fact that the law drastically increased the firing costs for small firms and found that shirking increased once employees were granted firing protection. Despite this extensive literature, there is little research that looks at EPL in the context of K-12 education. Jacob (2010) used the 2004 new collective bargaining agreement in Chicago Public Schools (CPS) that gave principals the flexibility to dismiss probationary teachers for any reason and found that annual teacher absences were reduced by roughly 10 percent. Goldhaber and Hansen (2010) examine the implications of using value-added models as a criterion for granting tenure to teachers. The Widget Effect, published by The New Teacher Project, documents the relationship between tenure and the number of teachers who are fired. While not specifically addressing teacher tenure, Hansen (2010) used North Carolina administrative data and found that teacher absences increased dramatically in the year prior to teacher retirement or departure. While understanding teacher behavior under EPL is itself a worthwhile research question, this research is also an important first step in understanding how teacher tenure might affect student outcomes to the extent that teacher behavior changes under tenure.
I find that in the year that teachers are evaluated for tenure, they spend significantly more of their own money on classroom materials. The teachers also participate more in school committees and extracurricular activities during the evaluation year. I also find evidence that these changes in behavior are temporary. After a spike in activity during the tenure evaluation year, behavior appears to return to the baseline established prior to being evaluated for tenure.
State variation in teacher tenure
The history of teacher tenure in the United States began in 1909 when New Jersey became the first state to pass comprehensive tenure legislation for K-12 teachers. By the 1940s, seventy percent of teachers were covered by tenure protection; and today, nearly every state has passed legislation granting some form of tenure. In some states, tenure status is also called a continuing contract or permanent employment status. Regardless of its name, tenure is a series of steps or due process that must be followed in order to dismiss a tenured teacher. In order to receive tenure, new teachers in a school district must be employed for a probationary period. During the 2007–2008 school year, the most common probationary period length was three years, but the probationary period length varied between one year and five years. Table 1 shows the probationary period for each state before a teacher was eligible to receive tenure.
Table 1 Number of years before a teacher earns tenure, by state, 2008
The tenure process comprises at least the following four elements: time to tenure, criteria to earn tenure, process for conferring tenure, and tenure protections (Hassel et al. 2011). In this research, I exploit the variation in the time to tenure in order to understand the effect of tenure on teacher behavior. The other tenure elements do not vary substantially across the states. Incorporating student performance in the tenure decision is rare at the state level. According to the 2008 NCTQ State Teacher Policy Yearbook, only two states, Iowa and New Mexico, required that student academic performance be considered in the criteria for awarding teacher tenure. Even in those states, student performance is not the predominant criterion for awarding tenure. Because states often leave tenure decisions to the discretion of the local school district, I also investigate to what extent student performance, specifically student growth, is discussed in school district contracts as a requirement for teacher tenure. Out of the 50 largest school districts in the United States, only three districts specifically require teachers to demonstrate objective student growth on standardized tests prior to being awarded tenure during the 2007–2008 school year, according to the NCTQ Teacher Contract Database.
Because of the increased intensity of performance reviews and in-class evaluations during the tenure evaluation year, as well as the saliency of the process, we might expect an increase or "spike" in teacher effort and activities during that year. Primary and secondary tenure operates differently than university tenure. For example, Singell and Lillydahl (1996) find that assistant professors spend more time on teaching and research compared to full professors, but there is no difference in time allocation within the rank of assistant professor. Link et al. (2008) find similar results. Earning tenure in a university environment requires consistent and persistent efforts over a five to seven year period of time.
While the systems for conferring tenure across states are procedurally different, the outcomes are not – few teachers who are eligible for tenure are denied this status. In the 2009 report, The Widget Effect, researchers find that in five of the six school districts they studied, less than one percent of probationary teachers were denied tenure. Since the time of the report, several states and districts have increased the standards required to receive tenure. For example, in the 2010–2011 school year, New York City introduced more stringent requirements in order for teachers to achieve tenure. Teachers are now rated under a four-point scale that must incorporate student test scores, classroom observations, and parental feedback (the previous rating system only measured two levels – unsatisfactory and satisfactory). The number of teachers who were denied tenure outright increased from 1 percent in 2006 (approximately the same time period of the data used in this research) to 3 percent in 2011.
In contrast, teachers outside of New York City but still in the state of New York are not always subject to these rigorous elements for tenure. New York state law only requires that teachers be granted tenure after a majority vote of the Board of Cooperative Educational Services upon the recommendation of the district superintendent. The district superintendent must write a report to the board of cooperative educational services indicating that the teacher is "competent, efficient and satisfactory." There is no rubric or requirement that teachers meet student achievement benchmarks or undergo a certain number of observations from a principal or third-party observer. In reviewing tenure documents across the remaining states, I conclude that while the procedural nature of the tenure application varies, there is little substantive difference in the outcomes of the tenure process.
Finally, once a teacher receives tenure, every state provides a substantially higher degree of protection for a teacher's employment contract. According to data from the 2007–2008 SASS, only two percent of teachers in the United States were dismissed or failed to have their contract renewed. Table 2 shows the wide variation in dismissal rates across states. South Dakota removed almost 12 percent of its teachers for poor performance, while Arkansas removed only 0.2 percent. In some school districts, that number is even lower. I tested whether the variation in firing percentages across states generated heterogeneity in teacher response. However, I could find no significant difference between states which fire a relatively higher percentage of teachers relative to those which fire a relatively lower percentage of teachers. This lack of heterogeneity could be due to the fact that outside of South Dakota and Alaska, every state fired less than four percent of teachers for poor performance.
Table 2 Teacher dismissal rates in the 2006–2007 year, 2007–2008 SASS data, all teachers
Details on state tenure laws come from the 2008 National Council on Teacher Quality (NCTQ) State Teacher Policy Yearbook. In the yearbook, NCTQ publishes each state's probationary period before a teacher may be granted tenure, as well as a citation for the relevant state law. In addition, prior to publication of the yearbook, the organization provides state officials with a draft copy of its findings in order to check the accuracy of its claims. Because some laws were written to permit school district administrators to have authority over teacher tenure under special circumstances, at times, discretion must be used to code a state's probationary period into a numerical value. For example, the state of Maryland has a probationary period of two years, but it may be extended to three years on an individual basis. NCTQ decided to code Maryland as having a two-year probationary period. In the four states where NCTQ notes that there are potentially different interpretations, I follow NCTQ's coding scheme.
I match the 2007 NCTQ data to teacher response data from the restricted-use version of the 2007–2008 Schools and Staffing Survey (SASS), conducted by the National Center for Education Statistics. Begun in 1987, the SASS is fielded every three to four years and surveys a stratified random sample of public schools, private schools, and schools funded by the Bureau of Indian Education (BIE). The SASS collects data on teacher, administrator, and school characteristics, as well as school programs and general conditions in schools. In addition to restricting the sample to public school teachers only, teachers who indicated that they received no salary or did not work full time were dropped from the analysis. Teachers from career or vocational schools, alternative schools, and special education schools were also removed from the sample. These teachers were dropped because of the unique circumstances of these schools. Because I am only interested in looking at teacher behavior around tenure, I remove teachers who have been teaching for 8 or more years. Table 3 provides summary statistics for the 2007 SASS sample.
Table 3 Summary statistics, data from 2007 SASS
Table 3 describes overall characteristics of the teaching labor force in the 2007–2008 SASS. Teaching is a female-dominated profession with more than three-quarters of all teachers being female. Over one-half of teachers work in a district with a collective-bargaining agreement. Teachers are asked to provide the total amount of hours spent on all teaching and school-related activities during a typical full week, and the average for this variable is 53 hours per week. This self-reported number of work hours is higher than what is found in other well-known datasets like the CPS; however, the SASS prompts the teacher to include hours spent during the school day, before and after school, and on the weekends.
Teachers indicate that they spend about $420 of their own money on average on classroom supplies. The fact that teachers spend their own money in the classroom is so common that the IRS allows a tax deduction for these purchases called the Educator Expense Deduction. Teachers can deduct up to $250 of any unreimbursed expenses incurred for books, supplies, computer equipment, and other supplementary materials. In addition, I look at teacher participation in school extra-curricular activities as a measure of teacher devotion. Approximately eighteen percent of teachers coach a sport at the school they teach, and one-third of teachers sponsor a school club. Over one-half of teachers indicate that they serve on a school or district wide committee, while only ten percent serve as a curriculum specialist. Almost ninety percent of teachers participate in some form of professional development.
In order for tenure to alter behavior, teachers must have flexibility to make changes around school and extracurricular activities. In the SASS, teachers are asked, "How many hours are you required to work to receive base pay during a typical full week at this school?" On average, school districts require teachers to work 38 hours a week. Since teachers indicate that they spend 53 hours on all teaching-related activities, there is still considerable flexibility for teachers to reduce participation rates in extracurricular activities. Likewise, if teachers are required to participate in professional development activities, teacher tenure could not affect any changes in professional development participation rates. While states do require teachers to participate in some form of professional development to maintain their certification, there is often a time window in which to complete these activities. For example, the state of Ohio requires a teacher to complete 18 continuing education units over a 5 year time period in order to maintain certification. Because a teacher has flexibility around scheduling these units, the possibility of strategic behavior in response to teacher tenure exists.
Previous literature has found a relationship between teacher behavior, attitudes, instructional practices, and other activities on student achievement. Palardy and Rumberger (2008) found positive effects on student achievement by teachers who use different curricula and approaches to instructional practices (e.g., journal writing, silent reading, geometric manipulations, etc.). Other literature has found similar effects of instructional practices on student achievement (Lee et al. 1997; Xue and Meisels 2004; Guarino and Hamilton 2006). In the SASS, teachers who serve as curriculum specialists may be willing to improve their curriculum and approach to teaching. Teachers' use of their own money also signals a desire to invest more in curriculum or instructional practices in order to positively affect student achievement. In addition, a teacher's attitude at work has been shown to influence student achievement (Rowan et al. 1991). Finally, a teacher's credentialing, as evidenced by investing in professional development, has been found to produce mixed results on student achievement. Some authors have found a positive effect, while others have found little or no effect (Goldhaber and Brewer 2001; Wayne and Youngs 2003).
Empirical methodology
To estimate the effects of teacher tenure on effort, I use a type of difference-in-difference framework to exploit the cross-state variation in the time required for a teacher to earn tenure. I exclude Washington DC from the data since there is no required probationary period specified in the employment contract. Within the 50 states, I use the observations in the first year after a teacher receives tenure as the treatment group in the model. The control group consists of teachers who are in the same year of teaching as the treatment group but have not yet received tenure because of the state's longer probationary period. I use this estimation method because simply comparing the differences in outcomes before and after tenure may be confounded by other factors that drive these differences. For example, teachers with one more year of experience may not need as much time to teach the material effectively. There could also be changes in expectations around school service activities for more experienced teachers. For these and other reasons, using states with a longer probationary period controls for the differences in teacher behavior that are not related to teacher tenure.
Figure 1 provides a visual description of the identification strategy by plotting the amount of unreimbursed money that teachers spend on classroom materials by the length of a state's probationary period. For the states with either a two or three year probationary period, Figure 1 clearly shows a "spike" in classroom expenditures in the year that a teacher is being evaluated for tenure. Teachers in two year probationary period states spend more than $100 of their own money in that year relative to the previous year or following year. This sharp increase in expenditures in the year of tenure evaluation motivates the following empirical specification.
Figure 1 Personal money spent on classroom materials, SASS data
For the empirical analysis, I estimate the following equation:
$$ Y_{icdst}={\beta}_{0}+\mathit{BeforeTenureEval}_{icdst}{\beta}_{1}+\mathit{1YrTenure}_{icdst}{\beta}_{2}+\mathit{2PlusYrsTenure}_{icdst}{\beta}_{3}+I_{icdst}{\beta}_{4}+C_{cdst}{\beta}_{5}+D_{dst}{\beta}_{6}+{\mu}_{s}+{\upsilon}_{icdst}+{\varepsilon}_{icdst}, $$
where Y_icdst is the outcome of interest for teacher i, in school c, in school district d, in state s, in teaching year t. BeforeTenureEval is a dummy variable indicating if the teacher is in a year prior to the tenure evaluation year. 1YrTenure is a dummy variable indicating if a teacher is in the first year of teaching with tenure, and 2PlusYrsTenure is a dummy variable indicating if a teacher has two or more years of teaching with tenure. Finally, I is a vector of individual characteristics, C is a vector of school-level characteristics, and D is a vector of district-level characteristics, μ_s are state dummy variables, υ_icdst are dummy variables for years of teaching experience, and ε_icdst is an idiosyncratic error term. Coefficients on the three tenure dummy variables are interpreted relative to the tenure evaluation year.
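For concreteness, a minimal sketch of estimating this specification by OLS is shown below. The data file, column names, and the choice to cluster standard errors by state are hypothetical stand-ins, not details taken from the paper or the restricted-use SASS files.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical SASS extract: one row per teacher, with dummies for the
# tenure-timing categories and a few controls. All names here are made up.
df = pd.read_csv("sass_teachers.csv")

formula = (
    "own_money_spent ~ before_tenure_eval + first_year_tenure "
    "+ two_plus_years_tenure + female + masters_degree "
    "+ enrollment + urban_district + C(experience_years) + C(state)"
)

# The omitted tenure category is the tenure evaluation year, so the three
# tenure coefficients are read relative to that year, as in the text.
# Clustering the errors by state is one reasonable (assumed) choice.
model = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]}
)
print(model.summary())
```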
One potential concern is how to address the tenure status of veteran teachers who transfer school districts. There are a few examples where states have made it easier for veteran teachers to acquire tenure after transferring school districts. For example, in 2011, Illinois passed SB7, which allows previously tenured teachers who earned either a "Proficient" or "Excellent" rating to be eligible for tenure in 2 years if they earned an "Excellent" rating in each of the first two years in the new district. A new teacher in Illinois would be on probation for four years, rather than two. With the limitations of the data, I cannot calculate whether or not this type of condition would be applicable for a teacher in the dataset. Therefore, I treat all teachers as under the same tenure laws as specified in the NCTQ dataset. Including potentially tenured teachers in my estimation strategy would bias my results towards zero. One other concern is that teacher behavior after tenure may be partly driven by selection effects. If less efficient teachers are less likely to receive tenure, then the results may be biased. While there is some concern about selection effects, in 48 out of 50 states, less than four percent of teachers are fired for poor performance. To the extent that selection bias may exist, the effect is likely to be small.
The estimate of the effect of teacher tenure on classroom expenditures is reported in column 1 of Table 4.
Table 4 Effect of teaching tenure on own money spent, SASS data
Relative to the tenure evaluation year, teachers spend approximately $70 less on classroom materials in the years leading up to the evaluation. Likewise, they also spend approximately $75 less in the first year of receiving tenure. This amount reflects an 18 percent decline from the average of $420 spent on classroom materials. The coefficient of $68 on the Tenure Evaluation Year dummy in Column 2 of Table 4 clearly shows the spike in expenditures in the tenure evaluation year. Column 3 of Table 4 shows the results of a simplified difference-in-difference specification that uses only dummy variables for the first year of tenure and for two or more years with tenure. Relative to teachers without tenure, teachers in the first year of tenure spend $73 less on classroom expenditures. However, even though the coefficient on the "Two or More Years with Tenure" dummy is negative, it is statistically insignificant, suggesting that the immediate drop in classroom expenditures may be temporary. Teachers may feel that they need to "take a break" after the tenure evaluation year, but behavior reverts to trend after a one-year pause. Column 4 of Table 4, which presents the baseline specification without any teacher or district controls, shows that the decline in classroom expenditures is not driven by changes in teacher or district characteristics. Regardless of whether teachers spend their own money as an investment in their students' academic performance or as a signal of commitment, teachers appear to perceive that the return on their money declines immediately after tenure.
Table 5 shows the effect of tenure on extra-curricular activities outside of the classroom. Column 1 shows that teachers are six percentage points less likely to serve on a school or district-wide committee immediately after receiving tenure. Columns 2 and 3 of Table 5 show a participation rate spike in coaching a sport and serving as a curriculum specialist during the tenure evaluation year. Teachers who temporarily coach a sport (or serve as an assistant coach) for one year may not be ideal for student development. In contrast to these declines, once teachers receive tenure, they are five percentage points more likely to sponsor a student organization, group, or club in the year following tenure. Teachers who would like to sponsor a club may feel that their time is better spent on more visible and/or more rewarded activities during the tenure evaluation year. Tenure may allow these teachers to pursue other student development activities. This reallocation of time between different extra-curricular activities may explain why tenure does not change the overall level of teacher work hours. Consistent with the findings on classroom expenditures, the shift in participation rates in extracurricular activities appears to be a temporary phenomenon, or "spike", associated with the tenure evaluation year. All of these results still hold under a probit model.
Table 5 Estimates of teacher tenure on extracurricular activities, SASS data
Table 6 shows the effect of teacher tenure on other measures of teacher behavior. Teachers are asked if they communicate with students or parents outside of the classroom using any of the following: email, online bulletin board, course or teacher web page, blog, or instant messaging. In column 1 of Table 6, I find no evidence that teachers change the intensity of their communication with students and parents. In column 2 of Table 6, I find that teachers are 6 percentage points less likely to participate in professional development two or more years after receiving tenure. This is the only behavioral outcome which appears to change permanently. If professional development is only partially subsidized by the school district, or not subsidized at all, teachers may not feel the need to pursue professional development in order to maintain employment at the school district. In column 3 of Table 6, I find that overall teacher work hours do not change. If this measure were defined only as teacher work hours at the school, then there would obviously be no change in teacher work hours. However, the SASS defines teacher work hours more broadly as all school-related activities that take place during the week – including the weekends. Under this definition, there is potential for teacher work hour changes. Since I do not find an increase or decrease in teacher work hours during the tenure evaluation year, I conclude that the spike in certain activities during this year must reduce the amount of time spent on other school-related activities. In column 4 of Table 6, I find that tenured teachers feel they have more job security relative to the tenure evaluation year. Immediately after receiving tenure, teachers are four percentage points less likely to agree or strongly agree with the statement, "I worry about the security of my job because of the performance of my students on state and/or local tests." With two or more years of tenure, teachers are six percentage points less likely to agree with this statement relative to teachers in their tenure evaluation year. This result is evidence that the tenure definitions in equation 1 are consistent with the notion that tenure is effective in reducing concerns about job security.
Table 6 Estimates of teacher tenure on other behaviors, SASS data
Finally, Table 7 presents evidence that men and women may respond differently in the year of tenure evaluation. While column 1 shows that the difference in the amount of money spent in the classroom does not vary meaningfully, women are more likely to participate in school committees during the tenure evaluation year. They are also more likely to coach a school sport relative to the year before and the year after tenure evaluation. In contrast, men appear much less likely to sponsor a school club during their tenure evaluation year.
Table 7 Estimates of teacher tenure, by gender
Threats to identification
The validity of the estimation strategy relies on the assumption that teachers in their tenure evaluation year are similar to teachers with the same years of experience who teach in a district with a longer probationary period. For example, third year teachers in districts that award tenure in the third year of teaching are similar to third year teachers in districts that award tenure in the fourth or fifth year of teaching. Said another way, there should not be cross-sectional variation in teacher or district characteristics unrelated to tenure that modify teacher behavior during this evaluation period.
The previous finding in column 4 of Table 4, which presents the baseline specification without any teacher or district controls, showed that the decline in classroom expenditures is not driven by changes in teacher or district characteristics. This finding suggests that there is no cross-sectional variation in teacher and district characteristics at the time of tenure evaluation. Including these characteristics in the estimating equation is done to improve efficiency and not to control for confounding factors. If the key findings only exist once these covariates are included, one might speculate that there are additional factors, outside of tenure, driving the results.
I will next present two tests to provide evidence that the cross-sectional variation in the treatment and control groups would not have changed in an environment without tenure. First, if the identification strategy for teacher tenure is working properly, I would not expect to see any changes in teacher behavior after the transition period from probation to tenure. Columns 1–4 in Table 8 show that there is no statistically significant difference in teacher extra-curricular activities (the same variables as in Table 5) between the second and third years after receiving tenure. Note that the difference-in-difference strategy may still causally identify the effect of teacher tenure even if there is a change in behavior moving from year 2 to year 3 in a three year probationary period district. For example, particularly forward-looking teachers may start to increase their teaching hours in advance of the tenure application process.
Table 8 Falsification test, change in dependent variable between 2 and 3 years after tenure, SASS data
In the second test for the validity of the empirical methodology, I carry out a placebo test where I investigate if outcomes, which should be unaffected by tenure, change as a result of the identification strategy. Teachers are asked if any of the following are serious or moderate problems: student tardiness, students being unprepared, or students dropping out. We would not expect the coefficients of these variables in the estimating equation to be statistically different from zero as a result of tenure evaluation. Columns 1–3 of Table 9 provide confirmation of this intuition. The coefficients on the dummies for Period Before Tenure Evaluation, First Year with Tenure, and Two or More Years with Tenure are all statistically insignificant. In addition to student behavior not changing as a result of tenure, column 4 of Table 9 shows that a teacher's base salary does not change either. In almost every school district, teacher salary is a function of experience and education. With no statistically significant coefficients on the tenure year variables, column 4 provides more evidence for the validity of the identification strategy.
Table 9 Falsification test, effect of teacher tenure on placebo variables, SASS data
This paper describes the change in teacher behavior during the tenure evaluation year. I find that in the year that teachers are evaluated for tenure, they spend significantly more of their own money on classroom materials. The teachers also participate more in school committees and extracurricular activities during the evaluation year. This paper does not make the larger and more ambitious claims about the welfare implications of this behavior. While certain activities are unlikely to benefit from a spike in activity around the tenure evaluation year (e.g., coaching a sport likely requires several years to master), tenure may also grant teachers the freedom to pursue club sponsorship and other activities that may not have been pursued under an annual evaluation.
I also find evidence that these changes in behavior are temporary. Teachers may feel that they need to "take a break" after the tenure evaluation year, but then their behavior returns to the status quo after a one year pause. This finding should concern district policymakers since it suggests that tenure is temporarily altering teacher behavior. District officials would hope that their policies have a permanent effect on teaching behavior; however, the only finding that appears to be permanent is a decrease in time spent on professional development after receiving tenure. In contrast, those states which have eliminated tenure should not see swings in teacher behavior around tenure evaluation. This consequence may make planning and staffing decisions easier for school district officials in these states.
The findings in this paper lead to interesting avenues of future research. If teachers behave strategically, the next step should be to investigate the impact of tenure on student achievement. The primary limitation to this study is that it does not establish the link between teacher behavior and student achievement. For example, does student achievement noticeably improve in the year that a teacher is being evaluated for tenure? If teachers are spending more of their own money on classroom materials and spending more time communicating with students and parents, then students' academic performance may improve. Consequently, does student performance decline in the year immediately following tenure? Since total work hours remain unchanged after tenure, a teacher's reallocation of time towards certain extracurricular activities may provide insight into the link between teacher activities and student achievement.
Aaronson D, Barrow L, Sander W (2007) Teachers and Student Achievement in the Chicago Public High Schools. J Labor Econ 25:95–135, doi:10.1086/508733
Autor DH, Kerr WR, Kugler AD (2007) Does Employment Protection Reduce Productivity? Evidence From US States*. Econ J 117:F189–F217, doi:10.1111/j.1468-0297.2007.02055.x
Blanchard O, Portugal P (2001) What Hides behind an Unemployment Rate: Comparing Portuguese and U.S. Labor Markets. Am Econ Rev 91:187–207
Chetty R, Friedman JN, Rockoff JE (2011) The Long-Term Impacts of Teachers: Teacher Value-Added and Student Outcomes in Adulthood. National Bureau of Economic Research
Goldhaber DD, Brewer DJ (2001) Evaluating the Evidence on Teacher Certification: A Rejoinder. Educ Eval Policy Anal 23:79–86, doi:10.3102/01623737023001079
Goldhaber D, Hansen M (2010) Using Performance on the Job to Inform Teacher Tenure Decisions. Am Econ Rev 100:250–255
Guarino CM, Hamilton LS (2006) Teacher Qualifications, Instructional Practices, and Reading and Mathematics Gains of Kindergartners. http://www.rand.org/pubs/external_publications/EP20060339.html. Accessed 3 Mar 2015
Hansen M (2010) How Career Concerns Influence Public Workers' Effort: Evidence from the Teacher Labor Market. http://www.urban.org/url.cfm?ID=1001368&renderforprint=1. Accessed 3 Mar 2015
Hassel EA, Kowal J, Ableidinger J, Hassel BC (2011) Teacher Tenure Reform: Applying Lessons from the Civil Service and Higher Education. Building an Opportunity Culture for America's Teachers. Public Impact
Heckman JJ, Pages C (2000) The Cost of Job Security Regulation: Evidence from Latin American Labor Markets. National Bureau of Economic Research
Ichino A, Riphahn RT (2005) The Effect of Employment Protection on Worker Effort: Absenteeism During and After Probation. J Eur Econ Assoc 3:120–143, doi:10.1162/1542476053295296
Jacob BA (2010) The Effect of Employment Protection on Worker Effort: Evidence from Public Schooling. National Bureau of Economic Research
Kugler AD, Saint‐Paul G (2004) How Do Firing Costs Affect Worker Flows in a World with Adverse Selection? J Labor Econ 22:553–584, doi:10.1086/383107
Lazear EP (1990) Job Security Provisions and Employment. Q J Econ 105:699–726, doi:10.2307/2937895
Lee VE, Smith JB, Croninger RG (1997) How High School Organization Influences the Equitable Distribution of Learning in Mathematics and Science. Sociol Educ 70:128–150, doi:10.2307/2673160
Link AN, Swann CA, Bozeman B (2008) A time allocation study of university faculty. Econ Educ Rev 27:363–374, doi:10.1016/j.econedurev.2007.04.002
Martins PS (2009) Dismissals for Cause: The Difference That Just Eight Paragraphs Can Make. J Labor Econ 27:257–279, doi:10.1086/599978
Miles TJ (2000) Common law exceptions to employment at will and U.S. labor markets. J Law Econ Organ 16:74–101, doi:10.1093/jleo/16.1.74
Palardy GJ, Rumberger RW (2008) Teacher Effectiveness in First Grade: The Importance of Background Qualifications, Attitudes, and Instructional Practices for Student Learning. Educ Eval Policy Anal 30:111–140, doi:10.3102/0162373708317680
Rivkin SG, Hanushek EA, Kain JF (2005) Teachers, Schools, and Academic Achievement. Econometrica 73:417–458, doi:10.1111/j.1468-0262.2005.00584.x
Rowan B, Raudenbush SW, Kang SJ (1991) Organizational Design in High Schools: A Multilevel Analysis. Am J Educ 99:238–266
Scoppa V (2010) Shirking and employment protection legislation: Evidence from a natural experiment. Econ Lett 107:276–280, doi:10.1016/j.econlet.2010.02.008
Singell LD, Lillydahl JH (1996) Will Changing Times Change the Allocation of Faculty Time? J Hum Resour 31:429–449, doi:10.2307/146070
Wayne AJ, Youngs P (2003) Teacher Characteristics and Student Achievement Gains: A Review. Rev Educ Res 73:89–122, doi:10.3102/00346543073001089
Xue Y, Meisels SJ (2004) Early Literacy Instruction and Learning in Kindergarten: Evidence From the Early Childhood Longitudinal Study—Kindergarten Class of 1998–1999. Am Educ Res J 41:191–229, doi:10.3102/00028312041001191
I would like to thank Bill Evans, Abigail Wozniak, and Richard Jensen for their helpful comments. I would also like to thank participants at the Notre Dame microeconomic seminars, as well as members of the Economics and Finance Department at the University of Dayton for their helpful seminar comments. Finally, I thank the anonymous referees for their feedback and suggested improvements.
Responsible editor: Pierre Cahuc.
Department of Economics, University of Cincinnati, 324 Lindner Hall, 2925 Campus Green Drive, Cincinnati, OH, 45221, USA
Michael D Jones
Correspondence to Michael D Jones.
The IZA Journal of Labor Economics is committed to the IZA Guiding Principles of Research Integrity. The author declares that he has observed these principles.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Jones, M.D. How do teachers respond to tenure?. IZA J Labor Econ 4, 8 (2015). https://doi.org/10.1186/s40172-015-0024-6
Employment protection legislation
Resolution of singular points on a curve
After reading Fulton's book "Algebraic Curves", I know how to do resolution of singular points on curves. Given an affine equation, I can get its non-singular affine model, i.e., the normalization of its affine coordinate ring. The problem is that in Fulton's book he works over an algebraically closed field and blows up at the origin (0,0). If one blows up (p. 165 in Fulton's book) a curve defined over a field $k$ which is not necessarily algebraically closed, at a singular point which is not necessarily $k$-rational, one gets an equation defined over a finite extension of $k$. But for any affine curve (i.e., an integral scheme of dimension 1), one can form the normalization of its coordinate ring and hence get a non-singular affine curve which is birational to the original curve. My question is: how can one get an equation for this normalized affine curve? I know there are algorithms to compute the integral closure of function fields, but the result is an integral basis rather than an equation. Moreover, I would like to know if there is a systematic method to get this normalized affine equation with the blowing-up method.
ac.commutative-algebra
algebraic-curves
user565739
It won't be possible in general to get a single equation, because the curve does not necessarily admit a nonsingular plane model (globally). It is possible to get equations using Computer Algebra Systems, for instance Singular does it as explained here.
Concerning blowups: if the curve has a singular point which is not rational, then all of its conjugate points will be singular too, and they'll form a scheme that is defined over $k$. I never had to deal with such examples, but if $k$ is perfect I suppose that the blowup centered at that scheme will simplify the singularity and, iterating, eventually resolve it.
quim
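For a concrete feel of the chart computation in Fulton's blow-up, here is a minimal sketch in Python/SymPy for a purely illustrative nodal cubic over the rationals (one affine chart only; this is not a general normalization algorithm):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# Illustrative nodal cubic y^2 = x^2*(x + 1), singular at the origin.
f = y**2 - x**2*(x + 1)

# Blow up the origin; in the affine chart y = t*x the total transform
# factors as x^2 * (strict transform), so divide out the exceptional part.
g = sp.expand(f.subs(y, t*x))    # t**2*x**2 - x**3 - x**2
strict = sp.cancel(g / x**2)     # t**2 - x - 1

print(strict)
# The strict transform t**2 - x - 1 is smooth in this chart, and the two
# points (x, t) = (0, 1) and (0, -1) lying over the origin correspond to
# the two branches of the node.
```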
Could you say more about "blowup centered at that scheme" or give a reference for this? I learned blow-ups from Fulton's book, which only treats blowing up at (closed and rational) points. Thanks.
– user565739
Hartshorne explains the general construction via the (sheaf of) graded algebras associated to a (sheaf of) ideals in II.7. There must be a simpler exposition for the affine case written somewhere (to avoid the formalism of sheaves), but unfortunately I don't know a reference. :( Sorry!
– quim
The rough idea is as follows (though I recommend to get a real explanation in full, if someone can give a better reference). Let ${\mathfrak p}\subset R$ be the ideal of the scheme at which you want to blow up (I assume R=k[x,y]). Let A be the graded algebra $\bigoplus {\mathfrak p}^n$. Then the blowup is a variety, projective over R, associated to the graded algebra A. To obtain its equations in $\mathbb{P}^n_R$, you need to know a minimal set of generators (n+1 of them) and their relations.
Just a quick comment, the algebra quim describes is called the Rees algebra. It might help you find references where some examples are worked out.
– Karl Schwede
EURASIP Journal on Advances in Signal Processing
Consistent independent low-rank matrix analysis for determined blind source separation
Daichi Kitamura1 &
Kohei Yatabe2
EURASIP Journal on Advances in Signal Processing volume 2020, Article number: 46 (2020)
Independent low-rank matrix analysis (ILRMA) is the state-of-the-art algorithm for blind source separation (BSS) in the determined situation (the number of microphones is greater than or equal to that of source signals). ILRMA achieves a great separation performance by modeling the power spectrograms of the source signals via the nonnegative matrix factorization (NMF). Such a highly developed source model can solve the permutation problem of the frequency-domain BSS to a large extent, which is the reason for the excellence of ILRMA. In this paper, we further improve the separation performance of ILRMA by additionally considering the general structure of spectrograms, which is called consistency, and hence, we call the proposed method Consistent ILRMA. Since a spectrogram is calculated by an overlapping window (and a window function induces spectral smearing called main- and side-lobes), the time-frequency bins depend on each other. In other words, the time-frequency components are related to each other via the uncertainty principle. Such co-occurrence among the spectral components can function as an assistant for solving the permutation problem, which has been demonstrated by a recent study. On the basis of these facts, we propose an algorithm for realizing Consistent ILRMA by slightly modifying the original algorithm. Its performance was extensively evaluated through experiments performed with various window lengths and shift lengths. The results indicated several tendencies of the original and proposed ILRMA that include some topics not fully discussed in the literature. For example, the proposed Consistent ILRMA tends to outperform the original ILRMA when the window length is sufficiently long compared to the reverberation time of the mixing system.
Blind source separation (BSS) is a technique for separating individual sources from an observed mixture without knowing how they were mixed. BSS for multichannel audio signals observed by multiple microphones has been particularly studied [1–13]. The BSS problem can be divided into two situations: underdetermined (the number of microphones is less than the number of sources) and (over-)determined (the number of microphones is greater than or equal to the number of sources) cases. This paper focuses on the determined BSS problem, as high-quality separation can be achieved compared with the underdetermined BSS methods.
Independent component analysis (ICA) is the most popular and successful algorithm for solving the determined BSS problem [1]. It estimates a demixing matrix (the inverse system of the mixing process) by assuming statistical independence between the sources. For a mixture of audio signals, ICA is usually applied in the time-frequency domain via the short-time Fourier transform (STFT) because the sources are mixed up by convolution. This strategy is called frequency-domain ICA (FDICA) [2] and independently applies ICA to the complex-valued signals in each frequency. Then, the estimated frequency-wise demixing matrices must be aligned over all frequencies so that the frequency components of the same source are grouped together. Such alignment of the frequency components is called a permutation problem [3–6], and a complete solution to it has not been established. Therefore, a great deal of research has tackled this problem.
To avoid the permutation misalignment as much as possible, various sophisticated source models have been proposed. Independent vector analysis (IVA) [7–10] is one of the most successful methods in the early stage of the development. It assumes higher-order dependencies (co-occurrence among the frequency components) of each source by utilizing a spherical generative model of the source frequency vector. This assumption enables IVA to simultaneously estimate the frequency-wise demixing matrices and solve the permutation problem to a large extent using only one objective function. It has been further developed by improving its source model. One natural and powerful extension of IVA is independent low-rank matrix analysis (ILRMA) [11, 12], which integrates the source model of nonnegative matrix factorization (NMF) [14, 15] based on the Itakura–Saito divergence (IS-NMF) [16] into IVA. This extension has greatly improved the performance of separation by taking the low-rank time-frequency structure (co-occurrence among the time-frequency bins) of the source signals into account. ILRMA has achieved the state-of-the-art performance and been further developed by several researchers [17–29]. In this respect, ILRMA can be considered the new standard of the determined BSS algorithms. However, the separation performance of IVA and ILRMA is still inferior compared to the ideal performance of ICA-based frequency-domain BSS. In [30], the performances of IVA and ILRMA were compared with that of FDICA with perfect permutation alignment using reference sources (ideal permutation solver), and it was confirmed that there is still a noticeable room for improvement of ILRMA-based BSS. In fact, IVA and ILRMA often encounter the block permutation problem, that is, group-wise permutation misalignment of components between sources [31].
The consistency of a spectrogram is another promising approach for solving the permutation problem. A recent study has shown that STFT can provide some effective information related to the co-occurrence among the time-frequency bins [32]. Since an overlapping window is utilized in STFT, the time-frequency bins are related to each other based on the overlapping segments. The frequency components within a segment are also related to each other because of the spectral smearing called main- and side-lobes of the window. In other words, the time-frequency components are not independent but related to each other via the uncertainty principle of time-frequency representation. Such relations have been well-studied in phase-aware signal processing [33–43] by the name of spectrogram consistency [44–47]. In the previous study [32], the spectrogram consistency was imposed on BSS to help the algorithm solve the permutation problem. This is an approach very different from the conventional studies of determined BSS because it utilizes the general property of STFT independent of the source model (in contrast to the abovementioned methods that focused on modeling of the source signals without considering the property of STFT). As the spectrogram consistency can be incorporated with any source model, its combination with the state-of-the-art algorithm should achieve a high separation performance.
However, the paper that proposed the combination of consistency and determined BSS [32] only showed the potential of consistency in an experiment using FDICA and IVA. The paper claimed that it was a first step of incorporating the spectrogram consistency with determined BSS, and no advanced method was tested. In particular, ILRMA was not considered because its algorithm is far more complicated than that derived in [32], and thus, it is not clear whether (and how much) the spectrogram consistency might improve the state-of-the-art BSS algorithm.
In this paper, we propose a new variant of ILRMA called Consistent ILRMA that considers the spectrogram consistency within the algorithm of ILRMA. The combination of IS-NMF and spectral smoothing of the inverse STFT (see Figs. 1 and 2 in Section 2.3) achieves the source modeling for a complex spectrogram. In particular, the spectral smearing in the frequency direction ties the adjacent frequency bins together, and this effect of spectrogram consistency helps ILRMA to solve the permutation problem. Since consistency is a concept depending on the parameters related to a window function, we extensively tested the separation performance of Consistent ILRMA through experiments with various window lengths and shift lengths. The results clarified several tendencies of the conventional and proposed methods, including that the proposed method outperforms the original ILRMA when the window length is sufficiently long compared to the reverberation time of the mixing system.
Inconsistent power spectrograms |Sart|2 (left column) and their consistent version (right column) obtained by applying inverse STFT and STFT. The top-left spectrogram is artificially produced with random phase. The middle-left and the bottom-left spectrograms are music and speech signals with random dropout. Enforcing spectrogram consistency can be viewed as a smoothing process of the inconsistent spectrogram along both time and frequency axes
Smoothing effect of spectrogram consistency applied to permutation misaligned signals: a music and b speech. The left column shows the original source signals |Sn|2, and the center column shows their randomly permuted versions, which simulates the permutation problem and is denoted as \(\boldsymbol {S}_{n}^{(\text {perm})}\). The right column shows the consistent versions of \(\boldsymbol {S}_{n}^{\mathrm {(perm)}}\). The smoothing effect mixes up the signals
Permutation problem of frequency-domain BSS and spectrogram consistency
Formulation of frequency-domain BSS
Let the lth sample of a time-domain signal be denoted as x[l], and N source signals be observed by M microphones. Then, the lth samples of the multichannel source, observed, and separated signals are respectively denoted as:
$$\begin{array}{*{20}l} {}\boldsymbol{s}[l] &= \left[\, s_{1}[l], s_{2}[l], \cdots, s_{n}[l], \cdots s_{N}[l] \,\right]^{\mathrm{T}} \in \mathbb{R}^{N}, \end{array} $$
$$\begin{array}{*{20}l} {}\boldsymbol{x}[l] &= \left[\, x_{1}[l], x_{2}[l], \cdots, x_{m}[l], \cdots x_{M}[l] \,\right]^{\mathrm{T}} \in \mathbb{R}^{M}, \end{array} $$
$$\begin{array}{*{20}l} {} \boldsymbol{y}[l] &= \left[\, y_{1}[l], y_{2}[l], \cdots, y_{n}[l], \cdots y_{N}[l] \,\right]^{\mathrm{T}} \in \mathbb{R}^{N}, \end{array} $$
where n=1,⋯,N,m=1,⋯,M, and l=1,⋯,L are the indexes of sources, microphones (channels), and discrete time, respectively, and ·T denotes the transpose. BSS aims at recovering the source signal s from the observed signal x, i.e., making y as close to s as possible.
In the frequency-domain BSS, those signals are handled in the time-frequency domain via STFT. Let the window length and shifting step of STFT be denoted as Q and τ, respectively. Then, the jth segment of a signal z[l] is defined as:
$$\begin{array}{*{20}l} \boldsymbol{z}^{[j]} &= \left[\, z[(j-1)\tau+1], z[(j-1)\tau+2], \cdots, z[(j-1)\tau+Q] \,\right]^{\mathrm{T}}, \\ &= \left[\, z^{[j]}[1], z^{[j]}[2], \cdots, z^{[j]}[q], \cdots, z^{[j]}[Q] \,\right]^{\mathrm{T}} \in \mathbb{R}^{Q}, \end{array} $$
where j=1,⋯,J and q=1,⋯,Q are the indexes of the segments and in-segment samples, respectively, and the number of segments is given by J=L/τ with some zero-padding for adjusting the signal length L if necessary. STFT of a signal \(\boldsymbol {z} =\ [\,z[1], \,z[2],\cdots,z[L]\,]^{\mathrm {T}}\in \mathbb {R}^{L}\) is denoted by:
$$ \boldsymbol{Z} = \text{STFT}_{\boldsymbol{\omega}}(\boldsymbol{z}) \;\;\in\mathbb{C}^{I\times J}, $$
where the (i,j)th bin of the spectrogram Z is given as:
$$ z_{ij} = \sum\limits_{q=1}^{Q} \omega[q]\,z^{[j]}[q]\;\mathrm{e}^{-\imath2\pi(q-1)(i-1)/F}, $$
i=1,⋯,I is the index of frequency bins, F is an integer satisfying ⌊F/2⌋+1=I,⌊·⌋ is the floor function, ı denotes the imaginary unit, and ω is an analysis window. The inverse STFT with a synthesis window \(\widetilde {\boldsymbol {\omega }}\) is also defined in the usual way and denoted as \(\text {ISTFT}_{\widetilde {\boldsymbol {\omega }}}(\cdot)\). In this paper, we assume that the window pair satisfies the following perfect reconstruction condition:
$$ \boldsymbol{z} = \text{ISTFT}_{\widetilde{\boldsymbol{\omega}}}\left(\text{STFT}_{\boldsymbol{\omega}}(\boldsymbol{z})\right)\qquad\forall \boldsymbol{z}\in\mathbb{R}^{L}. $$
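In practice, this condition can be checked for a given window and shift; for instance, scipy provides the standard COLA/NOLA tests for its STFT/inverse-STFT pair. The Hann window and the parameter values below are merely examples, not the settings used later in the experiments:

```python
from scipy.signal import check_COLA, check_NOLA, get_window

# Example check for a Hann window with Q = 512 and shift tau = 256
# (50% overlap); these values are arbitrary illustrations.
win = get_window('hann', 512)
print(check_COLA(win, 512, 256))   # True: constant overlap-add holds
print(check_NOLA(win, 512, 256))   # True: scipy's istft can invert this STFT
```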
By applying STFT, the (i,j)th bin of the spectrograms of source, observed, and separated signals can be written as:
$$\begin{array}{*{20}l} \boldsymbol{s}_{ij} &= \left[\, s_{ij1}, s_{ij2}, \cdots, s_{ijn}, \cdots s_{ijN} \,\right]^{\mathrm{T}} \in \mathbb{C}^{N}, \end{array} $$
$$\begin{array}{*{20}l} \boldsymbol{x}_{ij} &= \left[\, x_{ij1}, x_{ij2}, \cdots, x_{ijm}, \cdots x_{ijM} \,\right]^{\mathrm{T}} \in \mathbb{C}^{M}, \end{array} $$
$$\begin{array}{*{20}l} \boldsymbol{y}_{ij} &= \left[\, y_{ij1}, y_{ij2}, \cdots, y_{ijn}, \cdots y_{ijN} \,\right]^{\mathrm{T}} \in \mathbb{C}^{N}. \end{array} $$
We also denote the spectrograms corresponding to the nth or mth signals in (8)–(10) as \(\boldsymbol {S}_{n}\in \mathbb {C}^{I\times J}, \boldsymbol {X}_{m}\in \mathbb {C}^{I\times J}\), and \(\boldsymbol {Y}_{n}\in \mathbb {C}^{I\times J}\), whose elements are sijn,xijm, and yijn, respectively. In the ordinary frequency-domain BSS, an instantaneous mixing process for each frequency bin is assumed:
$$\begin{array}{*{20}l} \boldsymbol{x}_{ij} = \boldsymbol{A}_{i}\boldsymbol{s}_{ij}, \end{array} $$
where \(\boldsymbol {A}_{i}\in \mathbb {C}^{M\times N}\) is a frequency-wise mixing matrix. The mixture model (11) is approximately valid when the reverberation time is sufficiently shorter than the length of the analysis window used in STFT [48].
Hereafter, we consider the determined case, i.e., M=N. In this case, BSS can be achieved by estimating the inverse of Ai for all frequency bins. By denoting an approximate inverse as \(\boldsymbol {W}_{i}\approx \boldsymbol {A}_{i}^{-1}\), the separation process can be written as:
$$\begin{array}{*{20}l} \boldsymbol{y}_{ij} = \boldsymbol{W}_{i}\boldsymbol{x}_{ij}, \end{array} $$
where \(\boldsymbol {W}_{i}=\left [\boldsymbol {w}_{i1},\boldsymbol {w}_{i2},\cdots,\boldsymbol {w}_{iN}\right ]^{\mathrm {H}}\in \mathbb {C}^{N\times M}\) is a frequency-wise demixing matrix and ·H denotes the Hermitian transpose. The aim of a determined BSS algorithm is to find the demixing matrices for all frequency bins so that the separated signals approximate the source signals.
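As a minimal sketch of how (12) is applied in practice, the frequency-wise products can be evaluated in one step when the spectrograms and demixing matrices are stacked into arrays; the shapes and data below are random placeholders:

```python
import numpy as np

# Placeholder sizes: I frequency bins, J frames, M microphones, N sources.
rng = np.random.default_rng(0)
I, J, M, N = 257, 100, 2, 2
X = rng.standard_normal((I, J, M)) + 1j * rng.standard_normal((I, J, M))  # observed
W = rng.standard_normal((I, N, M)) + 1j * rng.standard_normal((I, N, M))  # demixing

# y_ij = W_i x_ij, evaluated for all frequencies i and frames j at once.
Y = np.einsum('inm,ijm->ijn', W, X)   # separated spectrograms, shape (I, J, N)
```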
Permutation problem in determined BSS
In practice, the scale and permutation of the separated signals are unknown because the information of the mixing process is missing. That is, when the separation is correctly performed by some demixing matrix Wi as in (12), the following signal is also a solution to the BSS problem:
$$ \hat{\boldsymbol{y}}_{ij} = \hat{\boldsymbol{W}}_{i}\boldsymbol{x}_{ij} \qquad \left(\hat{\boldsymbol{W}}_{i} = \boldsymbol{D}_{i}\boldsymbol{P}_{i}\boldsymbol{W}_{i}\right), $$
where \(\boldsymbol {D}_{i}\in \mathbb {C}^{N\times N}\) and Pi∈{0,1}N×N are arbitrary diagonal and permutation matrices, respectively. While the signal scale can easily be recovered by applying the back projection [49], the permutation of the estimated signals \(\hat {\boldsymbol {y}}_{ij}\) must be aligned for all frequency bins, i.e., Pi must be the same for all i. This alignment of the permutation of estimated signals is the permutation problem, which is the main obstacle of the frequency-domain determined BSS.
In FDICA, a permutation solver (realignment process of Pi) is utilized as a post-processing applied to the frequency-wise separated signals \(\hat {\boldsymbol {y}}_{ij}\) [4–6]. In recent frequency-domain BSS methods, an additional assumption on sources (or source model) is introduced to circumvent the permutation problem. For example, IVA assumes simultaneous co-occurrence of all frequency components in the same source, and ILRMA assumes a low-rank structure of the power spectrogram Yn. Other source models have also been proposed for improving the separation performance [50–52]. These source models can avoid the permutation problem to some extent during the estimation of \(\hat {\boldsymbol {W}}_{i}\). Recent developments of determined BSS have been achieved via the quest to find a better source model that represents the source signals more precisely.
Solving permutation problem by spectrogram consistency
A recent paper reported another approach for solving the permutation problem based on the general property of STFT called spectrogram consistency [32]. The consistency is a fundamental property of a spectrogram. Since any time-frequency representation has a theoretical limitation called the uncertainty principle, the time-frequency bins of a spectrogram are not independent but related to each other. The inverse STFT always modifies the spectrogram Zn that violates this kind of inter-time-frequency relation so that the relation is recovered. That is, a spectrogram Zn properly retains the inter-time-frequency relation if and only if
$$ \mathcal{E}(\boldsymbol{Z}_{n}) = \boldsymbol{Z}_{n} - \text{STFT}_{\boldsymbol{\omega}}\left(\text{ISTFT}_{\widetilde{\boldsymbol{\omega}}}(\boldsymbol{Z}_{n})\right) $$
is zero, i.e., \(\|\mathcal {E}(\boldsymbol {Z}_{n})\|=0\) for a norm ∥·∥. Such spectrogram Zn satisfying \(\|\mathcal {E}(\boldsymbol {Z}_{n})\|=0\) is said to be consistent.
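The criterion can be checked numerically. The following minimal sketch uses scipy's STFT/inverse-STFT pair with its default Hann window and 50% overlap (a perfect-reconstruction pair); the signals are random placeholders:

```python
import numpy as np
from scipy.signal import stft, istft

nperseg = 512   # Hann window and 50% overlap are scipy's defaults

def consistency_residual(Z):
    """Frobenius norm of E(Z) = Z - STFT(ISTFT(Z)), cf. Eq. (14)."""
    _, z = istft(Z, window='hann', nperseg=nperseg)
    _, _, Z2 = stft(z, window='hann', nperseg=nperseg)
    return np.linalg.norm(Z - Z2[:, :Z.shape[1]])   # trim a possible extra frame

rng = np.random.default_rng(0)

# The STFT of any time-domain signal is consistent: the residual is ~0.
_, _, Z_sig = stft(rng.standard_normal(16000), window='hann', nperseg=nperseg)
print(consistency_residual(Z_sig))       # numerically zero

# An arbitrary complex array of the same shape is almost surely inconsistent.
Z_art = rng.standard_normal(Z_sig.shape) + 1j * rng.standard_normal(Z_sig.shape)
print(consistency_residual(Z_art))       # clearly nonzero
```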
Figure 1 demonstrates the effect of spectrogram consistency, where \(\boldsymbol {S}_{\text {art}}\in \mathbb {C}^{I\times J}\) is an artificially produced complex-valued spectrogram and |Sart|2 is its power spectrogram. The notation |·|2 for a matrix input represents the element-wise squared absolute value. By applying \(\text {STFT}_{\boldsymbol {\omega }}(\text {ISTFT}_{\widetilde {\boldsymbol {\omega }}}(\cdot))\), the inconsistent spectrogram Sart shown in the left column of Fig. 1 is converted into the corresponding consistent spectrogram, which is a smoothed version of Sart, as shown in the right column. This smoothing process occurs because the main- and side-lobes of the window function (and the overlap-add process) spread the energy of a time-frequency bin.
Since the inverse STFT is a process of recovering the consistency (the inter-time-frequency relation), it has the capability of aligning the frequency components. This is also demonstrated in Fig. 2. As a simulation of the permutation problem, the frequency bins in S1 and S2 were randomly shuffled to obtain the spectrogram with permutation misalignment, \(\boldsymbol {S}_{n}^{\mathrm {(perm)}}\) (the center column in the figure), which is a typical output signal of FDICA. Note that these misaligned spectrograms are perfectly separated for each frequency because each time-frequency bin contains only one of the two sources. By enforcing spectrogram consistency, the smoothing process spreads the time-frequency components as shown in the right column of Fig. 2. In other words, the inverse STFT mixes up the separated signals if the frequency-wise permutation is not aligned correctly. Therefore, enforcing consistency within a BSS algorithm by applying \(\text {STFT}_{\boldsymbol {\omega }}(\text {ISTFT}_{\widetilde {\boldsymbol {\omega }}}(\cdot))\) can improve the separation performance to some extent [32].
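This smoothing effect can be reproduced with a small simulation. In the following sketch, two synthetic placeholder sources (not the music and speech items used in the figure) are swapped at randomly chosen frequency bins and then projected back onto the set of consistent spectrograms:

```python
import numpy as np
from scipy.signal import stft, istft

rng = np.random.default_rng(0)
fs, nperseg = 16000, 512
t = np.arange(2 * fs) / fs
s1 = np.sin(2 * np.pi * 440.0 * t)        # tonal placeholder source
s2 = 0.1 * rng.standard_normal(t.size)    # noise-like placeholder source

_, _, S1 = stft(s1, fs, nperseg=nperseg)
_, _, S2 = stft(s2, fs, nperseg=nperseg)

# Frequency-wise permutation misalignment: swap the sources at random bins.
swap = rng.random(S1.shape[0]) < 0.5
P1 = np.where(swap[:, None], S2, S1)
P2 = np.where(swap[:, None], S1, S2)

# Enforcing consistency (inverse STFT followed by STFT) smears energy across
# neighbouring bins, so the misaligned "separation" gets mixed back together.
_, p1 = istft(P1, fs, nperseg=nperseg)
_, _, C1 = stft(p1, fs, nperseg=nperseg)
C1 = C1[:, :P1.shape[1]]
print(np.linalg.norm(P1 - C1) / np.linalg.norm(P1))   # noticeably nonzero
```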
Proposed method
By incorporating spectrogram consistency into ILRMA, we propose a novel BSS method named Consistent ILRMA. In this section, after stating our motivation and contributions, we first review the standard ILRMA introduced in [11, 12] and then propose the consistent version of ILRMA with an algorithm that achieves Consistent ILRMA and is openly available on the web.
Motivations and contributions
The previous paper [32] only reported that the performances of traditional BSS algorithms, FDICA and IVA, were improved by enforcing consistency during the estimation of the demixing matrix Wi. In addition, no detailed experimental analysis related to STFT parameters was provided, even though the parameters of window functions in the STFT and inverse STFT directly affect the smoothing effect of spectrogram consistency.
Spectrogram consistency is a general property of the STFT, and therefore it can be combined with any source model for determined BSS. Its combination with state-of-the-art models, including ILRMA, is of great interest because the current mainstream algorithms for determined audio source separation are centered on ILRMA, which relies on a richer, NMF-based time-frequency source model. Indeed, many recent papers build on the ILRMA framework [17–29]. Even though combining ILRMA with spectrogram consistency could push past the limits of existing BSS algorithms, no such method has been investigated in the literature.
In this paper, we propose a new BSS algorithm that combines ILRMA and spectrogram consistency. Our first contribution is an algorithm that achieves Consistent ILRMA by inserting \(\text {STFT}_{\boldsymbol {\omega }}(\text {ISTFT}_{\widetilde {\boldsymbol {\omega }}}(\cdot))\) into the iterative optimization algorithm of ILRMA. The second contribution is to apply a scale-aligning process called iterative back projection within the iterative algorithm; this process enhances the separation performance when it is combined with spectrogram consistency. The third contribution is the experimental finding that spectrogram consistency works properly only together with iterative back projection: both Consistent IVA and Consistent ILRMA require iterative back projection to achieve good performance. Our fourth contribution is to provide extensive experimental results for several window functions, window lengths, shift lengths, reverberation times, and source types, together with a discussion clarifying the behavior of ILRMA with spectrogram consistency.
Standard ILRMA [12]
The original ILRMA [12] was derived from the following generative model of the spectrograms of the separated signals:
$$ {} \boldsymbol{Y}_{n} \sim p(\boldsymbol{Y}_{n}) = \prod\limits_{i,j} \mathcal{N}_{\mathrm{c}}\left(0,r_{ijn}\right) = \prod\limits_{i,j} \frac{ 1 }{ \pi r_{ijn}} \exp{\left(-\frac{ |y_{ijn}|^{2} }{ r_{ijn}} \right)}, $$
where \(\mathcal {N}_{\mathrm {c}}\left (\mu, r\right)\) is the circularly symmetric complex Gaussian distribution with mean μ and variance r. In this model, the source component yijn is assumed to obey a zero-mean and isotropic distribution, i.e., the phase of yijn is generated from the uniform distribution in the range [0,2π) and the real and imaginary parts of yijn are mutually independent. The validity of this assumption is shown in the Appendix. The variance rijn can be viewed as an expectation value of |yijn|2. This variance rijn as a two-dimensional array indexed by (i,j) is denoted as \(\boldsymbol {R}_{n}\in \mathbb {R}_{> 0}^{I\times J}\), which is called the variance spectrogram corresponding to the nth source. In ILRMA, the variance matrix Rn is modeled using the rank-K NMF, as:
$$\begin{array}{*{20}l} \boldsymbol{R}_{n} = \boldsymbol{T}_{n}\boldsymbol{V}_{n}, \end{array} $$
where \(\boldsymbol {T}_{n}\in \mathbb {R}_{> 0}^{I\times K}\) and \(\boldsymbol {V}_{n}\in \mathbb {R}_{> 0}^{K\times J}\) are the basis and activation matrices in NMF. The basis vectors in Tn, which represent spectral patterns of the nth source signal, are indexed by k=1,⋯,K. As in FDICA, statistical independence between the source signals is also assumed in ILRMA:
$$ p(\boldsymbol{Y}_{1}, \boldsymbol{Y}_{2}, \cdots, \boldsymbol{Y}_{N}) = \prod\limits_{n} p(\boldsymbol{Y}_{n}). $$
ILRMA estimates the demixing matrix Wi so that the power spectrograms of the separated signals |Yn|2 have a low-rank structure that can be well approximated by TnVn with small K. This BSS principle of ILRMA is illustrated in Fig. 3. When the low-rank source model can appropriately fit the power spectrograms of the original source signals |Sn|2, ILRMA provides excellent separation performance without explicitly solving the permutation problem afterward.
BSS principle of standard ILRMA
The demixing matrix Wi and the nonnegative matrices Tn and Vn can be obtained through maximum likelihood estimation. The negative log-likelihood to be minimized, denoted by \(\mathcal {L}\), is given as [12]:
$$ {}\begin{aligned} \mathcal{L} &= - \log p(\boldsymbol{X}_{1}, \boldsymbol{X}_{2}, \cdots, \boldsymbol{X}_{M}), \\ &= -\sum\limits_{i,j} \log \left|\det \boldsymbol{W}_{i}\right|^{2} - \log p(\boldsymbol{Y}_{1}, \boldsymbol{Y}_{2}, \cdots, \boldsymbol{Y}_{N}), \\ &\stackrel{\mathrm{c}}{=} -2J\sum\limits_{i} \log\left|\det \boldsymbol{W}_{i}\right| + \sum\limits_{i,j,n} \left(\frac{ \left|\boldsymbol{w}_{in}^{\mathrm{H}}\boldsymbol{x}_{ij}\right|^{2} }{ {\sum\nolimits}_{k} t_{ikn}v_{kjn}} + \log \sum\limits_{k} t_{ikn}v_{kjn} \right), \end{aligned} $$
where \(\stackrel{\mathrm{c}}{=}\) denotes equality up to a constant, and tikn>0 and vkjn>0 are the elements of Tn and Vn, respectively. The minimization of (18) can be performed by iterating the following update rules for the spatial model parameters,
$$\begin{array}{*{20}l} \boldsymbol{U}_{in} &\leftarrow \frac{1}{J} \sum\limits_{j} \frac{1}{{\sum\nolimits}_{k} t_{ikn}v_{kjn}}\boldsymbol{x}_{ij}\boldsymbol{x}_{ij}^{\mathrm{H}}, \end{array} $$
$$\begin{array}{*{20}l} \boldsymbol{w}_{in} &\leftarrow \left(\boldsymbol{W}_{i}\boldsymbol{U}_{in} \right)^{-1}\boldsymbol{e}_{n}, \end{array} $$
$$\begin{array}{*{20}l} \boldsymbol{w}_{in} &\leftarrow \boldsymbol{w}_{in} \left(\boldsymbol{w}_{in}^{\mathrm{H}}\boldsymbol{U}_{in}\boldsymbol{w}_{in} \right)^{-\frac{1}{2}}, \end{array} $$
$$\begin{array}{*{20}l} y_{ijn} &\leftarrow \boldsymbol{w}_{in}^{\mathrm{H}}\boldsymbol{x}_{ij}, \end{array} $$
and for the source model parameters,
$$\begin{array}{*{20}l} t_{ikn} &\leftarrow t_{ikn} \sqrt{ \frac{ {\sum\nolimits}_{j} \left|y_{ijn}\right|^{2} \left(\sum_{k'} t_{ik'n}v_{k'jn} \right)^{-2} v_{kjn} }{ {\sum\nolimits}_{j} \left(\sum_{k'} t_{ik'n}v_{k'jn} \right)^{-1} v_{kjn}} }, \end{array} $$
$$\begin{array}{*{20}l} v_{kjn} &\leftarrow v_{kjn} \sqrt{ \frac{ {\sum\nolimits}_{i} \left|y_{ijn}\right|^{2} \left({\sum\nolimits}_{k'} t_{ik'n}v_{k'jn} \right)^{-2} t_{ikn} }{ {\sum\nolimits}_{i} \left({\sum\nolimits}_{k'} t_{ik'n}v_{k'jn} \right)^{-1} t_{ikn}} }, \end{array} $$
where en∈{0,1}N is the unit vector with the nth element equal to unity. Update rules (19)–(24) ensure the monotonic non-increase of the negative log-likelihood function \(\mathcal {L}\). After iterative calculations of updates (19)–(24), the separated signal can be obtained by (12).
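For readers who prefer code, the following NumPy sketch implements one pass of the updates (19)–(24) under the convention that the separated spectrograms are obtained as Y[i, j, :] = W[i] @ X[i, j, :]. The array shapes, variable names, and the small regularizer eps are our own assumptions, and housekeeping steps of the reference implementations (e.g., rescaling against numerical over/underflow) are omitted.

```python
import numpy as np

def ilrma_iteration(X, W, T, V, eps=1e-12):
    """One pass of the ILRMA updates (19)-(24), determined case N = M assumed.
    X: observed spectrograms, shape (I, J, M)  (frequency, frame, channel)
    W: demixing matrices,     shape (I, N, M)  (rows store w_in^H)
    T: NMF bases,             shape (I, K, N)
    V: NMF activations,       shape (K, J, N)"""
    I, J, M = X.shape
    N = W.shape[1]
    R = np.einsum("ikn,kjn->ijn", T, V) + eps        # variance model R_n = T_n V_n
    # Spatial model updates (19)-(21), per frequency and per source.
    for i in range(I):
        for n in range(N):
            weights = 1.0 / R[i, :, n]                              # 1 / r_ijn
            U = (X[i].T * weights) @ X[i].conj() / J                # (19)
            w = np.linalg.solve(W[i] @ U, np.eye(N, dtype=complex)[:, n])  # (20)
            w /= np.sqrt(np.real(w.conj() @ U @ w)) + eps           # (21)
            W[i, n] = w.conj()                                      # store w_in^H as the nth row
    Y = np.einsum("inm,ijm->ijn", W, X)                             # (22)
    P = np.abs(Y) ** 2
    # Source model updates (23)-(24): multiplicative NMF rules.
    T *= np.sqrt(np.einsum("ijn,kjn->ikn", P / R**2, V)
                 / (np.einsum("ijn,kjn->ikn", 1.0 / R, V) + eps))
    R = np.einsum("ikn,kjn->ijn", T, V) + eps
    V *= np.sqrt(np.einsum("ijn,ikn->kjn", P / R**2, T)
                 / (np.einsum("ijn,ikn->kjn", 1.0 / R, T) + eps))
    return W, T, V, Y
```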
Equation (22) is equivalent to applying beamforming [53] to xij with the beamformer coefficients win. Thus, FDICA, IVA, and ILRMA can be interpreted as adaptive estimation of beamforming coefficients that does not require knowledge of the geometry of the microphones and sources [54]. For this reason, the estimated signal Yn obtained by (22) is a complex-valued spectrogram, and there is no need to recover its phase components using, for example, Griffin–Lim-based techniques [37–40, 43, 55–59]. Both the amplitude and phase components of each source are recovered by the complex-valued linear separation filter win.
Proposed Consistent ILRMA
To further improve the separation performance of the standard ILRMA, we introduce the spectrogram consistency into the parameter update procedure. In the proposed Consistent ILRMA, the following combination of forward and inverse STFT is performed at the beginning of each iteration of parameter updates:
$$\begin{array}{*{20}l} \boldsymbol{Y}_{n} \leftarrow \text{STFT}_{\boldsymbol{\omega}}(\text{ISTFT}_{\widetilde{\boldsymbol{\omega}}}(\boldsymbol{Y}_{n})). \end{array} $$
This procedure is the projection of the spectrogram of a separated signal Yn onto the set of consistent spectrograms [32]. That is, \(\text {STFT}_{\boldsymbol {\omega }}(\text {ISTFT}_{\widetilde {\boldsymbol {\omega }}}(\cdot))\) leaves Yn unchanged if it is already consistent; otherwise, it smooths the complex spectrogram Yn by passing it through the time domain, so that the uncertainty principle is satisfied.
In Consistent ILRMA, the calculation of (25) is performed in each iteration of parameter updates based on (19)–(24). Enforcing the spectrogram consistency for the temporary separated signal Yn in each iteration guides the parameters Wi,Tn, and Vn to better solutions, which results in higher separation performance compared to that of conventional ILRMA.
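As a sketch, the projection (25) can be implemented as a round trip through the time domain using SciPy; the window and hop settings below are placeholders that must match the analysis STFT, and the output is trimmed/zero-padded back to the original number of frames.

```python
import numpy as np
from scipy.signal import stft, istft

def project_to_consistent(Z, fs=16000, window="hann", nperseg=1024, noverlap=768):
    """Projection (25): STFT_w(ISTFT_w~(Z)) for one complex spectrogram Z (freq x frames)."""
    _, z_time = istft(Z, fs=fs, window=window, nperseg=nperseg, noverlap=noverlap)
    _, _, Z_proj = stft(z_time, fs=fs, window=window, nperseg=nperseg, noverlap=noverlap)
    out = np.zeros_like(Z)
    J = min(Z.shape[1], Z_proj.shape[1])
    out[:, :J] = Z_proj[:, :J]   # keep the frame count of the input spectrogram
    return out
```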
Note that this simple update (25) may increase the value of the negative log-likelihood function (18), and therefore, the monotonicity of the algorithm is no longer guaranteed. However, we will see later in the experiments that the value of the negative log-likelihood function stably decreases as in the standard ILRMA. The amount of the inconsistent component (14) also settles down to some specific value after several iterations.
Iterative back projection
Since frequency-domain BSS cannot determine the scales of estimated signals (represented by Di in (13)), the spectrogram of a separated signal Yn after an iteration is inconsistent due to the scale irregularity. To take full advantage of the projection enforcing spectrogram consistency in (25), we also propose applying the following back projection at the end of each iteration so that the frequency-wise scales are aligned.
In determined BSS, the back projection is a standard procedure for recovering the frequency-wise scales. It can be written as [49]:
$$ \tilde{\boldsymbol{y}}_{ijn} = \boldsymbol{W}_{i}^{-1} \left(\boldsymbol{e}_{n} \circ \boldsymbol{y}_{ij} \right) = y_{ijn}\boldsymbol{\lambda}_{in}, $$
where \(\tilde {\boldsymbol {y}}_{ijn} = \left [\, \tilde {y}_{ijn1}, \tilde {y}_{ijn2}, \cdots, \tilde {y}_{ijnM} \right ]^{\mathrm {T}}\in \mathbb {C}^{M}\) is the (i,j)th bin of the scale-fitted spectrogram of the nth separated signal, \(\boldsymbol {\lambda }_{in} = \left [\,\lambda _{in1}, \lambda _{in2}, \cdots, \lambda _{inM}\right ]^{\mathrm {T}}\in \mathbb {C}^{M}\) is a coefficient vector of back projection for the nth signal at the ith frequency, and ∘ denotes the element-wise multiplication. In the proposed method, this update (26) is performed at the end of each iteration so that the projection (25) at the beginning of the next iteration properly smooths the spectrograms without the effect of scale indeterminacy.
One side effect of this back projection is that the value of the negative log-likelihood function (18) is also changed due to the scale modification. In IVA, this problem cannot be avoided because the only parameter in IVA is the demixing matrix Wi. However, in ILRMA, since both the demixing matrix Wi and the source model parameter TnVn can determine the scale of estimated signal Yn, the likelihood variation can be avoided by appropriately adjusting win and Tn after the back projection. To prevent the likelihood variation, the following updates are required after performing (26):
$$\begin{array}{*{20}l} \boldsymbol{w}_{in} &\leftarrow \boldsymbol{w}_{in} \lambda_{inm_{\text{ref}}}, \end{array} $$
$$\begin{array}{*{20}l} t_{ikn} &\leftarrow t_{ikn} \left|\lambda_{inm_{\text{ref}}}\right|^{2}, \end{array} $$
where mref is the index of the reference channel utilized in the back projection.
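A minimal sketch of the back projection (26) with the compensating rescaling (27)–(28) is given below, again under the convention Y[i, j, :] = W[i] @ X[i, j, :] used in the earlier sketch; the reference channel index and variable names are assumptions.

```python
import numpy as np

def iterative_back_projection(W, Y, T, m_ref=0):
    """Back projection (26) onto reference channel m_ref, followed by the
    rescaling (27)-(28) that keeps the negative log-likelihood (18) unchanged.
    Shapes: W (I, N, M), Y (I, J, N), T (I, K, N); determined case N = M assumed."""
    A = np.linalg.inv(W)                       # A[i] = W_i^{-1}, shape (I, M, N)
    lam = A[:, m_ref, :]                       # lambda_{i n m_ref}, shape (I, N)
    Y = Y * lam[:, None, :]                    # (26): y_ijn <- y_ijn * lambda
    W = W * lam[:, :, None]                    # (27): rescale demixing filters (Y = W X convention)
    T = T * (np.abs(lam) ** 2)[:, None, :]     # (28): rescale NMF bases
    return W, Y, T
```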
The overall algorithm of the proposed Consistent ILRMA is summarized in Algorithm 1. The iterative loop for the parameter optimization appears in the second to eighth lines. The spectrogram consistency of the temporary separated signal Yn is ensured in the third line, and the iterative back projection is applied in the sixth and seventh lines. Note that an algorithm for the conventional ILRMA can be obtained by performing only the fourth and fifth lines (i.e., ignoring the third, sixth, and seventh lines). A Python code of the conventional ILRMA is openly available online (https://pyroomacoustics.readthedocs.io/en/pypi-release/pyroomacoustics.bss.ilrma.html), and therefore, the proposed Consistent ILRMA with Python can be easily implemented by slightly modifying the codes. A MATLAB code of Consistent ILRMA is also available online (https://github.com/d-kitamura/ILRMA/blob/master/consistentILRMA.m).
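To relate the pieces above to the line numbering of Algorithm 1, a hypothetical driver loop reusing the sketch functions defined earlier (project_to_consistent, ilrma_iteration, and iterative_back_projection) might look as follows; initialization of X, W, T, V, Y, the STFT settings, and the iteration count are placeholders.

```python
# Hypothetical driver for Consistent ILRMA (initialization and STFT settings omitted).
for it in range(100):                                    # second line: iterate
    for n in range(Y.shape[2]):
        Y[:, :, n] = project_to_consistent(Y[:, :, n])   # third line: consistency projection (25)
    W, T, V, Y = ilrma_iteration(X, W, T, V)             # fourth/fifth lines: updates (19)-(24)
    W, Y, T = iterative_back_projection(W, Y, T)         # sixth/seventh lines: back projection (26)-(28)
```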
In this section, we describe two experiments using synthesized and real-recorded mixtures. The synthesized mixtures were produced by convolving impulse responses with dry audio sources, while the real-recorded mixtures were captured with a microphone array in an ordinary room with ambient noise.
BSS of synthesized mixtures
We conducted determined BSS experiments using synthesized music and speech mixtures with two sources and two microphones (N = M = 2). The dry sources of the music and speech signals, listed in Table 1, were obtained from the professionally produced music and underdetermined separation tasks provided as part of SiSEC2011 [60]. They were convolved with the impulse response E2A (T60 = 300 ms) or JR2 (T60 = 470 ms), obtained from the RWCP database [61], to simulate multichannel observed signals. The recording conditions of these impulse responses are shown in Fig. 4.
Recording conditions of impulse responses: a E2A and b JR2
Table 1 Music and speech dry sources obtained from SiSEC2011
In this experiment, we compared the performance of six methods: three conventional and three proposed. The conventional methods were the standard IVA [10], Consistent IVA [32], and standard ILRMA [11]. The proposed methods were Consistent IVA with iterative back projection (Consistent IVA+BP), Consistent ILRMA, and Consistent ILRMA with iterative back projection (Consistent ILRMA+BP). For all methods, the initial demixing matrix was set to an identity matrix. For the ILRMA-based methods, the nonnegative matrices Tn and Vn were initialized using uniformly distributed random values in the range (0,1). Five trials were performed for each condition using different pseudorandom seeds. The number of bases for each source, K, was set to 10 for music mixtures and 2 for speech mixtures, where it was experimentally confirmed that these conditions provide the best performance for the conventional ILRMA [11]. To satisfy the perfect reconstruction condition (7), the inverse STFT was implemented by the canonical dual of the analysis window. For both Consistent IVA+BP and Consistent ILRMA+BP, the iterative back projection was applied, where the reference channel was set to mref = 1. Since the property of spectrogram consistency depends on the window length, shift length, and type of window function, various combinations of them were tested. The experimental conditions are summarized in Table 2.
Table 2 Experimental conditions
For quantitative evaluation of the separation performance, we measured the source-to-distortion ratio (SDR), source-to-interference ratio (SIR), and source-to-artifact ratio (SAR). In a noiseless situation, SDR, SIR, and SAR are defined as follows [62]:
$$\begin{array}{*{20}l} \text{SDR} &= 10\log_{10} \frac{ {\sum\nolimits}_{l} |s_{\mathrm{t}}[l]|^{2} }{ {\sum\nolimits}_{l} |e_{\mathrm{i}}[l] + e_{\mathrm{a}}[l] |^{2} }, \end{array} $$
$$\begin{array}{*{20}l} \text{SIR} &= 10\log_{10} \frac{ {\sum\nolimits}_{l} |s_{\mathrm{t}}[l]|^{2} }{ {\sum\nolimits}_{l} |e_{\mathrm{i}}[l]|^{2} }, \end{array} $$
$$\begin{array}{*{20}l} \text{SAR} &= 10\log_{10} \frac{ {\sum\nolimits}_{l} |s_{\mathrm{t}}[l]+e_{\mathrm{i}}[l]|^{2} }{ {\sum\nolimits}_{l} |e_{\mathrm{a}}[l]|^{2} }, \end{array} $$
where st[l],ei[l], and ea[l] are the lth sample of target signal, interference, and artificial components of the estimated signal, respectively, in the time domain. SIR and SAR are used to quantify the amount of interference rejection and the absence of artificial distortion of the estimated signal, respectively. SDR is used to quantify the overall separation performance, as SDR is in good agreement with both SIR and SAR for determined BSS.
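These criteria follow the BSS Eval framework of [62]; for reference, a commonly used Python implementation is available in the mir_eval package. A minimal usage sketch with synthetic signals is shown below (this is not the evaluation code used in the paper, and the signals are placeholders).

```python
import numpy as np
import mir_eval

rng = np.random.default_rng(0)
reference_sources = rng.standard_normal((2, 16000))                     # (n_sources, n_samples)
estimated_sources = reference_sources + 0.1 * rng.standard_normal((2, 16000))

# Returns per-source SDR, SIR, SAR in dB and the best source-to-estimate permutation.
sdr, sir, sar, perm = mir_eval.separation.bss_eval_sources(reference_sources, estimated_sources)
```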
In this experiment, the energy of sources was not adjusted, i.e., the energy ratio of sources (source-to-source ratio) was automatically determined by the initial volume of the dry sources and the level of the impulse responses. That is, the source-to-source ratio of each mixture signal is different from the others. To equally evaluate the performances of different mixtures, we calculated SDR improvement (ΔSDR) and SIR improvement (ΔSIR) defined as:
$$\begin{array}{*{20}l} \Delta\text{SDR} &= \text{SDR}_{\text{sep}} - \text{SDR}_{\text{input}}, \end{array} $$
$$\begin{array}{*{20}l} \Delta\text{SIR} &= \text{SIR}_{\text{sep}} - \text{SIR}_{\text{input}}, \end{array} $$
where SDRsep and SIRsep are the SDR and SIR of the separated signal, and SDRinput and SIRinput are the SDR and SIR of the initial mixture signal input to the BSS methods. Note that SAR improvement cannot be defined because the SAR of an unprocessed mixture signal is infinite (SARinput = ∞).
Results and discussions
Figure 5 shows examples of the value of the negative log-likelihood function (18) of Consistent ILRMA+BP. Although the algorithmic convergence of the proposed method has not been theoretically justified because of the additional projection (25), we experimentally confirmed a smooth decrease of the cost function. We also confirmed that such behavior was common for the other experimental conditions and mixtures. This result indicates that the additional procedure in the proposed method does not have a harmful effect on the behavior of the overall algorithm.
Values of negative log-likelihood function (18) of Consistent ILRMA+BP (window length, 256 ms; shift length, 32 ms)
Figures 6 and 7 show examples of the energy of the inconsistent components (14) of standard ILRMA and Consistent ILRMA+BP. The energy was normalized by that of the initial spectrograms in order to align the vertical axis. Note that the energy of inconsistency components is not directly related to the degree of permutation misalignment or the separation performance. These figures are shown to confirm whether the proposed algorithm can properly reduce the degree of inconsistency. These values are completely zero when the separated spectrograms are consistent, and hence, those at the 0th iteration (the leftmost values) are zero because no processing is performed at that point. By iterating the algorithms, this energy rapidly increased because the demixing matrix for each frequency independently tried to process and separate the signals. However, the normalized energy tended toward some specific values after several iterations. We confirmed that the converged values of Consistent ILRMA+BP were always lower than those of standard ILRMA. This result indicates that Consistent ILRMA+BP reduces the amount of the inconsistent components and tries to make the separated spectrogram more consistent. In addition, similar to Fig. 5, the algorithmic stability of Consistent ILRMA+BP can be confirmed from Figs. 6 and 7.
Examples of normalized energy of inconsistent components \(\left (\|\mathcal {E}(\mathsf {Y})\|_{2}^{2} / \|\mathsf {X}\|_{2}^{2}\right)\) of ILRMA and Consistent ILRMA+BP for music 1: a 256-ms-long window and 32-ms shifting and b 1024-ms-long window and 512-ms shifting, where X=[X1,X2],Y= [Y1,Y2], and \(\mathcal {E}(\cdot)\) is in (14)
Examples of normalized energy of inconsistent components \((\|\mathcal {E}(\mathsf {Y})\|_{2}^{2} / \|\mathsf {X}\|_{2}^{2})\) of ILRMA and Consistent ILRMA+BP for speech 1: a 256-ms-long window and 32-ms shifting and b 1024-ms-long window and 512-ms shifting, where X= [X1,X2],Y= [Y1,Y2], and \(\mathcal {E}(\cdot)\) is in (14)
Figures 8 and 9 summarize the SDR improvements for the music mixtures and speech mixtures, respectively. The window function was the Hann window. Each box contains 50 results (i.e., 5 pseudorandom seeds × 10 mixtures in Table 1), where ΔSDRs of the two separated sources in each mixture were averaged. The central lines of the box plots indicate the median, and the bottom and top edges of the box indicate the 25th and 75th percentiles, respectively. Each row corresponds to the same window length, while each column corresponds to the same shift length. As we conducted the experiment for six window lengths, four shift lengths, and two impulse responses, each figure consists of 6×4×2 subfigures. In each subfigure, six boxes are shown to illustrate the results of (1) IVA, (2) Consistent IVA, (3) Consistent IVA+BP, (4) ILRMA, (5) Consistent ILRMA, and (6) Consistent ILRMA+BP. Since the tendency of the results was the same as Figs. 8 and 9, we provide the SDR improvements for the other windows (Hamming and Blackman) in the Appendix. The SIR improvement and SAR are also given in the Appendix.
Average SDR improvements for synthesized music mixtures (music 1–10) with a E2A and b JR2, where Hann window is used in STFT
Average SDR improvements for synthesized speech mixtures (speech 1–10) with a E2A and b JR2, where Hann window is used in STFT
Since IVA and ILRMA assume the instantaneous mixing model (11) at each frequency in the time-frequency domain, the window should be long relative to the reverberation time to achieve accurate separation. At the same time, too long a window degrades the separation performance of IVA and ILRMA, as discussed in [30], because capturing source activity and spectral patterns becomes difficult when the time resolution of the spectrogram is low. The robustness of IVA and ILRMA also deteriorates with a long window because the effective number of time segments decreases. This trade-off caused by the window length in STFT can easily be confirmed from the results for both the music (Fig. 8) and speech (Fig. 9) mixtures, which is consistent with the results in [30]. As shown in the figures, performance was poor for the shorter windows (≤128 ms), and performance for the longer windows (≥768 ms) varied more than for the shorter ones. The window length best suited to these conditions (combinations of source signals and impulse responses) appears to be around 256 ms or 512 ms. While the maximum achievable performance becomes higher as the window becomes longer owing to the mixing model (11), these results indicate that source modeling becomes difficult for both IVA and ILRMA when the window is too long. This trade-off is important for the discussion that follows.
By comparing the performances of the conventional (IVA, Consistent IVA, and ILRMA) and proposed (Consistent IVA+BP, Consistent ILRMA, and Consistent ILRMA+BP) methods, we can see that the proposed methods tend to outperform the conventional ones. Some comparisons are made as follows:
∙ Conventional and proposed IVAs. The proposed Consistent IVA+BP performed better than the conventional IVAs (IVA and Consistent IVA) in Figs. 8b and 9b when the window length was sufficiently long (≥256 ms). In those cases, the conventional Consistent IVA resulted in a worse performance than IVA, which indicates that merely using spectrogram consistency cannot improve the performance of IVA. This demonstrates the importance of the iterative back projection when spectrogram consistency is considered within determined BSS.
∙ Conventional and proposed ILRMAs. The proposed Consistent ILRMA without BP performed comparably to the conventional ILRMA. In Figs. 8a and 9a, Consistent ILRMA performed better than ILRMA when the window length was long (≥768 ms). In contrast, in Figs. 8b and 9b, Consistent ILRMA performed worse than ILRMA, presumably because the scale ambiguity prevented the spectrogram consistency from working properly. By incorporating iterative back projection into Consistent ILRMA, the proposed Consistent ILRMA+BP performed better than the conventional ILRMA. In the best case (the top-left subfigure of Fig. 8), Consistent ILRMA+BP performed 8 dB better than ILRMA by bringing out the potential of spectrogram consistency in determined BSS.
To further explain the experimental results, some notable tendencies are summarized as follows:
∙ Short window. When the window length was short (64 ms), all methods performed similarly in terms of ΔSDR, because the achievable performance was already limited by the window being shorter than the reverberation time. This result contradicted our expectation before the experiment: since enforcing consistency spreads the frequency components according to the main-lobe of the window function, we expected the ability to solve the permutation problem to be greater for shorter windows owing to their wider main-lobes. In reality, we found that spectrogram consistency could assist IVA and ILRMA except when the window length was short (≤128 ms in this experiment) compared to the reverberation time.
∙ Large window shift. When the shift length was 1/2 of the window length, the performance of ILRMA dropped significantly compared to smaller shift lengths (1/4, 1/8, and 1/16), especially when the window length was long (e.g., 1024 ms). This is presumably because the number of time segments was small, i.e., the NMF in ILRMA failed to model the source signals from the given amount of data. In addition, for a longer window, distinguishing the spectral patterns of the sources became difficult for ILRMA owing to the time-directional blurring caused by the long window. Such performance degradation was alleviated for Consistent ILRMA+BP, possibly because the smoothing process of the inverse STFT provides additional information for the source modeling from adjacent bins.
∙ Length of boxes. When the boxes for ILRMA were long, as in Figs. 8a and 9a, Consistent ILRMA+BP was able to improve the performance. Conversely, when the boxes for ILRMA were short, as in Figs. 8b and 9b, Consistent ILRMA+BP improved the performance only slightly (note that the vertical axes differ). This result indicates that the achievable performance determined by the mixing model (11) limits the improvement obtained by spectrogram consistency. Since consistency is a characteristic of a spectrogram, it cannot compensate for the mixing process; the demixing-filter update of ILRMA, which is the same in the conventional and proposed methods, handles the mixing process. Hence, when the mixing model does not match the observed condition, there is less room for spectrogram consistency to improve the performance.
∙ Improvement by consistency. The proposed method tended to achieve good performance when the conventional ILRMA also worked well, e.g., in Figs. 8a and 9a. This tendency indicates that spectrogram consistency effectively promotes the separation when the estimated source Yn accurately approaches the original source Sn during the optimization, as Sn is naturally a consistent spectrogram. This is why we regard consistency as an assistant to frequency-domain BSS: the source model (e.g., NMF in ILRMA) provides the actual separation cue, and spectrogram consistency enhances the separation performance when the source modeling functions correctly.
BSS of real-recorded mixtures
Next, we evaluated the conventional and proposed methods using live-recorded music and speech mixtures obtained from underdetermined separation tasks in SiSEC2011 [60], where only two sources were mixed to make the BSS problem determined (M=N=2). The signals used in this experiment are listed in Table 3. The reverberation time of these signals was 250 ms, and the microphone spacing was 1 m (see [60]). Since these source signals were actually recorded using a microphone array in an ordinary room with ambient noise, the observed signals are more realistic compared to those in Section 4.1.
Table 3 Live-recorded music and speech signals obtained from SiSEC2011
For simplicity, in this experiment, we used STFT with a fixed condition, the 512-ms-long Hann window with 1/4 shifting. The experimental conditions other than the window were the same as those in Section 4.1.1.
Figure 10 shows the results of live-recorded music and speech mixtures. The absolute scores were lower than those for the synthesized mixtures discussed in Section 4.1.2 due to the existence of ambient noise. Still, we can confirm the improvements of the proposed Consistent IVA+BP and Consistent ILRMA+BP compared to the conventional IVA and ILRMA, respectively, for both the music (upper row) and speech (lower row) mixtures. In particular, Consistent IVA+BP improved more than 4 dB over IVA in terms of the median of the ΔSDR of speech mixtures. Consistent ILRMA+BP achieved the highest performance in terms of the median of the SDR improvement for both music and speech mixtures. These results confirm that the combination of spectrogram consistency and iterative back projection can assist the separation of determined BSS for a more realistic situation.
SDR improvements (left column), SIR improvements (center column), and SAR (right column) for live-recorded music and speech mixtures, where STFT is performed using the 512-ms-long Hann window with 1/4 shifting. Top row shows the performances for music mixtures, and bottom row shows the performances for speech mixtures
In this paper, we have proposed a new variant of the state-of-the-art determined BSS algorithm, named Consistent ILRMA. It utilizes the smoothing effect of the inverse STFT to assist the separation and enhance the performance. Experimental results showed that the proposed method can improve the separation performance when the window length is sufficiently large (≥256 ms in the experimental conditions of this paper). These results demonstrate the potential of considering spectrogram consistency within a state-of-the-art determined BSS algorithm. In addition, we experimentally confirmed the importance of iterative back projection when spectrogram consistency is considered within determined BSS. It should be possible to construct a new source model that takes spectrogram consistency into account, which could pave the way for a new direction of research on determined BSS.
Independence between real and imaginary parts of spectrogram
The source generative model (15) assumes that the real and imaginary parts of a source in the time-frequency domain are mutually independent, because the generative model has a zero-mean, circularly symmetric shape in the complex plane. The independence between the real and imaginary parts (or the amplitude and phase) has been investigated previously, but its validity may depend on the STFT parameters. Independence can be measured by the symmetric uncertainty coefficient [63–65]:
$$\begin{array}{*{20}l} C(q_{1}, q_{2}) = 2\frac{ H(q_{1}) + H(q_{2}) - H(q_{1}, q_{2}) }{ H(q_{1}) + H(q_{2}) }, \end{array} $$
where q1 and q2 are random variables, H(q1) and H(q2) are their entropy, and H(q1,q2) is the joint entropy of q1 and q2. Since the numerator of (35) corresponds to the mutual information of q1 and q2, the symmetric uncertainty coefficient can be interpreted as normalized mutual information. When q1 and q2 are mutually independent, (35) becomes zero. In contrast, when q1 and q2 are completely dependent, (35) becomes one.
Symmetric uncertainty coefficient between real and imaginary parts for music and speech sources, where a Hann, b Hamming, or c Blackman window is used in STFT. Left and right columns correspond to the music sources and speech sources, respectively
We calculated the symmetric uncertainty coefficient (35) between the real and imaginary parts of each time-frequency bin obtained by applying the STFT to music or speech sources. Let s be a complex-valued time-frequency bin of a source (the frequency and time indexes are omitted here). The independence between the real and imaginary parts can be measured by C(Re(s), Im(s)), where Re(·) and Im(·) return the real and imaginary parts of a complex value, respectively. Here, H(Re(s)), H(Im(s)), and H(Re(s), Im(s)) were approximated by calculating histograms of Re(s) and Im(s). The number of bins in the histograms was set to 10,000. We used the dry sources listed in Table 1: 15 music (instrumental) and 8 speech sources. The STFT parameters were the same as those in Section 4.
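A rough Python sketch of this histogram-based estimate is given below. The paper used 10,000 bins for the marginal histograms; the joint-histogram bin count here is a smaller, assumed value chosen to keep memory manageable, so the exact binning is our own choice.

```python
import numpy as np

def symmetric_uncertainty(x, y, bins=100):
    """Symmetric uncertainty coefficient (35) between samples x and y,
    estimated from a joint histogram (entropies in nats)."""
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    joint /= joint.sum()
    h_x = entropy(joint.sum(axis=1))      # H(q1)
    h_y = entropy(joint.sum(axis=0))      # H(q2)
    h_xy = entropy(joint.ravel())         # H(q1, q2)
    return 2.0 * (h_x + h_y - h_xy) / (h_x + h_y)

# For a complex source spectrogram S (frequency x frames):
# c = symmetric_uncertainty(np.real(S).ravel(), np.imag(S).ravel())
```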
Figure 11 shows the symmetric uncertainty coefficients averaged over all bins and sources. Their values C(Re(s),Im(s)) were almost zero for all STFT conditions and source types (music or speech), and thus, the assumption of independence between real and imaginary parts is valid for music and speech sources. This fact leads to the generative model assumed in ILRMA. Note that those symmetric uncertainty coefficients validated the independence of real and imaginary parts at each time-frequency bin. That is, the inter-bin relation is not considered here. The proposed method captures such inter-bin relations imposed by the spectrogram consistency, which is not apparent in these bin-wise assessments of independence.
Additional experimental results for synthesized mixtures
Figures 12–15, 16–21, and 22–27 show the SDR improvements, SIR improvements, and SAR, respectively, for synthesized music and speech mixtures. These figures correspond to the results and discussions in Section 4.1.2.
Average SDR improvements for synthesized music mixtures (music 1–10) with a E2A and b JR2, where Hamming window is used in STFT
Average SDR improvements for synthesized music mixtures (music 1–10) with a E2A and b JR2, where Blackman window is used in STFT
Average SDR improvements for synthesized speech mixtures (speech 1–10) with a E2A and b JR2, where Hamming window is used in STFT
Average SDR improvements for synthesized speech mixtures (speech 1–10) with a E2A and b JR2, where Blackman window is used in STFT
Average SIR improvements for synthesized music mixtures (music 1–10) with a E2A and b JR2, where Hann window is used in STFT
Average SIR improvements for synthesized music mixtures (music 1–10) with a E2A and b JR2, where Hamming window is used in STFT
Average SIR improvements for synthesized music mixtures (music 1–10) with a E2A and b JR2, where Blackman window is used in STFT
Average SIR improvements for synthesized speech mixtures (speech 1–10) with a E2A and b JR2, where Hann window is used in STFT
Average SIR improvements for synthesized speech mixtures (speech 1–10) with a E2A and b JR2, where Hamming window is used in STFT
Average SIR improvements for synthesized speech mixtures (speech 1–10) with a E2A and b JR2, where Blackman window is used in STFT
Average SAR for synthesized music mixtures (music 1–10) with a E2A and b JR2, where Hann window is used in STFT
Average SAR for synthesized music mixtures (music 1–10) with a E2A and b JR2, where Hamming window is used in STFT
Average SAR for synthesized music mixtures (music 1–10) with a E2A and b JR2, where Blackman window is used in STFT
Average SAR for synthesized speech mixtures (speech 1–10) with a E2A and b JR2, where Hann window is used in STFT
Average SAR for synthesized speech mixtures (speech 1–10) with a E2A and b JR2, where Hamming window is used in STFT
Average SAR for synthesized speech mixtures (speech 1–10) with a E2A and b JR2, where Blackman window is used in STFT
The datasets used for the experiments in this paper are openly available: SiSEC 2011 (http://sisec2011.wiki.irisa.fr/) and RWCP-SSD (http://research.nii.ac.jp/src/en/RWCP-SSD.html). Our MATLAB implementation of the proposed method is also openly available at the following site: https://github.com/d-kitamura/ILRMA/blob/master/consistentILRMA.m
"In the original publication, the webpage includes Section "AAA" between "Abstract" and "Introduction (Sect.1). The article has been updated to delete this empty section."
BSS:
Blind source separation
ICA:
Independent component analysis
STFT:
Short-time Fourier transform
FDICA:
Frequency-domain independent component analysis
IVA:
Independent vector analysis
ILRMA:
Independent low-rank matrix analysis
NMF:
Nonnegative matrix factorization
IS-NMF:
Nonnegative matrix factorization based on the Itakura–Saito divergence
SDR:
Source-to-distortion ratio
SIR:
Source-to-interference ratio
SAR:
Source-to-artifact ratio
P. Comon, Independent component analysis, a new concept? Signal Process. 36(3), 287–314 (1994).
P. Smaragdis, Blind separation of convolved mixtures in the frequency domain. Neurocomputing. 22, 21–34 (1998).
S. Kurita, H. Saruwatari, S. Kajita, K. Takeda, F. Itakura, in Proc. ICASSP. Evaluation of blind signal separation method using directivity pattern under reverberant conditions, vol. 5 (IEEE, 2000), pp. 3140–3143.
N. Murata, S. Ikeda, A. Ziehe, An approach to blind source separation based on temporal structure of speech signals. Neurocomputing. 41(1–4), 1–24 (2001).
H. Saruwatari, T. Kawamura, T. Nishikawa, A. Lee, K. Shikano, Blind source separation based on a fast-convergence algorithm combining ICA and beamforming. IEEE Trans. ASLP. 14(2), 666–678 (2006).
H. Sawada, R. Mukai, S. Araki, S. Makino, A robust and precise method for solving the permutation problem of frequency-domain blind source separation. IEEE Trans. SAP. 12(5), 530–538 (2004).
A. Hiroe, in Proc. ICA. Solution of permutation problem in frequency domain ICA using multivariate probability density functions (Springer, Berlin, Heidelberg, 2006), pp. 601–608.
T. Kim, T. Eltoft, T.-W. Lee, in Proc. ICA. Independent vector analysis: an extension of ICA to multivariate components (Springer, Berlin, Heidelberg, 2006), pp. 165–172.
T. Kim, H.T. Attias, S.-Y. Lee, T.-W. Lee, Blind source separation exploiting higher-order frequency dependencies. IEEE Trans. ASLP. 15(1), 70–79 (2007).
N. Ono, in Proc. WASPAA. Stable and fast update rules for independent vector analysis based on auxiliary function technique (IEEE, 2011), pp. 189–192.
D. Kitamura, N. Ono, H. Sawada, H. Kameoka, H. Saruwatari, Determined blind source separation unifying independent vector analysis and nonnegative matrix factorization. IEEE/ACM Trans. ASLP. 24(9), 1626–1641 (2016).
D. Kitamura, N. Ono, H. Sawada, H. Kameoka, H. Saruwatari, in Audio Source Separation, ed. by S. Makino. Determined blind source separation with independent low-rank matrix analysis (Springer, Cham, 2018), pp. 125–155.
T. Tachikawa, K. Yatabe, Y. Oikawa, in Proc. IWAENC. Underdetermined source separation with simultaneous DOA estimation without initial value dependency (IEEE, 2018), pp. 161–165.
D.D. Lee, H.S. Seung, Learning the parts of objects by non-negative matrix factorization. Nature. 401(6755), 788–791 (1999).
D.D. Lee, H.S. Seung, in Proc. NIPS. Algorithms for non-negative matrix factorization, (2000), pp. 556–562.
C. Févotte, N. Bertin, J.-L. Durrieu, Nonnegative matrix factorization with the Itakura-Saito divergence. With application to music analysis. Neural Comput.21(3), 793–830 (2009).
Y. Mitsui, D. Kitamura, S. Takamichi, N. Ono, H. Saruwatari, in Proc. ICASSP. Blind source separation based on independent low-rank matrix analysis with sparse regularization for time-series activity (IEEE, 2017), pp. 21–25.
H. Kagami, H. Kameoka, M. Yukawa, in Proc. ICASSP. Joint separation and dereverberation of reverberant mixtures with determined multichannel non-negative matrix factorization (IEEE, 2018), pp. 31–35.
R. Ikeshita, Y. Kawaguchi, in Proc. ICASSP. Independent low-rank matrix analysis based on multivariate complex exponential power distribution (IEEE, 2018), pp. 741–745.
D. Kitamura, S. Mogami, Y. Mitsui, N. Takamune, H. Saruwatari, N. Ono, Y. Takahashi, K. Kondo, Generalized independent low-rank matrix analysis using heavy-tailed distributions for blind source separation. EURASIP J. Adv. Signal Process.2018:, 28 (2018).
K. Yoshii, K. Kitamura, Y. Bando, E. Nakamura, T. Kawahara, in EUSIPCO. Independent low-rank tensor analysis for audio source separation (IEEE, 2018), pp. 1657–1661.
R. Ikeshita, in EUSIPCO. Independent positive semidefinite tensor analysis in blind source separation (IEEE, 2018), pp. 1652–1656.
R. Ikeshita, N. Ito, T. Nakatani, H. Sawada, in WASPAA. Independent low-rank matrix analysis with decorrelation learning (IEEE, 2019), pp. 288–292.
N. Makishima, S. Mogami, N. Takamune, D. Kitamura, H. Sumino, S. Takamichi, H. Saruwatari, N. Ono, Independent deeply learned matrix analysis for determined audio source separation. IEEE/ACM Trans. ASLP. 27(10), 1601–1615 (2019).
K. Sekiguchi, Y. Bando, A.A. Nugraha, K. Yoshii, T. Kawahara, Semi-supervised multichannel speech enhancement with a deep speech prior. IEEE/ACM Trans. ASLP. 27(12), 2197–2212 (2019).
S. Mogami, N. Takamune, D. Kitamura, H. Saruwatari, Y. Takahashi, K. Kondo, N. Ono, Independent low-rank matrix analysis based on time-variant sub-Gaussian source model for determined blind source separation. IEEE/ACM Trans. ASLP. 28:, 503–518 (2019).
Y. Takahashi, D. Kitahara, K. Matsuura, A. Hirabayashi, in Proc. ICASSP. Determined source separation using the sparsity of impulse responses (IEEE, 2020), pp. 686–690.
M. Togami, in Proc. ICASSP. Multi-channel speech source separation and dereverberation with sequential integration of determined and underdetermined models (IEEE, 2020), pp. 231–235.
S. Kanoga, T. Hoshino, H. Asoh, Independent low-rank matrix analysis-based automatic artifact reduction technique applied to three BCI paradigms. Front. Hum. Neurosci.14:, 17 (2020).
D. Kitamura, N. Ono, H. Saruwatari, in Proc. EUSIPCO. Experimental analysis of optimal window length for independent low-rank matrix analysis, (2017), pp. 1210–1214.
Y. Liang, S.M. Naqvi, J. Chambers, Overcoming block permutation problem in frequency domain blind source separation when using AuxIVA algorithm. Electron. Lett.48(8), 460–462 (2012).
K. Yatabe, Consistent ICA: determined BSS meets spectrogram consistency. IEEE Signal Process. Lett.27:, 870–874 (2020).
T. Gerkmann, M. Krawczyk-Becker, J. Le Roux, Phase processing for single-channel speech enhancement: history and recent advances. IEEE Signal Process. Mag.32(2), 55–66 (2015).
P. Mowlaee, R. Saeidi, Y. Stylianou, Advances in phase-aware signal processing in speech communication. Speech Commun.81:, 1–29 (2016).
P. Mowlaee, J. Kulmer, J. Stahl, F. Mayer, Single channel phase-aware signal processing in speech communication: theory and practice (Wiley, 2016).
K. Yatabe, Y. Oikawa, in Proc. ICASSP. Phase corrected total variation for audio signals (IEEE, 2018), pp. 656–660.
K. Yatabe, Y. Masuyama, Y. Oikawa, in Proc. IWAENC. Rectified linear unit can assist Griffin–Lim phase recovery (IEEE, 2018), pp. 555–559.
Y. Masuyama, K. Yatabe, Y. Oikawa, in Proc. IWAENC. Model-based phase recovery of spectrograms via optimization on Riemannian manifolds (IEEE, 2018), pp. 126–130.
Y. Masuyama, K. Yatabe, Y. Oikawa, Griffin–Lim like phase recovery via alternating direction method of multipliers. IEEE Signal Process. Lett.26(1), 184–188 (2019).
Y. Masuyama, K. Yatabe, Y. Koizumi, Y. Oikawa, N. Harada, in Proc. ICASSP. Deep Griffin–Lim iteration (IEEE, 2019), pp. 61–65.
Y. Masuyama, K. Yatabe, Y. Oikawa, in Proc. ICASSP. Phase-aware harmonic/percussive source separation via convex optimization (IEEE, 2019), pp. 985–989.
Y. Masuyama, K. Yatabe, Y. Oikawa, in Proc. ICASSP. Low-rankness of complex-valued spectrogram and its application to phase-aware audio processing (IEEE, 2019), pp. 855–859.
Y. Masuyama, K. Yatabe, Y. Koizumi, Y. Oikawa, N. Harada, in Proc. ICASSP. Phase reconstruction based on recurrent phase unwrapping with deep neural networks (IEEE, 2020), pp. 826–830.
J.L. Roux, H. Kameoka, N. Ono, S. Sagayama, in Proc. DAFx. Fast signal reconstruction from magnitude STFT spectrogram based on spectrogram consistency, (2010).
J. Le Roux, E. Vincent, Consistent Wiener filtering for audio source separation. IEEE Signal Process. Lett.20(3), 217–220 (2013).
N. Perraudin, P. Balazs, P.L. Søndergaard, in Proc. WASPAA. A fast Griffin–Lim algorithm (IEEE, 2013), pp. 1–4.
K. Yatabe, Y. Masuyama, T. Kusano, Y. Oikawa, Representation of complex spectrogram via phase conversion. Acoust. Sci. Tech.40(3), 170–177 (2019).
M. Kowalski, E. Vincent, R. Gribonval, Beyond the narrowband approximation: wideband convex methods for under-determined reverberant audio source separation. IEEE Trans. ASLP. 18(7), 1818–1829 (2010).
K. Matsuoka, S. Nakashima, in Proc. ICA. Minimal distortion principle for blind source separation, (2001), pp. 722–727.
K. Yatabe, D. Kitamura, in Proc. ICASSP. Determined blind source separation via proximal splitting algorithm (IEEE, 2018), pp. 776–780.
K. Yatabe, D. Kitamura, in Proc. ICASSP. Time-frequency-masking-based determined BSS with application to sparse IVA (IEEE, 2019), pp. 715–719.
K. Yatabe, D. Kitamura, Determined BSS based on time-frequency masking and its application to harmonic vector analysis. arXiv:2004.14091 (2020).
M. Brandstein, D. Ward, Microphone arrays: signal processing techniques and applications (Springer Science & Business Media, 2013).
S. Araki, S. Makino, Y. Hinamoto, R. Mukai, T. Nishikawa, H. Saruwatari, Equivalence between frequency-domain blind source separation and frequency-domain adaptive beamforming for convolutive mixtures. EURASIP J. Adv. Signal Process.2003(11), 1157–1166 (2003).
D. Griffin, J. Lim, Signal estimation from modified short-time Fourier transform. IEEE Trans. Acoust. Speech Signal Process.32(2), 236–243 (1984).
D. Gunawan, D. Sen, Iterative phase estimation for the synthesis of separated sources from single-channel mixtures. IEEE Signal Process. Lett.17(5), 421–424 (2010).
N. Sturmel, L. Daudet, L. Girin, in Proc. DAFx. Phase-based informed source separation of music, (2012).
M. Watanabe, P. Mowlaee, in Proc. INTERSPEECH. Iterative sinusoidal-based partial phase reconstruction in single-channel source separation, (2013).
F. Mayer, D. Williamson, P. Mowlaee, D.L. Wang, Impact of phase estimation on single-channel speech separation based on time-frequency masking. J. Acoust. Soc. Am.141:, 4668–4679 (2017).
S. Araki, F. Nesta, E. Vincent, Z. Koldovsky, G. Nolte, A. Ziehe, A. Benichoux, in Proc. LVA/ICA. The 2011 signal separation evaluation campaign (SiSEC2011): -Audio source separation, (2012), pp. 414–422.
S. Nakamura, K. Hiyane, F. Asano, T. Nishiura, T. Yamada, in Proc. LREC. Acoustical sound database in real environments for sound scene understanding and hands-free speech recognition, (2000), pp. 965–968.
E. Vincent, R. Gribonval, C. Févotte, Performance measurement in blind audio source separation. IEEE Trans. ASLP. 14(4), 1462–1469 (2006).
W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in C: The Art of Scientific Computing (Cambridge University Press, New York, 1992).
I. Andrianakis, P. White, Speech spectral amplitude estimators using optimally shaped gamma and chi priors. Speech Comm.51(1), 1–14 (2009).
P. Mowlaee, J. Stahl, Single-channel speech enhancement with correlated spectral components: limits-potential. Speech Comm.121:, 58–69 (2020).
The authors would like to thank Nao Toshima for his support on the experiment. Also, the authors would like to thank the anonymous reviewers for their valuable comments and suggestions that helped improve the quality of this manuscript.
This work was partially supported by JSPS Grants-in-Aid for Scientific Research 19K20306 and 19H01116.
Daichi Kitamura and Kohei Yatabe contributed equally to this work.
National Institute of Technology, Kagawa College, 355 Chokushi, Takamatsu, 761-8058, Kagawa, Japan
Daichi Kitamura
Waseda University, 3-4-1 Okubo, Shinjuku-ku, 169-8555, Tokyo, Japan
Kohei Yatabe
DK derived the algorithm, performed the experiment, drafted the manuscript for initial submission, and revised the manuscript. KY proposed the main idea, gave advice, mainly wrote the manuscript for initial submission, and corrected the draft of revised manuscript. The authors read and approved the final manuscript.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Kitamura, D., Yatabe, K. Consistent independent low-rank matrix analysis for determined blind source separation. EURASIP J. Adv. Signal Process. 2020, 46 (2020). https://doi.org/10.1186/s13634-020-00704-4
Audio source separation
Convolutive mixture
Demixing filter estimation
Phase-aware signal processing
Spectrogram consistency
IFN-γ and TNF-α drive a CXCL10+ CCL2+ macrophage phenotype expanded in severe COVID-19 lungs and inflammatory diseases with tissue inflammation
Fan Zhang1,2,3,4,5,
Joseph R. Mears1,2,3,4,5,
Lorien Shakib6,
Jessica I. Beynor1,2,3,4,5,
Sara Shanaj7,
Ilya Korsunsky1,2,3,4,5,
Aparna Nathan1,2,3,4,5,
Accelerating Medicines Partnership Rheumatoid Arthritis and Systemic Lupus Erythematosus (AMP RA/SLE) Consortium,
Laura T. Donlin6,7 &
Soumya Raychaudhuri ORCID: orcid.org/0000-0002-1901-82651,2,3,4,5,8
Immunosuppressive and anti-cytokine treatment may have a protective effect for patients with COVID-19. Understanding the immune cell states shared between COVID-19 and other inflammatory diseases with established therapies may help nominate immunomodulatory therapies.
To identify cellular phenotypes that may be shared across tissues affected by disparate inflammatory diseases, we developed a meta-analysis and integration pipeline that models and removes the effects of technology, tissue of origin, and donor that confound cell-type identification. Using this approach, we integrated > 300,000 single-cell transcriptomic profiles from COVID-19-affected lungs and tissues from healthy subjects and patients with five inflammatory diseases: rheumatoid arthritis (RA), Crohn's disease (CD), ulcerative colitis (UC), systemic lupus erythematosus (SLE), and interstitial lung disease. We tested the association of shared immune states with severe/inflamed status compared to healthy control using mixed-effects modeling. To define environmental factors within these tissues that shape shared macrophage phenotypes, we stimulated human blood-derived macrophages with defined combinations of inflammatory factors, emphasizing in particular antiviral interferons IFN-beta (IFN-β) and IFN-gamma (IFN-γ), and pro-inflammatory cytokines such as TNF.
We built an immune cell reference consisting of > 300,000 single-cell profiles from 125 healthy or disease-affected donors from COVID-19 and five inflammatory diseases. We observed a CXCL10+ CCL2+ inflammatory macrophage state that is shared and strikingly abundant in severe COVID-19 bronchoalveolar lavage samples, inflamed RA synovium, inflamed CD ileum, and UC colon. These cells exhibited a distinct arrangement of pro-inflammatory and interferon response genes, including elevated levels of CXCL10, CXCL9, CCL2, CCL3, GBP1, STAT1, and IL1B. Further, we found this macrophage phenotype is induced upon co-stimulation by IFN-γ and TNF-α.
Our integrative analysis identified immune cell states shared across inflamed tissues affected by inflammatory diseases and COVID-19. Our study supports a key role for IFN-γ together with TNF-α in driving an abundant inflammatory macrophage phenotype in severe COVID-19-affected lungs, as well as inflamed RA synovium, CD ileum, and UC colon, which may be targeted by existing immunomodulatory therapies.
Tissue inflammation is a unifying feature across disparate diseases. While tissue- and disease-specific factors shape distinct inflammatory microenvironments, seemingly unrelated diseases can respond to the same therapy. For example, anti-tumor necrosis factor (TNF) therapies have revolutionized treatment for joint inflammation in autoimmune rheumatoid arthritis (RA) [1], while patients with intestinal inflammation due to Crohn's disease (CD) and ulcerative colitis (UC), collectively known as inflammatory bowel disease (IBD), also respond to anti-TNF medications [2]. Here, we posit that the deconstruction of tissues to the level of individually characterized cells and subsequent integration of these cells from various types of inflamed tissues could provide a platform to identify shared pathologic features across diseases and provide rationale for repurposing medications in outwardly dissimilar conditions.
Recent studies have detailed features of local tissue inflammation and immune dysfunction in COVID-19 and related diseases caused by SARS and MERS coronaviruses [3]. Consensus is building that extensive unchecked inflammation involving so-called "cytokine storm" is a driver of severe late-stage disease. A single-cell study of bronchoalveolar lavage fluid (BALF) in intubated COVID-19 patients identified two inflammatory macrophage subsets—one characterized by CCL2, CCL3, and CXCL10 expression and a second by FCN1 and S100A8—as potential mediators of pathology in this late-stage disease [4]. The presence of these macrophage subsets in the lung correlated with elevated circulating cytokines and extensive damage to the lung and vascular tissue. Reports looking at peripheral blood from large numbers of COVID-19 patients have consistently documented lymphopenia (reduced lymphocyte frequency) paired with increased levels of CD14+ monocytes and inflammatory cytokines, such as IL1B, TNF-α, IFN-α, and IFN-γ [5,6,7]. These factors are ineffective in lowering viral load while possibly contributing to cytokine release syndrome (CRS) [7]. Together, these studies indicate the importance of uncovering the full extent of cell states present in COVID-19 patients including within affected tissues, and in particular among macrophages. Further, the extent to which these cell states are shared between COVID-19 and other inflammatory diseases and their disease association may further clarify disease mechanisms and precisely define therapeutic targets.
Macrophages are pervasive throughout the body and pivotal to tissue homeostasis, where they tailor their function to the parenchymal functions of each tissue type. In inflammation, tissue-resident macrophages and infiltrating monocytes are activated not only by factors from the unique tissue microenvironment, but also by disease-associating factors such as byproducts of deregulated tissue homeostasis, tissue damage, gene expression differences due to genetic variants, immune reactions, and in some cases, infecting pathogens. The unprecedented plasticity and robust reactivity of macrophages and monocytes generates a spectrum of phenotypes yet to be fully defined in human disease that mediate clearance of noxious elements, but in some cases, such as in cytokine storm, aggravate disease pathology. These phenotypes include a range of pro-inflammatory and anti-microbial states that secrete key cytokines (e.g., TNF and IL-1B) and chemokines (e.g., CXCL10 and CXCL11) and other functional states geared towards debris clearance, dampening inflammation, and tissue reconstruction, as well as a variety of intermediate states [8,9,10,11]. Meta-analysis of reactive macrophage phenotypes in inflamed tissues across diseases may further refine our understanding of the complexity of human macrophage functions, identifying subsets potentially shared across immune disorders, and thereby providing a promising route towards repurposing therapeutic strategies.
Single-cell RNA-seq (scRNA-seq) has provided an opportunity to interrogate inflamed tissues and identify expanded and potentially pathogenic immune cell types [12]. We recently defined a distinct CD14+ IL1B+ pro-inflammatory macrophage population that is markedly expanded in RA compared to osteoarthritis (OA), a non-inflammatory disease [13, 14]. Likewise, scRNA-seq studies on inflamed colonic tissues have identified inflammatory macrophage and fibroblast phenotypes with high levels of Oncostatin M (OSM) signaling factors that are associated with resistance to anti-TNF therapies [15]. Only very recently, developments in computational methods have made it possible to meta-analyze an expansive number of cells across various tissue states, while mitigating experimental and cohort-specific artifacts [16,17,18,19,20,21,22], therein assessing shared and distinct cell states in disparate inflamed tissues.
To define the key shared immune cell compartments between inflammatory diseases with COVID-19, we meta-analyzed and integrated tissue-level single-cell profiles from five inflammatory diseases and COVID-19. We created an immune cell reference consisting of 307,084 single-cell profiles from 125 donor samples from RA synovium, systemic lupus erythematosus (SLE) kidney, UC colon, CD ileum, interstitial lung disease, and COVID-19 BALF. This single-cell reference represents comprehensive immune cell types from different disease tissues with different inflammation levels, which can be used to investigate inflammatory diseases and their connections with COVID-19 in terms of immune cell responses. Using our meta-dataset reference, we identified major immune cell lineages including macrophages, dendritic cells, T cells, B cells, NK cells, plasma cells, mast cells, and cycling lymphocytes. Among these, we found two inflammatory CXCL10+ CCL2+ and FCN1+ macrophage states that are shared between COVID-19 and several of the inflammatory diseases we analyzed. To understand the factors driving these phenotypes, we stimulated human blood-derived macrophages with eight different combinations of inflammatory disease-associated cytokines and tissue-associating stromal cells. We demonstrated that the CXCL10+ CCL2+ macrophages from severe COVID-19 lungs share a transcriptional phenotype with macrophages stimulated by TNF-α plus IFN-γ. Notably, the other two conditions wherein these macrophages are most abundant are RA and CD. As patients with RA and CD show response to anti-TNF therapies, this finding supports the approach of identifying shared cellular states in unrelated inflamed tissues to define shared responses to medications. Furthermore, janus kinase (JAK) inhibitors have also proved effective in RA, presumably in large part through targeting IFN-γ responses [8, 23, 24]. Our data collectively support the potential efficacy of JAK inhibitors and anti-TNF therapies in inflammatory macrophage responses in COVID-19 due to cellular phenotype associations with select inflammatory tissue diseases already proven to respond to these medications.
Integration of scRNA-seq profiles from multiple datasets
scRNA-seq data collection, remapping, and aggregation
To build a multi-tissue immune cell reference, we obtained the raw FASTQ files and raw count matrices from the following publicly available scRNA-seq datasets: RA synovial cells from dbGaP (Zhang, et al, 2019; phs001457.v1.p1) [13] and dbGaP (Stephenson, et al, 2018; phs001529.v1.p1) [25], SLE kidney cells from dbGaP (Arazi, et al, 2019; phs001457.v1.p1) [26], UC colon cells from Single Cell Portal (Smillie, et al, 2019; SCP259) [15], CD ileum cells from GEO (Martin, et al, 2019; GSE134809) [27], interstitial lung disease lung cells from GEO (Reyfman, et al, 2019; GSE122960) [28], and COVID-19 and healthy BALF cells from GEO (Liao, et al, 2020; GSE145926) [4]. We also used the datasets from Grant et al. (GSE155249) [29] and Xue et al. (GSE47189) [11] as additional validation datasets.
For the FASTQ files that we obtained, we used Kallisto [30] to pseudo-align the raw reads against a common kallisto index built from the GRCh38 Ensembl v100 FASTA files. We then corrected barcodes, sorted BUS files, and counted unique molecular identifiers (UMIs) to generate UMI-count matrices. We aggregated all the cell barcodes from 125 donor samples into one matrix. We performed consistent QC, removing cells that expressed fewer than 500 genes or in which more than 20% of UMIs mapped to mitochondrial genes, resulting in 307,084 cells in total. The number of donor samples and cells that passed QC for each tissue source, disease status, technology, and clinical data are shown in Additional file 1: Table S1.
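The QC step amounts to a simple per-cell filter on gene counts and mitochondrial content. The following is a minimal R sketch of that filter; the variable names (`counts`, `mito_genes`) are illustrative and are not taken from the study's code.

```r
library(Matrix)

# `counts` is assumed to be a genes x cells sparse UMI matrix aggregated over
# the 125 donor samples; mitochondrial genes are identified by the "MT-" prefix.
mito_genes <- grep("^MT-", rownames(counts), value = TRUE)

genes_per_cell <- Matrix::colSums(counts > 0)
pct_mito <- 100 * Matrix::colSums(counts[mito_genes, , drop = FALSE]) /
  Matrix::colSums(counts)

# Remove cells expressing fewer than 500 genes or with > 20% mitochondrial UMIs
keep <- genes_per_cell >= 500 & pct_mito <= 20
counts_qc <- counts[, keep]
```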
Normalization, scaling, and feature selection
We aggregated all samples on the 17,054 genes shared across datasets. We normalized each cell to 10,000 reads and log-transformed the normalized data. We then selected the top 1,000 most highly variable genes based on dispersion within each donor sample and pooled these genes into a single variable gene set. Based on the pooled highly variable genes, we scaled the aggregated data matrix to mean 0 and variance 1 and then normalized the expression matrix using the L2 norm.
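For concreteness, the normalization, per-sample feature selection, scaling, and L2 steps can be sketched in R as below. This is a schematic re-implementation with illustrative variable names (`counts_qc`, `sample_id`), using dense matrices for clarity; the full dataset would require sparse-aware operations.

```r
# Counts-per-10K normalization followed by log transform
norm <- log1p(sweep(as.matrix(counts_qc), 2, Matrix::colSums(counts_qc), "/") * 1e4)

# Top 1,000 most variable genes by dispersion (variance / mean) within each donor sample
hvg <- unique(unlist(lapply(unique(sample_id), function(s) {
  x <- norm[, sample_id == s, drop = FALSE]
  disp <- apply(x, 1, var) / (rowMeans(x) + 1e-8)
  names(sort(disp, decreasing = TRUE))[1:1000]
})))

# Scale selected genes to mean 0 / variance 1 across cells, then L2-normalize each cell
scaled <- t(scale(t(norm[hvg, ])))
l2 <- apply(scaled, 2, function(v) v / sqrt(sum(v^2)))
```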
Dimensionality reduction and batch effect correction
To minimize the effect of different cell numbers across datasets during unbiased scRNA-seq data integration, we performed weighted principal component analysis (PCA) and used the first 20 weighted PCs for follow-up analysis. The weights were chosen such that the summed weight of cells from each separate single-cell dataset was equal, so that each dataset contributed equally to the analysis. For the all-cell-type integration, we corrected batch effects on three different levels (sequencing technology, tissue source, and donor sample) simultaneously using Harmony [16]. We used default parameters except that we specified theta = 2 for each batch variable, max.iter.cluster = 30, and max.iter.harmony = 20, and we supplied the same weights used for the weighted PCA. For the macrophage-only integration, we corrected for donor effects in the 10X data and for dataset effects in the CEL-seq2 data, since each CEL-seq2 donor contributed fewer than 100 cells. As output, we obtained batch-corrected PC embeddings in which the effects of different single-cell datasets and donors are removed in the low-dimensional PC space.
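A minimal sketch of the batch-correction call, using the HarmonyMatrix() interface of the harmony R package, is shown below. The weighted PCA itself is computed upstream and not shown; `pca_embeddings` (cells x 20 weighted PCs) and the per-cell metadata columns `technology`, `tissue`, and `sample` are illustrative names. Parameter values follow the text above.

```r
library(harmony)

harmony_embeddings <- HarmonyMatrix(
  data_mat  = pca_embeddings,
  meta_data = meta,
  vars_use  = c("technology", "tissue", "sample"),
  do_pca    = FALSE,            # input is already a PC embedding
  theta     = c(2, 2, 2),       # theta = 2 for each batch variable
  max.iter.harmony = 20,
  max.iter.cluster = 30
)
```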
Quantitative evaluation of batch correction and dataset integration
Variance explained from different sources: To quantitatively measure how well batch effects were mixed after correction, we estimated the variance in the first ten principal component embeddings explained by different sources. We report the proportion of variance explained by the originally defined immune cell type, tissue origin, technology, and donor sample. We used the R package limma [31] to fit the model and ANOVA to compute the percentage of variance explained:
$$ \mathrm{principal}\ \mathrm{component}\sim \mathrm{celltype}+\mathrm{tissue}+\mathrm{technology}+\mathrm{sample}. $$
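A minimal sketch of this variance-explained estimate is shown below. The published analysis fit the model with limma; base R's lm()/anova() is used here as a stand-in, and `pcs` (cells x 10 PC embeddings) and the factor columns of `meta` are illustrative names.

```r
var_explained <- sapply(1:10, function(k) {
  fit <- lm(pcs[, k] ~ celltype + tissue + technology + sample, data = meta)
  ss  <- anova(fit)[["Sum Sq"]]
  ss[1:4] / sum(ss)   # fraction explained per covariate; the remainder is residual
})
rownames(var_explained) <- c("celltype", "tissue", "technology", "sample")
```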
LISI score: In addition, we used the LISI (local inverse Simpson's index) metric to measure how well batch labels are mixed among the local neighbors chosen at a specific perplexity [16, 22]. Specifically, we built Gaussian distributions over neighborhoods and computed the local distribution of batch probabilities p(b) at perplexity 30 on the first 20 principal components, where B is the number of batches. We then calculated the inverse Simpson's index:
$$ 1/{\sum}_{b=1}^Bp{(b)}^2. $$
An iLISI (integration LISI) score ranges from 1.0, which denotes no mixing, to B (the maximum score equals the number of levels in the categorical batch variable), where higher scores indicate better mixing of batches. Here, a batch can be tissue source, donor sample, or sequencing technology. We also calculated the cLISI (cell-type LISI), which uses the same formulation but with pre-defined cell-type annotations instead of batch labels and therefore measures integration accuracy. An accurate embedding has a cLISI close to 1 for every cell neighborhood, reflecting separation of distinct cell types.
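A minimal sketch of the LISI computation with the compute_lisi() function of the lisi R package (immunogenomics/LISI) is shown below; `harmony_embeddings` and the per-cell `meta` data.frame are illustrative names.

```r
library(lisi)

# One score per cell for each requested label column, at perplexity 30
lisi_scores <- compute_lisi(
  X              = harmony_embeddings[, 1:20],
  meta_data      = meta,
  label_colnames = c("tissue", "sample", "technology", "celltype"),
  perplexity     = 30
)
# iLISI summaries for the batch variables, cLISI for the cell-type labels
apply(lisi_scores, 2, median)
```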
Graph-based clustering
We then applied unbiased graph-based clustering (Louvain [32]) on the top 20 batch-corrected PCs at several resolution levels (0.2, 0.4, 0.6, 0.8, 1.0). We used 0.4 as the final resolution, which yielded the most biologically interpretable clusters. We then performed dimensionality reduction for visualization using UMAP [33].
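The text does not name a specific clustering implementation; the minimal sketch below uses Seurat's Louvain clustering and UMAP as a stand-in, with the Harmony-corrected embedding supplied as a custom reduction on a hypothetical Seurat object `obj`.

```r
library(Seurat)

obj[["harmony"]] <- CreateDimReducObject(embeddings = harmony_embeddings,
                                         key = "harmony_", assay = "RNA")
obj <- FindNeighbors(obj, reduction = "harmony", dims = 1:20)
obj <- FindClusters(obj, resolution = 0.4, algorithm = 1)   # 1 = Louvain
obj <- RunUMAP(obj, reduction = "harmony", dims = 1:20)
```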
Pseudo-bulk differential expression analysis
To identify robust single-cell cluster marker genes that are shared between diseases, we performed pseudo-bulk analysis by summing the raw UMI counts for each gene across cells from the same donor sample, tissue source, and cluster assignment. We modeled raw counts with a negative binomial (NB) distribution and fitted a generalized linear model (GLM) for each gene accounting for tissue, sample, and nUMI using DESeq2 [34]. We also computed the AUC and P value using the Wilcoxon rank-sum test, comparing pseudo-bulk samples from one cluster to the others. We used several criteria to call statistically significant marker genes: (1) GLM-β, (2) fold change, (3) AUC, and (4) Wilcoxon rank-sum test with Bonferroni-corrected P (threshold 10−5, 0.05/5,000 tested highly variable genes). We tested all genes that were detected with non-zero UMI counts in more than 100 cells.
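A minimal sketch of a pseudo-bulk one-vs-rest test for a single cluster with DESeq2 is shown below. The design adjusts for tissue only; the published model additionally accounted for sample and nUMI. Variable names, including the "CXCL10_CCL2" cluster label, are illustrative, and sample names are assumed not to contain ".".

```r
library(DESeq2)

# Sum raw UMI counts per donor sample x cluster
group <- interaction(meta$sample, meta$cluster, drop = TRUE)
pseudobulk <- sapply(levels(group), function(g)
  Matrix::rowSums(counts[, group == g, drop = FALSE]))

col_data <- data.frame(do.call(rbind, strsplit(colnames(pseudobulk), ".", fixed = TRUE)))
colnames(col_data) <- c("sample", "cluster")
col_data$tissue    <- meta$tissue[match(col_data$sample, meta$sample)]
col_data$is_target <- factor(col_data$cluster == "CXCL10_CCL2")   # one cluster vs. the rest

dds <- DESeq(DESeqDataSetFromMatrix(countData = pseudobulk,
                                    colData   = col_data,
                                    design    = ~ tissue + is_target))
res <- results(dds, contrast = c("is_target", "TRUE", "FALSE"))
```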
Identification of major immune cell-type clusters
We carefully annotated each identified immune cell-type cluster in two ways. First, we mapped the original published annotation labels [4, 13, 15, 26, 27] to our UMAP embeddings when applicable. We were able to reproduce the original cell-type subsets in our cross-disease integrative analysis. Second, we annotated the identified clusters using cell-type lineage marker genes: T cells (CD3D), NK cells (NCAM1), B cells (MS4A1), plasma cells (MZB1), macrophages (FCGR3A/CD14), dendritic cells (DCs, CD1C), mast cells (TPSAB1), and cycling cells (MKI67).
Cell culture for human blood-derived macrophages and synovial fibroblasts
We obtained human leukocyte-enriched whole blood samples from 4 healthy blood donors from the New York Blood Center and purified peripheral blood mononuclear cells (PBMC) from each using Ficoll gradient centrifugation. We isolated CD14+ monocytes from each sample using human CD14 microbeads (Miltenyi Biotec) and differentiated these cells into blood-derived macrophages for 1 day at 37 °C in Macrophage Colony-Stimulating Factor (M-CSF; 10 ng/mL) (PeproTech) and RPMI 1640 medium (Corning) supplemented with 10% defined fetal bovine serum (FBS) (HyClone), 1% penicillin-streptomycin (Thermo Fisher Scientific), and 1% l-glutamine (Thermo Fisher Scientific) in a 6-well plate at a concentration of 1.2 million cells/mL.
In parallel, we obtained human synovial fibroblasts derived from deidentified synovial tissues from RA patients undergoing arthroplasty (HSS IRB 14033). Two unique fibroblast lines were used, each paired with two distinct blood-derived macrophage donor samples. We cultured fibroblasts in alpha minimum essential medium (aMEM) (Gibco) supplemented with 10% premium FBS (R&D Systems Inc), 1% penicillin-streptomycin (Thermo Fisher Scientific), and 1% l-glutamine (Thermo Fisher Scientific) for 4 to 6 passages. To create each transwell, we seeded the mesh of polyester chambers with 0.4-μm pores (Corning) with either 200,000 synovial fibroblasts or no fibroblasts and incubated them for 1 day at 37 °C.
The following day, we suspended each transwell—3 with fibroblasts and 6 without fibroblasts per donor—above one well of cultured macrophages. Transwells with fibroblasts had a fibroblast-to-macrophage ratio of 1:15. In total, we created 9 wells per donor. Next, we added IFN-β (200 pg/mL), IL-4 (20 ng/mL), TNF-α (20 ng/mL), and/or IFN-γ (5 ng/mL) to each transwell and underlying plate per donor. All plates were incubated at 37 °C for 19 h.
RNA library preparation and sequencing
We applied a modified version of the staining protocol from CITE-seq, using only Totalseq™-A Hashing antibodies from Biolegend [35]. We harvested macrophages from each well and aliquoted one fifth of the cells, ~ 750,000 cells per condition, for staining in subsequent steps. We washed the cells in filtered labeling buffer (PBS with 1% BSA) and resuspended in 50 μL of labeling buffer with Human TruStain FcX™ (Biolegend Cat #422302, 5 μL per stain) for 10 min at 4 °C. Next, we added 50 μL of labeling buffer for a final concentration of 1.6 ng/μL of a total-seq hashtag (1, 2, 4–9, or 12) per condition per donor for 25 min at 4 °C. Next, we washed all samples in 2 mL, 1 mL, and 1 mL of labeling buffer, sequentially. We counted the remaining cells using a cellometer (Nexcelom Cellometer Auto 1000) and aliquoted the equivalent of 60,000 cells from each condition into one Eppendorf tube per donor. From here, we filtered through a 40-μm mesh and resuspended in PBS with 0.04% BSA to a concentration of 643.7 cells/μl. We followed the Chromium Single Cell 3′ v3 kit (10x Genomics) processing instructions and super-loaded 30,000 cells per lane. We used one lane per donor, with 9 conditions multiplexed per donor sample. After cDNA generation, samples were shipped to the Brigham and Women's Hospital Single Cell Genomics Core for cDNA amplification and sequencing. Pairs of libraries were pooled and sequenced per lane on an Illumina NovaSeq S2 with paired-end 150 base-pair reads.
Processing FASTQ reads into gene expression matrices and cell hashing
We quantified mRNA and hashing-antibody UMI counts separately. We used Cell Ranger v3.1.0 to process the raw BCL files and produce the final gene-by-cell-barcode UMI count matrix. First, we demultiplexed the raw BCL files with cellranger mkfastq using default parameters to generate FASTQ files. We then aligned these FASTQ files to the GRCh38 human reference genome and quantified gene and antibody reads simultaneously using cellranger count, extracting cell barcodes and UMIs for genes and hashtag antibodies for each run.
For quality control of the cells, we first performed mRNA-level QC and then hashtag-level QC. For the mRNA-level QC, we removed cells that expressed fewer than 1,000 genes or in which more than 10% of UMIs mapped to mitochondrial genes. For the hashtag QC, we removed cells in which the proportion of UMIs for the most abundant hashing antibody was less than 90%, as well as cells in which the ratio of the second most-abundant to the most abundant antibody was greater than 0.10. After filtering, each cell was assigned a hashing antibody and donor sample based on its most abundant hashing antibody barcode. After QC, we obtained 9,399, 8,775, 4,622, and 3,027 cells for the 4 donor samples. We then normalized UMI counts from each cell based on the total number of UMIs and log-transformed the normalized counts.
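A minimal sketch of the hashtag-based demultiplexing rule in R is shown below; `hto`, a hashing-antibody x cells UMI count matrix, is an illustrative name.

```r
hto_mat  <- as.matrix(hto)                               # hashtags x cells
hto_frac <- sweep(hto_mat, 2, colSums(hto_mat), "/")

top_frac    <- apply(hto_frac, 2, max)
second_frac <- apply(hto_frac, 2, function(x) sort(x, decreasing = TRUE)[2])

# Keep cells whose dominant hashtag holds >= 90% of hashing UMIs and whose
# second-to-first hashtag ratio is <= 0.10; assign each kept cell to its top hashtag
keep <- top_frac >= 0.90 & (second_frac / top_frac) <= 0.10
assignment <- rownames(hto_mat)[apply(hto_frac[, keep, drop = FALSE], 2, which.max)]
```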
Linear modeling for experimental stimulation-specific genes from cell culture single-cell profiles
To more accurately identify gene signatures that are specific to each of the eight stimulatory conditions, we used linear models to test each gene for differential normalized gene expression across contrasts of interest. Specifically, we fit the following models:
$$ \mathrm{gene}\_\mathrm{expression}\sim \mathrm{stim}+\left(1\mid \mathrm{sample}\right)+\mathrm{nUMI}, $$
where stim is a categorical variable that represents the eight stimuli and an untreated condition, (1 ∣ sample) is the random effect of the 4 replicate donor samples, and nUMI (number of unique molecular identifiers) is a technical cell-level fixed effect. We obtained the fold change, t statistic and P value, and Bonferroni-corrected P value as measurements for each tested gene in each condition. We then generated, for each stimulatory condition, a list of differentially expressed genes with fold change greater than 2 and P smaller than the Bonferroni correction threshold of 10−7 (0.05/7,000 highly variable genes × 9 conditions).
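A minimal sketch of this per-gene mixed model is shown below, using lmerTest (lme4 with Satterthwaite p-values) as an assumed implementation. `df` is a hypothetical data.frame with one row per stimulated cell and columns `expr` (log-normalized expression of the tested gene), `stim`, `sample`, and `nUMI`.

```r
library(lmerTest)

# Use the untreated cells as the reference level for the stimulus factor
df$stim <- relevel(factor(df$stim), ref = "untreated")

fit <- lmer(expr ~ stim + nUMI + (1 | sample), data = df)
summary(fit)$coefficients   # per-stimulus estimates, t and P values vs. untreated
```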
Testing integrative macrophage clusters for association with severe/inflamed status
We tested the association of each macrophage cluster with severe/inflamed status compared to healthy with MASC (mixed-effects modeling of associations of single cells) [36]. We fit a logistic regression model for each identified cluster within one tissue and set the nUMIs and percent MT (% MT) content as cell-level fixed effects, and donor sample as a random effect:
$$ \log \left[\frac{Y_{i,c}}{1-{Y}_{i,c}}\right]={\beta}_{\mathrm{case}}{X}_{i,\mathrm{case}}+{\beta}_{\mathrm{tech}1}{X}_{i,\mathrm{tech}1}+{\beta}_{\mathrm{tech}2}{X}_{i,\mathrm{tech}2}+\left({\varphi}_d|\ d\ \right), $$
where Yi,c is the probability that cell i belongs to cluster c, βcase is the effect (log odds ratio) of case (severe COVID-19) versus control (healthy) status, βtech1 is the effect of the technical cell-level covariate nUMI, βtech2 is the effect of the technical cell-level covariate percent mitochondrial UMIs, Xi holds the corresponding covariate values for cell i, and (φd| d ) is the random effect of donor d. We used this logistic regression model to test for differentially abundant macrophage clusters associated with severe COVID-19 while correcting for the technical cell-level and donor-level covariates. Similarly, we also tested for differentially abundant macrophage clusters associated with inflamed CD compared to non-inflamed CD, RA compared to OA, and inflamed UC compared to healthy colon, accounting for technical cell-level and donor-specific covariates. We generated likelihood-ratio test MASC P values and odds ratios for each tested cluster and used Bonferroni correction to report the macrophage clusters that are statistically significantly more abundant in severe/inflamed samples compared to healthy or non-inflamed controls.
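MASC wraps a mixed-effects logistic model of this form around lme4; the sketch below is a hand-rolled approximation of the test for one cluster rather than the MASC package call itself. Column names (`in_cluster`, `case`, `nUMI`, `pct_mito`, `sample`) are illustrative.

```r
library(lme4)

full <- glmer(in_cluster ~ case + scale(nUMI) + scale(pct_mito) + (1 | sample),
              data = df, family = binomial, control = glmerControl(optimizer = "bobyqa"))
null <- glmer(in_cluster ~ scale(nUMI) + scale(pct_mito) + (1 | sample),
              data = df, family = binomial, control = glmerControl(optimizer = "bobyqa"))

anova(null, full)     # likelihood-ratio test for the case/control term
exp(fixef(full))[2]   # odds ratio of the case level relative to control
```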
Gene score calculation
We calculated a CXCL10+ CCL2+ gene score for each single-cell profile from an external single-cell RNA-seq dataset from severe COVID-19 BALF [29]. The gene score was calculated as the sum of counts for the CXCL10+ CCL2+ marker genes (n ≈ 70) expressed as a percentage of the total gene counts in each cell.
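A minimal R sketch of this per-cell score is shown below; `counts` (genes x cells UMI matrix of the external BALF dataset) and `score_genes` (the ~70 CXCL10+ CCL2+ marker genes) are illustrative names.

```r
score_genes <- intersect(score_genes, rownames(counts))
gene_score <- 100 * Matrix::colSums(counts[score_genes, , drop = FALSE]) /
  Matrix::colSums(counts)   # percent of total counts per cell
```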
Pathway enrichment analysis
For pathway gene set enrichment, we used the msigdbr R package with 4,872 gene sets, including the C5 (Gene Ontology), C7 (immunologic signature), and H (hallmark) collections from MSigDB [37], to identify pathways enriched in the macrophage states within each disease tissue.
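A minimal sketch of this step is shown below. The gene sets come from msigdbr as stated in the text; the enrichment engine (fgsea) and the ranking statistic are assumptions. `ranked_stats` is a hypothetical named per-gene vector (e.g., GLM betas) for the CXCL10+ CCL2+ state within one disease tissue.

```r
library(msigdbr)
library(fgsea)

sets <- rbind(msigdbr(species = "Homo sapiens", category = "H"),
              msigdbr(species = "Homo sapiens", category = "C5"),
              msigdbr(species = "Homo sapiens", category = "C7"))
pathways <- split(sets$gene_symbol, sets$gs_name)

enrich <- fgsea(pathways = pathways, stats = ranked_stats)
head(enrich[order(enrich$padj), ])   # top enriched pathways
```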
For all analyses and plots, sample sizes, measures of center and spread (mean ± SD or SEM), and statistical significance are presented in the figures, figure legends, and text. Results were considered statistically significant at Bonferroni-corrected P < 0.05, as indicated in the figure legends and text.
A reference of > 300,000 immune single-cell profiles across inflammatory diseases and COVID-19
To compare hematopoietic cells across inflammatory diseases and COVID-19 in an unbiased fashion, we aggregated 307,084 single-cell RNA-seq profiles from 125 healthy or inflammatory disease-affected tissues spanning six disorders: (1) colon from healthy individuals and patients with inflamed or non-inflamed UC [15]; (2) terminal ileum from patients with inflamed or non-inflamed CD [27]; (3) synovium from patients with RA or OA [13, 25]; (4) kidney from patients with SLE or healthy controls [26]; (5) lung from patients with interstitial lung disease [28]; and (6) BALF from healthy individuals and those with mild or severe COVID-19 [4] (Fig. 1a, b, Additional file 2: Figure S1a, Additional file 1: Table S1). We developed a pipeline for multi-tissue integration and disease association at the single-cell level (Fig. 1a, "Methods"). Where feasible, we obtained raw reads and re-mapped them to the GRCh38 genome assembly. We then aggregated raw counts for 17,054 shared genes across studies into a single matrix, performed consistent quality control (QC), library size normalization, and principal component analysis [38] (PCA) ("Methods"). To account for different cell numbers from different datasets, we performed weighted PCA, assigning higher weights to cells from datasets with a relatively small number of cells and vice versa. In the integrated PCA embedding, we modeled and removed the effects of technology, tissue, and donor with Harmony [16] to identify shared cell states across studies and diseases ("Methods"). Before Harmony, cells grouped primarily based on tissue source (Additional file 2: Figure S1b). After Harmony, < 1% of the variation explained by PC1 and PC2 was attributable to tissue source and sample, while > 60% was attributable to previously defined cell types (Fig. 1c). Importantly, rare pathogenic cell types within tissue, such as germinal center B cells in inflamed UC colon and age-associated B cells in RA synovium, were identifiable in the integrated space (Additional file 2: Figure S1c). We confirmed the degree of cross-sample, tissue, technology, and cell-type mixing with an independent measure of single-cell integration: LISI [16, 22] (Local Inverse Simpson's Index). An increase in the iLISI (integration LISI) score after batch correction indicates better mixing of batches (Fig. 1d and Additional file 2: Figure S2a).
Integrative analysis of > 300,000 single-cell profiles from five inflammatory disease tissues and COVID-19 BALF. a Overall study design and single-cell analysis, including the integrative pipeline, a single-cell reference dataset, fine-grained analysis to identify shared macrophage states, and disease association analysis. b Number of cells and donor samples from each healthy and disease tissue. c Percent of variance explained in the gene expression data by pre-defined broad cell type, tissue, sample, and technology for the first and second principal component (PC1 and PC2) before and after batch effect correction. d iLISI score before and after batch correction to measure the mixing levels of donor samples and tissue sources. An iLISI (integration LISI) score of 1.0 denotes no mixing while higher scores indicate better mixing of batches. e Integrative clustering of 307,084 cells reveals common immune cell types from different tissue sources. f Immune cells from separate tissue sources in the same UMAP coordinates. Cells from the same cell types are projected next to each other in the integrative UMAP space. g Heatmap of cell-type lineage marker genes. Gene signatures were selected based on AUC > 0.6 and P < 0.05 by Bonferroni correction comparing cells from one cell type to the others
In this integrated space, we performed graph-based clustering [32] and visualization with UMAP (Uniform Manifold Approximation and Projection) [33]. We identified 9 major cell-type clusters (Fig. 1e) present in all six tissues (Fig. 1f) and diseases (Additional file 2: Figure S2b). We labeled the clusters with canonical markers (Fig. 1g, Additional file 3: Table S2): CD3D+ T cells, NCAM1+ NK cells, MS4A1+ B cells, MZB1+ plasma cells, FCGR3A+/CD14+ macrophages, CD1C+ dendritic cells (DCs), TPSAB1+ mast cells, and MKI67+ cycling T and B cells.
While the proportion of these immune populations differed substantially among tissues, macrophages represented a major component in each tissue (Additional file 2: Figure S2c). For example, samples obtained from lung tissues and BALF, whether from healthy controls or patients with ILD and COVID-19, contained the highest proportion of macrophages (74.8% of total hematopoietic cells) (Fig. 1f, Additional file 2: Figure S2c). In contrast, while RA synovium, SLE kidney, and CD ileum contained 9.4% macrophages, T lymphocytes comprised the majority of cells in these tissues (55.7%). The UC colon samples contained 8.3% macrophages, but had a distinctively high abundance of plasma cells (42.4%) (Additional file 2: Figure S2c).
Identification of shared inflammatory macrophage states across inflammatory disease tissues and COVID-19 lungs
To resolve the heterogeneity within the macrophage compartment, we analyzed 74,373 macrophages from 108 donors and performed weighted PCA and fine clustering analysis to define shared and distinct states across diseases (Fig. 2a, Additional file 2: Figure S3a, Additional file 4: Table S3). We identified four shared macrophage states defined by different marker sets: (1) CXCL10+ CCL2+ cells, (2) FCN1+ cells, (3) MRC1+ FABP4+ cells, and (4) C1QA+ cells (Fig. 2a, b, Additional file 2: Figure S3b). The CXCL10+ CCL2+ cells and the FCN1+ cells expressed classic inflammatory genes [15] including IL1B, S100A8, CCL3, CXCL11, STAT1, IFNGR1, and NFKB1 (Fig. 2b, c). A higher proportion of inflammatory macrophages in severe COVID-19 expressed these inflammation-associated genes compared to healthy BALF (Additional file 2: Figure S3c). We detected the gene signature for the CXCL10+ CCL2+ inflammatory macrophage state in a higher proportion of macrophages from severe COVID-19 BALF than from other inflamed tissues (Fig. 2c).
Integrative analysis of tissue-level macrophages reveals shared CXCL10+ CCL2+ and FCN1+ inflammatory macrophage states. a Integrative clustering of 74,373 macrophages from individuals from BALF, lung, kidney, colon, ileum, and synovium. b Density plot of cells with non-zero expression of marker genes in UMAP. c Proportion of inflammatory macrophages that express cytokines and inflammatory genes in severe COVID-19 compared to those in inflamed RA, CD, and UC. Orange represents CXCL10+ CCL2+ state-specific genes. d Previously defined inflammatory macrophages from diseased tissues are clustered with the majority of the macrophages from severe COVID-19. e Z-score of the pseudo-bulk expression of marker genes (AUC > 0.6 and Bonferroni-adjusted P < 10−5) for the CXCL10+ CCL2+ and FCN1+ macrophages. Columns show pseudo-bulk expression. f The proportions of CXCL10+ CCL2+ macrophages of total macrophages per donor sample are shown from healthy BALF (n = 3), mild (n = 3), and severe (n = 6) COVID-19, non-inflamed CD (n = 10) and inflamed CD (n = 12), OA (n = 2) and RA (n = 15), and healthy colon (n = 12), non-inflamed UC (n = 18), and inflamed UC (n = 18). Box plots summarize the median, interquartile, and 75% quantile range. P is calculated by Wilcoxon rank-sum test within each tissue. The association of each cluster with severe/inflamed compared to healthy control was tested. 95% CI for the odds ratio (OR) is given. MASC P is calculated using one-sided F tests conducted on nested models with MASC [36]. The clusters above the dashed line (Bonferroni correction) are statistically significant. Clusters that have fewer than 30 cells are removed. g GSEA analysis for each tissue revealed shared enriched pathways for CXCL10+ CCL2+ macrophages: TNF-α signaling via NF-kB (Hallmark gene set), response to interferon gamma (GO:0034341), Covid-19 SARS-CoV-2 infection calu-3 cells (GSE147507 [39]), positive regulation of cytokine production (GO:0001819), response to tumor necrosis factor (GO:0034612), regulation of innate immune response (GO:0045088), and defense response to virus (GO: 0051607)
Liao et al. [4] previously identified CXCL10+ CCL2+ and FCN1+ populations as inflammatory states in the COVID-19 BALF samples used in this integrated analysis. In our multi-disease clustering, the inflammatory macrophages from inflamed RA synovium and UC and CD intestinal tissue largely mapped to the same two inflammatory macrophage states seen in severe COVID-19 (Fig. 2d, Additional file 2: Figure S3d-e). We found all four states represented in all six tissues, quantified this overlap with LISI, and estimated the variance explained in the PC space (Additional file 2: Figure S3f, g). Strikingly, we observed that the FCN1+ inflammatory macrophage state dominated in SLE kidney, with few cells in the CXCL10+ CCL2+ state (Fig. 2d), suggesting that our integrative analysis was effective in identifying shared inflammatory states while maintaining distinct patterns in a subset of tissues.
To comprehensively define markers for the two inflammatory tissue macrophage states shared across COVID-19, RA, UC, and CD, we performed a pseudo-bulk differential expression analysis ("Methods," Additional file 5: Table S4, fold change > 2, AUC > 0.6, Bonferroni-adjusted P < 10−5). The CXCL10+ CCL2+ inflammatory macrophages displayed significantly higher expression of CXCL10, CXCL11, CCL2, CCL3, GBP1, and IDO1 in severe COVID-19, inflamed RA, and CD compared to the FCN1+ macrophages (Fig. 2e). In contrast, the FCN1+ macrophages displayed high expression of FCN1 (Ficolin-1) and a series of alarmins such as S100A8 and S100A9 in most of the inflamed tissues (Fig. 2e). Both inflammatory macrophage states showed high expression of transcription factors that promote a pro-inflammatory macrophage phenotype, STAT1 and IRF1, in inflamed RA, UC, CD, and COVID-19 BALF relative to healthy or non-inflamed tissues (Fig. 2e). Within the CXCL10+ CCL2+ state, there was notable heterogeneity across cells in terms of IL1B expression indicating the possibility of further delineation of this macrophage state (Additional file 2: Figure S4a-b). Moreover, the effect size of all genes in CXCL10+ CCL2+ and FCN1+ subsets compared with the MRC1+ FABP4+ macrophages for each tissue further highlighted a similar set of inflammatory genes with greatest fold changes across all diseases for each subset (Additional file 2: Figure S5).
As validation, we assessed the macrophage phenotypes found in a recent analysis of single cells from severe COVID-19 BALF [29]. Notably, we observed a significant correlation between the cross-disease shared CXCL10+ CCL2+ macrophages and two monocyte-derived alveolar macrophage (MoAM) inflammatory phenotypes from this independent severe COVID-19 cohort (wherein they were referred to as MoAM1 and MoAM2) [29] (Additional file 2: Figure S6a-d). We further examined CXCL10+ CCL2+ macrophage-associated genes with CD14+ cells from inflamed (leukocyte-rich) RA, non-inflamed (leukocyte-poor) RA, and OA [13]; we observed significant enrichment of CXCL10+ CCL2+ state-specific genes (CXCL10, CXCL9, CCL3, GBP1, and IDO1), FCN1+ state-specific genes (FCN1, S100A9, CD300E, IFITM3, and CFP), and genes (IRF1, BCL2A1, and STAT1) associated with both states in the macrophages from inflamed RA compared to non-inflamed RA and OA (Additional file 2: Figure S6e). By integrating macrophages across multiple inflamed tissues, we show that inflammatory subsets identified in COVID-19 may share common phenotypes with macrophages from other inflammatory conditions.
To determine which cell states were associated with disease, we tested the association of each state with severe COVID-19 compared to healthy BALF using a logistic regression model accounting for technical cell-level and donor-specific effects [36] ("Methods"). We found that the CXCL10+ CCL2+ and FCN1+ states were significantly more abundant in severe COVID-19 than in healthy BALF (Fig. 2f). The CXCL10+ CCL2+ inflammatory state was also expanded in inflamed CD compared to non-inflamed CD, in RA compared to non-inflammatory OA, and in inflamed UC compared to healthy colon (Fig. 2f). We also observed significant enrichment of the TNF-α signaling via nuclear factor-κB (NF-kB) pathway and the response to interferon gamma pathway in the CXCL10+ CCL2+ cells from the examined inflamed tissues (Fig. 2g). Consistent with this result, we observed reduced frequencies of MRC1+ FABP4+ macrophages in each inflamed tissue (Fig. 2f). Taken together, these results indicate that the shared CXCL10+ CCL2+ inflammatory macrophage phenotype is expanded in inflamed tissues and severe COVID-19 BALF.
Tissue inflammatory conditions that drive distinct macrophage phenotypes
To define the factors that shape disease-associated macrophage states in affected tissues, we generated human blood-derived macrophages from four donors and activated them with eight defined mixtures of inflammatory factors, focusing particularly on the effects of antiviral interferons (IFN-β and IFN-γ) and pro-inflammatory cytokines such as TNF that mediate CRS and tissue pathology in RA and IBD [40] (Fig. 3a, Additional file 2: Figure S7a, "Methods"). Co-cultured fibroblasts were included in some conditions to provide factors produced by resident stroma. To reduce confounding batch effects during scRNA-seq barcode labeling, we used a single-cell antibody-based hashing strategy [41] to multiplex samples from different stimulatory conditions in one sequencing run (Additional file 6: Table S5, Additional file 7: Table S6). We obtained 25,823 post-QC cells after applying the 10X Genomics droplet-based single-cell assay (Additional file 2: Figure S7b-d, "Methods"). In the UMAP space, a strong response to IFN-γ drove much of the observed variation; cells treated with IFN-γ clustered well apart from all other conditions (Fig. 3b). All conditions containing IFN-γ (Type II interferon) resulted in macrophages with high expression levels of the transcription factor STAT1, interferon-stimulated genes CXCL9 and CXCL10, and inflammatory receptors such as FCGR1A [42] (Fig. 3c). Consistent with well-established effects, macrophages stimulated by TNF induced MMP9, IL1B, and PLAUR expression, while IL-4 stimulation increased expression of CCL23, MRC1, and LIPA (Fig. 3c).
Human blood-derived macrophages stimulated by eight mixtures of inflammatory factors reveal heterogeneous macrophage phenotypes. a Schematic representation of the single-cell cell hashing experiment on human blood-derived macrophages stimulated by eight mixtures of inflammatory factors from 4 donors. A single-cell antibody-based hashing strategy was used to multiplex samples from different stimulatory conditions in one sequencing run. Here fibro denotes fibroblasts. b The 25,823 stimulated blood-derived macrophages from 4 donors are colored and labeled in UMAP space. c Log-normalized expression of genes that are specific to different conditions are displayed in violin plots. Mean of normalized gene expression is marked by a line and each condition by individual coloring. CPM denotes counts per million. d Stimulation effect estimates of genes that are most responsive to conditions with IFN-γ or TNF-α with fibroblasts comparing to untreated macrophages are obtained using linear modeling. Fold changes with 95% CI are shown. e Fold changes in gene expression after TNF-α and IFN-γ stimulation vs. TNF-α stimulation (left), and TNF-α and IFN-γ vs. IFN-γ stimulation (right) for each gene. Genes in red have fold change > 2, Bonferroni-adjusted P < 10−7, and a ratio of TNF-α and IFN-γ fold change to TNF-α fold change greater than 1 (left) or a ratio of TNF-α and IFN-γ fold change to IFN-γ fold change greater than 1 (right). Genes that are most responsive to either IFN-γ (left) or TNF-α (right) are labeled
Using linear models, we identified the genes with the greatest changes in expression after each stimulation and estimated the effect sizes ("Methods"). We found that 403 genes (fold change > 2, FDR < 0.05) were significantly induced by TNF-α plus IFN-γ stimulation compared to untreated macrophages. All conditions with IFN-γ resulted in similar effect sizes for induction of CCL2, CXCL9, CXCL10, SLAMF7, and STAT1 expression, indicating a robust IFN-γ driven macrophage signature (Fig. 3d left, Additional file 2: Figure S7e). This included robust induction by IFN-γ in macrophages co-treated with TNF (Fig. 3e left). Collectively, the TNF-driven gene expression patterns appeared more modifiable by co-stimulatory factors than the IFN-γ-driven patterns. For example, co-cultured fibroblasts further increased TNF-induced MMP9, PLAUR, and VCAN expression, while co-stimulation with IFN-γ repressed TNF induction of these genes (Fig. 3d right). Nonetheless, a portion of the TNF effect was well preserved in TNF plus IFN-γ co-stimulated cells, including genes such as CCL2, CCL3, IL1B, and NFKBIA (Fig. 3e right). TNF-α and IFN-γ together ultimately generated a macrophage phenotype with increased expression of NF-kB targets such as NFKBIA, IL1B, and HLA-DRA, together with STAT1 targets such as CXCL9, CXCL10, GBP1, and GBP5 (Fig. 3d, e).
Identification of an IFN-γ and TNF-α synergistically driven inflammatory macrophage phenotype expanded in severe COVID-19 lungs and other inflamed disease tissues
Our cross-tissue integrative analysis revealed two shared inflammatory macrophage states (Fig. 2). To further understand these cell states and the in vivo inflammatory tissue factors driving them, we integrated the single-cell transcriptomes of both the tissue macrophages and our experimentally stimulated macrophages. After combining and correcting for tissue, technology, and donor effects, we identified 7 distinct macrophage clusters (Fig. 4a). We evaluated the robustness of the clustering and observed that our clusters were stable to the choice of the variable genes used in the analysis (Additional file 2: Figure S8a). The tissue CXCL10+ CCL2+ inflammatory macrophages from UC colon, CD ileum, RA synovium, and COVID-19 BALF were transcriptionally most similar to macrophages stimulated by the combination of TNF-α plus IFN-γ in cluster 1 (Fig. 4b, c, Additional file 2: Figure S8b-c). The blood-derived macrophages in cluster 1 included macrophages stimulated by four different conditions all including IFN-γ, of which the most abundant population (37.5%) were macrophages stimulated by TNF-α with IFN-γ (Fig. 4c, d). Comparing our results to a previously reported macrophage spectrum with 28 unique stimulatory conditions [11], we observed the highest expression of cluster 1-associated genes in their macrophages exposed to conditions including both TNF and IFN-γ (Additional file 2: Figure S9a).
TNF-α and IFN-γ driven CXCL10+ CCL2+ macrophages are expanded in severe COVID-19 and other inflamed tissues. a Integrative clustering of stimulated blood-derived macrophages with tissue-level macrophages from COVID-19 BALF, UC colon, CD ileum, and RA synovium. b The previously identified tissue-level CXCL10+ CCL2+ state corresponds to cluster 1 (orange), and the FCN1+ inflammatory macrophage state corresponds to cluster 2 (yellow). Macrophages from each tissue source are displayed separately in the same UMAP coordinates as in a. c Heatmap indicates the concordance between stimulatory conditions and integrative cluster assignments. Z-score of the number of cells from each stimulatory condition to the integrative clusters is shown. d For the blood-derived stimulated macrophages, the proportions of CXCL10+ CCL2+ macrophages of total macrophages per stimulated donor are shown. e PCA analysis on the identified inflammatory macrophages. The first PC captures a gradient from the FCN1+ state to the CXCL10+ CCL2+ state. f When mapped to PC1, macrophages from severe COVID-19 show a shift in cell frequency between the FCN1+ and CXCL10+ CCL2+ states (Wilcoxon rank-sum test P = 1.4e−07). The TNF-α stimulated macrophages (mean − 0.27) were projected to the left of the FCN1+ tissue macrophages (mean − 0.14), while the IFN-γ (mean 0.10), and TNF-α and IFN-γ (mean 0.23), stimulated macrophages were projected to the right of the CXCL10+ CCL2+ tissue macrophages (− 0.03). g Genes associated with CXCL10+ CCL2+ driven by PC1 show high expression levels in severe COVID-19 macrophages and in TNF-α and IFN-γ stimulated blood-derived macrophages. We recapitulate the gradient observed in vivo across multiple diseases by stimulating macrophages ex vivo with synergistic combinations of TNF-α and IFN-γ
We further identified a principal component (PC1) that captures a gradient from the FCN1+ state to the CXCL10+ CCL2+ state by applying PCA to the tissue-level inflammatory macrophages (Fig. 4e), suggesting a potential continuum between the inflammatory FCN1+ and CXCL10+ CCL2+ states. Aligning cells from separate tissues along PC1, we found that the majority of inflammatory macrophages in RA, UC, and CD align more closely with the FCN1+ state (Additional file 2: Figure S9b). In severe COVID-19, we observed a shift in cell frequency between the FCN1+ and CXCL10+ CCL2+ macrophages (Wilcoxon rank-sum test P = 1.4e−07, Fig. 4f). Furthermore, we mapped the experimentally stimulated blood-derived macrophages to PC1 based on the top 50 genes with the largest and smallest PC1 gene loadings. Strikingly, the TNF-α stimulated macrophages (mean − 0.27) map to the left of the FCN1+ tissue macrophages (mean − 0.14), while the IFN-γ (mean 0.10) and TNF-α plus IFN-γ (mean 0.23) stimulated macrophages map to the right of the CXCL10+ CCL2+ tissue macrophages (mean − 0.03) (Fig. 4f). This suggests that IFN-γ stimulation is required to drive a phenotype most similar to the CXCL10+ CCL2+ state, and that adding TNF pushes the macrophage phenotype further along the PC1 trajectory. We observed higher expression levels of PC1-associated genes, for example CXCL10, STAT1, CCL2, CCL3, NFKBIA, and GBP1, in CXCL10+ CCL2+ cells from severe COVID-19 compared to FCN1+ cells, and higher induced expression of these same genes after TNF-α and IFN-γ stimulation compared to TNF-α stimulation alone (Fig. 4g). Taken together, these results suggest we are able to recapitulate the gradient observed in vivo across multiple diseases by stimulating macrophages ex vivo with synergistic combinations of IFN-γ and TNF-α.
Our study demonstrates the power of a multi-disease reference dataset to interpret cellular phenotypes and tissue states, while placing them into a broader context that may provide insights into disease etiology and rationale for repurposing medications. Such meta-datasets can increase the resolution of cell states and aid understanding of shared cellular states found in less well-understood diseases such as COVID-19. Amassing diverse tissues from > 120 donors with a wide range of diseases, we built a human tissue inflammation single-cell reference. Applying powerful computational strategies, we integrated > 300,000 single-cell transcriptomes and corrected for factors that interfere with resolving cell-intrinsic expression patterns. In particular, we have identified a CXCL10+ CCL2+ inflammatory macrophage phenotype shared between tissues affected in autoimmune disease (RA), inflammatory diseases (CD and UC), and infectious disease (COVID-19). We observed that the abundance of this population is associated with inflammation and disease severity. With integrated analysis of an ex vivo dataset, we elucidated its potential cytokine drivers: IFN-γ together with TNF-α.
Macrophages are ideal biologic indicators for the in vivo state of a tissue due to their dynamic nature, robust responses to local factors, and widespread presence in most tissues. Through our cross-disease analysis, we defined two inflammatory macrophage states that can be found in selected groups of seemingly unrelated tissues and diseases. Most notably, the CXCL10+ CCL2+ inflammatory macrophages predominate in the bronchoalveolar lavage of patients with severe COVID-19, and are also detected in synovial tissue affected by RA and inflamed intestine from patients with IBD. These cells are distinguished by high levels of CXCL10 and CXCL11, STAT1, IFNGR1, and IFNGR2, as well as CCL2 and CCL3, NFKB1, TGFB1, and IL1B. This gene expression pattern of the JAK/STAT and NF-kB-dependent cytokines implicates induction by an intriguing combination of both the IFN-induced JAK/STAT and TNF-induced NF-kB pathways and, in conjunction, the overall transcriptome program most closely aligns with macrophages stimulated by IFN-γ plus TNF-α. As both JAK inhibitors and anti-TNF medications have outstanding efficacy in treating RA and anti-TNFs are the most common medications treating inflammatory bowel disease, including Crohn's Disease [2], these therapies may target the inflammatory macrophages in severe COVID-19 lung during the phase involving cytokine release syndrome [43].
Infection with SARS-CoV-2 triggers a local immune response and inflammation in the lung compartment, recruiting macrophages that release and respond to inflammatory cytokines and chemokines [6]. This response may change with disease progression, in particular during the transition towards the cytokine storm associated with severe disease. Intriguingly, our cross-disease tissue study strongly suggests that IFN-γ is an essential component of the inflammatory macrophage phenotype in severe COVID-19. Most studies on interferons and coronaviruses have focused on Type I interferons, such as IFN-β, due to their robust capacity to interfere with viral replication [44]. Indeed, ongoing research into the administration of recombinant IFN-β has shown promise in reducing the risk of severe COVID-19 disease [45]. However, other studies have indicated that targeting IFN-γ may be an effective treatment for cytokine storm, a driver of severe disease in COVID-19 patients [46, 47]. Additionally, several studies have indicated that targeting IFN-γ using JAK inhibitors such as ruxolitinib, baricitinib, and tofacitinib offers effective therapeutic effects in treating severe COVID-19 patients [43, 48,49,50,51]. Clinical trials of Type II interferon inhibitors in COVID-19 are under way (NCT04337359, NCT04359290, and NCT04348695) [43]. Recent research has also identified that the synergism of TNF-α and IFN-γ can trigger inflammatory cell death, tissue damage, and mortality in SARS-CoV-2 infection [52], and shown increased levels of IFN-γ, TNF-α, CXCL10, and CCL2 in the serum of severe COVID-19 patients [53]. In agreement with these studies, our findings indicate that IFN-γ, together with TNF-α, is an important mediator of severe disease, in part through activating the inflammatory CXCL10+ CCL2+ macrophage subset. We hypothesize that combining anti-Type II interferon treatment (such as JAK inhibitors) with anti-TNF might prove effective at inhibiting the cytokine storm driving acute respiratory distress syndrome in patients with severe COVID-19. Owing to the circumstances of the pandemic, only a limited number of longitudinal BALF samples from COVID-19 patients were available for our cross-tissue study, and we expect to replicate our findings in broader COVID-19 cohorts in the future. Of course, the presence of an IFN-γ and TNF phenotype is an association that may not be causal. Whether targeting these cytokines is reasonable or not will depend on additional clinical investigation.
In this study, we built a single-cell immune reference from multiple inflamed disease tissues and identified two inflammatory macrophage states, CXCL10+ CCL2+ and FCN1+ inflammatory macrophages, that were shared between COVID-19 and inflammatory diseases such as RA, CD, and UC. We demonstrated that the CXCL10+ CCL2+ macrophages are transcriptionally similar to human blood-derived macrophages stimulated by IFN-γ and TNF-α and were expanded in severe COVID-19 lungs and inflamed RA, CD, and UC tissues. This finding indicates that Type II interferon and TNF responses may be involved in late-stage cytokine storm-driven severe COVID-19 and inhibiting these responses in the inflammatory macrophages may be a promising treatment. Our cross-tissue single-cell integrative strategy along with our disease association analysis provides a proof-of-principle that identifying shared pathogenic features across human inflamed tissues and COVID-19 lungs has the potential to guide drug repurposing.
The single-cell RNA-seq data for blood-derived macrophages are available in the Gene Expression Omnibus database with accession number GSE168710, https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE168710 [54]. Source code repository to reproduce analyses is located at https://github.com/immunogenomics/inflamedtissue_covid19_reference [55].
The publicly available datasets analyzed during the study are available from the GEO repository:
GSE134809 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE134809) [27]
GSE145926 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE145926) [4]
GSE47189 (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE47189) [11]
dbGap repository:
phs001457.v1.p1 (https://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs001457.v1.p1) [13]
Single Cell Portal:
SCP259 (https://singlecell.broadinstitute.org/single_cell/study/SCP259/intra-and-inter-cellular-rewiring-of-the-human-colon-during-ulcerative-colitis) [15]
McInnes IB, Schett G. The pathogenesis of rheumatoid arthritis. N Engl J Med. 2011;365(23):2205–19. https://doi.org/10.1056/NEJMra1004965.
Neurath MF. Cytokines in inflammatory bowel disease. Nat Rev Immunol. 2014;14(5):329–42. https://doi.org/10.1038/nri3661.
Liu J, Zheng X, Tong Q, Li W, Wang B, Sutter K, Trilling M, Lu M, Dittmer U, Yang D. Overlapping and discrete aspects of the pathology and pathogenesis of the emerging human pathogenic coronaviruses SARS-CoV, MERS-CoV, and 2019-nCoV. J Med Virol. 2020;92(5):491–4. https://doi.org/10.1002/jmv.25709.
Liao M, Liu Y, Yuan J, Wen Y, Xu G, Zhao J, Cheng L, Li J, Wang X, Wang F, Liu L, Amit I, Zhang S, Zhang Z. Single-cell landscape of bronchoalveolar immune cells in patients with COVID-19. Nat Med. 2020;26(6):842–4. https://doi.org/10.1038/s41591-020-0901-9.
Wen W, Su W, Tang H, le W, Zhang X, Zheng Y, Liu X, Xie L, Li J, Ye J, Dong L, Cui X, Miao Y, Wang D, Dong J, Xiao C, Chen W, Wang H. Immune cell profiling of COVID-19 patients in the recovery stage by single-cell sequencing. Cell Discov. 2020;6(1):31. https://doi.org/10.1038/s41421-020-0168-9.
Huang C, Wang Y, Li X, Ren L, Zhao J, Hu Y, Zhang L, Fan G, Xu J, Gu X, Cheng Z, Yu T, Xia J, Wei Y, Wu W, Xie X, Yin W, Li H, Liu M, Xiao Y, Gao H, Guo L, Xie J, Wang G, Jiang R, Gao Z, Jin Q, Wang J, Cao B. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet. 2020;395(10223):497–506. https://doi.org/10.1016/S0140-6736(20)30183-5.
Lucas C, et al. Longitudinal analyses reveal immunological misfiring in severe COVID-19. Nature. 2020;584(7821):463–9. https://doi.org/10.1038/s41586-020-2588-y.
He W, Kapate N, Shields CW 4th, Mitragotri S. Drug delivery to macrophages: a review of targeting drugs and drug carriers to macrophages for inflammatory diseases. Adv Drug Deliv Rev. 2019;165-166:15–40. https://doi.org/10.1016/j.addr.2019.12.001.
Kinne RW, Bräuer R, Stuhlmüller B, Palombo-Kinne E, Burmester GR. Macrophages in rheumatoid arthritis. Arthritis Res. 2000;2(3):189–202. https://doi.org/10.1186/ar86.
Ma W-T, Gao F, Gu K, Chen D-K. The role of monocytes and macrophages in autoimmune diseases: a comprehensive review. Front Immunol. 2019;10:1140. https://doi.org/10.3389/fimmu.2019.01140.
Xue J, Schmidt SV, Sander J, Draffehn A, Krebs W, Quester I, de Nardo D, Gohel TD, Emde M, Schmidleithner L, Ganesan H, Nino-Castro A, Mallmann MR, Labzin L, Theis H, Kraut M, Beyer M, Latz E, Freeman TC, Ulas T, Schultze JL. Transcriptome-based network analysis reveals a spectrum model of human macrophage activation. Immunity. 2014;40(2):274–88. https://doi.org/10.1016/j.immuni.2014.01.006.
Papalexi E, Satija R. Single-cell RNA sequencing to explore immune cell heterogeneity. Nat Rev Immunol. 2018;18(1):35–45. https://doi.org/10.1038/nri.2017.76.
Zhang F, et al. Defining inflammatory cell states in rheumatoid arthritis joint synovial tissues by integrating single-cell transcriptomics and mass cytometry. Nat Immunol. 2019;20(7):928–42. https://doi.org/10.1038/s41590-019-0378-1.
Kuo D, Ding J, Cohn IS, Zhang F, Wei K, Rao DA, Rozo C, Sokhi UK, Shanaj S, Oliver DJ, Echeverria AP, DiCarlo EF, Brenner MB, Bykerk VP, Goodman SM, Raychaudhuri S, Rätsch G, Ivashkiv LB, Donlin LT. HBEGF+ macrophages in rheumatoid arthritis induce fibroblast invasiveness. Sci Transl Med. 2019;11(491):eaau8587. https://doi.org/10.1126/scitranslmed.aau8587.
Smillie CS, et al. Intra- and inter-cellular rewiring of the human colon during ulcerative colitis. Cell. 2019;178:714–730.e22.
Korsunsky I, Millard N, Fan J, Slowikowski K, Zhang F, Wei K, Baglaenko Y, Brenner M, Loh PR, Raychaudhuri S. Fast, sensitive and accurate integration of single-cell data with harmony. Nat Methods. 2019;16(12):1289–96. https://doi.org/10.1038/s41592-019-0619-0.
Stuart T, Satija R. Integrative single-cell analysis. Nat Rev Genet. 2019;20(5):257–72. https://doi.org/10.1038/s41576-019-0093-7.
Hie B, Bryson B, Berger B. Efficient integration of heterogeneous single-cell transcriptomes using Scanorama. Nat Biotechnol. 2019;37(6):685–91. https://doi.org/10.1038/s41587-019-0113-3.
Haghverdi L, Lun ATL, Morgan MD, Marioni JC. Batch effects in single-cell RNA-sequencing data are corrected by matching mutual nearest neighbors. Nat Biotechnol. 2018;36(5):421–7. https://doi.org/10.1038/nbt.4091.
Butler A, Hoffman P, Smibert P, Papalexi E, Satija R. Integrating single-cell transcriptomic data across different conditions, technologies, and species. Nat Biotechnol. 2018;36(5):411–20. https://doi.org/10.1038/nbt.4096.
Polański K, Young MD, Miao Z, Meyer KB, Teichmann SA, Park JE. BBKNN: fast batch alignment of single cell transcriptomes. Bioinformatics. 2020;36(3):964–5. https://doi.org/10.1093/bioinformatics/btz625.
Tran HTN, Ang KS, Chevrier M, Zhang X, Lee NYS, Goh M, Chen J. A benchmark of batch-effect correction methods for single-cell RNA sequencing data. Genome Biol. 2020;21(1):12. https://doi.org/10.1186/s13059-019-1850-9.
Ivashkiv LB. IFNγ: signalling, epigenetics and roles in immunity, metabolism, disease and cancer immunotherapy. Nat Rev Immunol. 2018;18(9):545–58. https://doi.org/10.1038/s41577-018-0029-z.
Barrat FJ, Crow MK, Ivashkiv LB. Interferon target-gene expression and epigenomic signatures in health and disease. Nat Immunol. 2019;20(12):1574–83. https://doi.org/10.1038/s41590-019-0466-2.
Stephenson W, Donlin LT, Butler A, Rozo C, Bracken B, Rashidfarrokhi A, Goodman SM, Ivashkiv LB, Bykerk VP, Orange DE, Darnell RB, Swerdlow HP, Satija R. Single-cell RNA-seq of rheumatoid arthritis synovial tissue using low-cost microfluidic instrumentation. Nat Commun. 2018;9(1):791. https://doi.org/10.1038/s41467-017-02659-x.
Arazi A, et al. The immune cell landscape in kidneys of patients with lupus nephritis. Nat Immunol. 2019;20(7):902–14. https://doi.org/10.1038/s41590-019-0398-x.
Martin JC, et al. Single-cell analysis of Crohn's disease lesions identifies a pathogenic cellular module associated with resistance to anti-TNF therapy. Cell. 2019;178:1493–1508.e20.
Reyfman PA, Walter JM, Joshi N, Anekalla KR, McQuattie-Pimentel AC, Chiu S, Fernandez R, Akbarpour M, Chen CI, Ren Z, Verma R, Abdala-Valencia H, Nam K, Chi M, Han SH, Gonzalez-Gonzalez FJ, Soberanes S, Watanabe S, Williams KJN, Flozak AS, Nicholson TT, Morgan VK, Winter DR, Hinchcliff M, Hrusch CL, Guzy RD, Bonham CA, Sperling AI, Bag R, Hamanaka RB, Mutlu GM, Yeldandi AV, Marshall SA, Shilatifard A, Amaral LAN, Perlman H, Sznajder JI, Argento AC, Gillespie CT, Dematte J, Jain M, Singer BD, Ridge KM, Lam AP, Bharat A, Bhorade SM, Gottardi CJ, Budinger GRS, Misharin AV. Single-cell transcriptomic analysis of human lung provides insights into the pathobiology of pulmonary fibrosis. Am J Respir Crit Care Med. 2018;199(12):1517–36. https://doi.org/10.1164/rccm.201712-2410OC.
Grant RA, et al. Circuits between infected macrophages and T cells in SARS-CoV-2 pneumonia. Nature. 2021;590(7847):635–41. https://doi.org/10.1038/s41586-020-03148-w.
Bray NL, Pimentel H, Melsted P, Pachter L. Near-optimal probabilistic RNA-seq quantification. Nat Biotechnol. 2016;34(5):525–7. https://doi.org/10.1038/nbt.3519.
Ritchie ME, et al. limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 2015;43:e47.
Blondel VD, Guillaume J-L, Lambiotte R, Lefebvre E. Fast unfolding of communities in large networks. J Stat Mech Theory Exp. 2008;2008(10):P10008. https://arxiv.org/abs/0803.0476.
McInnes L, Healy J, Melville J. UMAP: uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426. 2018. https://arxiv.org/abs/1802.03426.
Love MI, Huber W, Anders S. Moderated estimation of fold change and dispersion for RNA-seq data with DESeq2. Genome Biol. 2014;15(12):550. https://doi.org/10.1186/s13059-014-0550-8.
Stoeckius M, Hafemeister C, Stephenson W, Houck-Loomis B, Chattopadhyay PK, Swerdlow H, Satija R, Smibert P. Simultaneous epitope and transcriptome measurement in single cells. Nat Methods. 2017;14(9):865–8. https://doi.org/10.1038/nmeth.4380.
We thank the Brigham and Women's Hospital Single Cell Genomics Core for assistance in the single-cell hashing experiment. We thank members of the Raychaudhuri Laboratory for discussions.
Accelerating Medicines Partnership Rheumatoid Arthritis & Systemic Lupus Erythematosus (AMP RA/SLE) Consortium:
Jennifer Albrecht9, Jennifer H. Anolik9, William Apruzzese5, Brendan F. Boyce9, Christopher D. Buckley10, David L. Boyle11, Michael B. Brenner5, S. Louis Bridges Jr12, Jane H. Buckner13, Vivian P. Bykerk7, Edward DiCarlo14, James Dolan15, Andrew Filer10, Thomas M. Eisenhaure4, Gary S. Firestein10, Susan M. Goodman7, Ellen M. Gravallese5, Peter K. Gregersen16, Joel M. Guthridge17, Nir Hacohen4, V. Michael Holers18, Laura B. Hughes12, Lionel B. Ivashkiv19,20, Eddie A. James13, Judith A. James17, A. Helena Jonsson5, Josh Keegan15, Stephen Kelly21, Yvonne C. Lee22, James A. Lederer15, David J. Lieb4, Arthur M. Mandelin II22, Mandy J. McGeachy23, Michael A. McNamara7, Nida Meednu9, Larry Moreland23, Jennifer P. Nguyen15, Akiko Noma4, Dana E. Orange24, Harris Perlman22, Costantino Pitzalis25, Javier Rangel-Moreno9, Deepak A. Rao5, Mina Ohani-Pichavant26,27, Christopher Ritchlin9, William H. Robinson26,27, Karen Salomon-Escoto28, Anupamaa Seshadri15, Jennifer Seifert18, Darren Tabechian9, Jason D. Turner10, Paul J. Utz26,27, Kevin Wei5.
9Division of Allergy, Immunology and Rheumatology, Department of Medicine, University of Rochester Medical Center, Rochester, NY, USA.
10Rheumatology Research Group, Institute for Inflammation and Aging, NIHR Birmingham Biomedical Research Center and Clinical Research Facility, University of Birmingham, Queen Elizabeth Hospital, Birmingham, UK.
11Department of Medicine, Division of Rheumatology, Allergy and Immunology, University of California, San Diego, La Jolla, CA, USA.
12Division of Clinical Immunology and Rheumatology, Department of Medicine, Translational Research University of Alabama at Birmingham, Birmingham, AL, USA.
13Translational Research Program, Benaroya Research Institute at Virginia Mason, Seattle, WA, USA.
14Department of Pathology and Laboratory Medicine, Hospital for Special Surgery, New York, NY, USA.
15Department of Surgery, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA.
16Feinstein Institute for Medical Research, Northwell Health, Manhasset, NY, USA.
17Department of Arthritis & Clinical Immunology, Oklahoma Medical Research Foundation, Oklahoma City, OK, USA.
18Division of Rheumatology, University of Colorado School of Medicine, Aurora, CO, USA.
19Graduate Program in Immunology and Microbial Pathogenesis, Weill Cornell Graduate School of Medical Sciences, New York, NY, USA.
20David Z. Rosensweig Genomics Research Center, Hospital for Special Surgery, New York, NY, USA.
21Department of Rheumatology, Barts Health NHS Trust, London, UK.
22Division of Rheumatology, Department of Medicine, Northwestern University Feinberg School of Medicine, Chicago, IL, USA.
23Division of Rheumatology and Clinical Immunology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA.
24The Rockefeller University, New York, NY, USA.
25Centre for Experimental Medicine & Rheumatology, William Harvey Research Institute, Queen Mary University of London, London, UK.
26Division of Immunology and Rheumatology, Department of Medicine, Stanford University School of Medicine, Palo Alto, CA, USA.
27Immunity, Transplantation, and Infection, Stanford University School of Medicine, Stanford, CA, USA.
28Division of Rheumatology, Department of Medicine, University of Massachusetts Medical School, Worcester, MA, USA.
This work is supported in part by funding from the National Institutes of Health (NIH) Grants UH2AR067677, U01HG009379, and R01AR063759 (to S.R.) and NIH R01AI148435, UH2 AR067691, Carson Family Trust, and Leon Lowenstein Foundation (to L.T.D.).
Laura T. Donlin and Soumya Raychaudhuri jointly supervised this work.
Center for Data Sciences, Brigham and Women's Hospital, Boston, MA, 02115, USA
Fan Zhang, Joseph R. Mears, Jessica I. Beynor, Ilya Korsunsky, Aparna Nathan & Soumya Raychaudhuri
Division of Genetics, Department of Medicine, Brigham and Women's Hospital, Boston, MA, 02115, USA
Department of Biomedical Informatics, Harvard Medical School, Boston, MA, 02115, USA
Broad Institute of MIT and Harvard, Cambridge, MA, 02142, USA
Division of Rheumatology, Inflammation, and Immunity, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, 02115, USA
Graduate Program in Physiology, Biophysics and Systems Biology, Weill Cornell Graduate School of Medical Sciences, New York, NY, 10065, USA
Lorien Shakib & Laura T. Donlin
Arthritis and Tissue Degeneration, Hospital for Special Surgery, New York, NY, USA
Sara Shanaj & Laura T. Donlin
Arthritis Research UK Centre for Genetics and Genomics, Centre for Musculoskeletal Research, The University of Manchester, Manchester, UK
Soumya Raychaudhuri
Fan Zhang
Joseph R. Mears
Lorien Shakib
Jessica I. Beynor
Sara Shanaj
Ilya Korsunsky
Aparna Nathan
Laura T. Donlin
Accelerating Medicines Partnership Rheumatoid Arthritis and Systemic Lupus Erythematosus (AMP RA/SLE) Consortium
F.Z. and S.R. conceptualized the study and designed the statistical strategy. F.Z. and J.R.M performed the analyses. J.R.M. collected public single-cell datasets. F.Z., J.R.M., and S.R. wrote the initial manuscript. L.T.D., A.N., I.K., J.I.B., L.S., and S.S. edited the draft. L.T.D obtained blood samples from human subjects. L.T.D, L.S., J.I.B., and S.S. organized processing, transportation, and experiment of the blood samples. S.R. and L.T.D. supervised the work. All authors read and approved the final manuscript.
Correspondence to Laura T. Donlin or Soumya Raychaudhuri.
Healthy blood samples were purchased from the New York Blood Center (NYBC), provided by volunteer donors who consented for the blood to be used in biomedical research and other uses at the discretion of NYBC. The samples are deidentified by the NYBC, the research study investigators had no access to identifiable private information. As per the NIH guidelines, this does not constitute Human Subjects research. For the stimulated blood-derived macrophage experiment, co-cultures with synovial fibroblast involved synovial fibroblast lines generated from patients with RA undergoing arthroplasty (HSS IRB 14-033). Patients provided informed consent and all appropriate measures were taken for compliance with the Helsinki Declaration.
Basic information and demography of multiple single-cell datasets.
Overall integration of immune cells from multiple scRNA-seq datasets. Figure S2. Quantification of the performance of all cell type multi-disease tissue integration. Figure S3. Tissue-level macrophage integrative analysis of multiple scRNA-seq datasets. Figure S4. Heterogeneity of shared inflammatory macrophages from multiple tissues. Figure S5. Single-cell differential gene expression analysis of comparing inflammatory macrophages with non-inflammatory macrophages within each individual tissue source. Figure S6. Examination of the CXCL10+ CCL2+ macrophage marker genes in additional diseased cohort studies. Figure S7. Experimental design and quality control of human blood-derived macrophages stimulated by different conditions. Figure S8. Integrative analysis of tissue-level macrophages and human blood-derived macrophages. Figure S9. Assessment of previously reported stimulated macrophage spectrum analysis and alignment of macrophages from different disease tissues to a trajectory.
Cell type marker genes and statistics.
Number of cells per cluster, per disease and tissue for macrophage integration analysis.
Macrophage cluster marker genes and relative statistics.
Hashtag antibodies for the 10X single-cell cell hashing experiment.
Details for the 10X single-cell cell hashing experiment.
Zhang, F., Mears, J.R., Shakib, L. et al. IFN-γ and TNF-α drive a CXCL10+ CCL2+ macrophage phenotype expanded in severe COVID-19 lungs and inflammatory diseases with tissue inflammation. Genome Med 13, 64 (2021). https://doi.org/10.1186/s13073-021-00881-3
Single-cell transcriptomics
Single-cell multi-disease tissue integration
Inflammatory diseases
Macrophage stimulation
Macrophage heterogeneity
If you want to find the hidden secrets of the universe, you must think in terms of energy, frequency, and vibration. This famous quote attributed to Tesla could not be closer to the truth. An oscillation is a periodic motion that repeats itself in cycles, such as a wave. From de Broglie's hypothesis, we learned that all matter has both particle and wave properties. Oscillations are among the most important phenomena in physics, as they are used to describe the nature of particles in quantum mechanics. They are also essential for understanding how society works in the 21st century: electronic devices, the internet, TV signals, communication systems, and medical imaging all rely on electromagnetic waves. Now that we know how important oscillations are, let's learn more about them and their properties.
Defining Oscillations
Oscillatory motion is a movement that repeats itself. So, an oscillation is a back-and-forth motion about an equilibrium position. An equilibrium position is a location where the net force acting on the system is zero. A vibrating string of a guitar is an example of an oscillation.
A guitar string oscillates, JAR (CC BY 2.0)
Period and Frequency of Oscillations
The frequency is defined as the inverse of the period, so a large period implies a small frequency.
$$f=\frac1T$$
Where \(f\) is the frequency in hertz, \(\mathrm{Hz}\), and \(T\) is the period in seconds, \(\mathrm{s}\).
The period is the time required to complete one oscillation cycle. The period of an oscillation cycle is related to the angular frequency of the object's motion. The expression for the angular frequency will depend on the type of object that is oscillating. The equation that relates the angular frequency denoted by \(\omega\) to the frequency denoted by \(f\) is
$$\omega=2\pi f.$$
Substituting \(\dfrac{1}{T}\) for \(f\) and rearranging for \(T\), we obtain
$$T=\frac{2\pi}\omega.$$
Where \(\omega\) is the angular frequency in radians per second, \(\frac{\mathrm{rad}}{\mathrm s}\). This expression makes sense: an object with a large angular frequency takes much less time to complete one oscillation cycle.
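To make these relations concrete, here is a minimal Python sketch (the numbers are just illustrative assumptions) that converts between period, frequency, and angular frequency:

```python
import math

def frequency(period):
    """Frequency f = 1/T, in hertz, for a period T in seconds."""
    return 1.0 / period

def angular_frequency(freq):
    """Angular frequency omega = 2*pi*f, in rad/s."""
    return 2.0 * math.pi * freq

def period_from_angular_frequency(omega):
    """Period T = 2*pi/omega, in seconds."""
    return 2.0 * math.pi / omega

T = 0.5                       # assumed period of 0.5 s
f = frequency(T)              # 2.0 Hz
w = angular_frequency(f)      # about 12.57 rad/s
print(f, w, period_from_angular_frequency(w))  # the last value recovers T = 0.5 s
```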
Harmonic Oscillators
A harmonic oscillation is a type of oscillation in which the net force acting on the system is a restoring force. A restoring force is a force acting against the displacement in order to try and bring the system back to equilibrium. An example of this is Hooke's Law given by
$$F_s=ma_x=-k\Delta x,$$
where \(m\) is the mass of the object at the end of the spring in kilograms, \(\mathrm{kg}\), \(a_x\) is the acceleration of the object on the \(\text{x-axis}\) in meters per second squared, \(\frac{\mathrm m}{\mathrm s^2}\), \(k\) is the spring constant that measures the stiffness of the spring in newtons per meter, \(\frac{\mathrm{N}}{\mathrm m}\), and \(\Delta x\) is the displacement in meters, \(\mathrm{m}\).
If this is the only force acting on the system, the system is called a simple harmonic oscillator. This is one of the simplest cases, as the name suggests.
Most oscillations occur in air or other media, where there is some force proportional to the system's velocity, such as air resistance or friction. These act as damping forces. The equation for the damping force is
$$F_{damping}=-cv,$$
where \(c\) is a damping constant in kilograms per second, \(\frac{\mathrm{kg}}{\mathrm s}\), and \(v\) is the velocity in meters per second, \(\frac{\mathrm{m}}{\mathrm s}\).
As a consequence, part of the system's energy is dissipated in overcoming this damping force, so the amplitude of the oscillation decreases until it reaches zero. These types of harmonic oscillators are called damped oscillators. We can write Newton's Second Law for the case where there is a restoring force and a damping force acting on the system,
$$ma=-cv-kx.$$
Writing the above expression as a differential equation, we obtain
$$m\frac{\operatorname d^2x}{\operatorname dt^2}+c\frac{\operatorname dx}{\operatorname dt}+kx=0.$$
The solution to the above equation is an exponential function. The damping term will exponentially dissipate the oscillations until the system decays to rest.
\[x=A_0e^{-\gamma t}\cos\left(\omega t+\phi\right),\] where \(\gamma=\frac c{2m}\), so that
$$x=A_0e^{-\frac c{2m}t}\cos\left(\omega t+\phi\right).$$
We can prove this is a solution by differentiating it and substituting it into the differential equation:
$$\begin{aligned}\frac{\mathrm dx}{\mathrm dt}&=-A_0\omega e^{-\frac c{2m}t}\sin(\omega t+\phi)-A_0\frac c{2m}e^{-\frac c{2m}t}\cos(\omega t+\phi)\\\frac{\mathrm d^2x}{\mathrm dt^2}&=-A_0\omega^2e^{-\frac c{2m}t}\cos(\omega t+\phi)+A_0\omega\frac cme^{-\frac c{2m}t}\sin(\omega t+\phi)+A_0\frac{c^2}{4m^2}e^{-\frac c{2m}t}\cos(\omega t+\phi)\end{aligned}$$
Now we can go back to the differential equation and prove that we found a solution for it.
$$m\frac{\operatorname d^2x}{\operatorname dt^2}+c\frac{\operatorname dx}{\operatorname dt}+kx=0$$
$$\frac{A_0c^2}{4m}e^{-\frac{ct}{2m}}\cos\left(\omega t+\phi\right)+A_0c\omega e^{-\frac{ct}{2m}}\sin\left(\omega t+\phi\right)-A_0\omega^2me^{-\frac{ct}{2m}}\cos\left(\omega t+\phi\right)-\frac{A_0c^2}{2m}e^{-\frac{ct}{2m}}\cos\left(\omega t+\phi\right)-A_0c\omega e^{-\frac{ct}{2m}}\sin\left(\omega t+\phi\right)+A_0ke^{-\frac{ct}{2m}}\cos\left(\omega t+\phi\right)=0$$
The sine terms cancel, and dividing the remaining terms by \(A_0me^{-\frac{ct}{2m}}\cos\left(\omega t+\phi\right)\) gives
$$-\frac{c^2}{4m^2}-\omega^2+\frac km=0$$
$$\omega=\sqrt{\frac km-\frac{c^2}{4m^2}}.$$
Damped oscillators that still oscillate, with an amplitude that decreases with time, are called underdamped oscillators, while those that do not oscillate and return to the equilibrium position without crossing it are called overdamped oscillators. The boundary between underdamping and overdamping is called critical damping. To confirm that a damped oscillator is critically damped, we verify that the damping coefficient \(\gamma\) is equal to the system's natural angular frequency \(\omega_0\). The damping coefficient \(\gamma\) can be determined with the following equation:
$$\gamma=\frac c{2m},$$
where \(c\) is a damping constant measured in units of kilograms per second, \(\frac{\mathrm{kg}}{\mathrm s}\), and \(m\) is the system's mass in kilograms, \(\mathrm{kg}\).
The angular frequency for the damped oscillator can be defined in terms of the damping coefficient and the natural angular frequency.
$$\begin{aligned}\omega&=\sqrt{\frac km-\frac{c^2}{4m^2}}\\\omega&=\sqrt{\omega_0^2-\gamma^2}\end{aligned}$$
These three cases can be summarized as follows (and are illustrated numerically in the sketch after the list):
Underdamping: \(\omega_0>\gamma\)
Critical damping: \(\omega_0=\gamma\)
Overdamping: \(\omega_0<\gamma\)
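As a quick numerical illustration of these cases, the sketch below (with made-up values for \(m\), \(c\), and \(k\)) computes \(\gamma=\frac c{2m}\) and \(\omega_0=\sqrt{k/m}\) and reports the damping regime:

```python
import math

def damping_regime(m, c, k):
    """Classify a damped mass-spring system from its mass m (kg),
    damping constant c (kg/s) and spring constant k (N/m)."""
    gamma = c / (2.0 * m)          # damping coefficient, 1/s
    omega0 = math.sqrt(k / m)      # natural angular frequency, rad/s
    if gamma < omega0:
        # underdamped: oscillates with angular frequency sqrt(omega0^2 - gamma^2)
        omega = math.sqrt(omega0**2 - gamma**2)
        return "underdamped", omega
    elif math.isclose(gamma, omega0):
        return "critically damped", 0.0
    else:
        return "overdamped", 0.0

print(damping_regime(m=1.0, c=0.4, k=4.0))   # ('underdamped', ~1.99 rad/s)
print(damping_regime(m=1.0, c=4.0, k=4.0))   # ('critically damped', 0.0)
print(damping_regime(m=1.0, c=8.0, k=4.0))   # ('overdamped', 0.0)
```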
There is also another type of oscillator, called a forced oscillator. In these, the oscillations are caused by an external periodic force. If the frequency of this force is equal to the system's natural frequency, this causes a peak in the amplitude of oscillation. The natural frequency is the frequency at which an object will oscillate when it is displaced out of equilibrium.
Oscillations in a Spring-Mass System
We will consider the simplest case of Simple Harmonic Motion to understand oscillations in a spring-mass system. For a spring, we already know the equation for Newton's second law:
$$F_s=ma_x=-k\Delta x.$$
Rearranging for the acceleration we obtain
$$a_x=-\frac km\Delta x.$$
So, comparing the equation for a spring with the general equation for harmonic motion \(a=-\omega_0^2x\), we can derive the natural angular frequency \(\omega_0\) for a spring, which is given by the equation
$$\omega_0^2=\frac km,$$
expressed more explicitly as
$$\omega_0=\sqrt{\frac km}.$$
Where \(m\) is the mass of the object at the end of the spring in kilograms, \(\mathrm{kg}\), and \(k\) is the spring constant that measures the stiffness of the spring in newtons per meter, \(\frac{\mathrm N}{\mathrm m}\).
The formula for the time period of an oscillating spring-mass system is
$$T_s=2\pi\sqrt{\frac mk}.$$
What is the period of oscillation for a spring-mass system with a mass of \(4\;\mathrm{kg}\) and a spring constant of \(1\;{\textstyle\frac{\mathrm N}{\mathrm m}}\)?
$$T_s=2\pi\sqrt{\frac{4\;\mathrm{kg}}{1\;{\displaystyle\frac{\mathrm N}{\mathrm m}}}}$$
$$T_s=2\pi\sqrt{\frac{4\;\mathrm{kg}}{1\;\frac{\mathrm{kg}\cdot\mathrm m/\mathrm s^2}{\mathrm m}}}=2\pi\sqrt{4\;\mathrm s^2}$$
$$T_s=4\pi\;\mathrm s$$
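If you would rather let the computer do the arithmetic, here is a minimal sketch that reproduces the example above (the mass and spring constant are the assumed values from the example):

```python
import math

def spring_mass_period(m, k):
    """Period T = 2*pi*sqrt(m/k) of a spring-mass oscillator,
    with m in kilograms and k in newtons per metre."""
    return 2.0 * math.pi * math.sqrt(m / k)

T = spring_mass_period(m=4.0, k=1.0)
print(T)                 # 12.566... seconds
print(4 * math.pi)       # same value, i.e. T = 4*pi seconds
```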
Graphing Oscillations
If we plot the displacement as a function of time for an object undergoing simple harmonic motion, we can identify the period as the time between two consecutive peaks, or between any two analogous points with the same phase. To locate the amplitude, we look at the maximum displacement.
Displacement vs Time for a system in simple harmonic motion. From this graph, we can identify the amplitude and period of oscillation, Yapparina, Wikimedia Commons (CC0 1.0).
We can also graph the displacement as a function of time for damped oscillators, to visually understand and compare their characteristics. Critical damping provides the quickest return of the amplitude to zero. Overdamped systems return to the equilibrium position more slowly and without oscillating, while underdamped systems keep oscillating with an amplitude that takes longer to decay to zero.
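If you want to reproduce such a comparison plot yourself, the sketch below numerically integrates \(m\ddot x+c\dot x+kx=0\) for three assumed damping constants (underdamped, critically damped, and overdamped) and plots the displacement against time; it relies on SciPy and Matplotlib.

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

m, k = 1.0, 4.0                    # assumed mass (kg) and spring constant (N/m)
cases = {"underdamped": 0.4,       # c < 2*sqrt(k*m)
         "critically damped": 4.0, # c = 2*sqrt(k*m)
         "overdamped": 10.0}       # c > 2*sqrt(k*m)

def rhs(t, y, c):
    """State y = [x, v]; returns [dx/dt, dv/dt] for m*x'' + c*x' + k*x = 0."""
    x, v = y
    return [v, -(c * v + k * x) / m]

t_eval = np.linspace(0.0, 10.0, 500)
for label, c in cases.items():
    # start from x = 1 m, at rest
    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], t_eval=t_eval, args=(c,))
    plt.plot(sol.t, sol.y[0], label=label)

plt.xlabel("time (s)")
plt.ylabel("displacement (m)")
plt.legend()
plt.show()
```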
Oscillations - Key takeaways
An oscillation is a back-and-forth motion about an equilibrium position. An equilibrium position is a location where the net force acting on the system is zero.
A harmonic oscillation is a type of oscillation where the net force acting on the system is a restoring force. A restoring force is a force acting against the displacement in order to try and bring the system back to equilibrium.
The period is the time required to complete one oscillation cycle. The frequency is defined as the reciprocal of period, \(f=\frac1T\).
If the restoring force is the only force acting on the system, the system is called a simple harmonic oscillator. A damping force may also act on an oscillating system. It is some type of force proportional to the system's velocity, such as air resistance or friction forces, \(F_{damping}=-cv\).
For damped oscillators, part of the system's energy is dissipated in overcoming the damping force, so the amplitude of the oscillation decreases until it reaches zero. Damped oscillators that still oscillate, with an amplitude that decreases with time, are called underdamped oscillators. Overdamped oscillators are the ones that do not oscillate and return to the equilibrium position without crossing it.
The boundary limit between an underdamped and overdamped oscillator is called critical damping. To confirm the damped oscillator is undergoing critical damping we verify that the damping coefficient \(\gamma=\frac c{2m}\) is equal to the system's natural angular frequency \(\omega_0\). The three cases are underdamping (\(\omega_0>\gamma\)), critical damping (\(\omega_0=\gamma\)), and overdamping (\(\omega_0<\gamma\)).
In forced oscillators, the oscillations are caused by an external periodic force. If the frequency of this force is equal to the system's natural frequency, this causes a peak in the amplitude of oscillation.
Frequently Asked Questions about Oscillations
How to calculate time for 1 oscillation?
The period is the time taken for one oscillation cycle; it is related to the angular frequency of the object's motion by \(T=\frac{2\pi}\omega\). The expression for the angular frequency depends on the type of object that is undergoing the simple harmonic motion.
What is an oscillator?
An oscillator is an object that moves back and forth about an equilibrium position.
What does oscillating mean?
An oscillation is a back-and-forth movement about an equilibrium position.
How to find frequency of oscillation from graph?
To find the frequency we first need to get the period of the cycle. To do so we find the time it takes to complete one oscillation cycle. This can be done by looking at the time between two consecutive peaks or any two analogous points. After we find the period, we take its inverse to determine the frequency.
How to find amplitude of oscillation from graph?
To find the amplitude, we look for the peak value of the displacement.
Final Oscillations Quiz
If the only force acting on an oscillating system is a restoring force that varies linearly with displacement from the equilibrium position, we have:
A simple harmonic oscillator.
A damping force is proportional to a system's:
In damped oscillators:
The amplitude decreases with time until it reaches zero.
The oscillators that do not oscillate and immediately decay to equilibrium position are called:
Overdamped oscillators.
The damped oscillators with oscillations and an amplitude that decreases with time slowly are called:
To confirm the damped oscillator is undergoing critical damping we verify that the damping coefficient \(\gamma\):
Is equal to the system's natural angular frequency.
If the damping coefficient \(\gamma\) is greater than the system's angular frequency \(\omega\):
We have an overdamped oscillator.
If the damping coefficient \(\gamma\) is smaller than the system's angular frequency \(\omega_0\):
We have an underdamped oscillator.
Oscillators in which the oscillations are caused by an external force that is a periodic force are called:
Forced oscillators.
The greater the mass of a system:
The greater the period of oscillation.
... takes more time for the amplitude to reach zero.
Overdamping.
... provides the quickest way for the amplitude to reach zero
Critical damping.
Which of the following are harmonic oscillators?
Simple harmonic oscillator.
Which of the following is an example of a restoring force?
Gravitational Law.
A spring with a large spring coefficient will yield:
A small period of oscillation.
Our commitment to inclusivity
Dartmouth's capacity to advance its dual mission of education and research depends upon the full diversity and inclusivity of this community. We must increase diversity among our faculty, students, and staff. As we do so, we must also create a community in which every individual, regardless of gender, gender identity, sexual orientation, race, ethnicity, socio-economic status, disability, nationality, political or religious views, or position within the institution, is respected. On this close-knit and intimate campus, we must ensure that every person knows that they are a valued member of our community.
Math 72: Topics in Geometry: An Introduction to Lie groups
Instructor: Craig Sutton
Created by Sophus Lie (1842-1899) for the purpose of developing a "Galois theory" of differential equations, Lie groups are a mathematically rigorous realization of our intuitive notion of "continuous transformation groups" and play a fundamental role in the study of geometry and physics. For example, the exceptional Lie group $E_8$ appears in string theory and the associated "$E_8$-lattice" was used by Freedman to construct an example of a four-dimensional topological manifold that does not admit a smooth structure. Technically speaking, a Lie group is a group $G$ equipped with the structure of a smooth manifold with respect to which the group operations (i.e., multiplication and inversion) are smooth. Our exploration of Lie groups will begin with the study of "matrix groups" (e.g., $\operatorname{SO}(n)$, $\operatorname{SU}(n)$ $\operatorname{Sp}(n)$ and $\operatorname{SL}_n(\mathbb{R})$). By focusing on this concrete class of examples, we will build our intuition and encounter many of the interesting themes that arise in the general theory of Lie groups.
Audience and prerequisites: This course will be of interest to students who are curious about geometry, (theoretical) physics, robotics, computer animation and/or applications of geometric techniques. I will assume you have a working knowledge of multivariable calculus, (abstract) linear algebra and abstract algebra, or possess the willingness to fill in any gaps on the fly.
Math 118: Topics in Combinatorics: Language Theory and Applications to Enumerative Combinatorics
Instructor: Jay Pantone
The topic of this course is Language Theory and Applications to Enumerative Combinatorics. Among the topics we will cover are: finite state automata (deterministic and non-deterministic), regular languages, pushdown automata, context free grammars, Turing machines, finite state transducers, the pumping lemmas, and decidability. We will see many examples of combinatorial families that are in bijection with the objects above, allowing for automatic enumeration through the theory of generating functions.
Prerequisites: An undergraduate combinatorics course (preferably, some familiarity with generating functions). Ask the instructor if in doubt.
Math 105: Topics in Number Theory
Instructor: Naomi Tanabe
Math 108: Topics in Combinatorics: Permutations, partitions and lattice paths
Instructor: Sergi Elizalde
This course focuses on three important objects in combinatorics: permutations, partitions and lattice paths, as well as the connections between them. Although many of the topics that we will cover are well-known results in enumerative combinatorics, we will also get a glimpse of current research.
This course is aimed at graduate students and strong undergraduate students who have taken some combinatorics course before.
Math 115: Elliptic Curves
Instructor: John Voight
Elliptic curves are almost too beautiful to exist: somehow, they manage to embody many different kinds of mathematical objects simultaneously. On the one hand, the set of points of an elliptic curve naturally forms a group, so you can add two points on the curve to make a third. This group law is provided by algebraic equations, but can also be described geometrically: three points sum to zero when they lie on a line. From a complex analytic point of view, an elliptic curve is just a torus. And on top of all of this, elliptic curves over finite fields are ideal for use in cryptography.
This course will be a graduate-level introduction to elliptic curves in which we will cover the fundamentals of the subject. Topics may include: rudiments of algebraic curves, Weierstrass equations, isogenies and endomorphisms, elliptic curves over finite fields and applications to cryptography, elliptic curves over the complex numbers and complex multiplication, the Mordell-Weil theorem and applications to Diophantine equations, and other topics as time permits.
The prerequisites for the course are one year of abstract algebra (groups, rings, fields), preferably at the graduate level, and some complex analysis. The course may be suitable for first-year graduate students--please see the instructor.
Math 118: Combinatorics
This course will start by covering the symbolic method of Flajolet and Sedgewick and proceed to the asymptotic analysis of generating functions. After that we'll cover the theory of languages (e.g., regular languages, context-free grammars, etc). If there is time, then we'll finish the semester with a series of short special topics, including pattern-avoiding permutations.
Prerequisites: An undergraduate combinatorics course (in particular, familiarity with generating functions). Ask the instructor if in doubt.
Math 123: Automorphic forms, representations and C*-algebras
Instructor: Pierre Clare
The representation theory of Lie groups is a vast subject, with ramifications ranging from harmonic analysis to number theory. The main objective of this course will be to explore some of its ties to the classical theory of automorphic forms. Along the way, we will discuss standard Lie-theoretic methods and the approach of non-commutative geometry to representation theory via group C*-algebras.
Prerequisites: a good acquaintance with linear and general algebra (as provided for instance in Math 71) is necessary. Some exposure to complex and functional analysis is preferable. No prior knowledge of representation theory or C*-algebras will be assumed. Contact the instructor for more details.
Math 100—COSC 49/149
An Introduction to Mathematics Beyond Calculus: Game Theory
Instructor: Peter Winkler
Game Theory is one of the great accomplishments of modern mathematics, with myriad applications to economics, computer science, political science, decision theory, warfare, and more. This course will tackle the key results in each of the main branches of game theory, using a variety of mathematical techniques.
We will begin with the simplest and most famous game model: two-player, simultaneous, one-move games such as the "prisoners' dilemma," and the notion of Nash equilibrium. (Sadly, John Nash, a "beautiful mind," recently lost his life in a car crash on the NJ Turnpike.) From this we will branch out to multiple players, repeated games, auctions, voting, and mechanism design.
Prerequisites: Mathematical background of a senior mathematics major or a beginning graduate student in mathematics or theoretical computer science; including a course in probability (e.g., MATH 20 or Math 60). If in doubt, please see the instructor.
Math 112: Geometric Group Theory
Instructor: Bjoern Muetzel
By associating a group to one of its Cayley graphs, the properties of a group can be studied from a geometric point of view. The group itself acts on the Cayley graph via isometries which is reflected in the symmetries of the graph. The inherent beauty of a group can thus be visualized in its Cayley graph making these graphs a fascinating object of study.
Geometric group theory examines the connection between geometric and algebraic invariants of a group. In order to obtain interesting invariants one usually restricts oneself to finitely generated groups and takes invariants from large scale geometry. Geometric group theory closely interacts with low-dimensional topology, hyperbolic geometry and differential geometry and has numerous applications to problems in classical fields, like combinatorial group theory, graph theory and differential topology.
Topics: Graphs and trees, Cayley graphs, free groups, hyperbolic groups, large scale geometry.
Prerequisites: Math 71 and 101 and a solid background in topology (point set topology, fundamental group, covering space theory). This course aims at second year graduate students, but will be accessible to other students with the appropriate background.
Math 125: Geometry of Discrete Groups
This course will provide an introduction to the geometry of discrete groups. Specifically, we will study the action of discrete subgroups (lattices) inside groups of isometries of hyperbolic spaces, especially dimensions 2 and 3. An interesting class of such groups arises from arithmetic constructions, and the corresponding quotients give rise to orbifolds and manifolds of interest in number theory, geometry, spectral theory, combinatorics, and topology.
Math 116: Applied Mathematics: Mathematical Modeling in Biology
Instructor: Olivia Prosper
Biology presents complex problems requiring quantitative approaches to tackle them. This term, Math 116 will serve as an introduction to mathematical modeling in biology, with an emphasis in modeling disease dynamics. You will learn to construct, analyze, and simulate models and interpret your results within their biological context. The course will focus on two or three of the following topics: classical techniques for analyzing nonlinear ordinary differential equations, metapopulation models, delay-differential equations, age-structured and time-since infection (PDE) models, stochastic models, optimal control theory and its application to biological problems, and model validation. By the end of the term, you will have experience posing biological problems and using mathematics to elucidate them.
Math 100: Topics in Probability: Large Networks and Graph Limits
Large networks are everywhere these days and in this course we will study a whole new branch of mathematics designed to help us understand these beasts. Our text (and about 250 research papers) constitute the literature of the field. The idea is that by introducing limit structures, one can state and prove theorems about graphical structures so large that they can be appreciated only via statistical tests.
Prerequisites: A solid background in mathematics, including calculus and at least one course in probability. Exposure to graph theory or measure theory will be handy but won't be assumed. Graduate students and advanced undergraduates studying mathematics or the theory of computing will most likely have adequate mathematical sophistication, but fair warning: this stuff is at the frontier of research; it isn't easy!
Math 105: Algebraic number theory
This course will be a graduate-level introduction to algebraic number theory, in which we will cover the fundamentals of the subject. Topics may include: rings of integers, Dedekind domains, factorization of prime ideals, Galois theory in number fields, geometry of numbers and Minkowski's theorem, finiteness of the class number, Dirichlet's unit theorem, selected topics from analytic number theory, quadratic and cyclotomic fields, localization and local rings, valuations (i.e. p-adic) and completions, an introduction to class field theory, application to Diophantine equations, and other topics as time permits.
The prerequisite for the course is one year of abstract algebra (groups, rings, fields) at the advanced undergraduate or graduate level.
Math 96: Mathematical Finance II
Instructor: Sutton
In the study of ordinary differential equations the parameters and coefficients of the equations are assumed to be deterministic. In contrast, a stochastic differential equation is a differential equation in which one or more of its terms is governed by a stochastic (or random) process. As one might imagine, stochastic differential equations (SDEs) arise naturally in the study of finance, engineering, economics, physics and mathematics. This term Math 96 will serve as an introduction to the theory and applications of SDEs with an eye towards finance. Topics may include some of the following:
Probability spaces, Stochastic processes & Brownian Motion
Ito Integrals, the Ito formula and the Martingale Representation Theorem
Existence & Uniqueness of Solutions
Basic Properties of Diffusions
Feynman-Kac Theorem & the Girsanov Theorem
Optimal Stopping Time
Applications to financial derivatives on equities and fixed-income securities
Applications to Boundary-Value Problems & Control Theory
1. Math 86 or Permission of the instructor
2. Some knowledge of analysis at the level of Math 63/35 or 103 will be useful
Math 112: Geometry — The Structure and Representation Theory of Compact Lie Groups
The theory of Lie groups has its roots in Sophus Lie's quest to develop a Galois theory for differential equations and Klein's "Erlanger Programm" which places symmetry groups at the center of the study of geometry. During the ensuing 120 plus years, Lie groups have become an indispensable part of the study of modern geometry and physics. In this course we will focus on the structure and representation theory of compact Lie groups with an eye towards applications in geometry. Topics will include some of the following:
Lie Groups and Lie Algebra Basics: Lie groups; the adjoint representation, the Lie bracket and Lie algebras; left-invariant vector fields, one-parameter subgroups and the exponential map; logarithmic coordinates and Dynkin's formula; the Killing form.
Structure of Compact Lie groups: the theorem on maximal tori; roots, root systems, Dynkin diagrams and their classifications; the co-root, central and integral lattices; the center & fundamental group of a compact Lie group; Structure.
Representation theory of Compact Lie groups: Schur's lemma, the Peter-Weyl Theorem, induced representations, Weyl's character and dimension formulae, representations of the classical Lie groups.
Introduction to Symmetric Spaces
Prerequisites: familiarity/comfort with manifolds (e.g. Math 124) & a solid background in linear algebra (e.g., Math 24) and groups (e.g., Math 71).
Math 116: Applied Mathematics — Great Papers in Numerical Analysis (Alex Barnett)
Prereqs: programming (eg CS1 or Math26), Math 63, Math 23, Math 22/24. Graduate analysis (73/103) will help.
We will overview key background in numerical analysis and computation, and then students, with instructor help, will read and present from a set of a dozen or so classic influential papers that have shaped modern numerical methods. The course will include coding up some of the algorithms discussed.
Math 128: Current Problems in Symmetric Functions (Rosa Orellana)
Prerequisite: An algebra course ( e.g., M71 or M101) and a basic combinatorics course (M28) as well as a desire to learn and solve problems. If you have not had a course in combinatorics and would like to take the course, talk to Rosa.
The first 2–3 weeks will be a crash course on symmetric functions. Then we will spend the rest of the term surveying open problems related to Kostka numbers, Littlewood-Richardson coefficients and Kronecker coefficients. We will also survey the literature for recent results in the area.
Students will learn how to use SAGE to generate data and look for patterns in the data. We will make conjectures and work on them throughout the term and beyond if there is enough interest.
Homework: Students will be expected to write simple programs and read articles and present them to the class. There will be no exams.
Math 123: Geometry and Quantization (Erik Van Erp)
I will start with explaining the formalism of Hamiltonian mechanics. First we develop it in Euclidean space, and then generalize to the setting of symplectic manifolds. We study Hamiltonian flow, Poisson brackets, moment maps, Darboux's theorem, and perhaps Noether's theorem on the relation between symmetry and conservation laws. We then discuss the relation between Hamiltonian mechanics and quantum mechanics, and study mathematically precise ways in which classical mechanics is a limit of quantum mechanics (the "correspondence principle"). We then cover various topics from geometric quantization theory, and/or contact topology, depending on time and interest.
SPECIAL TOPICS COURSE: Homogeneous Ricci flow and solitons
Instructor: Jorge Lauret (visiting from University of Cordoba, Argentina). Preparation lectures: Carolyn Gordon
Prerequisite: Differential topology. Students should have had some exposure to Riemannian geometry. However, if you are interested in the course and have not had a course in Riemannian geometry, we can include an introduction in January. Please discuss your background in advance with Carolyn Gordon.
The Ricci flow on Riemannian manifolds has been a subject of intense activity in recent years, especially since it played a central role in the solution of the Poincare Conjecture. Jorge Lauret has done groundbreaking working on Ricci flow and Ricci solitons in the setting of homogeneous spaces.
Lauret will give twelve 90-minute lectures during the month of February. The first six lectures will address flows on homogeneous spaces, in particular, the Ricci flow. The second half of the lecture series will address homogeneous Ricci solitons; these are metrics that essentially remain stable under the Ricci flow.
In preparation for Lauret's lectures, we will hold regular 65-minute classes in January to address background material on Lie groups and homogeneous spaces. The exact content of the January lectures will depend on the backgrounds of the students.
Math 17: Imaginary numbers are real! Complex numbers are simple! (Doyle)
We will survey the role of complex numbers across the mathematical spectrum, from the central limit theorem of probability, to the distribution of prime numbers, to hyperbolic geometry, to the mathematical apparatus of quantum mechanics.
Math 102: Topics in Geometry (Doyle)
This course will be a general introduction to Fuchsian groups, with emphasis on spectral geometry and arithmetic groups. The subject of Fuchsian groups lies at the intersection between geometry, topology, algebra, number theory, and analysis, so it should be of interest to a wide range of graduate students, including first-year students. The course should also be accessible to undergraduates who have taken Math 63 and 81 (or in exceptional case, 63 and 71).
The primary course text will be Svetlana's Katok's book 'Fuchsian groups'. A good part of the course will be devoted to developing the techniques to compute explicit examples, using Mathematica, Sage, and Magma.
Math 118: Topics in Combinatorics (Elizalde)
Prerequisites: An "advanced" undergraduate combinatorics course
I've chosen topics that I haven't covered in my more recent graduate courses, so they should be mostly new for everyone. Possible topics include sieve methods (Gessel-Viennot formula, involutions), partitions (Euler, Jacobi and Rogers-Ramanujan identities), P-partitions, partially ordered sets, plane partitions, tilings, reduced decompositions of permutations, parking functions, and an introduction to polytopes (Dehn-Somerville equations, associahedron, permutahedron).
Math 125: Quadratic forms and spaces (Shemanske)
Prerequisites: Everyone should have adequate algebra by spring, and while it would be nice to be acquainted with number fields and their (p-adic) completions, the essentials can be picked up with reasonable ease.
In the mid 1930's Witt dramatically changed the perspective by which mathematicians viewed the theory of quadratic forms, from a study of homogeneous polynomials of degree 2 to the study of vector spaces endowed with a symmetric bilinear form. The theory then became one of determining canonical decompositions and of classification of these spaces in terms of arithmetic invariants.
The course will be distinctly algebraic in flavor, and in general we will be interested in the theory over number fields and Dedekind rings, meaning the focus will also be influenced by questions in algebraic number theory.
Math 100: Topics in Probability [Random Walk on a Graph] (Winkler)
Prerequisites: Basic probability (e.g. Math 20 or 60), and some experience with proofs; graph theory or combinatorics will be useful but not necessary. Graduate students at all levels in math and in computer science are welcome, as are advanced undergrad majors.
When a token moves randomly from vertex to vertex via the edges of a graph, it is taking a random walk. This simple process is as useful as it is elegant, with multiple analogies in electrical networks (cf. Doyle and Snell's Random Walks and Electrical Networks , which will be one of our sources) and applications to the theory of computing. There have been remarkable new advances in this now-classical subject and we will spend much time with recent work and unsolved problems.
Math 112: Topics in Geometry [Introduction to Riemannian Geometry] (Sutton)
Manifolds are one way in which mathematicians deal with the concept of "space," and the presence of a Riemannian metric on a manifold provides us with a framework and a set of tools with which we may explore the geometric and topological nature of the space in question. Among the tools available to us, curvature is perhaps the most fundamental. The three primary flavors of curvature are sectional curvature, Ricci curvature and scalar curvature, and each provides us with a local characterization of how much a given space deviates from being Euclidean. After laying the basic foundations of Riemannian geometry, one of our main objectives will be to examine how (global) constraints on curvature influence the topological and geometric structure of the space. In particular, we will prove the topological sphere theorem and---as time and interest permits---we will discuss the differentiable sphere theorem of Brendle and Schoen from 2009.
Topics: connections, Riemannian metrics, volume, curvature and isometries; geodesics, Jacobi fields, the energy functional, first & second variations of energy; spaces of constant curvature; (locally) symmetric spaces; isoperimetric inequalities; spectral geometry; sphere theorems.
Math 126: Topics in Applied Mathematics (Gillman)
Prerequisites: Graduate students at all levels in math and in computer science are welcome, as are advanced undergrad majors who have taken Math 23.
Suggested background: Some coding experience (Matlab, Fortran, or C), Math 46, Math 63
Partial differential equations (PDEs) are essential for the modelling of physical phenomena appearing in a variety of fields from geophysics and fluid dynamics to geometry. In this course, we will study three major topics one should understand when modelling with PDEs. The topics are:
(i) the theory (e.g., existence and uniqueness of solutions);
(ii) when and how solutions can be found analytically;
(iii) classic numerical techniques (e.g., finite difference and finite element methods) and how to determine whether the method is stable and convergent.
In addition, we will discuss the limitations of existing solution techniques in the context of open research questions.
Math 105: Topics in Number Theory (Pomerance)
Prerequisites: A knowledge of elementary number theory and some abstract algebra.
This class has been scheduled for the 10A period.
This course will be an introduction to Diophantine Equations and Diophantine Approximation. The first topic involves determining if a polynomial equation in several variables has any integer or rational solutions, and if it does, how to find them. The second topic refers to the approximation of real numbers with rational numbers. We shall see that the two topics are closely intertwined. Some of the more specific topics will be the Pell equation (quadratic in two variables) and the Thue equation (a homogeneous polynomial in two variables equal to a constant). We shall see how to prove that e and pi are transcendental numbers, and we'll discuss some other pretty constants.
We will start out using the book Lecture notes on Diophantine analysis by Umberto Zannier. This book is available from amazon.com for about $30 including shipping, but I'm not sure how robust the supply is. It would be good to order a copy soon. Partway into the term we'll branch out using some other material, which I'll make available.
There will be some written homework assignments, and students will be expected to occasionally present topics to the class, possibly a homework solution or a new topic. There will be no formal examinations.
Math 109: Topics in Logic (Groszek)
Prerequisites: no prerequisites for graduate students.
The course will be on the general topic of mathematical logic and arithmetic. Depending on the interests of the students, we will begin with a discussion of Godel's Incompleteness Theorem, and then move on to reverse mathematics (using logic to analyze the relative strength of theorems of classical mathematics), nonstandard analysis (calculus with infinitesimals), the unsolvability of Hilbert's Tenth Problem (maybe my favorite choice), or a related topic of interest.
For students who have begun thesis work in another area, attendance will be the major part of the grade. Students who are (possibly) preparing for a logic qual should talk to me about targeted homework assignments.
Math 121: Current problems in algebra (Webb)
Math 125: Reflection and Coxeter groups; Buildings and Classical Groups (Shemanske)
Prerequisites: 101, 111 (suitable for first year students).
Any number theory needed (not much) will be developed.
This is bits of algebra, geometry, topology and a little combinatorics all rolled into one course. The goal is to say something about the theory of buildings, with slightly more interest in their combinatorial/topological structure and number theoretic applications than their original purpose, which was to understand the structure and classify classical p-adic groups (using the building as a representation space), in analogy to semi-simple Lie theory.
Prerequisites: Math 118. If you have had an advanced undergraduate course in combinatorics and are interested, talk to Sergi about your preparation.
The course will cover pattern-avoiding permutations, permutations in dynamical systems, bijections, counting methods, generating trees, and perhaps asymptotic analysis of generating functions, among other topics to be decided.
Math 17: An Introduction to Mathematics Beyond Calculus (Shemanske)
Prerequisite: Math 8, or placement into Math 11.
Details: This year's offering will be "From Calculus to Elliptic Curve Cryptography in ten weeks"
See the web site.
Math 89: Set theory (Weber)
Prerequisite: Math 39 or Math 69 or familiarity with the language of first-order logic and readiness for an upper level math course.
Math 89 satisfies the culminating experience requirement for mathematics majors, and is appropriate for any graduate student who wants to take a course in logic and doesn't yet know a lot of set theory.
We will study the axioms of set theory, what they tell us about the universe of sets, and how we can use the universe of sets to study the universe of mathematics.
Math 112: Geometry (Gordon)
Prerequisite: A course in differential topology, including vector fields and their flows. (The course that used to be called Math 124 and is called Math 102 this fall is ideal.)
Lie groups and Lie algebras are ubiquitous throughout mathematics, e.g. in geometry, number theory, combinatorics, analysis and dynamics. The first part of the course will address the basic notions and results elucidating the Lie group -- Lie algebra correspondence. We will then introduce various types of Lie groups and Lie algebras (semisimple, solvable, nilpotent, compact). There are various directions that we could pursue after that, depending on time and the interests of the class.
Math 122: The Atiyah-Singer index theorem and the heat kernel proof (van Erp)
Prerequisites: the introductory graduate level analysis and topology sequences (103/113, 124/114).
Math 126: Numerical analysis for PDEs and wave scattering (Barnett)
Prerequisite: some programming experience (preferred: Matlab/octave, C, or fortran; esp. the first).
Recommended background: some PDEs (could be at undergrad level, eg Math 46) and real analysis (Math 63 and some graduate-level functional analysis). However the background is flexible: a motivated advanced undergrad or other science/engineering/CS student (undergrad or grad) could pick up enough to learn a lot and do well.
This will cover modern integral-equation-based methods for solving piecewise-constant coefficient PDEs (focusing on those arising in electrostatics and time-harmonic waves), including the theory, analysis, and coding experience required for proficiency. Much content will overlap with previous Math 116's offered by Barnett.
We start with an introduction to numerical analysis, conditioning and stability, numerical interpolation and integration, followed by potential theory, Laplace's equation and Helmholtz equation. The latter leads to scattering problems, a flavor of fast algorithms (FFT, fast multipole), and final projects. Homework will be dominated by coding and implementing the algorithms, but also include some proofs. Projects can be coding-based or a deeper study of numerical analysis results.
Math 102: Foundations of Smooth Manifolds (Sutton)
Prerequisites: Linear algebra (Math 24), point-set topology (math 54) and multivariable analysis (Math 73). It will also help to be familiar with covering spaces and the fundamental group.
Manifolds provide mathematicians and other scientists with a way of grappling with the concept of "space." The space occupied by an object. The space that we inhabit. Or, perhaps, the space of configurations of a mechanical system. While manifolds are center stage in the study of geometry and topology, they also provide an appropriate framework in which to explore aspects of mathematical physics, dynamics, control theory, medical imaging, econometrics and robotics, to name just a few. This course will serve as an introduction to the basics of manifold theory and lay the foundations needed to explore problems in which "space" plays a fundamental role.
Topics will include some of the following: manifolds & tangent bundles; vector fields & vector bundles; smooth maps and the inverse function theorem; cotangent bundle & differential forms; densities, integration on manifolds and the generalized theorem of Stokes; Whitney's imbedding theorem; Lie groups, group actions & homogeneous spaces; Tensor Fields & Riemannian metrics; Differential forms & orientation; integral curves & the existence and uniqueness theorem for ODEs; Distributions and Frobenius' Theorem.
Math 105: Primes and polynomials (Pomerance)
Prerequisites: An undergrad number theory course, as well as some abstract algebra. I'll be happy to try and fill in gaps for motivated students.
Description/Grading
This course will develop elementary estimates on the distribution of prime numbers, including elementary sieve methods. A recurrent theme will be the prime factorization of integer-polynomial values. We will follow the first 3 chapters of Pollack's "Not always buried deep", as well as Chapter 6, and also Chapter 3 of Montgomery & Vaughan's "Multiplicative number theory". I recommend acquiring Pollack's book, the second book is optional.
Grading: There will be weekly written assignments, plus some class presentations. There will be no formal exams. Enrolled graduate students who have been admitted to candidacy will be excused from the written assignments.
Math 108: Combinatorial Representation Theory (Orellana)
Prerequisites: Linear algebra and algebra (Math 31, 71, or 101). No prior knowledge of combinatorics or representation theory is expected.
This is an introduction to algebraic combinatorics, specifically symmetric function theory and its connections to representation theory, algebraic geometry, and tableaux combinatorics.
Math 17: Beyond Calculus
W11 topic: Random Walks and Electric Networks
Math 102: Topics in Geometry
W11 topic: The soccer ball and the solution of equations of the fifth degree
Math 17 (An Introduction to Mathematics Beyond Calculus)
W10 topic: Applications of algebraic structures in number theory and geometry
W08 topic: The Isoperimetric Problem
(Chaos!)
W07 topic: The Geometry of the Fourth Dimension
Math 112 (Introduction to Riemannian Geometry)
Math 116 (Boundary Methods and Wave Asymptotics)
Can I make a black hole with one or two atoms?
So I was watching something that said
if we compressed Earth to the size of a peanut, we would get a black hole;
if we compressed Mount Everest into a few nanometers, we would get a black hole.
Can I make a black hole with one or two atoms? If yes, would it become larger and turn into normal-sized black hole?
Bradley
$\begingroup$ Similar question here: astronomy.stackexchange.com/questions/12466/… At the mass of a couple atoms you run into the quantum gravity problem, which is unsolved. $\endgroup$
– userLTK
$\begingroup$ This is a meaningless and poor question. The dynamics of atoms is described by quantum mechanics, while black holes are the prediction of a classical (non-quantum) theory. $\endgroup$
– Walter
$\begingroup$ @Walter The fact that we haven't developed the theory needed to answer a question does not make that question "meaningless" or "poor". Indeed, every advance that has ever been made in theory has been made because somebody asked a question which the then current theory wasn't capable of answering. $\endgroup$
$\begingroup$ @DavidRicherby I disagree respectfully. The correct answer to this question (other than "Yes and No" :-) ) is that it's not a well-formed question. $\endgroup$
– Carl Witthoft
$\begingroup$ @CarlWitthoft Saying that it's not a well-formed question is fine. My objection was to saying that it's meaningless and poor just because we don't have a theory of quantum gravity. $\endgroup$
There are two answers: yes and no.
Yes because every mass M has a Schwarzschild radius given by $\frac{2GM}{c^2}$, where G is the gravitational constant (about $6.7\times10^{-11}\ \mathrm{m^3\,kg^{-1}\,s^{-2}}$) and c is the speed of light (about $3\times10^{8}\ \mathrm{m/s}$). If something is compressed to within its Schwarzschild radius it becomes a black hole. You can do this for an atom. An atom of carbon (for example) has a mass of $2\times10^{-26}\ \mathrm{kg}$, so its Schwarzschild radius is $$ \frac{2\times (6.7\times10^{-11})\times (2\times10^{-26})}{(3\times 10^{8})^2}\approx 3\times10^{-53}\ \mathrm{metres}$$
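A quick sanity check of that number, as a minimal Python sketch (constants rounded to the same precision used above):

```python
# Rough check of the Schwarzschild radius quoted above (values rounded).
G = 6.7e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 3.0e8          # speed of light, m/s
m_carbon = 2e-26   # mass of one carbon atom, kg

r_s = 2 * G * m_carbon / c**2
print(f"Schwarzschild radius of a carbon atom: {r_s:.1e} m")
# -> about 3e-53 m, in line with the figure above
```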
So the actual answer is no as there is no feasible way of compressing an atom to this size. Of significance here is the fact that this size is so small that objects this small don't behave like small balls but as quantum mechanical objects. But a black hole is a gravitational object modeled by General Relativity, and Relativity and quantum mechanics don't work well together. In other words, we don't have a scientific model for describing how an atomic mass black hole would behave.
Stephen Hawking has shown that small black holes are unstable, so an atomic mass black hole would be very unstable, evaporating in a very short time.
James K
$\begingroup$ Isn't there a bit of a transitive property that applies here? In a "normal" black hole, isn't everything so compressed that the even the atoms hit the Schwarzschild radius? $\endgroup$
– David says Reinstate Monica
$\begingroup$ Hasn't Stephen Hawking in fact proposed a mechanism by which small black holes would be unstable and evaporate? One can prove that this mechanism is consistent with the theory, but that doesn't prove that it actually happens. $\endgroup$
$\begingroup$ @DavidRicherby Yes, and Einstein has proposed a mechanism by which masses are attracted to each other. It's all theory. No-one has directly observed a black hole. But Black holes and Hawking radiation are generally accepted. $\endgroup$
– James K
$\begingroup$ Since that value is roughly $10^{-18}$ of the Planck Length, that pretty much rules out the "yes" part $\endgroup$
I think the answer is No.
If we try and compress these atoms, we end up (eventually) with the nuclei close enough to be forced to fuse. Fusion would mean we've formed a single nucleus.
This stage is unavoidable.
So your two-atom question now reduces to whether a single nucleus can form a black hole.
A nucleus is a kind of complex quark-gluon mix and if we compress it more we end up with a very dense version of that which we basically don't have physics to model properly.
It's extremely unlikely that conventional general relativity can be applied to something that will be so small it's actually smaller than we think we can apply quantum theory. And the energy density involved at that point would be so high our current theories don't make sense any more. We need a quantum theory of gravity to do this and we don't have one that works well enough. In fact we're not even sure a quantum theory of gravity would allow us to go to such small, high energy scales - even that is unknown.
So we're in uncharted waters.
So why "no" ?
Well, to force such a compression of a nucleus we'd have to apply energies to a very small region of space - smaller than we think it's possible to do, because of the consequences of the uncertainty principle. Put simplistically, beyond some point we'd not be able to simultaneously say where the nucleus is and how fast it's moving. It would be impossible to confine to a smaller region. This would happen long before we reach the Schwarzschild radius, at around the Planck length.
As you'll see from the answer by @James-K, the Schwarzschild radius is about $10^{-53}$ m, but the Planck length is 18 orders of magnitude larger at about $10^{-35}$ m.
So we could not realistically confine and compress our nucleus into a small enough space to ever reach its black hole size.
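As a rough sketch of that gap (taking the $3\times10^{-53}$ m figure from the other answer and a Planck length of about $1.6\times10^{-35}$ m), the mismatch is indeed about 18 orders of magnitude:

```python
import math

r_s = 3e-53          # Schwarzschild radius of a carbon atom, m (from the other answer)
l_planck = 1.6e-35   # Planck length, m

print(f"Planck length / Schwarzschild radius ~ 10^{math.log10(l_planck / r_s):.0f}")
# -> roughly 10^18, i.e. the horizon would sit ~18 orders of magnitude below the Planck scale
```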
Now we can make a generic catch-all statement that a new theory might provide some loophole that lets us get around that, but it does seem unlikely as we'd expect a new theory to reproduce most of what we already know at those limits. It's hard to imagine the uncertainty principle "going away" so I don't see a way around that.
There's an unproven possibility of a yes.
A quantum theory of gravity that works might (repeat might or might not) find that gravity at that scale changes its character and allows it to form event horizons at larger sizes than we'd currently expect for such mass-energy ranges.
But we lack any evidence to support that idea, and I won't convert a "no" to a "maybe yes" simply to allow room for any wild idea. That's science fiction, not science.
StephenG
$\begingroup$ MathJax doesn't show units like that… m was formatted as a variable. $\endgroup$
– JDługosz
A small addition to the answers above (I like the Planck length answer). It was thought that it might be possible to make very small black holes at CERN, theoretically anyway, but that theory required extra dimensions to exist. Because no black holes were observed, the extra dimensions (on very small scales) theory took a hit.
Even if those black holes could be created, they are predicted to evaporate very very quickly. (billionth of a billionth of a billionth of a second), but even that rate of decay should be noticeable. None were noticed.
It's also worth asking: if CERN smashes two protons together really, really fast, and if that makes a black hole (in theory; pretend it's possible), would this theoretical black hole really be made of the two protons, or of the two protons plus some 14 TeV of kinetic energy? I think it's more accurate to say that such a black hole is really created out of the kinetic energy, not the atoms themselves.
Some might call that splitting hairs on Schrodinger's cat, but I think it's an important point. The enormous kinetic energy of a near light speed collision, might just be able to create a micro black hole, and in that case, it's the kinetic energy that should be called the primary ingredient not the atoms.
userLTK
$\begingroup$ An interesting way of looking at it. $\endgroup$
– StephenG
$\begingroup$ The idea of theories with extra dimensions is that there are extra (4th, 5th etc.) space dimensions which are very small and as a consequence Gravitation is much stronger at scales smaller than the size of these extra dimensions. This brings down the Planck (energy) scale to energies accessible at colliders such as the LHC. $\endgroup$
– Andre Holzner
Journal of the European Optical Society-Rapid Publications
Complimentary code keying of spectral amplitude coding signals in optical buffering with increased capacity
Kai-Sheng Chen1 &
Wien Hong ORCID: orcid.org/0000-0002-2873-01902
Journal of the European Optical Society-Rapid Publications volume 16, Article number: 13 (2020) Cite this article
Signal buffering services such as contention resolution and congestion avoidance are essential in optical packet switching networks. In this paper, an optical memory scheme based on spectral amplitude coding (SAC) and complementary code keying (CCK) is proposed to increase the buffer capacity. CCK is applied to packet buffering by selecting an available code set and encoding the payload bits with either an SAC signal or its complement. The capacity constraint is effectively released, as the usable codes for queuing packets are twice those for the conventional code-domain buffers. To minimize system costs by reducing the codec number, a shared structure based on an arrayed waveguide grating (AWG), which is capable of processing both the typical and complementary coded signals simultaneously, is also investigated.
Optical packet switching (OPS) [1, 2] networks show signs of future success in optical communications with high speed and large capacity. However, OPS deployment is currently limited by packet-buffering issues resulting from the lack of optical memory devices. The buffering function is essential to resolve the contention between two or more packets in the same channel for a common route. A similar approach of electrical buffering based on the store-and-forward technique is not compatible with OPS because optical random-access memory cannot be used. Therefore, the networks face difficulty in performing buffering-based services such as contention resolution and congestion avoidance without the aid of optical memory. To reduce the contention effect on packet switching and enhance network performance, several optical buffering schemes have been proposed for OPS by queuing packets in three signal dimensions: time [3, 4], wavelength [5, 6], and code [7, 8].
For optical buffering in the time domain, fiber delay lines (FDLs) [3, 4] have the ability to execute the first-in-first-out algorithm on the packet flows entering the router. Optical packets travel the FDLs for an arranged length for a specific time until the assigned output port is available. The wavelength-domain buffering in OPS is achieved by tunable wavelength convertors [5, 6] that modulate each buffered packet with a specific wavelength signal. Multiple packets can be kept in a common device without collisions by employing the wavelength-division multiplexing (WDM) technique. Using optical codes enables network routers to create buffering scenarios known as code switching for optical packets. Code switching [7, 8] is a process executed by an optical switch that changes the original code of the input packet into a new output code. In the code-domain buffering, each payload bit in the packet is converted to a code sequence before entering a buffer. Moreover, WDM and optical codes can be combined to improve the buffering model by enlarging its capacity.
Optical buffering in the code domain has several advantages, namely, it has a shorter buffered packet stream length and does not require extra optical bandwidth compared with its counterparts in the time- and wavelength domains. However, the greatest capacity of a code-based buffer is determined by the code length and the codec scale. Although launching more available codes is a solution granting increased capacity, a large-scale codec with a higher system cost is required to generate optical codes with a large cardinality. In the author's previous work, two-code keying [8] was chosen to encode optical packets, where payload bits "1" and "0" were respectively represented by an optical code and its complement. The performance improvement came from the increased signal-to-noise ratio of the packet signals such that more packets could be stored in the buffer for an acceptable bit-error rate.
The optical code-division multiple access (OCDMA) systems based on spectral-amplitude-coding (SAC) have advanced significantly in recent years. In particular, several code-construction methods aiming to generate optical codes with very low cross-correlations have been proposed to reduce the noise variance at decoders [9, 10]. Users assigned such codes have improved bit-error rates (BERs) and high data rates. Furthermore, some algebraic methods, such as the ones based on the Pascal triangle matrix [10] and the Jordan block matrix [11], have been used to design codes with flexible code lengths, code weights, and accommodated user numbers. These codes are better suited than the conventional codes to networks with various conditions.
In this paper, complementary code keying (CCK) [12, 13] is introduced for capacity enhancement. Instead of simply employing typical codes, either one of typical or complementary codes can be assigned to a packet for encoding. Due to the increment of the code number in the buffering system, the scale and the number of codecs are also increased, which causes high system complexity. The optical coding technique of spectral amplitude coding (SAC) [14, 15] and the codec design based on arrayed waveguide grating (AWG) [16, 17] are employed to simplify the buffer architecture. Both processes of coding and complementary coding can be performed by a shared encoder. As the implemented codec number is reduced, the system cost can be decreased.
The buffer processing packets is approximated by a queuing model, where independent random variables with an identical distribution are used to model the inter-arrival times of the incoming packet flow. The inter-arrival times follow the exponential distribution, and therefore the number of arriving packets in the time duration from 0 to t can be treated as a Poisson stream characterized by a mean and variance that are both equal to λ. The processing times for each packet performed by the buffer are also independent and identically distributed and have the memoryless property. Their density is an exponential function with mean 1/μ and variance 1/μ^2. Based on the analysis results, the capacity constraint for the proposed buffer is effectively released, and the packet dropping probability (PDP) is reduced, as the number of available codes for CCK is twice that for previous code-based buffers.
In this section, a code-domain buffer combining CCK and SAC signals is described. By converting packets into coded and complementary coded signals, the buffer capacity is increased. For the conventional memory scheme based on optical codes with amplitude-shift keying (ASK), at most N packets can be queued simultaneously in the same space, where N is the code length or code cardinality. Packet dropping occurs when the number of arriving packets exceeds the buffer capacity, as shown in Fig. 1a. If CCK is adopted, an additional N packets can be conveyed by the complementary coded signals. Then, the buffer can store up to 2 N packets during the buffering procedure, as shown in Fig. 1b. Throughout this manuscript, the code sequences selected for encoding packets are the Walsh-Hadamard codes and their complements [18, 19], which are respectively denoted as Hk and Hk* in Fig. 1, where 1≦k≦N. Although stored in a common channel at the same time, each encoded packet is identifiable from the multiplexed signals, as the multiple-access interference among them can be canceled by the following algorithm [19]:
$$ \mathbf{C}_{k}\odot \mathbf{H}_{k}-\mathbf{C}_{k}\odot \mathbf{H}_{k}^{\ast}=\begin{cases}N/2, & \mathbf{C}_{k}=\mathbf{H}_{k},\ k=j\\ -N/2, & \mathbf{C}_{k}=\mathbf{H}_{k}^{\ast},\ k=j\\ 0, & \mathrm{otherwise}\end{cases} $$
where ⊙ denotes the dot-product operator.
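As a minimal numerical illustration of this balanced-detection rule (not part of the original system; the ordering of the length-4 Walsh-Hadamard rows is arbitrary), the interference cancellation can be verified directly:

```python
import numpy as np

# Length-4 Walsh-Hadamard codes in {0,1} form (all-ones row dropped; ordering arbitrary)
H = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [1, 0, 0, 1]])
H_star = 1 - H          # complementary codes

def balanced_decode(received, k):
    """Dot-product difference of eq. (1) evaluated by decoder k."""
    return received @ H[k] - received @ H_star[k]

for k in range(len(H)):
    print(k, balanced_decode(H[0], k), balanced_decode(H_star[0], k))
# Decoder 0 returns +N/2 (= 2) for H1 = (1100) and -N/2 for H1*,
# while the other decoders return 0, i.e. the multiple-access interference is cancelled.
```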
Optical code assignment for packet buffering: a amplitude-shift keying (ASK) buffer and b complementary code keying (CCK) buffer
Process of SAC encoding with Hadamard code H1 = (1100)
The other technique employed in the packet buffering in this paper is SAC, which was initially proposed in optical access networks to achieve asynchronous and simultaneous transmissions without signal interferences [14, 15]. The author introduces SAC as a possible method to cancel the interference resulting from overlapping packets in the buffer. In SAC, the code sequence assigned to each user is a binary sequence with elements {0, 1} and is encoded in the spectrum of an optical carrier. Each chip "1" in the sequence is represented by a specific wavelength signal, while each chip "0" is represented by a null wavelength. Figure 2 shows the optical signal in the time-wavelength dimension before and after the SAC encoding is performed. For Hadamard code H1 = (1100) with N = 4, the SAC representation consists of two wavelengths, λ1 and λ2, which map the first and second chips of "1" in H1.
An optical buffering scheme based on CCK with a capacity of 2 K is shown in Fig. 3. When electrical payload bits arrive at the buffer input, the buffer manager searches all available code sets that are not occupied by other buffered packets and selects one of them for packet encoding. Once the code set is determined, a link configuration in the first optical cross connect is established to forward the optical carrier generated from a broadband light source to the corresponding encoder. Two optical packets can be structured simultaneously by respectively modulating their payload bits with the typical and complementary signals of the selected code set.
Optical encoding for packet buffering in the code domain on CCK and Hadamard codes. EOM: electrical-to-optical modulator; OXC: optical cross connect
The AWG properties of cyclic-shift and free spectral range are employed to generate the SAC signals of the Hadamard codes. Code sequences with a length of 4 are taken as the example in Fig. 4. Optical spectra consisting of eight wavelengths, λ1 to λ8, are divided into two parts, one for generating H1 and the other for H1*. The wavelength distribution of H1 = (1100) is (λ1λ200), and that of H1* = (0011) is (00λ7λ8). The reason that H1 and H1* are encoded in different optical bands is that the decoder is unable to identify the buffered packets if these two codes are mixed in the same channel. Therefore, the number of usable codes is increased at the expense of the increased signal bandwidth. Based on the encoder design, the proposed system is cost-effective because it is capable of generating a large number of coded signals by using a relatively small number of codecs. Generally, for a code-domain buffer with capacity K, at least K pairs of codecs should be implemented, with each of them generating/detecting a specific code Hk, where 1≦k≦K. In CCK, the maximum number of coded packets in the buffer increases from K to 2 K. If a general coding structure is used, up to 2 K pairs of codecs are required. In the proposed encoder based on the AWG, the signals of both Hk and Hk* can be simultaneously created; thus, the implemented codec number is reduced from 2 K to K, which decreases the system cost.
AWG-based encoder of Hadamard codes with a length of 4
Besides spectral-amplitude-coding (SAC), one of the most common coding methods in optical networks is time spreading (TS) [20, 21]. In an optical coding scheme based on TS, a bit signal with duration Tb is divided into N chips with duration Tc = Tb/N, where N is the code length. As Tc is much shorter than Tb in general TS cases, the decoder must have the ability to detect the short pulses, which requires high-speed components and induces an increased receiver bandwidth. For a given Tc, using a long-length code extends the bit duration, causing a lower throughput. Furthermore, due to the time-domain coding, strict chip synchronization between the encoder and decoder is required. On the other hand, codes are encoded on the optical spectrum in SAC, so Tc is constantly the same as Tb and does not increase with the code length. Therefore, a SAC codec can relax its requirement on processing speed so that the system complexity is effectively reduced.
In this section, the author models the processes of code-domain buffering as a queuing system, which was employed to model the code-domain packet switches in previous research [20, 21]. The buffer, incoming packet sequence, and optical buffering are considered as the service center, population of customers, and provided service, respectively. The inter-arrival times between packets are a collection of independent and identically distributed exponential random variables with rate λ. The service time, including the processing delay of the decoding, remodulation, switching, and encoding in the buffer, is also exponentially distributed, with mean 1/μ. The arbitrariness of service times comes from the variable lengths of packets. For the server number, despite multiple decoders in the buffer, only the one matching the code carried by the incoming packet has an output signal to activate the optical modulator. Then, the optical carrier is converted to an optical code by the corresponding encoder. Over a small time interval, only one encoder is operated, which implies a single-server scenario. The service discipline of first-in-first-out is assumed, as the buffer processes the incoming packets in order. Based on the above assumptions, the buffering scenario can be described by the Kendall notation of the M/M/1/K model [22], where K is the code number assigned to a buffer.
The properties of the M/M/1/K model are birth–death processes, where only one or none of the events occurs at a time. The events can be a packet arriving at the buffer input or a coded packet leaving from the output. From [22], the steady-state probabilities of k packets in the buffer P(k) are given by:
$$ P(k)=\begin{cases}\dfrac{\rho^{k}\left(1-\rho\right)}{1-\rho^{K+1}}, & \rho \ne 1\\[2mm] \dfrac{1}{K+1}, & \rho =1\end{cases} $$
where ρ is the utilization ratio, defined as λ/μ. The author employs steady-state probabilities as a performance measure of the buffering efficiency. When a new packet gets to the buffer input, several coded packets already exist. One of the codes unused by the queued packets is selected for queuing the received one. If all codes have been distributed, the system fails to perform code conversion, and the packet is abandoned. In this case, the buffer capacity is full, and no available code can be used for buffering. The PDP is defined as the probability that K packets are stored in the buffer, which is given by P(K).
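For reference, the steady-state dropping probability of the ASK buffer can be evaluated with a short function; this is a sketch that simply transcribes the expression above, with example values of ρ and K rather than any specific system parameters:

```python
def pdp_ask(rho, K):
    """Steady-state dropping probability P(K) of the M/M/1/K (ASK) buffer."""
    if abs(rho - 1.0) < 1e-9:
        return 1.0 / (K + 1)
    return rho**K * (1.0 - rho) / (1.0 - rho**(K + 1))

# e.g. the utilization ratios used later in Fig. 5, with K = 7 codes
for rho in (0.25, 0.4, 0.55):
    print(rho, pdp_ask(rho, 7))
```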
In the CCK scenario, the buffer first reads the code information, and if all Hadamard codes are completely used, the arriving packets are transferred to the encoders that create the complementary signals. If both types of codes are fully assigned to packets, packet dropping is inevitable because the buffer space is not available. Given the same assumptions of the previous modeling method, the only difference in CCK is that when an encoder is activated, it is capable of processing two classes of packets at the same time. This scheme enhances the buffering performance, as the buffer has increased the capacity from K to 2 K. Therefore, it is reasonable to treat the buffer as the M/M/2/2 K model [22]. However, as the bandwidth of CCK packets is half that of the ASK packets, the supporting signal rate is also halved. The utilization ratio is increased to 2ρ because for the same amount of payload bits, the duration of CCK packets is twice that of ASK. The steady-state probability of k packets occupied in the buffer is given by:
$$ {P}^{\ast }(k)=2{\rho}^k{P}^{\ast }(0) $$
where P*(0) is given as:
$$ {P}^{\ast }(0)={\left[1+2\rho \left(1+\frac{1-{\rho}^{2K-1}}{1-\rho}\right)\right]}^{-1} $$
Similarly, as at most 2 K packets can be stored in the buffer, the PDP for CCK is given by P*(2 K).
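The corresponding sketch for the CCK buffer, again a direct transcription of the steady-state expressions above rather than the authors' implementation:

```python
def pdp_cck(rho, K):
    """Dropping probability P*(2K) of the CCK buffer (valid for rho != 1)."""
    p0 = 1.0 / (1.0 + 2.0 * rho * (1.0 + (1.0 - rho**(2 * K - 1)) / (1.0 - rho)))
    return 2.0 * rho**(2 * K) * p0

for rho in (0.25, 0.4, 0.55):
    print(rho, pdp_cck(rho, 7))
```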
In addition to the PDP, the author uses the mean number of packets in the steady-state queue as another performance measure to more fully describe the merits of the proposed buffer scheme. Q and Q*, respectively representing the average numbers of packets waiting in the queue before entering the ASK and CCK buffers, are given as [22]:
$$ Q=\begin{cases}\dfrac{\rho \left[K\rho^{K+1}-\left(K+1\right)\rho^{K}+1\right]}{\left(1-\rho^{K+1}\right)\left(1-\rho \right)}-\left[1-P(0)\right], & \rho \ne 1\\[2mm] \dfrac{K}{2}-\left[1-P(0)\right], & \rho =1\end{cases} $$
$$ Q^{\ast}=\frac{2\rho^{3}}{\left(1-\rho \right)^{2}}\left\{\rho^{2K-2}\left[2\left(\rho -1\right)\left(K-1\right)-1\right]+1\right\}P^{\ast}(0),\quad \rho \ne 1 $$
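Both mean queue lengths can be transcribed in the same way (valid for ρ ≠ 1); the example values below are illustrative only:

```python
def queue_ask(rho, K):
    """Mean queue length Q for the ASK (M/M/1/K) buffer, rho != 1."""
    p0 = (1.0 - rho) / (1.0 - rho**(K + 1))
    num = rho * (K * rho**(K + 1) - (K + 1) * rho**K + 1.0)
    return num / ((1.0 - rho**(K + 1)) * (1.0 - rho)) - (1.0 - p0)

def queue_cck(rho, K):
    """Mean queue length Q* for the CCK buffer, rho != 1."""
    p0 = 1.0 / (1.0 + 2.0 * rho * (1.0 + (1.0 - rho**(2 * K - 1)) / (1.0 - rho)))
    bracket = rho**(2 * K - 2) * (2.0 * (rho - 1.0) * (K - 1) - 1.0) + 1.0
    return 2.0 * rho**3 / (1.0 - rho)**2 * bracket * p0

print(queue_ask(0.4, 7), queue_cck(0.4, 7))
```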
At the end of the buffering process, multiple coded packets are multiplexed and then sent to the common buffer output, as shown in Fig. 3. The signal quality degrades when the photo-detectors in the decoder perform the optical-to-electrical conversion on the coded packets. Due to noise sources such as phase-induced intensity noise (PIIN) and thermal noise, the decoded photo-current may not correctly reflect the power variations of the original optical signal. Such errors are described by a parameter known as code-error probability, which is expressed as follows [14]:
$$ P_{\mathrm{C}}(K)=\frac{1}{2}\operatorname{erfc}\left\{\frac{I}{2\sqrt{\sigma_{\mathrm{PIIN}}^{2}(K)+\sigma_{\mathrm{TH}}^{2}}}\right\} $$
where $I$ is the photo-current at the decoder's output, $\sigma_{\mathrm{PIIN}}^{2}(K)$ is the variance of PIIN, and $\sigma_{\mathrm{TH}}^{2}$ is the variance of thermal noise. The terms of current and noise are respectively expressed as follows:
$$ I=\begin{cases}RP/2, & \mathrm{for\ ASK}\\ RP/4, & \mathrm{for\ CCK}\end{cases} $$
$$ \sigma_{\mathrm{PIIN}}^{2}(K)=\begin{cases}R^{2}P^{2}BK\left(K+1\right)/4, & \mathrm{for\ ASK}\\ R^{2}P^{2}B\left\lfloor K/2\right\rfloor \left(\left\lfloor K/2\right\rfloor +1\right)/8, & \mathrm{for\ CCK}\end{cases} $$
$$ \sigma_{\mathrm{TH}}^{2}=B S_{\mathrm{TH}} $$
where P is the received optical power, B is the electrical bandwidth of the receiver, v is the bandwidth of the light source, R is the responsivity of the photo-detector, and STH is the power spectral density (PSD) of the thermal noise. The symbol ⌊·⌋ denotes the floor function. For ASK, the encoding is executed over the entire optical bandwidth, while for CCK, the bandwidth is equally divided into two channels, one for the typical encoding and the other for the complementary coding. The two coding methods therefore result in different mathematical expressions for the photo-current and noise sources.
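The code-error probability can be sketched as below. Note that the snippet transcribes the printed expressions literally (the light-source bandwidth v listed above does not appear in them), and the parameter values are placeholders rather than the entries of Table 1:

```python
import math

def code_error_prob(K, P, scheme, R=0.85, B=311e6, S_TH=1.8e-23):
    """Transcription of the P_C(K) expressions above; parameters are placeholders."""
    if scheme == "ASK":
        I = R * P / 2.0
        var_piin = R**2 * P**2 * B * K * (K + 1) / 4.0
    elif scheme == "CCK":
        I = R * P / 4.0
        half = K // 2  # floor(K/2)
        var_piin = R**2 * P**2 * B * half * (half + 1) / 8.0
    else:
        raise ValueError("scheme must be 'ASK' or 'CCK'")
    var_th = B * S_TH
    return 0.5 * math.erfc(I / (2.0 * math.sqrt(var_piin + var_th)))

print(code_error_prob(8, 1e-4, "ASK"), code_error_prob(8, 1e-4, "CCK"))
```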
Results and discussions
Figure 5 shows the PDP as a function of buffer capacity K (the number of optical codes assigned to the buffer). The utilization ratio ρ is set at 0.25, 0.4, and 0.55. When K is relatively small, the number of packet droppings for CCK is similar to that of the conventional ASK scheme. As K increases, the proposed buffer can guarantee a lower PDP because each encoder generates an extra coded signal that can be employed for packet buffering. The ASK buffer cannot reduce packet dropping efficiently by increasing K because the complementary coded signals are not used. In Fig. 6, the PDP versus the utilization ratio for the two buffering schemes is shown. As one would anticipate, given a fixed ρ, CCK achieves a lower PDP than that of ASK. For a higher utilization ratio, the PDPs for both buffers become large, as the buffer is more likely to be in full capacity and it is not possible to store any new incoming packets.
The packet-dropping probability (PDP) as a function of the assigned code number K in the CCK and the ASK buffer
The PDP versus utilization ratio ρ for K = 7
In Fig. 7, the average packet numbers in the queue at the inputs of the conventional ASK and CCK buffers are analyzed under different buffer capacities. The increase in K induces an increase in the queue length, as more packets can wait to be buffered in the device before they are possibly dropped due to the larger capacity. The CCK buffer reduces the number of packets waiting in line because each packet has a higher chance of obtaining a free space as soon as it reaches the buffer. In Fig. 8, one can observe the number of packets in queue versus different utilization ratios. This figure shows that Q and Q* increase when ρ increases. The high-intensity traffic has a greater influence on CCK than on ASK because the buffer is required to process packets with a longer length. Therefore, when ρ is relatively large, Q* is expected to exceed Q.
The average packet number in queue as a function of the assigned code number K in the CCK and the ASK buffer for ρ = 0.4
The average packet number in queue versus the utilization ratio ρ for K = 7
Figure 9 shows the code-error probability versus the received optical power P for K = 8. Compared to the ASK-based buffer, the CCK-based one has improved performance in terms of PC(K). A large P indicates a large photo-current and PIIN variance. CCK suppresses the noise increment more significantly than ASK does when P increases. The system parameters used in the numerical analysis and following software simulations are listed in Table 1.
Code-error probability versus the received optical power P for K = 8
Table 1 System parameters used in the calculations and simulations
Figures 10 and 11 show the buffered packet signals coded with H1 and H1*, respectively. Each figure includes the power spectral density, the time waveform, the coded packet, and the decoded electrical payload. The wavelength distribution of H1, (λ1λ200), includes two optical pulses centered at 193.1 and 193.2 THz, as shown in Fig. 10a. Figure 10b shows the optical payload sequence, which has a similar amplitude to that of the decoded electrical signal in Fig. 10c. Figure 11a depicts the PSD of the buffered packet coded with H1*, where two pulses at 193.7 and 193.8 THz are employed to represent the wavelength distribution of (00λ7λ8). Note that when a complementary coded packet is decoded, the result is the negative of the original optical payload, based on eq. (1). Therefore, an additional logic operation of negation is required to recover the original payload bits, as shown in Figs. 11b and 11c. The simulations were conducted by using Optisystem 7.0. The light source power is 10 dBm. Each wavelength is filtered out by an optical second-order Bessel filter with a bandwidth of 100 GHz. The electrical signal shown at the photodetector output is passed to a low-pass Bessel filter with a cut-off frequency of 130 MHz and an order of 4 to obtain the final result of the decoded payload sequence. Other parameters are demonstrated in Table 1.
Optical packet signals encoded with H1: a power spectral density; b time waveform; and c decoded payload sequence
Optical packet signals encoded with H1*: a power spectral density; b time waveform; and c decoded payload sequence
When a packet encoded with multiple wavelengths travels through a fiber channel, nonlinear effects lead to crosstalk between code chips. Two of the most dominant effects that impair the quality of SAC signals are self-phase modulation (SPM) and cross-phase modulation (XPM). Figure 12a-c show the optical spectrum of H1* for the transmission distances of 0, 40, and 80 km, respectively. To clearly demonstrate the spectrum distortions caused by SPM and XPM, linear effects such as dispersion and attenuation are neglected in the simulation. One could observe that two peaks corresponding to chips "1" gradually become flat as fiber length increases, while the power levels for chips "0" are raised by the leaked signals from chips "1". As the chip distribution for a SAC signal is varied, the multiple-access interference could not be completely mitigated in the receiver. The quantification of nonlinear effects on decoding performance requires further investigations.
Power spectral density of H1* after traveling the fiber channel of (a) 0 km, (b) 40 km, and (c) 80 km
An optical buffering scheme of queuing packets in the code domain was proposed in this paper. The buffer capacity was increased by using the SAC signals and CCK for packet encoding. The buffering scenario was modeled as a queuing system to derive the PDP performance measure. The proposed CCK buffer could lead to PDP reduction, as the number of available codes for queuing packets is twice that of the previous ASK buffers. For a cost-effective buffer architecture, the encoders of a specific code and its complement were integrated into a shared device based on the AWG to reduce the required codec number in the proposed system.
Abbreviations
AWG: Arrayed Waveguide Grating
ASK: Amplitude-Shift Keying
CCK: Complementary Code Keying
FDL: Fiber Delay Line
OPS: Optical Packet Switching
PDP: Packet Dropping Probability
SAC: Spectral Amplitude Coding
WDM: Wavelength-Division Multiplexing
Segawa, T., Ibrahim, S., Nakahara, T., Muranaka, Y., Takahashi, R.: Low-power optical packet switching for 100-Gb/s burst optical packets with a label processor and 8 × 8 optical switch. J. Lightwave Technol. 34, 1844–1850 (2016)
Zhao, Z., Wu, B., Li, B., Xiao, J., Fu, S., Liu, D.: Multihop routing enabled packet switching with QoS guarantee in optical clos for data centers. IEEE/OSA J Opt Commun Netw. 10, 624–632 (2018)
Datta, A.: Construction of polynomial-size optical priority queues using linear switches and fiber delay lines. IEEE/ACM Trans Netw. 25, 974–987 (2017)
Lim, H.: Number of tunable wavelength converters and internal wavelengths needed for cost-effective design of asynchronous optical packet switching system with shared or output fibre delay line buffer. IET Commun. 7, 1419–1429 (2013)
Liu, W., et al.: A wavelength tunable optical buffer based on self-pulsation in an active microring resonator. J. Lightwave Technol. 34, 3466–3472 (2016)
Hirayama, T., Miyazawa, T., Furukawa, H., Harai, H.: Optical and electronic combined buffer architecture for optical packet switches. J Opt Commun Netw. 7, 776–784 (2015)
Kazemi, R., Rashidinejad, A., Nashtaali, D., Salehi, J.A.: Virtual optical buffers: a novel interpretation of OCDMA in packet switch networks. J. Lightwave Technol. 30, 2964–2975 (2012)
Chen, K.S., Chen, C.S., Wu, S.L.: Two-code keying and code conversion for optical buffer design in optical packet switching networks. Electronics. 8, 1117 (2019)
Nisar, K.S., Sarangal, H., Thapar, S.S.: Performance evaluation of newly constructed NZCC for SAC-OCDMA using direct detection technique. Photon Netw. Commun. 37, 75–82 (2019)
Nisar, K.S., Djebbari, A., Kandouci, C.: Development and performance analysis zero cross correlation code using a type of Pascal's triangle matrix for spectral amplitude coding optical code division multiple access networks. Optik. 159, 14–20 (2018)
Ahmed, H.Y., Nisar, K.S.: Diagonal eigenvalue unity (DEU) code for spectral amplitude coding-optical code division multiple access. Opt. Fiber Technol. 19, 335–347 (2013)
Yang, C.C.: Code space enlargement for hybrid fiber radio and baseband OCDMA PONs. J. Lightwave Technol. 29, 1394–1400 (2011)
Yang, C.C., Huang, J.F., Chang, H.H., Chen, K.S.: Radio transmissions over residue-stuffed-QC-coded optical CDMA network. IEEE Commun. Lett. 18, 329–331 (2013)
Chen, K.S., Yang, C.C., Huang, J.F.: Using stuffed quadratic congruence codes for SAC labels in optical packet switching network. IEEE Commun. Lett. 19, 1093–1096 (2015)
Noshad, M., Jamshidi, K.: Bounds for the BER of codes with fixed cross correlation in SAC-OCDMA systems. J. Lightwave Technol. 30, 1944–1950 (2011)
Yang, C.C.: Hybrid wavelength-division-multiplexing/spectral-amplitude-coding optical CDMA system. IEEE Photon. Technol. Lett. 17, 1343–1345 (2005)
Yang, C.C., Huang, J.F., Tseng, S.P.: Optical CDMA network codecs structured with M-sequence codes over waveguide-grating routers. IEEE Photon. Technol. Lett. 16, 641–643 (2004)
Huang, J.F., Yang, C.C., Tseng, S.P.: Complementary Walsh–Hadamard coded optical CDMA coder decoders structured over arrayed-waveguide grating routers. Opt. Commun. 229, 241–248 (2014)
Wei, Z., Shalaby, H., Ghafouri-Shiraz, H.: Modified quadratic congruence codes for fiber Bragg-grating based spectral-amplitude-coding optical CDMA systems. J. Lightwave Technol. 19, 1274–1281 (2001)
Beyranvand, H., Salehi, J.A.: All-optical multiservice path switching in optical code switched GMPLS core networks. J. Lightwave Technol. 27, 2001–2012 (2009)
Beyranvand, H., Salehi, J.A.: Multiservice provisioning and quality of service guarantee in WDM optical code switched GMPLS core networks. J. Lightwave Technol. 27, 1754–1762 (2009)
Cassandras, C.G., Lafortune, S.: Introduction to Discrete Event Systems. Springer, New York (2006)
This work is funded by the Department of Education of Guangdong Province, Guangzhou, under grant no. 2018 K TSCX322.
School of Electrical and Computer Engineering, Nanfang College of Sun Yat-Sen University, No. 882, Wenquan Ave., Conghua Dist., Guangzhou, 510970, Guangdong Province, China
Kai-Sheng Chen
College of Intelligence, National Taichung University of Science and Technology, No. 129, Sec. 3, Samin Rd., North Dist., Taichung City, 404, Taiwan
Wien Hong
K.S. Chen contributed to the conceptualization, methodology, and writing of this paper. W. Hong conceived the simulation setup and formal analysis and conducted the investigation. The author(s) read and approved the final manuscript.
Correspondence to Wien Hong.
Chen, KS., Hong, W. Complimentary code keying of spectral amplitude coding signals in optical buffering with increased capacity. J. Eur. Opt. Soc.-Rapid Publ. 16, 13 (2020). https://doi.org/10.1186/s41476-020-00135-6
Spectral amplitude coding (SAC)
Complementary code keying (CCK)
Optical buffering
On the evolution of the water ocean in the plate-mantle system
Takashi Nakagawa ORCID: orcid.org/0000-0003-3179-64621,2,
Hikaru Iwamori3,5,
Ryunosuke Yanagi4 &
Atsushi Nakao5
Here, we investigate a possible scenario of surface seawater evolution in the numerical simulations of surface plate motion driven by mantle dynamics, including thermo-chemical convection and water migration, from the early to present-day Earth to constrain the total amount of water in the planetary system. To assess the validity of two hypotheses of the total amount of water inferred from early planetary formation processes and mineral physics, we examine the model sensitivity to the total water in the planetary system (both surface and deep interior) up to 15 ocean masses. To explain the current size of the reservoir of surface seawater, the predictions based on the numerical simulations of hydrous mantle convection suggest that the total amount of water should range from 9 to 12 ocean masses. Incorporating the dense hydrous magnesium silicate (DHMS) with a recently discovered hydrous mineral at lower mantle pressures (phase H) indicates that the physical mechanism of the mantle water cycle would not be significantly influenced, but the water storage region would be expanded in addition to the mantle transition zone. The DHMS solubility field may have a limited impact on the partitioning of water in the Earth's deep mantle.
Understanding the water ocean on the Earth's surface is essential for understanding the habitability of Earth-like planets (Maruyama et al. 2013). Geological records suggest that the Earth's surface seawater likely formed between 3.8 Ga and 4.5 Ga based on the appearance of metamorphic rocks derived from sedimentary rocks (one line of evidence that water was present on the Earth's surface) and the initiation of plate tectonics, as suggested by zircon dating (Appel et al. 1998; Maruyama and Komiya 2011; Mojzsis et al. 2001; Valley et al. 2014). However, although constraining the initiation of plate tectonics is very important for determining when water started to be transported into the deep planetary interior, this timing remains debated, with estimates ranging from 2.5 to 4.3 Ga (Condie 2016; Hopkins et al. 2008). To reconcile such geological evidence related to the survival of the water ocean over a period of ~ 4 billion years, it is crucial to understand the size of the surface seawater reservoir. There are two hypotheses regarding the size of the reservoir that influences the evolution of the water ocean from the early to present-day Earth: (1) estimates based on cosmochemical and geochemical analyses of chondritic material and the modeling of the formation of a water ocean as a result of the solidification of a surface magma ocean (Marty 2012; Hamano et al. 2013) and (2) petrological estimates of the actual water content of the mantle (e.g., Hirschmann 2006). In the early planetary formation hypothesis, the total amount of water in the planetary system after the completion of early planetary formation or the solidification of the surface magma ocean ranges from 5 to 15 ocean masses, which means that a very large amount of water is expected to reside in the mantle (Marty et al. 2012; Hamano et al. 2013). Petrological estimates suggest that the mantle water content may range from 0.2 to 2.3 ocean masses. By adding seawater on the surface, eventually, it turns out that the total present amount of water in the planetary system should be approximately 1.2 to 3.3 ocean masses (Hirschmann 2006).
During the evolution of surface seawater over time, surface plate motions may play an important role, as has been implied by geological records (e.g., Maruyama and Okamoto 2007) and numerical modeling (e.g., Rüpke et al. 2004; Iwamori 2007). The evolution of surface seawater in the plate-mantle system has also been computed in semi-analytical models (Crowley et al. 2011; Sandu et al. 2011; Korenaga 2011). In such models, the partitioning of water between the surface and the deep mantle is determined by the flux balance of water uptake (regassing) by plate subduction and degassing beneath mid-ocean ridges. This modeling results in an estimate of the total water mass in the planetary system of ~2 to 3 ocean masses, consistent with petrological measurements (Hirschmann 2006).
However, our previous studies have indicated that the evolution of the mantle water mass cannot be determined by a simple regassing-degassing balance alone; the excess water transport associated with the dehydration reaction must also be incorporated (Nakagawa et al. 2015; Nakagawa and Spiegelman 2017; Nakagawa and Iwamori 2017). Hence, the water solubility limit of each mantle mineral should be included to compute the excess water in the dehydration reaction. By considering this effect, our previous studies have indicated that the mantle water evolution may be strongly regulated by the water solubility limits of upper mantle minerals. In addition, the rheological properties of hydrous mantle rocks may affect the behavior of surface plate motion, with hydrous mantle conditions allowing for more vigorous surface plate motion and a much larger friction coefficient than dry mantle conditions (Nakagawa and Iwamori 2017). Such vigorous surface plate motion induced by a pseudo-plastic rheology may transport large amounts of water (i.e., several ocean masses) into the deep mantle over geologic timescales (approximately 2 billion years), and the mantle transition zone can absorb this amount of water over geologic timescales (Nakagawa and Iwamori 2017). Therefore, the total reservoir size of water at the surface and in the deep interior should be resolved in a plate-mantle system using numerical mantle convection simulations that include the water solubility limits of mantle minerals.
The issues identified in our previous studies include three model assumptions: (1) The boundary condition of mantle water transport is assumed to be an infinite reservoir of water, which is not a realistic situation. To improve the boundary condition of the mantle water transport, the lifetime and amount of water ocean in the planetary system associated with surface plate motions should be evaluated. (2) Defining the water solubility limits of mantle minerals, we only assume those of the upper mantle minerals and assign a constant value to those of the lower mantle minerals. In the lower mantle, the existence of dense hydrous magnesium silicate (DHMS) has previously been established (e.g., Ohtani et al. 2001), and according to recent mineral physics measurements, a new hydrous phase that is stable under lower mantle conditions has been found, which is a DHMS phase called "phase H." This DHMS may affect the evolution of the mantle water mass because, for instance, the solubility limit of phase H is very high (~ 12 wt.%; Nishi et al. 2014; Ohtani et al. 2014; Ohira et al. 2014). (3) Moreover, the pseudo-plastic yielding associated with hydrous mantle rocks is addressed as a reduction in the bulk value of the yield strength of oceanic lithosphere. More realistically, the water-weakening effect on hydrous mantle rocks likely only influences the friction coefficient, not the bulk yield strength of the oceanic lithosphere (Gerya et al. 2008).
The aim of this study is to resolve the issues that arose from our previous study and thereby reveal a possible scenario of the evolution of surface seawater in a plate-mantle system via numerical mantle convection simulations with water migration processes, including the computation of a finite amount of water at the surface, the water solubility limit of DHMS (including phase H), and the improved pseudo-plastic yielding of the oceanic lithosphere associated with hydrous mantle rocks. Using this modeling approach, we can resolve which hypothesis is preferable for understanding the survival time of surface seawater.
Methods/Experimental
Mantle convection simulations
The numerical modeling of global-scale mantle dynamics with water migration has been described by Nakagawa et al. (2015) and Nakagawa and Spiegelman (2017). This process is briefly described here. We assume the thermo-chemical multiphase mantle convection of a compressible and truncated anelastic approximation in a 2D spherical annulus geometry (Hernlund and Tackley 2008). The modeled mantle can be decomposed into depleted harzburgite and enriched basaltic material composed of two-phase transition systems, i.e., olivine-spinel-bridgmanite-post-perovskite and pyroxene-garnet-bridgmanite-post-perovskite, which are associated with changes in the basaltic material. A reference state for each phase transition system is computed as in Tackley (1996). All phase transition parameters can be found in Nakagawa and Tackley (2011). A partial melting effect is included to create an oceanic crust and to allow its segregation. The viscosity of the modeled mantle is dependent on temperature, pressure, water content, and yield strength and is determined via the following equations:
$$ {\eta}_d={A}_d{\sum}_{i,j=1}^{\mathrm{nphase}=3,4}\Delta {\eta}_{ij}^{\Gamma_{ij}f}\exp \left[\frac{E_d+p{V}_d}{RT}\right] $$
$$ {\eta}_w={A}_w{\left(\frac{C_w}{C_{w,\mathrm{ref}}}\right)}^{-1}{\sum}_{i,j=1}^{\mathrm{nphase}=3,4}\Delta {\eta}_{ij}^{\Gamma_{ij}{f}_j}\exp \left[\frac{E_w+p{V}_w}{RT}\right] $$
$$ {\eta}_Y=\frac{\sigma_Y\left(p,{C}_w\right)}{2\dot{e}} $$
$$ \eta ={\left(\frac{1}{\eta_d}+\frac{1}{\eta_w}+\frac{1}{\eta_Y}\right)}^{-1} $$
where Ad,w is the prefactor determined by T = 1600 K and the ambient pressure at the surface (the subscripts d and w indicate dry and hydrous mantle, respectively), Ed,w is the activation energy; Vd,w is the activation volume; Cw is the water content in the mantle; Cw,ref is the reference water content and is assumed to be 620 ppm (Arcey et al. 2005); the exponent of the prefactor, which is dependent on the water content, is based on the results of deformation experiments (Mei and Kohlstedt 2000) and is assumed to be 1; Γij is the phase function; f is the basaltic composition (varying from 0 to 1); R is the gas constant (8.314 J K mol−1); T is the temperature; p is the pressure; σY(p, Cw) is the yield strength of the oceanic lithosphere, which is a function of pressure and water content; \( \dot{e} \) is the second invariant of the strain rate tensor; and ∆ηij is the viscosity jump associated with the phase transition, which is assumed to increase by 30 times during the phase transition from ringwoodite or garnet to bridgmanite.
The yield strength of oceanic lithosphere is dependent on pressure and the mantle water content and is determined as follows:
$$ {\sigma}_Y\left(p,{C}_w\right)={C}_Y+\mu \left({C}_w\right)p;\mu \left({C}_w\right)=\min \left[1,{\left(\frac{C_w}{C_{w,\mathrm{ref}}}\right)}^{-1}\right]{\mu}_0 $$
where μ0 is the Byerlee-type friction coefficient. We include only the water-weakening effect caused by hydrated rocks (Gerya et al. 2008).
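A minimal sketch of how the effective viscosity could be assembled from these expressions. The numerical defaults are placeholders (not the values of Table 1), and the product of phase-transition viscosity jumps is collapsed into a single assumed factor:

```python
import math

R_GAS = 8.314  # gas constant, J K^-1 mol^-1

def effective_viscosity(T, p, Cw, strain_rate,
                        A_d=1.0, A_w=1.0, E_d=3.0e5, E_w=3.0e5,
                        V_d=5.0e-6, V_w=5.0e-6, Cw_ref=620.0e-6,
                        C_Y=1.0e7, mu0=0.2, phase_jump=1.0):
    """Harmonic combination of dry, hydrous, and plastic viscosities described above.
    All numerical defaults are placeholders; `phase_jump` stands in for the product of
    phase-transition viscosity jumps."""
    w = max(Cw, 1.0e-12) / Cw_ref            # guard against a perfectly dry cell
    eta_d = A_d * phase_jump * math.exp((E_d + p * V_d) / (R_GAS * T))
    eta_w = A_w * w**(-1) * phase_jump * math.exp((E_w + p * V_w) / (R_GAS * T))
    mu = min(1.0, w**(-1)) * mu0             # water-weakened friction coefficient
    sigma_Y = C_Y + mu * p                   # yield strength of the oceanic lithosphere
    eta_Y = sigma_Y / (2.0 * strain_rate)
    return 1.0 / (1.0 / eta_d + 1.0 / eta_w + 1.0 / eta_Y)

# e.g. a cold, damp lithospheric cell deforming at 1e-14 s^-1
print(effective_viscosity(T=600.0, p=1.0e9, Cw=100.0e-6, strain_rate=1.0e-14))
```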
Another important influence of hydrated mantle rocks is the variation in density caused by hydrated mantle minerals, as noted by Nakagawa et al. (2015):
$$ \rho \left({T}_{ad},p,C,{C}_w\right)=\rho \left({T}_{ad},C,p\right)\left(1-\alpha \left({T}_{ad},C,p\right)\left(T-{T}_{ad}\right)\right)-{\Delta \rho}_w{C}_w $$
where ρ(Tad, C, z) is the combined reference density between harzburgite and mid-ocean ridge basalt (MORB) compositions, with a 2.7% density difference, as shown in Fig. 1 (and a 3.6% density difference between olivine- and pyroxene-related phases); Tad is the adiabatic temperature in the mantle; ∆ρw is the density variation due to the water content; and Cw is the water content. Generally, the densities of hydrous minerals are less than those of dry minerals, but the value of ∆ρw is less constrained by high-pressure/high-temperature (high P-T) experiments; the densities of hydrous minerals are generally 0.1 to 1.0% less than those of dry mantle minerals (Richard and Iwamori 2010).
a Water solubility maps of the mantle. Upper mantle only (Iwamori 2007). b The full range of mantle temperatures and pressures. Lines indicate cold (blue), average (green), and hot (red) mantle geotherms for cases of 12 ocean masses of total water without (a) or with (b) DHMS effects. c Fitting for determining the water solubility of phase H included in b (Nishi et al. 2014; Ohtani et al. 2014; Ohira et al. 2014; Ohira et al. 2016)
To solve the equations of thermo-chemical mantle convection and to model the chemical composition, we use the numerical code of a finite-volume multigrid flow solver with tracer particles (StagYY; Tackley 2008). To determine the water migration processes, the tracer particle approach is used for the water advection process and degassing process via volcanic eruptions, and a numerical scheme involving a discrete migration velocity approximation is used for the dehydration process (Iwamori and Nakakuki 2013; Nakagawa et al. 2015; Nakao et al. 2016; Nakagawa and Spiegelman 2017). The dehydration process is modeled as the upward migration of excess water, which is defined as the difference between the actual water content and the water solubility at a certain temperature and pressure in a grid (see Fig. 2 in Nakagawa et al. 2015); this upward migration may have a velocity comparable to that of fluid movement in fully two-phase flow modeling (Wilson et al. 2014), which ranges from 0.5 to 50 m/year. Although this may affect the numerical resolution of the model, it should not have a significant influence on the results based on an assessment of the Appendix of Nakagawa and Iwamori (2017). Note that our model of water migration allows for migration only in the vertical direction, whereas in Wilson et al. (2014), the fluid component may migrate appreciably in the horizontal direction over several tens of kilometers near the corner of a mantle wedge. This conventional scheme seems to be valid for global-scale water circulation in a convecting mantle. In addition, we also assume the water partitioning of partially molten material, as in Nakagawa et al. (2015) and Nakagawa and Spiegelman (2017). The partition coefficient of water between solid mantle material and melt is set to 0.01 (Aubaud et al. 2008). In the numerical scheme of material transportation in Nakagawa and Iwamori (2017), two distinct compositional types of tracers are assumed, which can track the water migration for each composition separately; however, here, the chemical composition assigned to each tracer continuously varies with arbitrary melting (see Rozel et al. 2017; Lourenço et al. 2018). Hence, the water migration should be tracked using a bulk composition, which is given as:
$$ S\left(T,P,C\right)={S}_{\mathrm{Peridotite}}\left(T,P\right)\left(1-C\right)+{S}_{\mathrm{Basalt}}\left(T,P\right)C $$
where Speridotite and Sbasalt are the water solubilities of mantle peridotite and oceanic crust as functions of temperature and pressure, respectively, and C is the basaltic fraction. This represents a major difference between this study and Nakagawa and Iwamori (2017). The difference between the two different melting approaches is discussed in Appendix.
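The bulk-composition weighting is a one-line mixing rule; the end-member solubilities in this sketch are hypothetical values, not entries of the solubility maps in Fig. 1:

```python
def bulk_solubility(S_peridotite, S_basalt, C):
    """Bulk water solubility of a tracer with basaltic fraction C (expression above)."""
    return S_peridotite * (1.0 - C) + S_basalt * C

# hypothetical end-member solubilities at some (T, P): 0.5 wt.% and 1.2 wt.%
print(bulk_solubility(0.5, 1.2, 0.3))   # -> 0.71 wt.% for a 30% basaltic parcel
```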
The numerical setup used in this study is described as follows: 1024 (azimuthal) × 128 (radial) grid points with four million tracers are used to track the chemical composition, melt fraction, and mantle water content. The boundary conditions for temperature are fixed temperatures at the surface (300 K) and the core-mantle boundary (CMB; 4000 K). The initial conditions include an adiabatic temperature of 2000 K at the surface plus a thin thermal boundary layer (to explain the thermal boundary conditions at the surface and CMB), a basaltic composition of 20%, and a dry mantle (zero water content). The composition of the mantle is assumed to be uniform so that partial melting can create heterogeneous features in the mantle. The mantle can become hydrated up to the boundary conditions of the mantle water content (described in the "Computing the water ocean mass") via surface plate motion.
Water solubility maps
Figure 1 shows the maximum H2O content of the mantle peridotite system based on the work of Iwamori (2004, 2007) and Nakagawa et al. (2015) (Fig. 1a) and includes the stability fields of hydrous phases that are stable at lower mantle conditions (Fig. 1b). These data are used to compute the excess water migration in the convecting mantle. At the lower mantle condition, without DHMS solubility, the water solubility of the lower mantle minerals is set as 100 wt. ppm. With DHMS solubility, in addition to the existing DHMS (phases A to D), because a new hydrous mineral phase has recently been discovered to exist at lower mantle pressures, i.e., "phase H" (Komabayashi and Omori 2006; Nishi et al. 2014; Ohira et al. 2014; Walter et al. 2015; Ohtani 2015), we have added the stability field of DHMS including phase H to the water solubility map of the mantle peridotite system (Fig. 1c). Compared to the water solubility map for pressures of less than 28 GPa (Iwamori 2004, 2007), few experimental results are available for pressures of greater than 28 GPa to constrain the exact phase boundaries and the maximum amount of H2O in the peridotite system. The effect of aluminum on the stability of DHMS including phase H depends on the partitioning of Al among mantle minerals, including DHMS, phase H, and δ-AlOOH, which is currently poorly constrained. For instance, we assume that the stability of phase H in the natural peridotite system is similar to that of the pure MgSiO4H2 phase H (Ohira et al. 2016), which can be used to establish a minimum P-T stability range. Considering the bulk peridotite composition and the maximum modal amount of phase H in the peridotite system, we estimate that the maximum H2O content in the phase H-bearing P-T range is 8 wt.%. At pressures and temperatures higher than the stability field of DHMS with phase H, we set a maximum H2O content of 100 ppm below the solidus (e.g., Panero et al. 2015) and 0 ppm above the solidus, as in Nakagawa et al. (2015).
Computing the water ocean mass
In a previous study (Nakagawa and Iwamori 2017), we assumed a fixed boundary condition for mantle water migration in terms of the water ocean mass, i.e., a fixed value of 1.4 × 10²¹ kg (one ocean mass). This is not very realistic for understanding geologic records of the evolution of the water ocean associated with surface plate motion (e.g., Maruyama et al. 2013). To formulate a finite reservoir of surface seawater with a box model assumption, the mass of the water reservoir can be computed as:
$$ {X}_{w,s}={X}_{w,\mathrm{total}}-{X}_{w,m}\left({F}_R,{F}_H,{F}_{\mathrm{G}}\right) $$
where X_w,s is the mass of the water ocean, X_w,total is the total mass of water in the system, and X_w,m is the mass of mantle water as a function of regassing (F_R), dehydration (F_H), and degassing (F_G). The boundary condition of mantle water migration at the surface is described as follows:
$$ {C}_w\left(\mathrm{surface}\right)=\left\{\begin{array}{c}{C}_{w,\mathrm{sol}}\left(300\ K,\mathrm{surface}\right)\ if\ {X}_{w,s}>0\\ {}0\ if\ {X}_{w,s}=0\end{array}\right. $$
where C_w,sol(300 K, surface) is the water solubility of mantle rocks at the surface.
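Read as a box model, the two relations above amount to a simple bookkeeping rule evaluated at every step. A minimal sketch follows; the variable and function names are ours, and the mantle water mass X_w,m is assumed to be accumulated elsewhere from the fluxes F_R, F_H, and F_G:

```python
OCEAN_MASS = 1.4e21  # kg, one present-day ocean mass

def surface_water_mass(x_total, x_mantle):
    """Box-model mass balance: the surface ocean holds whatever water is not
    currently stored in the mantle (X_w,s = X_w,total - X_w,m), floored at zero."""
    return max(x_total - x_mantle, 0.0)

def surface_hydration_bc(x_surface, c_sol_surface):
    """Surface boundary condition on mantle hydration: surface cells may
    hydrate up to the surface solubility only while an ocean remains."""
    return c_sol_surface if x_surface > 0.0 else 0.0
```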
All physical parameters used in this study are listed in Table 1. This study examines a total of 20 cases that vary in the total amount of water in the planetary system (3 to 15 ocean masses) and the strength of the oceanic lithosphere. The parameters used in each case are listed in Table 2. First, we examine the effect of DHMS in conjunction with a large reservoir of surface seawater. Second, we examine two hypotheses for the total amount of water in the planetary system. Third, the strength of the oceanic lithosphere is investigated by varying the friction coefficient (μ0) from 0.1 to 0.6. This range overlaps the range of lithospheric strength inferred from island loading (0.2–0.75; Zhong and Watts 2013). The minimum friction coefficient is based on the value associated with stable plate-like behavior in a dry mantle convection model (~ 0.13; Moresi and Solomatov 1998). All numerical simulations are performed for approximately 4.6 billion years.
Table 1 Physical parameters
Table 2 Run summary
Chemical-hydrous structure with effects of DHMS
Figure 2 shows the chemical-hydrous structure of the mantle with and without the water solubility effects of DHMS at t = 4.6 billion years and for μ0 = 0.2, with an assumed total water budget of 12 ocean masses in the planetary system (exosphere plus interior). In both cases, extremely high-viscosity structures are found in the lower mantle due to the high efficiency of heat transfer of hydrous mantle convection (Nakagawa et al. 2015). Without DHMS solubility, the water solubility of the lower mantle minerals is set to 100 wt. ppm; in this case, the dry regions of the mantle transition zone correspond to upwelling plumes, which can be hotter than 2000 K. As shown in Fig. 6c of Nakagawa and Iwamori (2017), the high temperatures associated with mantle plumes correspond to water contents of zero. Incorporating the DHMS stability field into the water solubility maps (see Fig. 1a, b) reveals that the high-temperature mantle plumes may have a certain water content, unlike in the case without DHMS solubility limits. Another major difference between the cases with and without DHMS solubility is that the hydrated region at a depth of 660 km expands into the uppermost lower mantle in cold downwelling regions because of the high water content of DHMS (see arrows at the bottom of Fig. 2). Since we also assume that a density reduction occurs due to the hydration of the mantle minerals (Nakagawa et al. 2015), smaller-scale basaltic piles are found in the deep mantle compared to those generated by dry mantle convection (e.g., Nakagawa and Tackley 2011).
Compositional (left) and hydrous (right) structures at t = 4.6 billion years. Top: without phase H (old look-up database); bottom: with DHMS solubility effects (new look-up database)
To confirm how DHMS affects the mantle water content, Fig. 3 shows the 1D horizontally averaged mantle water content corresponding to the hydrous structure in Fig. 2. Without the effects of DHMS, the mantle transition zone is the main water absorber; with the effects of DHMS, an additional region of high water content extends down to a depth of ~ 1500 km. The water content in the mantle transition zone in these cases ranges from 0.1 to 1.0 wt.%. This range is similar to those inferred from the electrical conductivity structure of the mantle transition zone (e.g., Kelbert et al. 2009) and from seismic imaging (Houser 2016; Schmandt et al. 2014). Moreover, these results also seem to be consistent with the measurements of the water content of diamond inclusions in wadsleyite (Pearson et al. 2014). To confirm the cause of the water enhancement due to DHMS found in Fig. 3, Fig. 1a, b shows the water solubility maps with the 1D vertical temperature structures plotted on them. Comparing a cold temperature profile with the water solubility map in Fig. 1b indicates that the profile enters the phase H stability field at a depth of ~ 1500 km (corresponding to a pressure of approximately 60 GPa).
1D horizontally averaged mantle water content with and without DHMS solubility as a function of pressure (in GPa)
Water mass evolution and plate-like behavior: with and without DHMS
Figure 4 shows the temporal variations in the surface water mass and the surface mobility, computed as the ratio of the surface velocity to the root-mean-square velocity of the entire mantle, for the cases with and without DHMS shown in Fig. 2. After 4.6 billion years, the surface seawater amounts to 2.2 ocean masses without DHMS and 1.2 ocean masses with DHMS. Both cases start with a total of 12 ocean masses of water. This means that 9.8 ocean masses (without DHMS) and 10.8 ocean masses (with DHMS) are absorbed in the mantle. Therefore, when the DHMS effect is included, more water is partitioned into the deep mantle, which helps match the constraint on the present-day Earth's ocean mass. As shown in Figs. 1b and 3, more water can be absorbed in the uppermost lower mantle with DHMS than without, owing to phase D and/or phase H. However, as shown in Table 3 of Iwamori (2007), the maximum amount of water that can be stored is 9 to 11 ocean masses, assuming that the maximum H2O content in the lower mantle is 100 ppm and assuming an average geotherm corresponding to that beneath 60 Ma oceanic lithosphere with a mantle potential temperature of 1300 °C (Iwamori 2007). This suggests that DHMS plays a minor role in the mantle water mass, contributing at most ~ 1 ocean mass of water to the whole mantle, because the effects of DHMS appear only in the cold subducting regions.
Temporal variations in the mass of surface seawater with and without DHMS effects
Figure 5 shows the mantle water fluxes (ingassing, dehydration, and degassing) as a function of time with and without DHMS. The procedures for computing these water fluxes can be found in Nakagawa and Iwamori (2017) and Nakagawa and Spiegelman (2017). They are given as follows:
$$ {F}_R=\left\{\begin{array}{c}\underset{S}{\int }{\rho}_m{u}_z{C}_w dS\ \left(\mathrm{if}\ {u}_z<0\right)\\ {}0\ \left(\mathrm{if}\ {u}_z\ge 0\right)\end{array}\right. $$
$$ {F}_G={\dot{M}}_{\mathrm{erupt}}{C}_{w,\mathrm{erupt}} $$
$$ {F}_E=\underset{S}{\int }{\rho}_m{C}_{w,\mathrm{ex}}{u}_f dS $$
where ρ_m is the mantle density; \( {\dot{M}}_{\mathrm{erupt}} \) is the total mass of erupted material; C_w is the water content; u_z is the radial velocity; C_w,erupt is the water content of the erupted material, which is equivalent to the water content of the molten material; C_w,ex is the excess water content relative to the water solubility; and u_f is the conventional numerical form of the fluid migration velocity, computed as the ratio of the radial grid spacing to the time step size and not very sensitive to the grid spacing (see the Appendix of Nakagawa and Iwamori 2017). The subscripts of the fluxes are R = ingassing (regassing), G = degassing, and E = dehydration. The uptake of water from the surface reservoir by plate subduction is the dominant flux, with a magnitude on the order of 10¹³ kg/year; the two fluxes that release water at the surface are approximately an order of magnitude smaller than the uptake flux. Thus, the surface seawater is gradually reduced by the ingassing process accompanying surface plate motion. These water flux profiles without DHMS are not very different from those with the DHMS solubility field because the effective depth of the mantle water cycle associated with these fluxes is approximately 150 to 200 km (Nakagawa and Spiegelman 2017). This means that the DHMS solubility field has little influence on the physical processes of the mantle water cycle.
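As a rough illustration of how these flux integrals reduce to sums over grid cells, consider the following sketch. The discretization, array layout, and function name are assumptions for illustration and do not reproduce the actual diagnostics routine:

```python
import numpy as np

def water_fluxes(rho, u_z, c_w, c_w_excess, u_f, dS, m_dot_erupt, c_w_erupt):
    """Discrete estimates of the three water fluxes defined above, summed over
    the cells of a reference surface (all arrays share one shape; dS is the
    cell area). Fluxes are reported as positive magnitudes."""
    down = u_z < 0.0                                                      # downwelling cells only
    F_R = np.sum(rho[down] * np.abs(u_z[down]) * c_w[down] * dS[down])   # ingassing (regassing)
    F_G = m_dot_erupt * c_w_erupt                                         # degassing via eruption
    F_E = np.sum(rho * c_w_excess * u_f * dS)                             # dehydration of excess water
    return F_R, F_G, F_E
```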
Water flux diagnostics with and without DHMS solubility as a function of time. a Ingassing. b Dehydration. c Degassing
To assess the model sensitivities to the total amount of water in the planetary system, Fig. 6 shows the temporal variations in the surface seawater, mantle mass, and surface mobility with DHMS, which indicate that the surface seawater completely dries up before reaching the age of the Earth if the total amount of water is less than nine ocean masses. This implies that the total amount of water in the planetary system should preferably be about 12 ocean masses for the surface seawater to be consistent with that on the present-day Earth. This preferred amount of water coincides with the storage capacity of the silicate mantle, as mentioned above (Iwamori 2007), although the mantle temperature reproduced in this study (e.g., Fig. 1) is lower than the 60 Ma geotherm assumed by Iwamori (2007). This indicates that (1) DHMS in the lower mantle, which was ignored in Iwamori (2007), plays a minor role, and (2) the present-day mantle could contain a maximum amount of water due to continuous hydration (Fig. 6). As discussed in the "Introduction," the total amount of water in the Earth system has been estimated to be 1.2 to 3.3 ocean masses, based on the water contents of oceanic basalts (mid-ocean ridge basalts (MORB) and ocean island basalts (OIB)) and on the assumption that the entire mantle is sampled by MORB and OIB (Hirschmann 2006). While the upper mantle comprises the MORB source, OIB likely represents only a part of the crust-mantle cycling system (e.g., White and Hofmann 1982; Christensen and Hofmann 1994). The apparent difference in the estimated amount of water (i.e., 10–12 vs. 1.2–3.3 ocean masses) could be due to the regions that are not sampled by either MORB or OIB.
Temporal variations in the mass of a surface seawater, b mantle temperature, and c surface mobility varying with the total amount of water in the entire planetary system. For cases with 12 and 15 ocean masses, the mantle behaviors (water content and surface mobility) are the same due to the remaining surface seawater
The surface mobility, a diagnostic for assessing the occurrence of plate-like behavior computed as the ratio of the surface velocity to the root mean square of the convective velocity of the entire mantle (bottom of Fig. 6), appears not to change very much after all the surface seawater is absorbed into the deep mantle. This suggests that the surface seawater found on the Earth's surface is not strongly correlated with the occurrence of surface plate motion. To confirm this hypothesis, Fig. 7 shows the sensitivity of the water fluxes to the total amount of water, which indicates that the degassing flux is still active when the surface seawater is exhausted. This suggests that surface seawater could be regenerated by mantle degassing, so that the water-weakening effect remains valid, but that this water is immediately returned to the deep mantle via plate subduction. Therefore, surface plate motion will still be very active even if all surface seawater is exhausted.
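For reference, the mobility diagnostic used above and in Fig. 6 can be computed in a few lines. In this sketch we take the root mean square of the surface velocity field, which is one common convention; this choice and the function name are assumptions on our part, since the text only specifies the ratio of surface velocity to whole-mantle RMS velocity:

```python
import numpy as np

def surface_mobility(v_surface, v_mantle):
    """Mobility diagnostic: RMS surface velocity divided by the RMS convective
    velocity of the whole mantle. Values near (or above) 1 indicate mobile,
    plate-like surface motion; values near 0 indicate a stagnant lid."""
    v_surf_rms = np.sqrt(np.mean(np.asarray(v_surface) ** 2))
    v_rms = np.sqrt(np.mean(np.asarray(v_mantle) ** 2))
    return v_surf_rms / v_rms
```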
Water flux diagnostics as a function of time, varying with the total amount of water in the entire system. a Ingassing. b Dehydration. c Degassing
Water mass evolution: model sensitivity to the strength of the oceanic lithosphere
Figure 8 shows the temporal variations in both the surface and mantle water masses over 4.6 billion years for different friction coefficients (0.1, 0.2, 0.3, and 0.6) corresponding to different strengths of oceanic lithosphere. All of the cases shown here include the solubility effects of DHMS. For weaker oceanic lithosphere (μ0 = 0.1 and 0.2), the surface seawater converges to a value similar to the present-day Earth's ocean mass over 4 billion years. On the other hand, for stronger oceanic lithosphere (μ0 = 0.3 and 0.6), the amount of surface seawater still remains at approximately five ocean masses. This behavior can be explained by the mass-averaged temperature profile as a function of time shown in Fig. 8b. For weaker oceanic lithosphere, heat transport is more efficient due to the vigorous surface plate motion and the large number of plate boundaries (Nakagawa and Iwamori 2017). The mantle temperature is then cold enough to pass into the water solubility field of DHMS with phase H. In contrast, the stronger oceanic lithosphere cases have higher mantle temperatures, so the mantle temperature does not pass into the water solubility field of DHMS. To confirm this implication, Fig. 9 shows the 1D horizontally averaged mantle water content as a function of pressure. For weaker oceanic lithosphere, a high water content region is found at both the mantle transition zone and the uppermost lower mantle, corresponding to DHMS solubility. In contrast, for stronger oceanic lithosphere, water enhancement is only found in the mantle transition zone. This is caused by the difference in mantle temperatures shown in Fig. 8b and the heat transfer efficiency of plate-mantle dynamics.
Temporal variations in the mass of a surface seawater, b mantle temperature, and c surface mobility varying with the friction coefficient of the yield strength of the oceanic lithosphere. The total amount of water is fixed as 12 ocean masses
1D horizontally averaged mantle water content varying with the friction coefficient of the yield strength of the oceanic lithosphere as a function of pressure (in GPa)
The findings from this study are described as follows:
The presence of DHMS may have a limited impact on the evolution of surface seawater and does not substantially change the physical processes of the mantle water cycle. The physical mechanism of the mantle water cycle is strongly regulated by the choke point of the water solubility of the mantle minerals (Nakagawa and Spiegelman 2017). However, DHMS may play some role in expanding the region in which water transported by plate subduction is stored. Quantitatively, a water reservoir with a water content of 0.1 wt.% is located in the upper lower mantle due to DHMS solubility. To discuss the consistency between this amount and the actual hydrous conditions of the Earth, more observational and experimental measurements are required.
The survival time of surface seawater depends on the total amount of water in the entire planetary system and the strength of the oceanic lithosphere. For the surface reservoir to retain the present-day mass of surface seawater on the Earth's surface, the entire planetary system should contain ten ocean masses or more, although more accurate knowledge about mantle hydrous phases is required. The required total amount of water in the planetary system would be somewhat smaller if the oceanic lithosphere were slightly stronger, although a large amount of water (~ 7 ocean masses) would still be absorbed in the deep mantle. This finding, i.e., that a large amount of water is required for the early Earth, is consistent with an estimate derived from the experimental measurements of chondritic material and inferences based on the solidification of the surface magma ocean during early planetary formation (Marty 2012; Hamano et al. 2013). The total absorbed amount of water in the mantle is also consistent with the estimate obtained based on the material properties of mantle rocks along the realistic geotherm (Iwamori 2007). These results are consistent because the water budget in the mantle is primarily controlled by the water released by dehydration reactions rather than that released by degassing effects, which has been the main focus of simplified models of hydrous mantle evolution (Franck and Bounama 2001; Rüpke et al. 2004; Crowley et al. 2011; Sandu et al. 2011; Korenaga 2011).
DHMS and other potential mechanisms of water transport into the deep mantle
First, the stability fields of DHMS including phase H that are stable under lower mantle pressure conditions are poorly constrained, as described earlier in the "Water solubility maps" section. In this study, the stability field of phase H corresponds to that of Ohira et al. (2016) and features a minimum stability range of 70 to 80 GPa. If the stability field were extended to more than 100 GPa and to higher temperatures, more water could be transported via plate subduction, which may shorten the lifetime of surface seawater; in that case, even more water would be required in the system to achieve an Earth-like planet.
Second, other potential host minerals of water exist in the deep mantle, such as δ-AlOOH (Ohira et al. 2014) or pyrite-type FeOOH (Nishi et al. 2017). These minerals appear to be stable at typical lower mantle temperatures and pressures. However, the amount of water that these minerals can retain in the deep mantle remains unclear. The possible water solubilities of the lower mantle minerals, including phase H, δ-AlOOH, and pyrite-type FeOOH, which could potentially be accurately determined via experiments, could greatly influence the survival time of surface seawater. The inclusion of additional lower mantle minerals could result in a much shorter seawater survival time than that observed in the cases examined in this study. Note that the volume-averaged water content of the lower mantle is expected to be ~ 100 ppm or less (Karato 2011; Panero et al. 2015), but these estimates remain highly controversial and depend on identifying the possible host minerals of water in the deep lower mantle. Therefore, these estimates should be determined more quantitatively to resolve this issue.
Third, a hydrogen diffusion mechanism is also a significant process in the mantle water cycle (Richard et al. 2002, 2006) but is not resolved in this study. Numerical modeling suggests that hydrogen diffusion saturates the mantle with water on a much shorter timescale than models without it (Nakagawa 2017). A water-saturated mantle leads to a steady state in the evolution of surface seawater induced by the mantle water cycle. As a result, if hydrogen diffusion were included, the lifetime of surface seawater could be expected to be longer than that obtained in this study.
Moreover, the rheological properties of the hydrous lower mantle are highly uncertain because of the difficulty of experimental determination, even for dry mantle conditions (e.g., Girard et al. 2016). Under dry mantle conditions, the lower mantle minerals are expected to exhibit the shear localization mechanism with diffusion creep deformation (Girard et al. 2016). However, it is unclear whether this type of deformation mechanism occurs under hydrous lower mantle conditions. If the rheological properties of the lower mantle are similar to those of the upper mantle, more water could be transported into the deep mantle, and the lifetime of surface seawater may be shorter due to the more vigorous convective dynamics in the lower mantle.
Strength of the oceanic lithosphere
As indicated in Fig. 8, the friction coefficient has a great influence on the partitioning of water between the surface and the deep mantle reservoir; more surface seawater is absorbed for smaller friction coefficients (weaker oceanic lithosphere) because the mantle temperature is cold enough that both the mantle transition zone and the uppermost lower mantle can act as large water reservoirs (see Fig. 9). Hence, a large amount of water is required when the oceanic lithosphere is very weak; however, using the range of friction coefficients suggested by observational data analysis (up to 0.7; Zhong and Watts 2013), the total amount of water in the entire system could be reduced but would still be larger than that suggested by simple parameterized convection models (e.g., Sandu et al. 2011). It should be noted, however, that a weaker oceanic lithosphere is preferable for generating plate-like behavior in a dry mantle convection system (μ0 < 0.1; Moresi and Solomatov 1998; Crameri and Tackley 2015). Moreover, in this study, we incorporate the effect of "water weakening," which reduces the yield strength of the oceanic lithosphere under water-saturated conditions.
Total amount of water in the planetary system
In most geodynamic models with water circulation, the present-day partitioning of water between surface seawater and the deep mantle is assumed to range from 1:1 to 1:2 (Franck and Bounama 2001; Rüpke et al. 2004; Sandu et al. 2011; Korenaga 2011), which is consistent with the results of mineral physics experiments (Hirschmann 2006). In particular, Franck and Bounama (2001) also suggested that the lifetime of surface seawater may depend on the efficiency of regassing caused by plate subduction. In this study, however, the regassing flux is automatically regulated by the water solubility map, including the "choke point" at a depth of 150 to 200 km, where the regassing flux is reduced by up to a few orders of magnitude. The most important issue with those geodynamic models is that it is difficult for them to account for the water solubility of mantle minerals; the water cycle is assumed to involve only degassing for water release from the deep interior to the exosphere and regassing for water uptake from the surface to the deep mantle. In addition, degassing is assumed to occur only along mid-ocean ridges, whereas it should also occur along island arcs. Furthermore, the scaling relationship is based only on heat transfer in steady-state mantle convection, which is applicable only for cases with less vigorous mantle convection. Therefore, simplified mantle dynamics models with water circulation underestimate both the regassing flux and the degassing flux and thus require a smaller total amount of water in the entire planetary system (~ 3 ocean masses) than the amount inferred from early planetary formation estimates (5 to 15 ocean masses). To avoid these underestimates caused by the assumed scaling relationships for plate-mantle dynamics, we conducted a series of full mantle convection simulations with water migration, including actual water solubility maps for deep mantle minerals. The results indicate that a large amount of water is needed in the planetary system to achieve a consistent lifetime of surface seawater in the plate-mantle system. This finding is consistent with the estimates of the total amount of water in the entire planetary system inferred from early planetary formation processes.
However, this argument leads to issues related to the sources of the volatile components in the early Earth and their initial amounts (e.g., Albarede 2009). In Nakagawa and Spiegelman (2017), these issues did not affect the initial amount of water in the deep mantle, but the deep mantle should contain a certain amount of water, as illustrated in the water solubility maps of the mantle minerals. These issues remain in models of the early thermal and chemical state of the planetary mantle that do not start from a full magma ocean condition (a fully molten mantle is expected to have been present in the early Earth). To further resolve these issues, the initial state of mantle convection should be seriously examined in a future study that assumes initially fully molten mantle conditions (Lourenço et al. 2016) and then checks the consistency of these conditions with the theoretical estimates of the size of the water reservoir at the surface (Hamano et al. 2013).
Finally, the following evolution of surface seawater associated with a plate-mantle system is proposed: a certain amount of water (volatiles) is delivered to the planet before or after the magma ocean forms (potentially by a giant impact or late veneer accretion; see the review by Genda (2016)). After the magma ocean solidifies, surface seawater forms. The partition ratio of water between the surface and the deep mantle depends on the water solubility of the deep mantle, which is in turn dependent on the temperature and composition of the early Earth's mantle. Vigorous surface plate motion can transport surface water into the deep mantle; however, because of the high temperatures in the deep mantle, relatively little water stays there. When the mantle is sufficiently cooled by mantle convection, the water transported via plate subduction can be stored in the mantle transition zone and the uppermost lower mantle, thereby gradually reducing the volume of surface water to the present-day amount. Although this conceptual model may be modified somewhat by a better understanding of global-scale mantle water circulation and more accurate water solubility limits of the lower mantle mineral assemblage, it is overall a robust model.
In this study, the evolution of surface seawater in the plate-mantle system with the effects of deep mantle water solubility (e.g., DHMS) is investigated to resolve the controversial issue of the total amount of water in the entire planetary system. The conclusions are as follows:
The DHMS solubility field may have a small impact on the evolution of surface seawater but does not strongly affect the physical mechanism of the mantle water cycle, because the mantle water cycle operates mainly at depths of up to 150–200 km, which is much shallower than the DHMS solubility field.
In numerical simulations, the total amount of water in the entire planetary system should be at least 7 to 12 ocean masses, which is consistent with the water mass estimate based on early planetary formation (Marty 2012; Hamano et al. 2013) and with a petrological estimate that includes a realistic maximum H2O solubility of mantle material along the slab geotherm (after Iwamori 2007). The main reason such a large amount of water is required in the entire system is the incorporation of water solubility maps, which allows the dehydration process to be addressed in numerical mantle convection simulations.
DHMS including phase H may represent an additional water reservoir in the deep mantle and may affect the evolution of surface seawater on the present-day Earth. To better estimate the seawater evolution, more accurate constraints on the stability and amount of water stored in DHMS, phase H, and other hydrous minerals (e.g., δ-AlOOH and pyrite-type FeOOH) in the actual mantle are required.
In this study, we successfully computed the evolution of surface seawater in the plate-mantle system caused by hydrous mantle convection and incorporated the solubility fields of hydrous mineral phases in the deep mantle, such as DHMS. However, it must be noted that, due to the global scale of the model, it is very difficult to resolve the detailed physical and chemical processes occurring in the mantle wedge as in Nakao et al. (2016, 2018) and van Keken et al. (2011); therefore, significant improvements are still required to address the geological and petrological constraints on mantle water evolution, which is left for future investigation.
CMB:
Core-mantle boundary
DHMS:
Dense hydrous magnesium silicate
Albarede F (2009) Volatile accretion history of the terrestrial planets and dynamic implications. Nature 461:1227–1233
Appel PWU, Fedo CM, Moorbath S, Myers JS (1998) Recognizable primary volcanic and sedimentary features in a low-strain domain of the highly deformed, oldest known (~3.7–3.8 Gyr) Greenstone Belt, Isua, West Greenland. Terra Nova 10:57–62
Arcay D, Tric E, Doin MP (2005) Numerical simulations of subduction zones: effect of slab dehydration on the mantle wedge dynamics. Phys Earth Planet Int 149:133–153
Aubaud C, Hirschmann MH, Withers AC, Hervig RL (2008) Hydrogen partitioning between melt, clinopyroxene, and garnet at 3 GPa in a hydrous MORB with 6 wt. % H2O. Contrib Mineral Petrol 156:607–625
Christensen UR, Hofmann AW (1994) Segregation of subducted oceanic crust in the convecting mantle. J Geophys Res 99:19867–19884
Condie KC (2016) A planet in transition: the onset of plate tectonics on Earth between 3 and 2 Ga? Geosci Front. https://doi.org/10.1016/j.gsf.2016.09.001
Crameri F, Tackley PJ (2015) Parameters controlling dynamically self-consistent plate tectonics and single-sided subduction in global models of mantle convection. J Geophys Res Solid Earth 120:3680–3706. https://doi.org/10.1002/2014JB011664
Crowley J, Gérault M, O'Connell RJ (2011) On the relative influence of heat and water transport on planetary dynamics. Earth Planet Sci Lett 310:380–388. https://doi.org/10.1016/j.epsl.2011.08.035
Franck S, Bounama C (2001) Global water cycle and Earth's thermal evolution. J Geodyn 32:231–246
Genda H (2016) Origin of Earth's oceans: an assessment of the total amount history and supply of water. Geochem J 50:27–42
Gerya T, Connolly JAD, Yuen DA (2008) Why is terrestrial subduction one-sided? Geology 36:43–46. https://doi.org/10.1130/G24060A.1
Girard J, Amulele G, Farta R, Mohiuddin A, Karato S-I (2016) Shear deformation of bridgmanite and magnesiowüstite aggregates at lower mantle conditions. Science 351:144–147. https://doi.org/10.1126/science.aad3113
Hamano K, Abe Y, Genda H (2013) Emergence of two types of terrestrial planet on solidification of magma ocean. Nature 497:607–610. https://doi.org/10.1038/nature12163
Hernlund JW, Tackley PJ (2008) Modeling mantle convection in the spherical annulus. Phys Earth Planet Int 171:48–54
Hirschmann MH (2006) Water, melting, and the deep Earth H2O cycle. Annu Rev Earth Planet Sci 34:629–653. https://doi.org/10.1146/annurev.earth.34.031405.125211
Hopkins M, Harrison TM, Manning CE (2008) Low heat flow inferred from >4Gyr zircon suggests Hadean plate boundary interaction. Nature 456:493–496. https://doi.org/10.1038/nature07465
Houser C (2016) Global seismic data reveal little water in the mantle transition zone. Earth Planet Sci Lett 448:94–101. https://doi.org/10.1016/j.epsl.2016.04.018
Iwamori H (2004) Phase relations of peridotites under H2O saturated conditions and ability of subducting plates for transportation of H2O. Earth Planet Sci Lett 227:57–71. https://doi.org/10.1016/j.epsl.08.013
Iwamori H (2007) Transportation of H2O beneath the Japan arcs and its implications for global water circulation. Chem Geol 239:182–198. https://doi.org/10.1016/j.chemgeo.2006.08.011
Iwamori H, Nakakuki T (2013) Fluid processes in subduction zones and water transport to the deep mantle. In: Karato S-i (ed) Physics and chemistry of the deep mantle. John Wiley & Sons, Ltd., Oxford, pp 372–391
Karato S, Wu P (1993) Rheology of the upper mantle: a synthesis. Science 260:771–778
Karato S-i (2011) Water distribution across the mantle transition zone and its implications for global material circulation. Earth Planet Sci Lett 301:413–423. https://doi.org/10.1016/j.epsl.2010.11.038
Kelbert A, Schultz A, Egbert G (2009) Global electromagnetic induction constraints on transition-zone water content variations. Nature 460:1003–1006. https://doi.org/10.1038/nature08257
Komabayashi T, Omori S (2006) Internally consistent thermodynamics data set for dense hydrous magnesium silicates up to 35 GPa, 1600C: implications for water circulation in the Earth's deep mantle. Phys Earth Planet Int 156:89–107. https://doi.org/10.1016/j.pepi.2006.02.002
Korenaga J (2011) Thermal evolution with a hydrating mantle and the initiation of plate tectonics in the early Earth. J Geophys Res 116:B12403. https://doi.org/10.1029/2011JB008410
Korenaga J, Karato S-I (2008) A new analysis of experimental data on olivine rheology. J Geophys Res 113:B02403. https://doi.org/10.1029/2007JB005100
Lourenço D, Rozel A, Tackley PJ (2016) Melting and crustal production helps plate tectonics on Earth-like planets. Earth Planet Sci Lett 439:18–28. https://doi.org/10.1016/j.epsl.2016.01.024
Lourenço DL, Rozel A, Gerya TV, Tackley PJ (2018) Efficient cooling of rocky planets by intrusive magmatism. Nat Geosci 11:322–327. https://doi.org/10.1038/s41561-018-0094-8
Marty B (2012) The origins and concentrations of water, carbon, nitrogen and noble gases on Earth. Earth Planet Sci Lett 313-314:56–66. https://doi.org/10.1016/j.epsl.2011.10.040
Maruyama S, Ikoma M, Genda H, Hirose K, Yokoyama T, Santosh M (2013) The naked planet Earth: most essential pre-requisite for the origin and evolution of life. Geosci Front 4:141–165
Maruyama S, Komiya T (2011) The oldest pillow lavas, 3.8–3.7 Ga from the Isua supracrustal belt, SW Greenland: plate tectonics already begun by 3.8 Ga. J Geogr 120:869–876
Maruyama S, Okamoto K (2007) Water transportation from the subducting slab into the mantle transition zone. Gondwana Res 11:148–165. https://doi.org/10.1016/j.gr.2006.06.001
Mei S, Kohlstedt DL (2000) Influence of water on plastic deformation of olivine aggregates 1. Diffusion creep regime. J Geophys Res 105:21457–21469
Mojzsis SJ, Harrison TM, Pidgeon RT (2001) Oxygen-isotope evidence from ancient zircons for liquid water at the Earth's surface 4,300 Myr ago. Nature 409:178–181
Moresi L, Solomatov V (1998) Mantle convection with a brittle lithosphere: thoughts on the global tectonic styles of the Earth and Venus. Geophys J Int 133:669–682
Nakagawa T (2017) On the numerical modeling of the deep mantle water cycle in global-scale mantle dynamics: the effects of the water solubility limit of lower mantle minerals. J Earth Sci 28:563–577. https://doi.org/10.1007/s12583-017-0755-3
Nakagawa T, Iwamori H (2017) Long-term stability of plate-like behavior caused by hydrous mantle convection and water absorption in the deep mantle. J Geophys Res Solid Earth 122. https://doi.org/10.1002/2017JB014052
Nakagawa T, Nakakuki T, Iwamori H (2015) Water circulation and global mantle dynamics: insight from numerical modeling. Geochem Geophys Geosyst 16:1449–1464. https://doi.org/10.1002/GC005071
Nakagawa T, Spiegelman MW (2017) Global-scale water circulation in the Earth's mantle: implications for the mantle water budget in the early earth. Earth Planet Sci Lett 464:189–199. https://doi.org/10.1016/j.epsl.2017.02.010
Nakagawa T, Tackley PJ (2011) Effects of low-viscosity post-perovskite on thermo-chemical mantle convection in a 3-D spherical shell. Geophys Res Lett 38:L04309. https://doi.org/10.1029/2010GL046494
Nakao A, Iwamori H, Nakakuki T (2016) Effects of water transportation on subduction dynamics: roles of viscosity and density reduction. Earth Planet Sci Lett 454:178–191. https://doi.org/10.1016/j.epsl.2016.08.016
Nakao A, Iwamori H, Nakakuki T, Suzuki YJ, Nakamura H (2018) Role of hydrous lithospheric mantle in deep water transportation and subduction dynamics. Geophys Res Lett. https://doi.org/10.1029/2017GL076953
Nishi M, Irifune T, Tsuchiya J, Tange Y, Nishihara Y, Fujino K, Higo Y (2014) Stability of hydrous silicate at high pressures and water transport to the deep lower mantle. Nat Geosci 7:224–227. https://doi.org/10.1038/NGEO2074
Nishi M, Kuwayama Y, Tsuchiya J, Tsuchiya T (2017) The pyrite-type high-pressure form of FeOOH. Nature 547:205–208. https://doi.org/10.1038/nature22823
Ohira I, Ohtani E., Kamada S., Hirao N (2016) Formation of phase H-δ-AlOOH solid solution in the lower mantle, Goldschmidt Conference Abstract, 2346
Ohira I, Ohtani E, Sakai T, Miyahara M, Hirao N, Ohishi Y, Nishijima M (2014) Stability of a hydrous δ-phase, AlOOH-MgSiO2(OH)2, and a mechanism for water transport into the base of lower mantle. Earth Planet Sci Lett 401:12–17. https://doi.org/10.1016/j.epsl.05.059
Ohtani E (2015) Hydrous minerals and the storage of water in the deep mantle. Chem Geol 418:6–15. https://doi.org/10.1016/j.chemgeo.2015.05.005
Ohtani E, Amaike Y, Kamada S, Sakamaki T, Hirao N (2014) Stability of hydrous phase H MgSiO4 under lower mantle conditions. Geophys Res Lett. https://doi.org/10.1002/2014GL061690
Ohtani E, Toma M, Litasov K, Kubo T, Suzuki A (2001) Stability of dense hydrous magnesium silicate phases and water storage capacity in the transition zone and lower mantle. Phys Earth Planet Int 124:105–117. https://doi.org/10.1016/S0031-9201(01)00192-03
Panero WR, Pigott JS, Reaman DM, Kabbes JE, Liu Z (2015) Dry (Mg,Fe)SiO3 perovskite in the Earth's lower mantle. J Geophys Res Solid Earth 120. https://doi.org/10.1002/2014JB011397
Pearson DG, Brenker FE, Nestola F, McNeill J, Nasdala L, Hutchison MT, Mateev S, Mather K, Silversmit G, Schmitz S, Vekemans B, Vincze L (2014) Hydrous mantle transition zone indicated by ringwoodite included within diamond. Nature 507:221–224. https://doi.org/10.1038/nature13080
Richard G, Bercovici D, Karato S-I (2006) Slab dehydration in the Earth's mantle transition zone. Earth Planet Sci Lett 251:156–167. https://doi.org/10.1016/j.epsl.2006.09.006
Richard G, Monneraeu M, Ingrin J (2002) Is the transition zone an empty water reservoir? Influence from numerical model of mantle dynamics. Earth Planet Sci Lett 205:37–51
Richard GC, Iwamori H (2010) Stagnant slab, wet plumes and Cenozoic volcanism in East Asia. Phys Earth Planet Int 183:280–287. https://doi.org/10.1016/j.pepi.2010.02.009
Rozel A B, Golabek G J, Jain C, Tackley P J, Gerya T V (2017) Continental crust formation on early Earth controlled by intrusive magmatism. Nature 545: 332–335. doi: https://doi.org/10.1038/nature22042
Rüpke LH, Morgan JP, Hort M, Connolly JAD (2004) Serpentine and the subduction zone water cycle. Earth Planet Sci Lett 223:17–34
Sandu C, Lenardic A, McGovern P (2011) The effects of deep water cycling on planetary thermal evolution. J Geophys Res 116:B12404. https://doi.org/10.1029/2011JB008405
Schmandt B, Jacobsen SD, Becker TW, Liu Z, Dueker KG (2014) Dehydration melting at the top of the lower mantle. Science 344:1265–1268. https://doi.org/10.1126/science.1253358
Tackley PJ (1996) Effects of strongly variable viscosity on three-dimensional compressible convection in planetary mantles. J Geophys Res Solid Earth 101:3311–3322
Tackley PJ (2008) Modelling compressible mantle convection with large viscosity contrast in a three-dimensional spherical shell using the yin-yang grid. Phys Earth Planet Int 171:7–18
Valley JW, Cavosie AJ, Ushikubo T, Reinhard DA, Lawrence DF, Larson DJ, Clifton PH, Kelly TF, Wilde SA, Moser DE, Spicuzza MJ (2014) Hadean age for a post-magma-ocean zircon confirmed by atom-probe tomography. Nat Geosci 7:219–223
van Keken PE, Hacker BR, Syracuse EM, Abers GA (2011) Subduction factory: 4. Depth-dependent flux of H2O from subducting slabs worldwide. J Geophys Res 116:B01401. https://doi.org/10.1029/2010jb007922
Walter MJ, Thomson AR, Wang W, Lord OT, Ross J, McMahon SC, Baron MA, Melekhova E, Kleppe AK, Kohn SC (2015) The stability of hydrous silicates in Earth's lower mantle: Experimental constraints from the systems MgO–SiO2–H2O and MgO–Al2O3–SiO2–H2O. Chem Geol 418:16–29. https://doi.org/10.1016/j.chemgeo.2015.05.001
White WM, Hofmann AW (1982) Sr and Nd isotope geochemistry of oceanic basalt and mantle evolution. Nature 296:821–825
Wilson CR, Spiegelman M, van Keken PE, Hacker BR (2014) Fluid flow in subduction zones: the role of solid rheology and compaction pressure. Earth Planet Sci Lett 401:261–274. https://doi.org/10.1016/j.epsl.2014.05.052
Xie S, Tackley PJ (2004) Evolution of U–Pb and Sm–Nd systems in numerical models of mantle convection. J Geophys Res 109:B11204. https://doi.org/10.1029/2004JB003176
Yamazaki D, Karato S (2001) Some mineral physics constraints on the rheology and geothermal structure of Earth's lower mantle. Am Mineral 86:385–391
Zhong S, Watts AB (2013) Lithospheric deformation induced by loading of the Hawaiian Islands and its implications for mantle rheology. J Geophys Res Solid Earth 118:6025–6048. https://doi.org/10.1002/2013JB010408
The authors thank Guillaume Richard and an anonymous reviewer for significantly improving the original manuscript. We also thank Tomoeki Nakakuki for providing a numerical module of water migration for mantle convection simulations; Masayuki Nishi for providing information about the water contents of deep mantle hydrous phases; Marc Spiegelman for the fruitful discussion of the evolution of finite volumes of water ocean during TN's sabbatical leave to Lamont-Doherty Earth Observatory, Columbia University, New York City; Paul Tackley for providing his numerical code for mantle convection; and Bjorn Mysen for an invitation to write this manuscript based on our presentation at the 2017 JpGU-AGU joint meeting. All numerical computations were performed with the Data Analyzer (DA) system in the JAMSTEC.
This work was supported by the JSPS KAKENHI Grant Numbers 16K05547 and 18H04467 and by the FLAGSHIP2020 MEXT within the CBSM2 Project "Structure and Properties of Materials in Deep Earth and Planets".
All simulation data are available upon request to the corresponding author (Takashi Nakagawa), but the simulation code used in this study is the property of the original developer (Paul Tackley) and not open to the public.
Department of Mathematical Science and Advanced Technology, Japan Agency for Marine-Earth Science and Technology, 3173-25, Showa-machi, Yokohama, 236-0001, Japan
Takashi Nakagawa
Department of Earth Sciences, University of Hong Kong, Pokfulam Road, Hong Kong, Hong Kong
Department of Solid Earth Geochemistry, Japan Agency for Marine-Earth Science and Technology, 2-15, Natsushima-cho, Yokosuka, 237-0061, Japan
Hikaru Iwamori
Department of Earth and Planetary Sciences, Tokyo Institute of Technology, 2-12-1, Ookayama, Meguro, Tokyo, 152-8551, Japan
Ryunosuke Yanagi
Earthquake Research Institute, The University of Tokyo, 1-1-1, Yayoi, Bunkyo, Tokyo, 113-0032, Japan
Hikaru Iwamori & Atsushi Nakao
Atsushi Nakao
TN designed the entire study. HI, RY, and AN provided the parameterized water solubility of hydrated mantle rocks. TN developed a numerical model and carried out all the numerical simulations. TN and HI wrote the manuscript. All authors interpreted the results of the numerical simulations and read and approved the final manuscript.
Correspondence to Takashi Nakagawa.
Comparison between Nakagawa and Iwamori (2017) and the current study
All of the results in this study are very different from those presented in Nakagawa and Iwamori (2017). In particular, as shown in Fig. 6, the sudden change in the mantle water content appears to occur on a shorter time scale than in Nakagawa and Iwamori (2017). As mentioned in the "Methods/Experimental" section, the major difference between the numerical codes used in the two studies is the use of arbitrary continuous melting (this study) versus discretized melting (previous study). Figure 10 shows a comparison of the mantle water content field and the diagnostics (surface seawater, mantle water mass, and mantle temperature) between the numerical code used in this study and that used in Nakagawa and Iwamori (2017), without DHMS solubility. Unfortunately, the original version of the numerical code in Nakagawa and Iwamori (2017) also included some incorrect treatments in the numerical procedure for the dehydration reaction, related to the data transfer between numerical domains in the MPI implementation and to the resetting of the array of excess water produced by the dehydration reaction. These errors are corrected in this study. Following these corrections, both codes yield similar water evolution results. With the discretized melting approach, the mantle temperature is expected to be much hotter than with the arbitrary melting approach, which allows continuous compositional variations in each tracer. The discretized melting approach depends strongly on the heterogeneous distribution of tracer particles resulting from their migration during the simulation, determines the degree of melting with a probabilistic approach, and tends to indicate a smaller degree of melting than the arbitrary melting approach. Due to this difference, heat transport caused by melt migration is more efficient in the arbitrary melting approach; therefore, different mantle temperature profiles are found in each case. The corresponding mantle water content profiles differ somewhat for up to a few billion years, but the final results obtained at 4.6 billion years are not very different.
a Mantle water content at 3.1 billion years computed from two different numerical codes without DHMS solubility (left: current version of the numerical code; right: the version used in Nakagawa and Iwamori (2017), corrected for the issues with the dehydration reaction) and their diagnostics (b evolution of surface seawater, c mantle water mass, and d mantle temperature) as a function of time. In the legend of the diagnostics plots, "new" denotes the numerical code used in this study and "old" denotes the numerical code used in Nakagawa and Iwamori (2017)
Nakagawa, T., Iwamori, H., Yanagi, R. et al. On the evolution of the water ocean in the plate-mantle system. Prog Earth Planet Sci 5, 51 (2018). https://doi.org/10.1186/s40645-018-0209-2
Ocean mass
Dense hydrous magnesium silicate (DHMS)
Plate motion
Mantle dynamics
5 editions of Construction and validation of a film slide test to measure area of high school physics found in the catalog.
Construction and validation of a film slide test to measure area of high school physics
Harvey John Goehring
by Harvey John Goehring
Physics -- Study and teaching (Secondary) -- Audio-visual aids
Statement Harvey John Goehring.
Pagination vii, 187 leaves
Data Translation's DT series of high-accuracy, dynamic signal acquisition modules for USB are suitable for precision measurements with microphones, accelerometers, and other transducers with a wide dynamic range. Common applications include audio, acoustic, and vibration testing. The DT can be combined with the ready-to-measure VIBpoint Framework to create a fast-Fourier-transform. Top Dot Physics Posts: In case you haven't noticed, it's about the end of the year. As is the tradition of my people (bloggers), I will now describe my top posts of the year.
Compliments of Patrick Haney Horn High School Mesquite ISD Mesquite, Texas A accelerate a 1, kg car at a rate of the bicycle's acceleratio 3. A ball moving at 30 m/s has a momentum of 15 kg m/s. The mass of the ball is – 45 kg B 15 kg C kg D kg 4. A mechanic used a hydraulic lift to raise a 12, N car m above the floor of. software All software latest This Just In Old School Emulation MS-DOS Games Historical Software Classic PC Games Software Library. Internet Arcade. Top Full text of "A Laboratory Manual of Physics for Use in High Schools" See other formats.
Physics grades will be awarded based on the student's demonstrated achievement in physics. Students who want to do well in Physics need to keep up with the work, ask questions, and take advantage of the help available. Students are encouraged to work on a project of interest to them, either in addition to or instead of the regular class work. The measurement must be relevant and appropriate to represent the property. Ex. Measuring height with a tape measure, measuring high school graduation with standardized testing scores or measuring performance on an assessment with rubrics.
Elements of crimes and basic procedure
Molecular Determinants of Radiation Response
Heterogeneous enantioselective hydrogenation
The dawnattack
The Arab mind.
Pluckings from the tree of Smarandache
The Sounds At Rivers Edge
Souvenir of art engraving.
Serial publications in the library
Fine Irish paintings and drawings from various sources.
Mike Fink
500 Self-Portraits
guide to the evaluation of educational experiences in the armed services
Construction and validation of a film slide test to measure area of high school physics, by Harvey John Goehring
High School Introductory Physics Test The spring high school Introductory Physics test was based on learning standards in the Introductory Physics content strand of the Massachusetts Science and Technology/Engineering Curriculum Framework ().
These learning standards appear on pages 74–77 of the Framework, which is available on the Department website at File Size: 3MB. Age 16 - 19 Physics At A Glance. Click below to select a topic. Astrophysics Atomic physics Electricity and magnetism Electronics General physics: Mechanics Medical physics Nuclear physics Optics Properties of matter: Quantum physics Relativity Sound Thermal physics Wave properties.
Start studying PSYC - Ch. Learn vocabulary, terms, and more with flashcards, games, and other study tools. a test developer assembles a group of high school science teachers to evaluate and compare the themes and wording of the items on a science test with the intended objectives.
Jones has developed a new test to measure. Free High School Physics practice problem - High School Physics Diagnostic Test 2. Includes score reports and progress tracking. Create a free account today. (a) What is the density of the physics book if it weighs 21 N.
1 kg/m3 (b) Find the pressure that the physics book exerts on a desktop when the book lies face up. 2 Pa (c) Find the pressure that the physics book exerts on the surface of a desktop when the book is balanced on its spine. 3 Pa. Guidelines for High School Physics Programsin Because the AAPT High School Committee has physics curriculum and instruction as its major con-cern, this committee completed the current revision of the guidelines in The AAPT Committee on Physics in High Schools acknowledges and thanks the.
Physics Builder for Admission and Standardized Tests (Test Preps) by The Editors of REA () on *FREE* shipping on qualifying offers. Physics Builder for Admission and Standardized Tests (Test Preps) by The Editors of REA ()5/5(1).
High school might have been a while ago for some of us but could you still pass a high school biology exam. Take the quiz to find out. apost. Only 1 In 50 People Can Pass This High School Biology Test.
Share on Facebook. High school might have been a while ago for some of us but could you still pass a high school biology exam?. Physics Stack Exchange is a question and answer site for active researchers, academics and students of physics.
Basic high school physics question, on measuring voltage from CRO screen: Ask Question Perhaps the book is suggesting that the meter is on an AC setting and will show the RMS voltage.
This is $\frac{}{\sqrt2} mV$ for a. We did this in high school. Frustrating part was we were in the middle of a section on acceleration and momentum, and then the teacher just announces, "Okay, for our project, we're going to build a bridge!" We had no idea why, but it sounded like fun.
The winner's bridge was done by a student whose father was also a civil engineer. Vasquez High School -- Physics -- Test #1 -- points Write TRUE if the statement is true OR write the word that substitutes for the underlined word that would make it true. Writing false only earns partial credit.
Three points each. _____ 1) The metric system is File Size: 90KB. High School It is the policy of the State Board of Education and a priority of the Oregon Department of Education that there will be no discrimination or harassment on the grounds of race, color, sex, marital status, religion, national origin, age or handicap in any educational programs, activities, or employment.
Physics Test 1 - Intro to Physics Name: Date: 1. A B C. the mass of a book D. the area of a desk top 4.
A reasonable height for a chalk tray above the oor is closest to The approximate height of a high school physics student is A.
m B. m C. m D. m File Size: KB. To characterize the evolution of student understanding better than what is possible by pre-and post-testing, we posed simple conceptual questions several times per week to separate, randomly selected groups of introductory physics students.
This design avoids issues of retesting and allows for tracking of student understanding of a given topic during the course with a resolution on the order Cited by: Chapter 1 Test.
This test will cover speed, velocity, acceleration, conversions, motion problems with average velocity, motion problems with constant velocity, motion problems with constant acceleration in a straight line (kinematics), significant figures, graphing, Galileo, and all of the reading and homeworks from Chapter 1 in Conceptual Physics by Paul Hewitt.
Questions []. S.I. unit of magnetic flux is (a) tesla (b) oersted (c) weber (d) gauss. A body of mass m is moving towards east and another body of equal mass is moving towards north. A student performs an experiment and must measure the lengths of four different objects: a textbook, a pencil, a cup, and a piece of bread.
There are so many units of measuring length of an object like centimeter, millimeter, kilometer etc. The weight of a typical high school physics student is closest to N; N; N; 60 N; The work done in lifting an apple one meter near Earth's surface is approximately 1 J; J; J; 1, J; The total work done in lifting a typical high school physics textbook a vertical distance of meter is approximately J; View Test Prep - AP Physics C Test prep 8 from SCIENCE at Central High School.
A particle of mass m moves in a conservative force eld described by the potential energy function U(r) =Author: Georgerussel.
First of all, Good luck for your exams, and keep in mind, if you have studied with focus and along with enjoying the subject, scoring is not a big deal. I loved Chemistry in school, and never had any problem in scoring less than 99% in any exam.
Because the total energy of the system decreases, the skater will not be able to use as much energy to get up the track and will not get back up to her original starting height on the far side of the track. Student B: As the skater moves along the track, friction will transform some of her kinetic and potential energy into thermal energy, but the total energy of the skater-track system still.A volumetric cylinder that is used to measure the volume of the liquid has a bump on the cylinder.
C. An ammeter shows a current of A before we link it to the circuit/5.Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world on YouTube.
Talk.Common Text Elements
Discussion on: "In science and engineering, superscript and subscript is very common. It is semantic, not a formatting issue."
Radomir: However, the terms used in science can always be read out loud, so they have their, somewhat more elaborate, replacements ("m²"="square meter", "H₂O"="Dihydrogen Monoxide", "∑ₓ₌₀ⁿx"="sum of numbers from 0 to n", "⌬"="benzene ring", etc.) -- the symbols are only shortcuts created for use inside formulas.
Gregor: This is partly true, but a) some symbols like "x₂" are simply read as "x-two", which is not acceptable in writing, and b) scientific writing is governed by rules of conduct, which often consider the spelling-out option unacceptable. The point is: are non-programmers willing to use wikis, or do they walk away, back to their "Microsoft Word + Adobe PDF" preference?
Radomir: From my own experience (mainly math and physics, sorry), the symbols are used either when you refer to them as used in formulas and figures, or when you're lazy and using jargon that everyone should be familiar with anyway. The latter is often considered bad style, but it depends on context, of course -- it can be a real time saver and allows you to get the idea across fast. This is important in wikis.
Then, if you already have the formulas, it is a good idea to use exactly the same technique to create the symbols when you refer to them -- so that they look the same (LaTeX is going to render the formulas with its special fonts, which are very different from the browser's, for example). It also saves the users learning new markup and thinking of several ways to achieve the same effect in different contexts -- they can just copy and paste.
I agree that a wiki should provide features needed by its community -- for a scientific wiki, a LaTeX or very similar extension is simply a must. I even listed a proposed markup on HintsOnExtending -- the same as is used in LaTeX for embedding math: $x_2$. I believe it is at least as easy to type as x2 and has several additional benefits: the symbols appear exactly as in formulas, being distinguished from the main text; the markup is identical to that in the formulas; and you can even copy parts of the text directly to/from your LaTeX sources. Plus, the risk of any collisions or surprising effects is much lower.
On the other hand, a wiki dedicated to different topics, like gardening, will be much better off with just a simple filter converting commonly used phrases, like m2->m² or h2o->H₂O. As long as they are readable in the source form, the interoperability doesn't really suffer.
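For illustration, a minimal sketch of such a phrase filter in Python; the phrase table and the function name are made up for this example, not taken from any particular wiki engine:

{{{
import re

# Hypothetical phrase table: source form -> typographic replacement.
PHRASES = {
    "m2": "m²",
    "m3": "m³",
    "h2o": "H₂O",
    "co2": "CO₂",
}

# Match whole words only (case-insensitively), so "farm2market" is left alone.
_PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(k) for k in PHRASES) + r")\b",
    re.IGNORECASE,
)

def filter_phrases(text):
    # Replace each whole-word match with its typographic form.
    return _PATTERN.sub(lambda m: PHRASES[m.group(1).lower()], text)

print(filter_phrases("A 20 m2 garden needs some h2o every day."))
# prints: A 20 m² garden needs some H₂O every day.
}}}

As long as such a filter only touches whole words, the source text stays readable and interoperability does not suffer.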
I always like to bring up the Sensei's Library as an example of a wiki that has its markup adapted to suit the needs of its community. One can hardly imagine such an extension in a core specification for all the wikis in the world.
Many markup languages to which Creole is going to be translated (mostly HTML) expect some degree of semantic (or at least screen-reader-friendly) markup, like indicating the LanguageOfText and marking AbbreviationsAndAcronyms. On the other hand, if Creole is going to support all useful HTML features, I'd rather use HTML directly, since its syntax is more consistent.
What about proposing in Additions to reserve single angle brackets for HTML? Typically, it would be filtered to avoid abuses.
-- YvesPiguet, 2007-Sep-20
Yes, I tried to use neutral language as much as possible. Note however, that there are use cases of both denoting the language and using abbreviations in wikis, so I think it's worth our attention -- possibly to be dismissed, but at least discussed before that.
As to mixing HTML with wiki, I have two objections, on two different levels:
The > and < characters will appear in any technical text randomly on their own, both surrounded by digits and by letters. Sure, in a perfect world they would all be in a {{{...}}} or $$...$$, but wikis are not perfect worlds. Reserving a single character without any additional context (like "only at the beginning of a line") is imho a very bad idea.
Once you allow HTML (you can whitelist it, so it's fairly safe), all advanced users will be using HTML and the page will become read-only for new wiki users -- the exact opposite of the wikicreole goals.
-- RadomirDopieralski 2007-Sep-20
So if you have a markup suggestion... If it could be some kind of generic named attribute, which could also be used with images, links and other elements, that would be nice. I know there is resistance against names, but at some point we're bound to run out of characters. Or should we use Unicode? :-)
You don't really need words for languages -- just the ISO-639 codes, possibly distinguished somehow from normal text (uppercase? enclosed in braces? preceded or followed by some symbol?). It would switch the language until the end of paragraph (or list, or table cell). When alone on a line, it would switch the default language, to the next such marker or end of the page. Since the list of all language symbols is known, there is really no need for special markup.
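As a rough sketch of that heuristic (the code list, function name and exact scoping rules here are made up for illustration; a real engine would load the full ISO-639 table and work on parsed paragraphs):

{{{
# Tiny stand-in for the ISO-639 code table; a real engine would load all codes.
LANG_CODES = {"en", "de", "fr", "pl", "cs"}

def mark_languages(lines, default="en"):
    # Yield (language, text) pairs for each content line.
    # A known code alone on a line switches the default language until the
    # next such marker; a known code at the start of a line switches the
    # language for that line only.
    current = default
    for line in lines:
        stripped = line.strip()
        if stripped.lower() in LANG_CODES:
            current = stripped.lower()
            continue
        head, _, rest = stripped.partition(" ")
        if head.lower() in LANG_CODES and rest:
            yield head.lower(), rest
        else:
            yield current, line

for lang, text in mark_languages(["Hello world", "de", "Hallo Welt"]):
    print(lang, "|", text)
# en | Hello world
# de | Hallo Welt
}}}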
Abbreviations and acronyms also don't really need any special markup, as described on AbbreviationsAndAcronyms.
Actually, come to think of it, we don't need any special markup in Creole to solve the issues -- both of them depend solely on the underlying wiki engine implementing the heuristics behind them.
Still, mentioning them seems worthwhile.
-- RadomirDopieralski, 2007-Sep-20
Some common ISO codes are frequent words; for instance "it" in English, or "en" and "es" in French, etc. Uppercase might not be enough, because some people like shouting.
Good point. So the engines that support this kind of multilingual features (whether for coloring and other neat things, like that multilingual experiment, or just for marking the spans with the correct "lang" attribute) need to extend Creole and introduce appropriate markup -- either as a plugin/macro or as a separate markup. Still, it has no impact on the engines that don't support the languages.
|
CommonCrawl
|
How would you express, in ZFC, that the number of countable models of Th($\mathbb{N}$), up to isomorphism, is at most $2^{\aleph_0}$?
Intuitively, it is clear to me why, up to isomorphism, there are at most $2^{\aleph_0}$ non-isomorphic models of Th($\mathbb{N}$): I can choose as "representative" of any countable model $\mathfrak{A}$ of Th($\mathbb{N}$) a model $\mathfrak{N}$ such that its universe is $\mathbb{N}$ and such that $\mathfrak{A}$ and $\mathfrak{N}$ are isomorphic, using the bijection between |$\mathfrak{A}$| and $\mathbb{N}$. We can then observe, by easy cardinality considerations, that the number of countable models of Th($\mathbb{N}$) that have $\mathbb{N}$ as universe is at most $2^{\aleph_0}$.
Well, now, if I want to formalize this argument in ZFC, how should I proceed? In particular, how can I express in ZFC something like "up to isomorphism, there are at most $2^{\aleph_0}$ countable models of Th($\mathbb{N}$)", which is the claim of the theorem? What we are saying is that, up to isomorphism, the cardinality of the set of countable models of Th($\mathbb{N}$) is at most $2^{\aleph_0}$. But I cannot see (1) how to express "up to isomorphism" and (2) how to refer to the object "the set of countable models of Th($\mathbb{N}$)", because that's a proper class. Clearly, (1) and (2) are linked, and I think that the solution is to find the right expression in ZFC and resolve them together.
The best thing that I could find is this:
ZFC $\vdash$ [$\forall\mathfrak{A}$ $\forall\mathfrak{B}$[($\mathfrak{A}\ and\ \mathfrak{B}\ are\ isomorphic\ models\ of\ Th(\mathbb{N})$)$\rightarrow\ \mathfrak{A}=\mathfrak{B} $]] $\rightarrow \exists A(A$ is the set of countable models of $Th(\mathbb{N})$ and $|A|\ \leq 2^{\aleph_0})$
Any improvements or suggestions?
logic set-theory model-theory nonstandard-models
Matteo __
Here is one way to express this in the language of set theory: There is a set $X$ of cardinality $2^{\aleph_0}$ such that for every $\mathfrak{A}$, if $\mathfrak{A}$ is a countable model of $\mathrm{Th}(\mathbb{N})$, then there exists $\mathfrak{B}\in X$ such that $\mathfrak{A}\cong \mathfrak{B}$.
Of course, this is a natural language abbreviation for the sentence in question. If it's not clear to you how to write this in the first-order syntax, I can expand on it.
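For concreteness, one way to unfold that abbreviation into a single displayed formula (a sketch that still uses the usual set-theoretic abbreviations for "countable model of" and for cardinality, rather than fully primitive ZFC syntax) is:
$$\exists X\,\Big(|X| = 2^{\aleph_0} \;\wedge\; \forall \mathfrak{A}\,\big((\mathfrak{A}\text{ is a countable model of }\mathrm{Th}(\mathbb{N}))\rightarrow \exists \mathfrak{B}\,(\mathfrak{B}\in X \wedge \mathfrak{A}\cong \mathfrak{B})\big)\Big).$$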
Why does this formalize the natural language statement "there are at most $2^{\aleph_0}$ non-isomorphic countable models of $\mathrm{Th}(\mathbb{N})$"? Well, letting $C$ be the class of countable models of $\mathrm{Th}(\mathbb{N})$, we can pick a well-ordering of $X$ and define a class function $F\colon C\to X$ by mapping $\mathfrak{A}$ to the least $\mathfrak{B}\in X$ such that $\mathfrak{A}\cong \mathfrak{B}$. Then $F$ respects isomorphism, so it descends to an injective function $C/{\cong} \hookrightarrow X$, demonstrating that $|C/{\cong}|\leq 2^{\aleph_0}$.
From this formulation, it's even clear how to go about proving the statement: just let $X$ be the set of all structures in the language of arithmetic with domain $\omega$.
The language of ZFC doesn't allow us to talk directly about classes (and in particular, we can't quantify over class functions), so we have to do this kind of translation of statements about classes into equivalent formulations just about sets in order to formalize them.
The sentence you wrote has lots of problems:
$$[\forall\mathfrak{A}\, \forall\mathfrak{B}\, [(\mathfrak{A}\text{ and } \mathfrak{B}\text{ are isomorphic models of }\mathrm{Th}(\mathbb{N}))\rightarrow \mathfrak{A}=\mathfrak{B}]]\rightarrow \\\exists A\,(A \text{ is the set of countable models of }\mathrm{Th}(\mathbb{N})\text{ and }|A| \leq 2^{\aleph_0}).$$
Both the antecedent and the consequent of the implication are false:
ZFC proves that there exist $\mathfrak{A}$ and $\mathfrak{B}$ models of $\mathrm{Th}(\mathbb{N})$ such that $\mathfrak{A}\neq \mathfrak{B}$.
ZFC proves that the class of countable models of $\mathrm{Th}(\mathbb{N})$ is a proper class, so there does not exist a set $A$ which is the set of all countable models of $\mathrm{Th}(\mathbb{N})$, much less one of size $\leq 2^{\aleph_0}$.
So ZFC does prove the sentence you wrote down (by propositional logic - the statement is vacuously true). But this theorem is basically meaningless.
Alex Kruckman
Let me add a very minor result which provides a methodological coda to Alex Kruckman's answer.
There is a natural theory I'll call "$\mathsf{NBG}^{hyper}$" (basically a boring repackaging of $\mathsf{NBG}$) which, unlike $\mathsf{ZFC}$ itself, can very straightforwardly talk about collections of classes and so within which "$T$ has continuum-many isomorphism types of countable models" can be expressed naively, as "There is a hyperclass bijection between the set $\mathbb{R}$ and the hyperclass of isomorphism classes of countable models of $T$." Let's call this statement $\star_T$.
This much is boring. Things get interesting, though, when we look at how $\mathsf{NBG}^{hyper}$ interfaces with $\mathsf{ZFC}$. There are two key points:
$\mathsf{NBG}^{hyper}$ is a conservative extension of $\mathsf{ZFC}$ (in the stronger, model-theoretic sense, even!).
$\mathsf{NBG}^{hyper}$ proves the following: "For every theory $T$ we have $\star_T\leftrightarrow \mathbb{A}_T$," where $\mathbb{A}_T$ is the "sets-only" formulation of "$T$ has continuum-many countable models" given in Alex's answer.
Together this says that our extra-theoretic decision to construe $\mathbb{A}_T$ as a faithful "$\mathsf{ZFC}$-ification" of the a-priori-hyperclass-referencing statement "$T$ has continuum-many countable models" is actually justified - in a precise sense it will never lead us to false conclusions. While probably not very illuminating in this specific case, I think that understanding this sort of analysis will increase one's confidence in our reflexive implementations of various ideas in set theory.
(Finally, we note that in particular it's easy to prove $\mathbb{A}_{Th(\mathbb{N})}$ inside $\mathsf{ZFC}$ per Alex's answer.)
Noah Schweber
How many countable models of ZFC are there?
Finding the exactly number of countable models of a theory
For a complete truth-set $T$ is a countable transitive model satisfying $T$ unique?
Non-standard model of $Th(\mathbb{R})$ with the same cardinality of $\mathbb{R}$
The number of non isomorphic homogenous models of T
Nonstandard cardinalities and $\mathbb{N}$
Upper bound of the "number" of countable models of Th$\mathbb{N}$ up to isomorphism
|
CommonCrawl
|
FYKOS.org
Serial of year 34
This serial has been translated from the 4th part onwards. We are sorry, but the previous parts have not been translated.
Text of serial
Serial in Czech
Serial in English
1. Series 34. Year - S. oscillating
Let us begin this year's serial with an analysis of several mechanical oscillators. We will focus on the frequency of their simple harmonic motion. We will also revise what an oscillator looks like in phase space.
Assume that we have a hollow cone of negligible mass with a stone of mass $M$ located at its vertex. We plunge it into water (of density $\rho $) so that the vertex points downwards and the cone floats on the water surface. Find the waterline depth $h$, measured from the vertex to the water surface, if the total height of the cone is $H$ and its radius is $R$. Find the angular frequency of small vertical oscillations of the cone.
Let us imagine a weight of mass $m$ attached to a spring of negligible mass, spring constant $k$ and free length $L$. If we fix the other end of the spring, we get an oscillator. Find the angular frequency of its simple harmonic motion, assuming that the length of the spring does not change during the motion. Subsequently, find the small difference in angular frequency $\Delta \omega $ between this oscillator and one in which the spring is replaced by a stiff rod of the same length. Assume $k L \gg m g$.
A sugar cube with mass $m$ is located in a landscape consisting of periodically repeating parabolas of height $H$ and width $L$. Describe its potential energy as a function of the horizontal coordinate and outline possible trajectories of its motion in phase space, depending on the velocity $v_0$ of the cube at the top of a parabola. Mark all important distances. Use the horizontal coordinate as the displacement and appropriate units for the horizontal momentum. Neglect the kinetic energy of the cube's motion in the vertical direction and assume it remains in contact with the terrain.
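As a sketch of how the first part can be set up (this working is ours, not part of the original assignment): the submerged part of the cone up to depth $h$ is itself a cone of radius $Rh/H$, so Archimedes' principle gives
\[\begin{equation*} Mg = \rho g \, \frac{\pi}{3} \left(\frac{Rh}{H}\right)^2 h \quad \Rightarrow \quad h = \left(\frac{3MH^2}{\pi \rho R^2}\right)^{1/3} , \end{equation*}\]
and a small extra submersion $\delta$ adds a restoring force $-\rho g \pi (Rh/H)^2 \delta$, which leads to $\omega = \sqrt{3g/h}$ for the vertical oscillations.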
Solution in Czech
mechanics of a point mass, oscillations, gravitational field
Štěpán found a few basic oscillators.
2. Series 34. Year - S. series 2
Consider a circuit with a coil, a capacitor, a resistor and a voltage source connected in series (i.e. they are not parallel to each other). The coil has an inductance $L$, the capacitor has a capacitance $C$ and the resistor has a resistance $R$. The voltage source creates a voltage $U = U_0 \cos(\omega t)$. Assume all devices to be ideal. Using the law of conservation of energy, write the equation relating the charge, the velocity of the charge (current $I$) and the acceleration of the charge (rate of change of the current $I$). This is an equation of a damped oscillator. Compared to the equation of damped oscillations of a mass on a spring, what are the quantities analogous to mass, stiffness of the spring and friction? Find the natural frequency of these oscillations.
Furthermore, using the quantities $L$, $R$ and $\omega$, find the capacitance $C$ for which the phase shift of the voltage on the capacitor equals $\frac{\pi}{4}$. What is the amplitude of the voltage on the capacitor, assuming this phase shift?
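As a sketch of the analogy the problem points to (our own summary, not part of the assignment): differentiating the stored energy $\frac{1}{2}LI^2 + \frac{Q^2}{2C}$ and balancing it against the source power $UI$ and the dissipated power $RI^2$ leads to
\[\begin{equation*} L\ddot{Q} + R\dot{Q} + \frac{Q}{C} = U_0 \cos(\omega t) , \qquad \text{compare} \qquad m\ddot{x} + b\dot{x} + kx = F_0 \cos(\omega t) , \end{equation*}\]
so the inductance plays the role of the mass, $1/C$ of the spring stiffness and $R$ of the friction coefficient, and the natural frequency is $\omega_0 = 1/\sqrt{LC}$.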
oscillations, electric current
Non-mechanical oscillations are oscillations as well.
3. Series 34. Year - S. electron in field
Consider a particle with charge $q$ and mass $m$, fixed to a spring with spring constant $k$. The other end of the spring is fixed at a single point. Assume that the particle only moves in a single plane. The whole system exists in a magnetic field of magnitude $B _ 0$, which is perpendicular to the plane of movement of the particle. We will try to describe possible modes of oscillation of the particle. Start by determining the equations of motion; do not forget to include the influence of the magnetic field.
Next, assume that the particle oscillates in both of its Cartesian coordinates and carry out the Fourier substitution - substitute derivatives by factors of $i \omega $, where $\omega $ is the frequency of the oscillations. Solve the resultant set of equations in order to determine the ratio of the amplitudes of oscillations in both coordinates and the frequency of oscillations. The solution obtained in this way is quite complicated, and better physical insight can be gained in a simpler case. From now on, assume that the magnetic field is very strong, i.e. $\frac {q ^ 2 B _ 0 ^ 2}{m ^ 2} \gg \frac {k}{m}$. Determine the approximate value(s) of $\omega $ in this case, always up to the first non-zero order. Next, sketch the motion of the particle in direct (i.e. real) space in this (strong-field) case.
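As a sketch of the starting point (our own working, assuming $B_0$ points along $+z$ and the motion is confined to the $xy$ plane), the Lorentz force adds velocity-dependent terms to the equations of motion,
\[\begin{equation*} m\ddot{x} = -kx + qB_0 \dot{y} , \qquad m\ddot{y} = -ky - qB_0 \dot{x} , \end{equation*}\]
and the Fourier substitution $x, y \propto \mathrm{e}^{i\omega t}$ then replaces each time derivative by a factor of $i\omega$.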
mechanics of a point mass, oscillations, magnetic field
Štěpán wanted to create a classical diamagnet.
4. Series 34. Year - S. Oscillations of carbon dioxide
We will model the oscillations in the molecule of carbon dioxide. Carbon dioxide is a linear molecule, where carbon is placed in between the two oxygen atoms, with all three atoms lying on the same line. We will only consider oscillations along this line. Assume that the small displacements can be modelled by two springs, both with the spring constant $k$, each connecting the carbon atom to one of the oxygen atoms. Let mass of the carbon atom be $M$, and mass of the oxygen atom $m$.
Construct the set of equations describing the forces acting on the atoms for small displacements along the axis of the molecule. The molecule is symmetric under the exchange of certain atoms. Express this symmetry as a matrix acting on a vector of displacements, which you also need to define. Furthermore, determine the eigenvectors and eigenvalues of this symmetry matrix. The symmetry of the molecule is not complete – explain which degrees of freedom are not taken into account in this symmetry.
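For concreteness, a sketch of one possible choice (ours, not prescribed by the assignment): with the axial displacements ordered as $(x_{\mathrm{O}_1}, x_{\mathrm{C}}, x_{\mathrm{O}_2})$, the exchange of the two oxygen atoms acts as
\[\begin{equation*} S = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} , \end{equation*}\]
with eigenvalue $+1$ for the symmetric vectors $(1,0,1)^T$ and $(0,1,0)^T$, and eigenvalue $-1$ for the antisymmetric vector $(1,0,-1)^T$.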
Continue by constructing a matrix equation describing the oscillations of the system. By introducing the eigenvectors of the symmetry matrix, extended so that they include the degrees of freedom not constrained by the symmetry, determine the normal modes of the system. Determine the frequencies of these normal modes and sketch the directions of motion. What other modes could be present (still considering only motion along the axis of the molecule)? If there are any other modes you can think of, determine their frequency and direction.
mechanics of a point mass, oscillations, mathematics
Štěpán was thinking about molecules.
5. Series 34. Year - S. resonance and damped oscillations
On a taut rope, waves with a deflection $u(x, t)$ from the equilibrium can exist that satisfy the wave equation with damping
\[\begin{equation*} \frac{\partial^2 u}{\partial t^2} = v^2 \frac{\partial^2 u}{\partial x^2} - \Gamma \frac{\partial u}{\partial t} , \end{equation*}\] where $v$ is the phase velocity and $\Gamma$ is the coefficient of damping. Carry out the Fourier substitution and find the dispersion relation. Solve it for the wavenumber $k$. What condition, in terms of the frequency $\omega$, the phase velocity $v$ and the coefficient $\Gamma$, must the waves meet in order to create nodes on the rope (i.e. points in which the rope stays in the equilibrium position, but around which the rope is moving)?
Consider a jump rope attached firmly at one end to a fixed wall. At a distance $L$ from the wall, we start moving the rope up and down to create waves. The jump rope has a linear density $\lambda$ and a constant tension $T$ in the direction away from the wall. The deflection then satisfies the equation
\[\begin{equation*} \frac{\partial^2 u}{\partial t^2} = \frac{T}{\lambda} \frac{\partial^2 u}{\partial x^2} . \end{equation*}\] The deflection of the end of the rope that we are moving satisfies $u_0(t) = A \cos(\omega_0 t)$. Assume the solution can be written in the form of two planar waves moving in opposite directions. Find the solution using only the parameters given in this problem statement, that is $T$, $\lambda$, $L$, $A$ and $\omega_0$. For certain frequencies, the solution has a diverging amplitude (i.e. growing beyond any limit). Find their values and the respective wavelengths.
wave mechanics, wave optics
Štěpán was playing with a jump rope.
6. Series 34. Year - S. charged chord
Assume a charged chord with linear mass density $\rho $, uniformly charged with linear charge density $\lambda $. The tension in the chord is $T$. It is placed in a magnetic field of constant magnitude $B$ pointing in the direction of the chord in equilibrium. Your task is to describe several aspects of the chord's oscillations. First, we want to write the appropriate wave equation. Neglect the effects of electromagnetic induction (assume the chord to be a perfect insulator; that also means the charge density does not change) and find the Lorentz force acting on a unit length of the chord for small oscillations in both directions perpendicular to the equilibrium position. Use this force to write the wave equation (which will also include the effects of the tension). Apply the Fourier substitution and determine the dispersion relation in the approximation of a weak field $B$; more specifically, neglect the terms that are of higher than linear order in $\beta = \frac {\lambda B}{k \sqrt {\rho T}} \ll 1$, where $k$ is the wavenumber. Find two polarization vectors, this time neglecting even the linear order of $\beta $. Now suppose that in a particular spot on the chord, we create a wave oscillating only in one specific direction. How far from the original spot will the wave be rotated by ninety degrees from the original direction?
wave mechanics, oscillations, magnetic field, wave optics
Štěpán was nostalgically remembering the third serial task.
|
CommonCrawl
|