Effective blind speech watermarking via adaptive mean modulation and package synchronization in DWT domain
Hwai-Tsu Hu1,
Shiow-Jyu Lin1 &
Ling-Yuan Hsu2
This paper outlines a package synchronization scheme for blind speech watermarking in the discrete wavelet transform (DWT) domain. Following two-level DWT decomposition, watermark bits and synchronization codes are embedded within selected frames in the second-level approximation and detail subbands, respectively. The embedded synchronization code is used for frame alignment and as a location indicator. Tagging voice-active frames with sufficient intensity makes it possible to avoid ineffective watermarking during the silent segments commonly associated with speech utterances. We introduce a novel method, referred to as adaptive mean modulation (AMM), to perform binary embedding of the packaged information. The quantization steps used in mean modulation are recursively derived from previous DWT coefficients. The proposed formulation allows for the direct assignment of embedding strength. Experimental results show that the proposed DWT-AMM preserves speech quality at a level comparable to that of two other DWT-based methods, which also operate at a payload capacity of 200 bits per second. DWT-AMM exhibits superior robustness in terms of bit error rates, as long as the recovery of the adaptive quantization steps is secured.
In the digital era, copyright protection of multimedia data (e.g., images, audio, and video) is an important issue for content owners and service providers. Digital watermarking technology has received considerable attention owing to its potential application to the protection of intellectual property rights, content authentication, fingerprinting, and covert communications. Watermarking technology generally takes four factors (i.e., imperceptibility, security, robustness, and capacity) into consideration [1, 2]. An ideal watermarking algorithm minimizes perceptual distortion due to signal alteration while embedding a sufficient quantity of information within a host signal to ensure resistance against malicious attacks. The fact that the requirements of capacity, robustness, and imperceptibility are contradictory necessitates a tradeoff in the design of various watermarking schemes. For example, medical information systems are primarily intended to provide security and ensure the integrity of information, whereas payload capacity is of paramount importance in air traffic control systems. The annotation watermarking of music products emphasizes imperceptibility and robustness.
Robust watermarks are strongly resistant to attacks, whereas fragile watermarks are supposed to crumble under any attempt at tampering. There are numerous ways to categorize watermarking techniques. Depending on whether the original source material is required for extraction, watermarking schemes can be classified as blind, semi-blind, or non-blind. Blind watermarking is designed to recover an embedded watermark without the presence of the original source, while the non-blind approach can only be carried out using the original source. Semi-blind watermarking involves situations where information other than the source itself is required for watermark extraction.
Over the past two decades, numerous watermarking methods have been developed for images, audio, and video. Far less attention has been paid to the watermarking of speech signals. Speech is a specific form of audio signal; therefore, the techniques developed for audio watermarking are presumed to be applicable to speech watermarking. However, speech differs from typical audio signals with regard to spectral bandwidth, intensity distribution, signal continuity, and production modeling [3, 4]. The techniques developed for audio watermarking are therefore not necessarily suitable for speech watermarking [5].
In [6], Hofbauer et al. exploited the fact that the ear is insensitive to the phase of a signal in non-voiced speech. They performed speech watermarking by replacing the excitation signal of an autoregressive representation in non-voiced segments. Chen and Liu [7] modified the position indices of selected excitation pulses in a watermarking scheme based on the codebook-excited linear prediction (CELP)-based speech codec. Coumou and Sharma [8] embedded data via pitch modification in voiced segments. The fact that multiple voiced segments may coalesce into a single voiced segment (or vice-versa) in a communication channel means that mismatches in voiced segments can lead to insertion, deletion, and substitution errors in the estimates of embedded data. They resorted to a concatenated coding scheme for synchronization and error recovery.
The vocal tract transfer function modeled by linear prediction (LP) has also been employed as an embedded target. Chen and Zhu [9] achieved robust watermarking by embedding watermark bits into codebook indices, while applying multistage vector quantization (MSVQ) to the derived LP coefficients. Yan and Guo [10] converted the LP coefficients to reflection coefficients, which were then transformed to inverse sine (IS) parameters. Watermark embedding was achieved by modifying the IS parameters using odd-even modulation [11].
Many watermarking algorithms applied to audio signals are implemented in a transform domain, such as the discrete Fourier transform (DFT) [12,13,14], discrete cosine transform (DCT) [15,16,17,18,19], discrete wavelet transform (DWT) [15, 20,21,22,23,24], and cepstrum [25,26,27]. The objective is to take advantage of signal characteristics and/or auditory properties [28]. Among the transforms used to perform audio watermarking, DWT is currently the most popular due to its perfect reconstruction and good multi-resolution characteristics. The effectiveness of this approach in audio watermarking leads us to conclude that it may also work for speech watermarking if speech characteristics are adequately taken into account.
In this study, we introduce a robust blind watermarking scheme for hiding two types of information (watermark bits and synchronization codes) within embeddable regions of DWT subbands designated as information packages. The position of the synchronization codes is used in frame alignment to indicate the start of packaged binary data. This scheme allows the watermark to be disassembled into parts during the embedding phase and reassembled during extraction.
The remainder of this paper is organized as follows. Section 2 describes the watermarking framework, whereby information bits and synchronization codes are embedded in selected DWT subbands. Section 3 discusses configuring the watermark to cope with speech signals. We also outline a package strategy used for information grouping and synchronization and delineate the complete watermarking process. Section 4 presents experiment results aimed at evaluating speech quality and watermark robustness against commonly encountered attacks. Conclusions are drawn in Section 5.
Mean modulation in the DWT domain
In this study, we embedded two types of binary data (watermark bits and synchronization codes) within the same time frame under the same framework. This was achieved using DWT to conduct signal decomposition, thus allowing the embedding of different types of binary data within separate DWT subbands. However, the detectability of the embedded binary information differs somewhat between the watermark bits and synchronization codes. Unlike watermarks, where each bit conveys individual information, the bit sequence of a synchronization code can be considered a distinct entity. The existence of the synchronization code depends on a certain number of bits being recognizable, which means that the synchronization code has some tolerance for faults. Thus, the watermark bits in our design are inserted within the lowest subband, wherein the coefficient magnitudes are larger than those observed in the subband used for the insertion of synchronization codes. Larger coefficients enable the use of stronger embedding strengths for the watermark bits. This is conducive to the robustness of the watermark and helps to keep it imperceptible.
Watermarking by adaptive mean modulation in DWT domain
Assuming that a speech signal is sampled at a rate of 16 kHz with 16-bit resolution, a two-level one-dimensional (1-D) DWT is employed to decompose the speech signal into a single approximation subband and two detail subbands. Here, the second-level DWT is performed on the approximation coefficients obtained from the first-level DWT of the host speech signal. The Daubechies-8 basis [29] is used as the wavelet function. Thus, the resulting second-level approximation subband occupies a frequency range roughly between 0 and 2000 Hz, while the second-level detail subband spans 2000 to 4000 Hz. The spectral density of speech signals is normally concentrated below 4 kHz; therefore, these two subbands are considered suitable candidates for watermarking applications. Analogous to most watermarking methods, we divide the selected DWT coefficients into frames in order to facilitate the embedding and detection of watermarks. Within each frame, l adjacent coefficients c(i), drawn from the selected subband, are gathered as a subgroup for the implementation of binary embedding:
$$ G_k = \left\{ c(\kappa_k+1),\; c(\kappa_k+2),\; \cdots,\; c(\kappa_k+l) \right\}; \quad \kappa_k = (k-1)\,l. $$
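For concreteness, the decomposition and grouping step can be prototyped in a few lines. The following sketch is illustrative rather than the authors' implementation: it assumes the PyWavelets package, takes "db8" as a stand-in for the Daubechies-8 basis mentioned above, and uses the subgroup length l = 20 that is later adopted for the approximation subband.

```python
# Illustrative sketch of the two-level DWT decomposition and subgroup
# formation (Eq. (1)).  Assumptions: PyWavelets is available, 'db8' stands in
# for the Daubechies-8 basis, and l = 20 as used later for the approximation
# subband.
import numpy as np
import pywt

def decompose_two_level(speech, wavelet="db8"):
    """Return (cA2, cD2, cD1) from a two-level 1-D DWT of the host signal."""
    cA2, cD2, cD1 = pywt.wavedec(speech, wavelet, level=2)
    return cA2, cD2, cD1

def form_subgroups(coeffs, l):
    """Gather consecutive coefficients into subgroups G_k of length l."""
    n = len(coeffs) // l
    return coeffs[: n * l].reshape(n, l)   # row k holds subgroup G_{k+1}

# Example with a dummy 1-s host signal sampled at 16 kHz
speech = np.random.randn(16000)
cA2, cD2, cD1 = decompose_two_level(speech)
G = form_subgroups(cA2, l=20)
```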
In this study, the embedding of a binary bit w k within G k is achieved by modulating the coefficient mean m k , which is defined as follows, using quantization index modulation (QIM) [30]:
$$ m_k = \frac{1}{l} \sum_{i=1}^{l} c(\kappa_k + i). $$
The formulation of the QIM can be expressed as
$$ \hat{m}_k = \begin{cases} \left\lfloor \frac{m_k}{\varDelta_k} + 0.5 \right\rfloor \varDelta_k, & \text{if } w_k = 0; \\ \left\lfloor \frac{m_k}{\varDelta_k} \right\rfloor \varDelta_k + \frac{\varDelta_k}{2}, & \text{if } w_k = 1, \end{cases} $$
where ⌊ • ⌋ denotes the floor function and ∆ k represents a quantization step. In essence, Eq. (3) changes m k to the nearest integer multiple of ∆ k if w k = 0 and to the midpoint between two integer multiples of ∆ k whenever w k = 1. We note that QIM can be regarded as a special case of dither modulation [30, 31], wherein dither noise is added first and then quantized. The distortion compensation technique introduced in [30, 32] may also be incorporated into the quantization realization. Distortion-compensated QIM allows the adjustment of the quantization steps without introducing extra distortion while pursuing robustness. In our previous studies [33, 34], we showed that incorporating distortion compensation into QIM can enhance the robustness of the watermark while maintaining imperceptibility.
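The mean-modulation rule of Eqs. (2) and (3) maps directly to a short function. The sketch below is a plain restatement of standard QIM applied to the subgroup mean; the variable names are illustrative only.

```python
import numpy as np

def qim_modulate_mean(group, bit, delta):
    """Quantize the subgroup mean per Eq. (3); return (m_k, m_hat_k)."""
    m = float(np.mean(group))                              # Eq. (2)
    if bit == 0:
        m_hat = np.floor(m / delta + 0.5) * delta          # nearest multiple of delta
    else:
        m_hat = np.floor(m / delta) * delta + delta / 2.0  # midpoint between multiples
    return m, m_hat
```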
In this study, the mean value of the coefficients in each subgroup is selected as the embedding target because this statistical property is less susceptible to intentional attacks and/or unintentional modifications. Beyond this insusceptibility to probable perturbations, employing the mean value as the embedding target makes it fairly easy to control the signal-to-watermark ratio while using mean modulation for binary embedding. The robustness of the embedded watermark generally depends on the number of involved coefficients (i.e., l) and the embedding strength characterized by the quantization step size (i.e., ∆ k ). Our scheme shares some similarities with those in [35, 36]: the method in [35] changed the DWT coefficients based on the average of a relevant frame, and the one in [36] took into account the average of the linear regression of fast Fourier transform (FFT) values.
With QIM embedding, embedding strength is reflected in the quantization step size. The use of a large step size tends to increase robustness but impair quality by imposing more alterations to the speech signal. Making the embedded watermarks inaudible would require suppressing the distortion below the auditory masking threshold. Because the maximum tolerable noise level in each critical band is generally proportional to the short-time energy of the host speech, a sensible strategy involves adapting the quantization step according to the intensity of the speech segment. In other words, the quantization step is augmented when the energy of the DWT coefficients climbs, and it is reduced when the energy drops. By referring to the approach presented in [37], we derive the local energy level from previous coefficients in a recursive manner.
$$ \overline{\rho}_k = (1-\alpha)\,\widehat{\rho}_{k-1} + \alpha\,\overline{\rho}_{k-1}, $$
where \( \widehat{\rho}_{k-1} = \sum_{i=1}^{l} \widehat{c}^{\,2}(\kappa_{k-1}+i) \) is the energy computed from the modified coefficients in the (k − 1)th subgroup and \( \overline{\rho}_k \) is the output of the first-order recursive low-pass filter. α is a positive smoothing parameter, and the filter is deliberately constructed to have unity DC gain. It should be pointed out that the coefficients in the current subgroup cannot be used for this estimation, because they are about to be modified by watermarking. Consequently, \( \overline{\rho}_k \) can be regarded as an estimate of the short-time energy derivable from previous coefficients. Owing to the short-time stationarity of speech signals, the resulting \( \overline{\rho}_k \) serves as a smoothed version of \( \widehat{\rho}_k \).
The acquisition of the short-time energy \( \overline{\rho}_k \) makes it possible to regulate the signal-to-watermark ratio η, defined as the energy ratio between the signal and the watermarking perturbation, measured in decibels. The relationship among \( \overline{\rho}_k \), η, and ∆ k is expressed mathematically as follows:
$$ 10^{\frac{\eta}{10}} = \frac{\sum_{i=1}^{l} c^2(\kappa_k+i)}{E\!\left[\sum_{i=1}^{l} \left(\widehat{c}(\kappa_k+i) - c(\kappa_k+i)\right)^2\right]} \approx \frac{\sum_{i=1}^{l} c^2(\kappa_k+i)}{\varDelta_k^2/12} \approx \frac{12\,\overline{\rho}_k}{\varDelta_k^2}, $$
where E[•] denotes the expectation operator. The term on the left-hand side of Eq. (5) converts a specified decibel value η to its linear magnitude, while the numerator and denominator of the fractional expression on the right-hand side of Eq. (5) denote the energy levels of the signal and noise, respectively. The alteration due to the QIM in Eq. (3) is presumably distributed uniformly over [−Δ k /2, Δ k /2]. As a result, ∆ k can be computed directly once η is specified, as follows:
$$ \varDelta_k \approx \sqrt{12\,\overline{\rho}_k} \times 10^{-\frac{\eta}{20}}. $$
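The recursion in Eq. (4) and the step-size rule in Eq. (6) can be combined into one update, as sketched below. The initialization of the smoothed energy is not specified in the paper, so seeding it from an earlier subgroup is an assumption made here, and the function name is illustrative.

```python
import numpy as np

def update_step_size(rho_bar_prev, prev_group_modified, eta_db, alpha=0.8):
    """Update the smoothed energy (Eq. (4)) and derive the step size (Eq. (6)).

    rho_bar_prev        : previous smoothed energy, \bar{rho}_{k-1}
    prev_group_modified : watermarked coefficients of subgroup k-1
    eta_db              : target signal-to-watermark ratio eta in dB
    alpha               : smoothing factor of the recursive low-pass filter
    """
    rho_hat_prev = float(np.sum(prev_group_modified ** 2))          # \hat{rho}_{k-1}
    rho_bar = (1.0 - alpha) * rho_hat_prev + alpha * rho_bar_prev   # Eq. (4)
    delta = np.sqrt(12.0 * rho_bar) * 10.0 ** (-eta_db / 20.0)      # Eq. (6)
    return rho_bar, delta
```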
Following the acquisition of \( {\widehat{m}}_k \) as in Eq. (3), binary embedding in each subgroup is accomplished by modifying the corresponding DWT coefficients.
$$ \widehat{c}(\kappa_k+i) = c(\kappa_k+i) - m_k + \widehat{m}_k. $$
After all of the watermark bits are embedded within designated DWT subbands, we take inverse DWT to obtain the watermarked speech signal using the modified subband coefficients.
Watermark extraction follows the basic procedure used in embedding. After applying two-level DWT to the watermarked file, the coefficients in the selected subband are divided into subgroups in order to derive the coefficient mean \( {\tilde{m}}_k \) and quantization step \( {\tilde{\varDelta}}_k \) using Eqs. (2) and (6). The watermark bit \( {\tilde{w}}_k \) residing in each subgroup is extracted based on standard QIM:
$$ \tilde{w}_k = \begin{cases} 1, & \text{if } \left| \frac{\tilde{m}_k}{\tilde{\varDelta}_k} - \left\lfloor \frac{\tilde{m}_k}{\tilde{\varDelta}_k} \right\rfloor - 0.5 \right| \le 0.25; \\ 0, & \text{otherwise}, \end{cases} $$
where the tilde atop participating variables implies the effect of possible attacks.
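Extraction per Eq. (8) is equally compact. The sketch below assumes that the quantization step has already been re-derived from the received coefficients in the same recursive manner as at the embedder.

```python
import numpy as np

def qim_extract_bit(group, delta):
    """Recover the hidden bit from a subgroup mean using the rule of Eq. (8)."""
    m = float(np.mean(group))
    frac = m / delta - np.floor(m / delta)   # fractional position within one cell
    return 1 if abs(frac - 0.5) <= 0.25 else 0
```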
Frame synchronization via DWT-AMM
The prerequisite for accurate watermark extraction using the abovementioned adaptive mean modulation (AMM) scheme is the perfect alignment of the boundary of each subgroup. A simple strategy by which to synchronize the locations used in watermark insertion and detection is to insert synchronization codes within the host signal. Actual watermark extraction begins after identifying the locations of the synchronization codes.
In fact, mean modulation with a fixed quantization step has previously been explored for the embedding of synchronization codes in the time domain for many audio watermarking algorithms [14, 15, 17, 23, 38]. In principle, watermark bits and synchronization codes are hidden in different segments of the audio file to avoid mutual interference. In this study, we propose embedding watermark bits and synchronization codes within separate DWT subbands. This arrangement offers additional advantages other than an increase in payload capacity. For example, the successful detection of synchronization codes in one subband can signify the presence of a data sequence in another subband.
Assume that the second-level detail subband has been selected as the embedding target. The derivation of a detail coefficient sequence can be imagined as a process of high-pass filtering and subsequent downsampling; hence, the spectral orientation is reversed for the detail coefficients. To flip the spectrum back to its normal direction, we simply alter the sign of the odd index coefficients, as follows:
$$ \widehat{c}_d^{(2)}(\kappa_k+i) = (-1)^i\, c_d^{(2)}(\kappa_k+i), $$
where \( c_d^{(2)}(\kappa_k+i) \) denotes the ith coefficient in the kth subgroup of the second-level detail subband. The DWT level is specified in the superscript alongside the coefficient variable, and the subscript "d" stands for "detail." Note that the spectral energy of speech signals is normally concentrated at low frequencies, and AMM tends to track low-frequency variations. Once Eq. (9) restores the energy distribution to the low frequencies, the spectral flipping enables the rendering of larger quantization steps, which eventually enhances robustness.
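The sign alternation of Eq. (9) is a one-line operation. The sketch below uses 1-based exponents to mirror the equation; a 0-based convention only changes the overall sign, not the flipping effect.

```python
import numpy as np

def flip_spectrum(detail_coeffs):
    """Negate odd-indexed coefficients (Eq. (9)) to undo the spectral reversal."""
    signs = (-1.0) ** np.arange(1, len(detail_coeffs) + 1)
    return signs * detail_coeffs
```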
In our design, the synchronization code is a random binary sequence φ(i) ∈ {0, 1} of length L code. Each bit φ(i) is inserted into l sync coefficients in the second-level detail subband. We tentatively choose L code = 120, l sync = 4, and η = 10 to provide adequate resistance against possible attacks. The overall synchronization code thus covers an interval of l sync × L code (= 480) coefficients. The variable α used in the recursive filter is set to 0.9 in order to render a slowly varying estimate of the short-time energy. The search for subgroup demarcation is performed on a sample-by-sample basis. Because the second-level detail coefficients are derived from a signal whose length is four times the number of coefficients, we carry out the DWT decomposition four times, starting respectively from the first through fourth samples of the speech signal, and associate the resulting coefficient sequences with every sample location. While detecting the presence of synchronization codes, we cycle through the four sequences as the process proceeds from sample to sample. Specifically, whenever a new coefficient becomes available in one of the four sequences, we regroup l sync coefficients and recompute the short-time energy. After obtaining the coefficient mean and quantization step for each subgroup, we acquire a binary bit b w (i) using Eq. (8). The synchronization code is then detected using a matched filter. The entire computation proceeds through three steps. First, the extracted bit sequence and the synchronization code are both converted to bipolar form. Second, the extracted bipolar stream is convolved with the reversed version of the bipolar-converted synchronization code. Third, the presence of the synchronization code is declared whenever the filter output y(i) exceeds a threshold T, which is set to 0.45L code. The following inequality summarizes the aforementioned three steps.
$$ y(i) = \sum_{n=0}^{L_{\mathrm{code}}-1} \left(2\varphi(L_{\mathrm{code}}-1-n)-1\right)\left(2 b_w(i - l_{\mathrm{sync}} n)-1\right) \;\overset{?}{\ge}\; T = 0.45\,L_{\mathrm{code}}. $$
As shown on the left side of Eq. (10), the bipolar stream is decimated by a factor of l sync during convolution, because each bit stems from l sync coefficients. Two types of error can occur during the search for synchronization codes. A false-positive error (FPE) involves declaring a non-embedded speech signal as an embedded one, whereas a false-negative error (FNE) involves classifying an embedded speech signal as a non-embedded one.
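The matched-filter test of Eq. (10) can be sketched as below. The function evaluates the filter output for one candidate position given the bits extracted so far; the surrounding sample-by-sample search and the four interleaved coefficient sequences are omitted, and all names are illustrative.

```python
import numpy as np

def sync_filter_output(extracted_bits, phi, l_sync=4):
    """Evaluate y(i) of Eq. (10) at the most recent bit position i."""
    L_code = len(phi)
    b = 2.0 * np.asarray(extracted_bits, dtype=float) - 1.0   # bipolar bit stream
    p = 2.0 * np.asarray(phi, dtype=float) - 1.0              # bipolar sync code
    idx = len(b) - 1 - l_sync * np.arange(L_code)             # positions i - l_sync * n
    if idx.min() < 0:
        return float("-inf")                                  # not enough history yet
    # phi(L_code - 1 - n) is paired with b_w(i - l_sync * n)
    return float(np.sum(p[::-1] * b[idx]))

# A synchronization code is declared present when the output reaches the
# threshold T = 0.45 * L_code.
```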
Assuming that the bits extracted from a non-embedded signal are independent random variables, each matching the corresponding code bit with probability P e , the FPE probability P fp can be computed as follows:
$$ P_{fp} = \sum_{k=T^{\prime}}^{L_{\mathrm{code}}} \binom{L_{\mathrm{code}}}{k} (P_e)^k (1-P_e)^{L_{\mathrm{code}}-k}, $$
where k denotes the number of matched bits out of a total of L code bits and \( \binom{L_{\mathrm{code}}}{k} \) represents the binomial coefficient. The threshold T and the number of matched bits T' are related as T' = (L code + T)/2, because T is the summed result of T' matched bits and L code − T' unmatched bits:
$$ T = (+1)\times T^{\prime} + (-1)\times\left(L_{\mathrm{code}} - T^{\prime}\right), $$
where a matched bit corresponds to +1 and an unmatched bit corresponds to −1. Since non-embedded bits are either 0 or 1 with pure randomness, P e is assumed to be 0.5. Thus, Eq. (11) can be further simplified as
$$ P_{fp} = \frac{1}{2^{L_{\mathrm{code}}}} \sum_{k=T^{\prime}}^{L_{\mathrm{code}}} \binom{L_{\mathrm{code}}}{k}. $$
Given that L code = 120 and T = 0.45L code = 54, P fp turns out to be 4.34 × 10−7, which implies that an FPE rarely occurs under the presumed parameter setting.
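This figure can be checked numerically. The sketch below sums the binomial tail over the number of matched bits T′ = (L code + T)/2 = 87; it is merely a verification of Eq. (13) and not part of the original work.

```python
# Numerical check of the false-positive probability in Eq. (13).
from math import comb

L_code = 120
T = int(0.45 * L_code)            # threshold on the filter output  -> 54
T_match = (L_code + T) // 2       # required number of matched bits -> 87

P_fp = sum(comb(L_code, k) for k in range(T_match, L_code + 1)) / 2 ** L_code
print(P_fp)                       # about 4.3e-7, consistent with the text
```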
Analogous to the discussion on the derivation of FPE, the FNE P fn can be computed as
$$ P_{fn} = \sum_{k=0}^{T^{\prime}-1} \binom{L_{\mathrm{code}}}{k} (1-P_{\mathrm{BER}})^k (P_{\mathrm{BER}})^{L_{\mathrm{code}}-k} = \sum_{k=L_{\mathrm{code}}-T^{\prime}+1}^{L_{\mathrm{code}}} \binom{L_{\mathrm{code}}}{k} (P_{\mathrm{BER}})^k (1-P_{\mathrm{BER}})^{L_{\mathrm{code}}-k}, $$
where P BER denotes the error rate for each bit. According to Eq. (14), P fn remains below 0.982 even if P BER is as high as 0.2.
Watermarking with package synchronization
In Sections 2.1 and 2.2, we discussed the DWT-AMM framework and its application to watermark synchronization. Further considerations must be taken into account in the development of a practical speech watermarking system. As pointed out in the introduction, a speech signal exhibits several distinct acoustic characteristics that differentiate speech from other types of audio. Unlike most music signals, silent segments commonly occur in speech utterances. The insertion of watermark bits into silent segments would render them vulnerable to noise perturbation and susceptible to attacks through the simple removal of silence. Thus, we developed an energy-based scheme to select the frames used for the embedding of watermarks and synchronization codes. One general principle in watermarking is to hide information among large coefficients in the transformed domain, because this enables the employment of stronger embedding to resist attacks with less concern for imperceptibility.
The energy of a speech signal is normally concentrated below 4 kHz. To make the best use of DWT decomposition, we selected the second-level approximation subband for the embedding of binary information and reserved the second-level detail subband for frame synchronization on condition that the speech is sampled at 16 kHz. After taking two-level DWT of the host signal, the coefficients in the second-level approximation and detail subbands are both partitioned into non-overlapping frames of size L f . In this study, L f is tentatively set to 160 to facilitate subsequent scheme development. Then, we calculate the root-mean-square (RMS) values, termed σ a (t) and σ d (t), respectively, for the second-level approximation and detail subbands.
$$ \sigma_a(t) = \sqrt{\frac{1}{L_f} \sum_{i=1}^{L_f} \left( c_a^{(2)}(i;\, t) \right)^2 }; $$
$$ \sigma_d(t) = \sqrt{\frac{1}{L_f} \sum_{i=1}^{L_f} \left( c_d^{(2)}(i;\, t) \right)^2 }, $$
where \( {c}_a^{(2)}\left( i;\kern0.5em t\right) \) and \( {c}_d^{(2)}\left( i;\kern0.5em t\right) \) are respectively the ith second-level approximation and detail coefficients in the tth frame. Let ψ a and ψ d be the corresponding thresholds, which are assigned as ratios proportional to the maximum values. The frames with RMS values exceeding pre-specified thresholds are selected for watermarking. This type of frame selection can be expressed as follows:
$$ \varLambda(t) = \begin{cases} \text{"embeddable"}, & \text{if } \sigma_a(t) \ge \psi_a \;\&\; \sigma_d(t) \ge \psi_d; \\ \text{"non-embeddable"}, & \text{otherwise}, \end{cases} $$
$$ \psi_a = 0.035 \max\left\{ \sigma_a(t) \right\}; $$
$$ \psi_d = 0.04 \max\left\{ \sigma_d(t) \right\}. $$
The frame attribute Λ(t) is categorized as "embeddable" if both σ a (t) and σ d (t) surpass their respective thresholds.
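The frame-selection rule of Eqs. (15)-(19) is easy to prototype. The sketch below operates on the second-level subband coefficients obtained earlier; the frame length and threshold ratios follow the values quoted in the text, and the function name is illustrative.

```python
import numpy as np

def embeddable_frames(cA2, cD2, L_f=160, ratio_a=0.035, ratio_d=0.04):
    """Return a boolean flag per frame indicating the 'embeddable' attribute."""
    n = min(len(cA2), len(cD2)) // L_f
    a = np.asarray(cA2[: n * L_f]).reshape(n, L_f)
    d = np.asarray(cD2[: n * L_f]).reshape(n, L_f)
    sigma_a = np.sqrt(np.mean(a ** 2, axis=1))        # Eq. (15)
    sigma_d = np.sqrt(np.mean(d ** 2, axis=1))        # Eq. (16)
    psi_a = ratio_a * sigma_a.max()                   # Eq. (18)
    psi_d = ratio_d * sigma_d.max()                   # Eq. (19)
    return (sigma_a >= psi_a) & (sigma_d >= psi_d)    # Eq. (17)
```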
Figure 1 illustrates the process of searching for embeddable frames within a speech signal. As shown in Fig. 1e, a frame is categorized as embeddable as long as the corresponding approximation and detail coefficients are of sufficiently large magnitude to allow the embedding of watermark bits and synchronization codes. The insertion and detection of synchronization codes are illustrated in Fig. 2. In this example, the synchronization code is embedded in four places, each spanning an interval of three frames. The four embedding segments are rendered in red in Fig. 2b. As indicated by the four sharp peaks precisely at the ends of the red areas, the output of the matched filter is sufficient to identify the synchronization code. For a segment comprising consecutive embeddable frames, the synchronization code is inserted only within the first three frames in the second-level detail subband, whereas binary embedding is applied to every embeddable frame in the approximation subband.
Fig. 1 a–e Illustration of the search for embeddable frames. The embeddable frames are indicated by the symbol "⨁"
Fig. 2 a, c Detection of the synchronization code from a noise-corrupted speech signal with SNR = 30 dB. The leading three frames in each embeddable segment are drawn in red in b
Depending on the number of frames available for data hiding, we divide the watermark bits into several packages, each containing a header in conjunction with a series of watermark bytes. The implantation of a complete synchronization code requires an interval spanning 480 coefficients; therefore, only speech segments extending over at least three consecutive frames are used to carry watermark packages. During the embedding phase, the DWT-AMM settings are \( {l}_a^{(2)}=20 \) and \( {\eta}_a^{(2)}=20 \) for watermark embedding in the approximation subband and \( {l}_d^{(2)}=4 \) and \( {\eta}_d^{(2)}=10 \) for synchronization in the detail subband. The subscripts "a" and "d" alongside the variables represent the subband attributes. In accordance with these specifications, each frame in the approximation subband carries 8 bits of information. Thus, for a wideband speech signal sampled at 16 kHz, the maximum payload capacity is 200 (= 16000/(2² × 20)) bits per second (bps).
The header of each package consists of a 15-bit message produced by a (15, 11) BCH encoder [39]. The message contains information in two parts: 7 bits indicating the allocated position and 4 bits specifying the total length. This means that there are 2⁷ = 128 starting positions that can be assigned. The length of data allowable in each package ranges from 1 to 16 bytes. The maximum number of watermark bits that can be accommodated is therefore 8 × 2⁷ = 1024. Through the BCH encoder, the 11-bit message is appended with four parity bits to form a code of length 15. The resulting BCH code is capable of correcting one bit error.
Figure 3 illustrates the means by which embeddable frames are configured for various lengths of data. The start locations of the embeddable segments implicitly synchronize the time windows for the embedding and extraction of the watermark. The watermark is tentatively selected as a binary image logo of size 32 × 32 with an equal number of "1"s and "0"s. To reinforce security, we scrambled the watermark using the Arnold transform [40] and then converted it to a 1-D bit sequence. The bit sequence was then divided into packages of various sizes matching the lengths of the embeddable segments at different locations. Multiple watermarks can be embedded as long as the speech file is of sufficient length. When reconstructing the watermark, we employ a majority voting scheme to determine each retrieved bit in cases where multiple copies are received.
Fig. 3 Bit arrangement in each package. The BCH code contains 7 "S" bits for the location index, 4 "L" bits for the package length, and 4 "P" bits for the parity symbol. The remaining "W"s represent watermark bits
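Regarding the scrambling step mentioned above, the sketch below shows one common form of the Arnold (cat) map applied to a square binary logo. The exact map variant and the iteration count act as the secret key; the form used by the authors is not specified in the paper, so this is only an assumed illustration.

```python
import numpy as np

def arnold_scramble(img, iterations):
    """Scramble a square image with the Arnold map (x, y) -> (x + y, x + 2y) mod N."""
    N = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(N):
            for y in range(N):
                nxt[(x + y) % N, (x + 2 * y) % N] = out[x, y]
        out = nxt
    return out

logo = np.random.randint(0, 2, (32, 32))              # stand-in 32 x 32 binary logo
watermark_bits = arnold_scramble(logo, iterations=7).flatten()
```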
To conclude this section, Fig. 4 outlines the processing flow of the proposed watermarking method. The required steps are summarized as follows:
Fig. 4 Embedding procedure of the proposed DWT-AMM watermarking algorithm
1. Scramble the watermark logo using an encryption key and convert the result to a bit stream.
2. Decompose the host speech signal using a two-level DWT.
3. Seek embeddable segments.
4. Implant the synchronization code into the first three frames of an embeddable segment in the second-level detail subband.
5. Partition the watermark bit sequence into packages in accordance with the size of the embeddable segment. The location and size of each watermark package are saved as a 15-bit message using a (15, 11) BCH encoder, and the resulting BCH code is combined with scrambled watermark bits to form a package.
6. For each embeddable segment, the packaged bits are embedded within the approximation coefficients using AMM.
7. If the end of the file has not been reached, repeat steps 3–6; otherwise, perform a two-level inverse DWT to obtain the watermarked speech signal with the synchronization information inside.
Watermark extraction follows the same basic procedure as embedding. Figure 5 provides an illustrative depiction of the process, as briefly outlined in the following:
Fig. 5 Extraction procedure of the proposed DWT-AMM watermarking algorithm
1. Decompose the watermarked speech signal using a two-level DWT. To take every possible sample shift into account, the two-level DWT is performed four times, starting respectively from the first through fourth sample positions.
2. Inspect the segment beginning at the current sample i. Detect the synchronization code using the technique developed in Section 2.2. If the synchronization code is present, go to step 3; otherwise, move one sample forward (i ← i + 1) and repeat step 2.
3. Extract the bits residing in each package using AMM. The position of the retrieved watermark bits is resolved from the BCH decoder. Update the current index i to the new position.
4. If the index reaches the end of the file, go to step 5. Otherwise, go to step 2.
5. Adopt the majority vote strategy to determine the ultimate value of each bit (a minimal sketch of this step is given after the list).
6. Convert the 1-D bit sequence to a matrix and apply the inverse Arnold transform to descramble the matrix using the correct key.
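The majority-vote step can be sketched as follows; the tie-breaking rule for an even number of copies is an assumption, as the paper does not state one.

```python
import numpy as np

def majority_vote(copies):
    """Resolve each watermark bit from multiple retrieved copies."""
    copies = np.asarray(copies)                      # shape: (num_copies, num_bits)
    return (copies.mean(axis=0) >= 0.5).astype(int)  # ties resolved toward 1
```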
Experiment results
The test materials consisted of 192 sentences uttered by 24 speakers (16 males and 8 females) drawn from the core set of the TIMIT database [41]. The speech utterances were recorded at 16 kHz with 16-bit resolution. For the convenience of computer simulation, speech files belonging to the same dialect region were concatenated to form a longer file. Since each speech utterance was recorded separately, the maximum amplitude of each file was uniformly rescaled to an identical level to maintain consistent intensity. The watermark bits used in the tests were a series of alternating 1s and 0s of sufficient length to cover the entire host signal.
Smoothing factor for the recursive filter
Our initial concern lies in the choice of an appropriate value for the variable α used in the recursive filtering (i.e., Eq. (4)) of the DWT-AMM framework. The recursive filter is meant to render a smooth estimate of the short-time energy. To understand the influence of α, we conducted a pilot test examining the watermarked speech in the presence of white Gaussian noise with the signal-to-noise ratio set at 20 dB. The tested values of α formed an arithmetic sequence ranging from 0.3 to 0.975 in increments of 0.025. We measured the variations in signal-to-noise ratio (SNR), mean opinion score of listening quality objective (MOS-LQO), and bit error rate (BER) under changes in α. Among these three measures, SNR and MOS-LQO reflect the impairment of quality due to watermarking, while BER indicates the robustness of the embedded watermark against possible attacks. The definition of SNR is given as follows:
$$ \mathrm{SNR} = 10 \log_{10}\left( \frac{\sum_n s^2(n)}{\sum_n \left(\widehat{s}(n) - s(n)\right)^2} \right), $$
where s(n) and ŝ(n) denote the original and watermarked speech signals, respectively. MOS-LQO is the outcome of the perceptual evaluation of speech quality (PESQ) metric [42], which was developed to model subjective tests commonly used in telecommunications. The PESQ assesses speech quality on a −0.5 to 4.5 scale. A mapping function to MOS-LQO is described in ITU-T Recommendation P.862.1, covering a range from 1 (bad) to 5 (excellent). Table 1 specifies the MOS-LQO scale. In this study, we adopted the implementation released on the ITU-T website [43].
Table 1 Speech quality characterized by MOS-LQO scores
To determine the effect on robustness, we examined the BER between the recovered watermark \( \tilde{W}=\left\{{\tilde{w}}_n\right\} \) and the original watermark W = {w n }:
$$ \mathrm{BER}(W, \tilde{W}) = \frac{\sum_{n=1}^{N_w} w_n \oplus \tilde{w}_n}{N_w}, $$
where N w denotes the number of watermark bits.
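Both the quality and robustness measures translate directly into code; the sketch below simply restates Eqs. (20) and (21) and is not tied to any particular implementation.

```python
import numpy as np

def snr_db(s, s_hat):
    """Eq. (20): SNR of the watermarked signal s_hat relative to the host s, in dB."""
    return 10.0 * np.log10(np.sum(s ** 2) / np.sum((s_hat - s) ** 2))

def bit_error_rate(w, w_tilde):
    """Eq. (21): fraction of mismatched bits between W and the recovered watermark."""
    w, w_tilde = np.asarray(w), np.asarray(w_tilde)
    return float(np.mean(w != w_tilde))
```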
Figure 6 presents the average BER, SNR, and MOS-LQO obtained from the test set with the parameters \( {l}_a^{(2)}=20 \) and \( {\eta}_a^{(2)}=20 \). In this experiment, non-embeddable frames were not excluded from the averaging. The obtained MOS-LQO scores were therefore slightly lower than those attained by the actual watermarking scheme, and the resulting BERs were somewhat higher than the outcomes involving merely the embeddable frames. As shown in Fig. 6, the average BER, SNR, and MOS-LQO remain roughly steady when α < 0.5 and gradually descend with an increase in α. This decreasing tendency becomes increasingly obvious once α exceeds 0.8. The lower SNR values at α > 0.9 can be attributed to the fact that the computation of \( {\overline{\rho}}_k \) refers more to previous data than to recent data. This often results from large quantization steps at the end of a speech segment, where the volume drops abruptly. A lower SNR also implies a more pronounced modification of the speech signal; therefore, the MOS-LQO presents a downward trend. In subsequent experiments, we set α to 0.8 for embedding watermark bits, as this achieves suitable BER and SNR values without deviating the MOS-LQO too far from a desirable score.
Fig. 6 The effect of using various values of α. Subplot a presents the SNRs observed in the second-level approximation subband and the time-domain signal. Subplot b delineates the MOS-LQO scores. Subplot c is the result when the watermarked speech is corrupted by white Gaussian noise with SNR = 20 dB
Detection rate of synchronization codes
In Section 2.2, we discussed how to embed and detect the synchronization codes in the second-level detail subband. The theoretical analysis in that section was deduced from a probabilistic standpoint. Here, we present the experiment results with respect to the test materials. In accordance with the rule given in Section 3, 781 speech segments were selected for embedding synchronization codes over a total length of 9,229,695 samples (or equivalently, 576.86 s). Embedding the synchronization codes as per the specifications in Section 2.2 led to an SNR of 27.73 dB and a MOS-LQO score of 4.35. The competence of the proposed method was verified by inspecting the frequency counts of misses and false alarms in the presence of various attacks. The attack types in this study involved resampling, requantization, amplitude scaling, noise corruption, low-pass and high-pass filtering, DA/AD conversion, echo addition, jittering, and compression. Table 2 lists the details of these attacks. The time-shifting attack considered in case N is intended to reveal the consequences of slight frame misalignment. This particular attack is not designed for the synchronization test but is examined in the evaluation of watermarking performance. For the other attack types, ranging from A to M, the test results are tabulated in Table 3.
Table 2 Attack types and specifications
Table 3 Detection of the embedded synchronization codes under various attacks. Overall, the synchronization code has been embedded in 781 speech segments over a length of 9,229,695 samples in total
As revealed by the results in Table 3, false alarm events seldom occurred because of the choice of a relatively high detection threshold, i.e., T = 0.45L code = 54. The proposed synchronization technique survived most attacks except for high-pass filtering above 1 kHz. The reason is that the synchronization codes are inserted into the second-level detail subband, of which the spectrum is primarily distributed from 2 to 4 kHz. Consequently, obliterating the frequency components above 1 kHz ruins the synchronization. Apart from high-pass filtering, noise corruption with SNR = 20 dB was another attack that caused obvious damage. Nonetheless, the miss rate of 66/781 is still considered acceptable, since over 91.5% of the package locations remain recoverable.
Comparison with other WT-based watermarking methods
This study compared the performance of three wavelet transform (WT)-based speech watermarking methods, namely DWT-SVD [4], LWT-DCT-SVD [3], and the proposed DWT-AMM. For the sake of a fair comparison, the watermark bits were embedded in the second-level approximation subband using an identical payload capacity of 200 bps for all three methods. It should also be noted that the idea of embedding the watermark bits and synchronization codes within different subbands is applicable to any wavelet-based method. We assumed that the second-level detail subband was reserved for the embedding of synchronization codes in all cases to ensure that each method was equally capable of resisting cropping and/or time-shifting attacks. Only frames satisfying the conditions in Eq. (17) were used to embed binary information. Furthermore, in order to provide more insight into the proposed DWT-AMM approach, we also implemented watermark embedding at a rate of 100 bps in the third-level approximation and detail subbands, both of which were obtained by splitting the second-level approximation subband. The parametric settings followed those used in the second-level approximation subband, that is, \( {l}_a^{(3)}={l}_d^{(3)}=20 \) and \( {\eta}_a^{(3)}={\eta}_d^{(3)}=20 \).
The quality of the watermarked speech signals obtained using the abovementioned methods was evaluated based on SNR and PESQ. We intentionally adjusted the parameters of the three methods to yield SNR values near 22 dB, which is above the level (20 dB) recommended by the International Federation of the Phonographic Industry (IFPI) [28]. The commensurate SNR values also imply the use of comparable embedding strengths for all three methods. As shown in Table 4, the MOS-LQO values for DWT-SVD, LWT-DCT-SVD, and DWT-AMM were distributed over a range just above 3.2. These outcomes suggest that the three methods render comparable quality. Nonetheless, a score of 3.2 merely reflects fair auditory perception. The cause is conceivably connected with the embedding strength and payload capacity. For the DWT-AMM implemented in the third-level approximation subband with a payload capacity of 100 bps, the average MOS-LQO value rose above 4.0. The MOS-LQO score could be lifted further, beyond 4.2, when DWT-AMM was applied to the third-level detail subband with the same capacity.
Table 4 Statistics of the measured SNRs and MOS-LQO scores. The data in the second and third columns are interpreted as "mean [±standard deviation]." The payload capacity for each method is listed in the last column
We examined the BER defined in Eq. (21) to evaluate the robustness of the algorithms against the attacks previously specified in Table 2. Table 5 presents the average BERs obtained by each of the methods in the presence of various attacks. All three methods successfully retrieved the watermark when no attack was present. All of them demonstrated comparable, satisfactory resistance to the G.722 and G.726 codecs. They also survived low-pass filtering (I) and resampling, because these two attacks do not severely affect the coefficients in the second-level approximation subband. For the same reason, none of the three methods passed the high-pass filtering attack, by which the low-frequency components below 1 kHz were destroyed. Low-pass filtering with a cutoff frequency of 1 kHz inflicted obvious damage on DWT-SVD and LWT-DCT-SVD; however, only minor damage was observed in the results of DWT-AMM. This can be ascribed to the use of the statistical mean for watermarking.
Table 5 Average bit error rates (in percentage) for three compared watermarking methods under various attacks
In cases involving echo addition (Attack J) and slight time-shift (Attack N), DWT-AMM outperformed DWT-SVD and LWT-DCT-SVD, due primarily to its adaptability to signal intensity. Adaptively adjusting the quantization steps also enables DWT-AMM to withstand amplitude scaling attacks. By contrast, both DWT-SVD and LWT-DCT-SVD failed in the case of amplitude scaling, due to the use of a fixed quantization step.
The addition of Gaussian white noise with the SNR controlled at 30 and 20 dB did not appear to cause any problems for DWT-SVD or LWT-DCT-SVD; however, DWT-AMM suffered minor deterioration. The reason is conceivably the imperfect acquisition of quantization steps from noise-corrupted speech. Requantization can be regarded as a type of noise corruption [44]; therefore, DWT-AMM is also subject to performance degradation in this case. The same explanation applies to the results obtained under DA/AD conversion attacks, which lead to composite impairment in time scaling, amplitude scaling, and noise corruption [44]. DWT-AMM was unable to entirely avoid damage under these conditions; however, DWT-SVD and LWT-DCT-SVD suffered even more because of their inability to cope with amplitude scaling.
For the two 100-bps versions of DWT-AMM, the one implemented in the third-level approximation subband, termed DWT-AMM\( {}_a^{(3)} \), generally exhibited superior robustness in terms of BER, and yet the resultant MOS-LQO remained above 4.0. The reduction in BER is ascribed to the fact that the watermark embedding is performed over a subband with higher intensity, while the imperceptibility can be improved at the cost of payload capacity. By contrast, the DWT-AMM implemented in the third-level detail subband, termed DWT-AMM\( {}_d^{(3)} \), offered an average MOS-LQO of 4.238. The associated SNR of 30.611 dB reflects a weaker embedding strength, thus resulting in a worse BER in comparison with that obtained from DWT-AMM\( {}_a^{(3)} \).
Conclusions
This paper proposes a novel DWT-based speech watermarking scheme. In the proposed scheme, information bits and synchronization codes are embedded within the second-level approximation and detail subbands, respectively. The synchronization code serves in frame alignment and indicates the start position of an enciphered bit sequence referred to as a package. The watermarking process is executed on a frame-by-frame basis to facilitate detectability. Binary embedding in each selected second-level subband is performed by adaptively modifying the mean value of the coefficients gathered in each subgroup. During watermark extraction, all fragments of binary bits are retrieved with the assistance of the synchronization scheme and repacked according to the header content of each package. The robustness of the embedded watermark is reinforced through the selection of frames with sufficient intensity. The proposed formulation makes it possible to specify the embedding strength in terms of the SNR of the intended subband. Specifically, the quantization steps can be acquired from the speech signal by referring to the energy level of the preceding coefficients in a recursive manner.
The watermarking scheme outlined in this paper has a maximum rate of 200 bps. PESQ test results indicate that the proposed DWT-AMM renders speech quality comparable to that obtained using two existing wavelet-based methods. With the exception of attacks that compromise the retrieval of quantization steps, the proposed DWT-AMM generally outperforms the compared methods. Overall, the proposed DWT-AMM demonstrates satisfactory performance. The incorporation of the package synchronization scheme allows the splitting of the watermark to cope with the intermittent characteristic of speech signals.
References
1. N Cvejic, T Seppänen, Digital audio watermarking techniques and technologies: applications and benchmarks (Information Science Reference, Hershey, 2008)
2. X He, Watermarking in audio: key techniques and technologies (Cambria Press, Youngstown, 2008)
3. B Lei, I Song, SA Rahman, Robust and secure watermarking scheme for breath sound. J Syst Softw 86(6), 1638–1649 (2013)
4. MA Nematollahi, SAR Al-Haddad, F Zarafshan, Blind digital speech watermarking based on eigen-value quantization in DWT. J King Saud Univ Comp Inf Sci 27(1), 58–67 (2015)
5. MA Nematollahi, SAR Al-Haddad, An overview of digital speech watermarking. Int J Speech Tech 16(4), 471–488 (2013)
6. K Hofbauer, G Kubin, WB Kleijn, Speech watermarking for analog flat-fading bandpass channels. IEEE Trans Audio Speech Lang Process 17(8), 1624–1637 (2009)
7. OTC Chen, CH Liu, Content-dependent watermarking scheme in compressed speech with identifying manner and location of attacks. IEEE Trans Audio Speech Lang Process 15(5), 1605–1616 (2007)
8. DJ Coumou, G Sharma, Insertion, deletion codes with feature-based embedding: a new paradigm for watermark synchronization with applications to speech watermarking. IEEE Trans Inf Forensics Secur 3(2), 153–165 (2008)
9. N Chen, J Zhu, Multipurpose speech watermarking based on multistage vector quantization of linear prediction coefficients. J China Univ Posts Telecom 14(4), 64–69 (2007)
10. B Yan, Y-J Guo, Speech authentication by semi-fragile speech watermarking utilizing analysis by synthesis and spectral distortion optimization. Multimed Tools Appl 67(2), 383–405 (2013)
11. D Kundur, Multiresolution digital watermarking: algorithms and implications for multimedia signals. Ph.D. thesis, University of Toronto, Ontario, Canada (1999)
12. W Li, X Xue, P Lu, Localized audio watermarking technique robust against time-scale modification. IEEE Trans Multimedia 8(1), 60–69 (2006)
13. R Tachibana, S Shimizu, S Kobayashi, T Nakamura, An audio watermarking method using a two-dimensional pseudo-random array. Signal Process 82(10), 1455–1469 (2002)
14. D Megías, J Serra-Ruiz, M Fallahpour, Efficient self-synchronised blind audio watermarking system based on time domain and FFT amplitude modification. Signal Process 90(12), 3078–3092 (2010)
15. X-Y Wang, H Zhao, A novel synchronization invariant audio watermarking scheme based on DWT and DCT. IEEE Trans Signal Process 54(12), 4835–4840 (2006)
16. I-K Yeo, HJ Kim, Modified patchwork algorithm: a novel audio watermarking scheme. IEEE Trans Speech Audio Process 11(4), 381–386 (2003)
17. BY Lei, IY Soon, Z Li, Blind and robust audio watermarking scheme based on SVD–DCT. Signal Process 91(8), 1973–1984 (2011)
18. B Lei, IY Soon, F Zhou, Z Li, H Lei, A robust audio watermarking scheme based on lifting wavelet transform and singular value decomposition. Signal Process 92(9), 1985–2001 (2012)
19. H-T Hu, L-Y Hsu, Robust, transparent and high-capacity audio watermarking in DCT domain. Signal Process 109, 226–235 (2015)
20. X-Y Wang, P-P Niu, H-Y Yang, A robust digital audio watermarking based on statistics characteristics. Pattern Recogn 42(11), 3057–3064 (2009)
21. S Wu, J Huang, D Huang, YQ Shi, Efficiently self-synchronized audio watermarking for assured audio data transmission. IEEE Trans Broadcasting 51(1), 69–76 (2005)
22. X Wang, P Wang, P Zhang, S Xu, H Yang, A norm-space, adaptive, and blind audio watermarking algorithm by discrete wavelet transform. Signal Process 93(4), 913–922 (2013)
23. H-T Hu, L-Y Hsu, H-H Chou, Variable-dimensional vector modulation for perceptual-based DWT blind audio watermarking with adjustable payload capacity. Digital Signal Process 31, 115–123 (2014)
24. A Al-Haj, An imperceptible and robust audio watermarking algorithm. EURASIP J Audio Speech Music Process 2014, 37 (2014). doi:10.1186/s13636-014-0037-2
25. X Li, HH Yu, Transparent and robust audio data hiding in cepstrum domain, in IEEE Int Conf Multimedia and Expo (2000), pp. 397–400
26. SC Liu, SD Lin, BCH code-based robust audio watermarking in cepstrum domain. J Inf Sci Eng 22(3), 535–543 (2006)
27. H-T Hu, W-H Chen, A dual cepstrum-based watermarking scheme with self-synchronization. Signal Process 92(4), 1109–1116 (2012)
28. S Katzenbeisser, FAP Petitcolas (eds.), Information hiding techniques for steganography and digital watermarking (Artech House, Boston, 2000)
29. I Daubechies, Ten lectures on wavelets (SIAM, Philadelphia, 1992)
30. B Chen, GW Wornell, Quantization index modulation: a class of provably good methods for digital watermarking and information embedding. IEEE Trans Inf Theory 47(4), 1423–1443 (2001)
31. B Chen, GW Wornell, Quantization index modulation methods for digital watermarking and information embedding of multimedia. J VLSI Signal Process Syst Signal Image Video Technol 27(1), 7–33 (2001)
32. P Moulin, R Koetter, Data-hiding codes. Proc IEEE 93(12), 2083–2126 (2005)
33. H-T Hu, J-R Chang, L-Y Hsu, Windowed and distortion-compensated vector modulation for blind audio watermarking in DWT domain. Multimed Tools Appl (2016). doi:10.1007/s11042-016-4202-8
34. H-T Hu, L-Y Hsu, Supplementary schemes to enhance the performance of DWT-RDM-based blind audio watermarking. Circuits Syst Signal Process 36(5), 1890–1911 (2016)
35. M Fallahpour, D Megias, DWT-based high capacity audio watermarking. IEICE Trans Fundam Electron Commun Comput Sci E93-A(1), 331–335 (2010)
36. M Fallahpour, D Megias, High capacity robust audio watermarking scheme based on FFT and linear regression. Int J Innovative Comput Inf Control 8(4), 2477–2489 (2012)
37. H-T Hu, L-Y Hsu, A DWT-based rational dither modulation scheme for effective blind audio watermarking. Circuits Syst Signal Process 35(2), 553–572 (2016)
38. H-T Hu, L-Y Hsu, Incorporating spectral shaping filtering into DWT-based vector modulation to improve blind audio watermarking. Wireless Personal Communications 94(2), 221–240 (2017)
39. G Forney Jr, On decoding BCH codes. IEEE Trans Inf Theory 11(4), 549–557 (1965)
40. VI Arnold, A Avez, Ergodic problems of classical mechanics (Benjamin, New York, 1968)
41. W Fisher, G Doddington, K Goudie-Marshall, The DARPA speech recognition research database: specifications and status, in Proceedings of the DARPA Workshop on Speech Recognition (1986), pp. 93–99
42. P Kabal, An examination and interpretation of ITU-R BS.1387: perceptual evaluation of audio quality. TSP Lab Technical Report, Dept. of Electrical & Computer Engineering, McGill University (2002)
43. ITU-T Recommendation P.862 Amendment 1, Source code for reference implementation and conformance tests (2003). [Online]. Available: http://www.itu.int/rec/T-REC-P.862-200303-S!Amd1/en
44. S Xiang, Audio watermarking robust against D/A and A/D conversions. EURASIP J Adv Signal Process 2011(1), 1–14 (2011)
This research work was supported by the Ministry of Science and Technology, Taiwan, Republic of China, under grants MOST 104-2221-E-197-023 & MOST 105-2221-E-197-019.
In this research work, HTH and LYH jointly developed the algorithms and conducted the experiments. HTH was responsible for drafting the manuscript. SJL provided valuable comments and helped to improve the manuscript. All authors have read and approved the final manuscript.
Department of Electronic Engineering, National Ilan University, Yi-Lan, Taiwan
Hwai-Tsu Hu & Shiow-Jyu Lin
Department of Information Management, St. Mary's Junior College of Medicine, Nursing and Management, Yi-Lan, Taiwan
Ling-Yuan Hsu
Correspondence to Hwai-Tsu Hu.
Hu, HT., Lin, SJ. & Hsu, LY. Effective blind speech watermarking via adaptive mean modulation and package synchronization in DWT domain. J AUDIO SPEECH MUSIC PROC. 2017, 10 (2017). https://doi.org/10.1186/s13636-017-0106-4
Blind speech watermarking
Adaptive mean modulation
Package synchronization
Discrete wavelet transform
What is the volume of the cone?
V = \pi r^{2} \frac{h}{3}
The volume of a cylinder is 4πx³ cubic units and its height is x units. Which expression represents the radius of the cylinder, in units?
The height of a pyramid is doubled, but its length and width are cut in half. What is true about the volume of the new pyramid?
What is the volume of the cylinder in the diagram radius 6 height 13
What is the volume of this figure A 12 cubic centimeters B 16 cubic cenitmeters C 24 cubic centimeters D 48 cubic centimeters
A rectangular pyramid has a base area of 24in and a volume of 48in what is the height of the pyramid
What is the radius of a cone with a height of 24 and a slant height of 26?
What is the surface area of a cone with a radius of 15 and a height of 8 NOTE: Answer must not be fraction or decimal
The diameter of the base of the cone measures 8 units. The height measures 6 units. What is the volume of the cone?
What is the volume of a waterbed mattress that is 7 ft by 4 ft by 1 ft?
What is the volume of an oblique square pyramid with base edges 25 ft and height 24 ft?
Helpp me please. What is the volume of the square pyramid? Round to the nearest tenth. 480.0 cm3 720.0 cm3 40.0 cm3 147.3 cm3
Find the volume of a cube that has a total area of 96 sq. in.
The volume of a cylinder is 525 cm3. The radius of the base of the cylinder is 5 cm. What is the height of the cylinder? The height of the cylinder is cm. (Simplify your answer.)
A right cone has a base diameter that is 12cm and a slant height that is 45cm what is the surface area
A cereal manufacturer decides to offer a new family-sized box based on the regular-sized box. They want the volume of the family-sized box to be three times the volume of the regular-sized box. However, they want the length of the family-sized box to be the same as the regular-sized box. If they decide to double the width to create the family-sized box, by what factor must they increase the height?
A cylindrical canister contains 3 tennis balls. Its height is 8.75inches, and its radius is 1.5. The diameter of one tennis ball is 2.5. How much of the canister's volume is unoccupied by tennis balls? Use 3.14 for π, and round your answer to the nearest hundredths place.
Find the area volume of a cube if each edge has length 4n/5 in
Find the volume of the following
What is the volume of the cylinder shown 14 radious and 10 height
The surface areas of two similar figures are 16 in squared and 25 in squared. if the volume of the larger figure is 500 in cubed what is the volume of the smaller figure?
If 3000 ft3 of air is crossing an evaporator coil and is cooled from 75°F to 55°F, what would be the volume of air, in ft3 , exiting the evaporator coil
What is the volume of the cone to the nearest tenth?
What is the surface area of the cone?
The lateral area of a cone is 19.6straight pi in2. The radius is 2.8 in. Find the slant height.
Using v=lwh what is an expression for the volume of the following rectangular prism? L=d-2/3d-9 w=4/d-4 h=2d-6/2d-4 A.4(d-2)/3(d-3)(d-4) B.4d-8/3(d-4)^2 C.4/3D-12 D.1/3d-3
Help pls what is 4/3 x 4.45 ( i need it for the volume of a sphere )
A spherical fish bowl is half-filled with water. The center of the bowl is C, and the length of segment AB is 16 inches, as shown below. Use Twenty two over seven for pi. A sphere with diameter 16 inches is drawn. Which of the following can be used to calculate the volume of water inside the fish bowl? 1 over 24 over 322 over 7(82)(16) 1 over 24 over 322 over 7(83) 1 over 24 over 322 over 7(162)(8) 1 over 24 over 322 over 7(163)
One cylinder has a volume that is 8 cm³ less than 7/8 of the volume of a second cylinder. If the first cylinder's volume is 216 cm², what is the correct equation and value of x, the volume of the second cylinder
Ryleigh received a $50 gift card to the frozen yogurt shop for her birthday. The shop sells yogurt sundaes for $4 and yogurt cones for $3. How many sundaes and cones can she buy with her card? If x represents the number of yogurt sundaes and y represents the number of yogurt cones, which graph correctly shows the solution to the problem?
A farmer is building a grain silo for storage. he estimates that will need 256 cubic yards of storage. the grain silo will be shaped as a cube. how long should one side of the grain silo be? (recall that the volume of a cube is calculated by l3, where l is the length of one side).
NetAct: a computational platform to construct core transcription factor regulatory networks using gene activity
Kenong Su, Ataur Katebi, Vivek Kohar, Benjamin Clauss, Danya Gordin, Zhaohui S. Qin, R. Krishna M. Karuturi, Sheng Li & Mingyang Lu
Genome Biology volume 23, Article number: 270 (2022)
A major question in systems biology is how to identify the core gene regulatory circuit that governs the decision-making of a biological process. Here, we develop a computational platform, named NetAct, for constructing core transcription factor regulatory networks using both transcriptomics data and literature-based transcription factor-target databases. NetAct robustly infers regulators' activity using target expression, constructs networks based on transcriptional activity, and integrates mathematical modeling for validation. Our in silico benchmark test shows that NetAct outperforms existing algorithms in inferring transcriptional activity and gene networks. We illustrate the application of NetAct to model networks driving TGF-β-induced epithelial-mesenchymal transition and macrophage polarization.
One of the major goals of systems biology is to infer and model complex gene regulatory networks (GRNs) which underlie the biological processes of human disease [1,2,3,4,5,6]. Particularly important are those gene networks that control decisions regarding cellular state transitions (e.g., replicative to quiescent [7,8,9], epithelial to mesenchymal (EMT) [10], pluripotent to differentiated [11, 12]), given the central importance of such regulatory processes to both healthy development as well as disease formation such as cancer tumorigenesis.
To construct and model GRNs associated with the biological process under investigation, researchers have developed two primary systems biology approaches. The first is a bottom-up approach, in which researchers focus on identifying a core GRN composed of a small set of master regulators [13]. Once the core GRN is obtained, mathematical modeling is then applied to simulate the gene expression dynamics [14,15,16,17], which helps elucidate the potential gene regulatory mechanism driving the biological process in question. The current practice for synthesizing a core GRN is by compiling data via an extensive literature search, e.g., in these studies [18,19,20]. While this works well for systems where sufficient knowledge has been gained and accumulated, it is less effective in cases where key component genes and regulatory interactions have yet to be discovered. Due to the rapid increase of biomedical publications, manual synthesis of literature information has become extremely time-consuming and prone to human error in data interpretation. One way to address the labor-intensive issue is to rely on existing manually curated databases, such as KEGG [21] and Ingenuity Pathway Analysis (IPA) [22]. However, these databases often compile gene regulatory interactions from different tissues, species, or diseases. Therefore, it is hard to obtain context-specific interactions directly from these types of databases.
The second approach adopts a top-down perspective, in which researchers apply bioinformatics and statistical methods on genome-wide transcriptomics and/or genomics data to infer large-scale GRNs [13]. These data-driven methods are ideal for obtaining a global picture of gene regulation and the overall structure of gene-gene interactions. This approach can also be used to characterize key regulators and regulatory interactions between genes that are specific to the biological context of the study. However, conventional bioinformatics methods for gene network inference are usually not designed to identify an integrated working system. These methods typically rely on significance tests to determine the nodes and edges of a gene network, yet it is rare to evaluate whether the constructed gene network is capable of operating as a functional dynamical system [23]. Moreover, many statistical methods work well to identify the association between genes, but not their causation, thus limiting the applicative value of the top-down approach in characterizing gene regulatory mechanisms.
To overcome the abovementioned issues, a relatively new approach has been explored in several studies in which the top-down and bottom-up approaches are integrated to infer and model a core GRN [23,24,25,26,27,28,29,30,31]. In this combined approach, a GRN is constructed with bioinformatics tools using genome-wide gene expression data, followed by mathematical modeling of the GRN to simulate gene expression steady states and explore their similarity with biological cellular states. The simulations can help validate the accuracy of the constructed GRN and further clarify the regulatory roles of genes and interactions in driving cellular state transitions. This combined approach helps to discover existing and new regulatory interactions specific to the cell types and experimental conditions under study. Additionally, it helps pinpoint master regulators and reduce the system's overall complexity. The GRN modeling is particularly crucial for cases with non-trivial cellular state transitions, such as multi-step state transitions as observed in epithelial-mesenchymal transition (EMT) [32], and bifurcating state transitions, as observed in stem cell differentiation [33]. This is because the GRNs constructed by the top-down approach are not guaranteed to capture these state transition patterns. So far, to the best of our knowledge, there is no computational platform available that utilizes this combined approach for systematic GRN inference and modeling.
In this study, we introduce a computational platform, named NetAct, for inferring a core GRN of key transcription factors (TFs) using both transcriptomics data and a literature-based TF-target database. Integrating both resources allows us to take full advantage of the existing knowledgebase of transcriptional regulation. NetAct adopts the combined top-down bioinformatics and bottom-up systems biology approaches, designed specifically to address the following two major issues.
First, many network inference methods rely on correlations of gene expression data, yet the actual transcriptional activities of many master regulators may not be reflected in their gene expression. Instead, the activity may be better associated with either their protein level, the level of a certain posttranslational modification, localization, or their DNA binding affinity. As a result, the master regulators with weak correlations between the expression level and the transcriptional activity will likely be discarded in the network. Some algorithms have been developed to infer the activities of regulators from transcriptomics data, such as VIPER [2], NCA [34], and AUCELL [35]. However, most of these algorithms (1) are not designed for gene network modeling, (2) still rely on the coexpression of a TF and its targeted genes, or (3) do not take advantage of the known regulatory interactions from the literature, hindering their applicability as automated algorithms for generic use in systems biology.
Second, conventional mathematical modeling approaches have been applied over the years to simulate the dynamics of a GRN, yet they are not particularly effective in analyzing core GRNs. A popular method models the gene expression dynamics of a system using the chemical rate equations that govern the associated gene regulatory processes. However, it is difficult to directly measure most of the kinetic parameters of a GRN. Although some parameter values can be learned from published results, many others are often based on educated guesses which significantly limits the predictive power of mathematical modeling. Moreover, a core GRN is not an isolated system. Thus, an ideal modeling paradigm should also consider other genes that interact with the core network. To address this infamous parameter issue, we have developed the modeling algorithm RACIPE [29, 36, 37] in previous work that analyzes a large ensemble of mathematical models with random kinetic parameters. RACIPE has been applied to model the dynamical behavior of gene regulatory networks of different biological processes, such as epithelial-mesenchymal transition [23, 29], cell cycle [37], and stem cell differentiation [38].
The new NetAct platform addresses the abovementioned issues by (1) inferring the activities of TFs for individual samples using the gene expression levels of their targeted genes, (2) identifying the regulatory interactions between two TFs based on their activities rather than their expressions, and (3) subsequently simulating the constructed core GRN with RACIPE to validate and evaluate the gene expression dynamics of the core GRN. In this paper, we describe in detail the NetAct platform, extensive benchmark tests for TF-target databases, TF activity inference, network construction, and two examples of applications to model GRNs with time series gene expression data.
We developed a computational systems biology platform, named NetAct, to construct transcription factor (TF)-based GRNs using TF activity. The method uniquely integrates both generic TF-target relationships from literature-based databases and context-specific gene expression data. NetAct also integrates our previously developed mathematical modeling algorithm RACIPE to evaluate whether the constructed network functions properly as a dynamical system. It evaluates the roles of every gene in the network by in silico perturbation analysis. NetAct has three major steps: (1) identifying the core TFs using gene set enrichment analysis (GSEA) [39] with an optimized TF-target gene set database (Fig. 1a), (2) inferring TF activity (Fig. 1b), and (3) constructing a core TF network (Fig. 1c). Then, the network is validated and analyzed by simulating its dynamics using mathematical modeling by RACIPE. Details of each step are given in the "Methods" section and Additional file 1: Supplementary Note 5. Below, we demonstrate how we optimized the NetAct algorithm, compared its performance of activity inference with three existing methods using in silico gene expression data, and applied the network modeling approach to two biological datasets.
Schematics of NetAct. a First, key transcription factors (TFs) are identified using gene set enrichment analysis (GSEA) with a literature-based TF-target database. b Second, the TF activity of an individual sample is inferred from the expression of target genes. From the co-expression and modularity analysis of target genes, we find target genes that are either activated (blue), inhibited (red), or not strongly related to the TF (gray). The activity is defined as the weighted average of target genes activated by the TF minus the weighted average of target genes inhibited by the TF. c Lastly, a TF regulatory network is constructed according to the mutual information of inferred TF activity and literature-based regulatory interactions. d Performance of GSEA for various TF-target gene set databases. The plot shows the sensitivity and specificity with different q-value cutoffs. The gene set databases in the benchmark include the combined literature-based database (D1); FANTOM5-based databases (D2) with 20, 50, and 100 target genes per TF; the combined experimental-based database (D3, ChIP); and RcisTarget databases (D4), one with 10 targets per TF binding motif and another with 50 total number of targets per TF
Literature-based TF-target relationships facilitate TF inference
To establish a comprehensive gene set database containing TF-target relationships, we considered data from different sources (Additional file 3: Table S1, also see Additional file 1: Supplementary Note 1). They are (D1) a literature-based database, consisting of data from TRRUST [40], RegNetwork [41], TFactS [42], and TRED [43]; (D2) a gene regulatory network database FANTOM5 [44], whose interactions are extracted from networks constructed using RNA expression data from 394 individual tissues; (D3) a database derived from resources of putative TF binding targets, including ChEA [45], TRANSFAC [46], JASPAR [47], and ENCODE [48]; and (D4) a database derived from motif-enrichment analysis, RcisTarget [35]. These databases have been frequently used to study the transcriptional regulations and have already been utilized for network construction [29, 49].
We evaluated the performance of these databases by GSEA on a benchmark gene expression dataset. GSEA is a popular statistical method that can be used to evaluate the significant overlapping between a set of genes and differentially expressed genes between two experimental conditions. Using various types of TF-target databases, our goal is to find the best version of the database, so that GSEA can detect the target gene sets of the relevant TFs to be statistically significant. The benchmark dataset, denoted as set B, consists of a compilation of 12 microarray and 32 RNA-seq gene expression data (Additional file 3: Table S2). Each of these datasets contains at least three samples under the normal condition (control) and three samples under the treatment condition in which a specific TF is treated by knockdown (KD). We applied GSEA (with slight modifications, details in the "Methods" section) on set B to evaluate whether the enrichment analysis can detect the perturbed TFs. The underlying assumption is that, with a better TF-target gene set database, GSEA will be more likely to detect the corresponding perturbed TFs. For each TF-target database and each gene expression data in set B, we calculated the q-values of all the TFs in the database by GSEA to determine whether the target genes of the perturbed TF are enriched in the differentially expressed genes. We found that more significant q-values are usually associated with relatively larger number of targets for each TF; however, too many (e.g., greater than 2000) targets will result in non-significant q-values. The summary statistics, such as the total number of TFs and the average number of target genes per TF, are summarized in Additional file 3: Table S1. Furthermore, these corresponding q-values from all the gene expression data are converted to specificity and sensitivity values (see the "Methods" section), and different databases are compared based on the area under the sensitivity-specificity curves (Fig. 1d). We found that the literature-based database has the best overall performance; thus, we used this database for further analyses. Our results are in line with a previous benchmark study [50] that literature-based TF-target database outperforms others in capturing transcriptional regulation.
Inferring TF activity without using TF expression
NetAct can accurately infer TF activity for an individual sample directly from the expression of genes targeted by the TF (see the "Methods" section). Here, we will illustrate how NetAct infers TF activity on two cases of microarray KD experiments—one case for shRNA KD of FOXM1 and shRNA KD of MYB in lymphoma cells (GEO: GSE17172 [51]), and another case for KD of BCL6 on both OCI-Ly7 and Pfeiffer GCB-DLBCL cell lines (GEO: GSE45838 [2]). NetAct first successfully identified the TFs that undergo knockdown in each case, i.e., FOXM1, MYB, and BCL6, by applying GSEA on the optimized TF-target database (q-value < 0.15).
Next, for each identified TF, NetAct calculates its activity using the mRNA expression of the direct targets of the TF. We first constructed a Spearman correlation matrix from the expression of the targeted genes. As shown in Fig. 2a, the correlation matrix after hierarchical clustering analysis typically consists of two red diagonal blocks, two blue off-diagonal blocks, and the remaining elements with low correlations which will be filtered out subsequently (details in the "Methods" section). Within the red blocks, the expression of any column gene is positively correlated with that of any row gene, while within the blue blocks, the expression of any column gene is negatively correlated with that of any row gene. This indicates that the genes in the two red blocks are anti-correlated in the gene expression with each other. However, if the correlation matrix is constructed from 100 or 200 randomly selected genes (Fig. 2b, c), such a clear pattern disappears. Thus, our observation suggests that genes from one of the red blocks are activated by the TF, whereas genes from the other block are inhibited by the TF. Moreover, filtered genes are not likely to be directly targeted by the TF in this context, or they are regulated by multiple factors simultaneously and are thus likely not a good indicator for the TF activity.
Illustration of the grouping scheme for target genes of a transcription factor. a The co-expression matrix of MYB target genes in shRNA knockdown of MYB lymphoma cells by hierarchical clustering analysis (Pearson correlation and complete linkage). b, c The poor clustering results from the co-expression of randomly selected 100 (b) and 200 genes (c). In panels a–c, the left subplots show the outcomes of all tested genes, and the right subplots show the outcomes of genes after the filtering step. Compared to the random cases, MYB target genes have a clear pattern of red and blue diagonal blocks from their co-expression. d, e The percentage of differentially expressed genes remained after the filtering step in the case of FOXM1 and MYB knockdown, respectively. f, g The proportion of genes from the activation group that are positively correlated with the TF expression (red bars) and the proportion of genes from the inhibition group that are negatively correlated with the TF expression (blue bars). h Spearman correlation (average and standard deviation) between TF activity and target expression (red) and between TF expression and target expression (blue)
We further evaluated how the filtering step removes noise and retains the important genes in the analysis. We found that, after the filtering step, most of the differentially expressed (DE) genes are retained, as evidenced by Fig. 2d. Here, DE genes from each comparison were retrieved by using limma with a cutoff for the adjusted p-values at 0.05 and a cutoff for the log2 fold changes at 2. Subsequently, for DE TFs, we evaluated the Spearman correlations between the TFs and the corresponding targeted genes. In traditional approaches (such as ARACNe [1], WGCNA [52], and BEST [53]), the co-expression between a TF and its targeted genes is commonly used to identify its association and assign the sign (activation or inhibition) of the regulation. We found that, for each TF, most of the genes in a block either positively correlate with the TF expression (Fig. 2f, g, blue bars), or they negatively correlate with the TF expression (Fig. 2f, g, red bars). The tests demonstrate that, without directly using TF expression, NetAct can successfully identify two groups of important target genes—genes in each group are either activated or inhibited by the TF. These two groups of genes are further used to infer TF activity by a weighted average of their gene expression (Eq. 1 in the "Methods" section). Additionally, we found that the correlations between inferred TF activity and target expression are usually higher than the correlations between TF expression and target expression (Fig. 2h).
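For readers who wish to reproduce the grouping step, the following is a minimal R sketch of the idea, assuming a hypothetical genes-by-samples expression matrix `expr` and a target list `targets`; it illustrates the block structure in Fig. 2a but is not the exact NetAct implementation (the filtering step is detailed in the "Methods" section).

```r
# Minimal sketch of the target-grouping idea (not the exact NetAct implementation).
# Assumptions: `expr` is a genes-by-samples expression matrix and `targets` holds
# the target genes of one TF taken from the TF-target database.
group_targets <- function(expr, targets) {
  x  <- t(expr[intersect(targets, rownames(expr)), , drop = FALSE])
  cc <- cor(x, method = "spearman")                  # target-by-target correlation
  hc <- hclust(as.dist(1 - cc), method = "complete")
  split(colnames(cc), cutree(hc, k = 2))             # two putative co-expression groups
}
```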
Evaluating activity inference and network construction in a simulation benchmark
To evaluate the accuracy and robustness of inferred TF activity, we performed extensive benchmark tests to compare NetAct with other existing methods. We first performed the benchmark tests on simulated data because TF activity is usually not directly measurable. The activity of a TF can be related to its protein level or the level of a particular posttranslational modification, such as phosphorylation. Therefore, it is very difficult to obtain the ground truth of TF activity from an experimental dataset. Thus, in this benchmark test, we rely on mathematical modeling to simulate both the expression and activity of each TF from a synthetic TF-target network. With this simulated data, we benchmark NetAct against other methods.
To establish the simulated benchmark dataset, we first constructed a synthetic TF-target network with a total of 30 TFs. Each TF has 20 target genes randomly selected with replacement from a pool of 1000 genes. In addition, each TF also regulates two (randomly selected) of the 30 TFs. This synthetic network has a hierarchical structure, where a target gene may be co-regulated by multiple TFs. The type of each TF-to-TF regulation is either excitatory, inhibitory, or signaling, with a chance of 25%, 25%, and 50%, respectively; the type of each TF-to-target regulation is either excitatory or inhibitory with a 50% chance for each. Here, the signaling regulation changes the activity of a TF without changing its expression, whereas the excitatory or inhibitory interactions change both the activity and expression. From one realization of the synthetic network generation, the synthetic GRN contains a total of 477 genes (30 TFs, 447 targeted genes) and 660 regulatory links (Fig. 3a). See Additional file 1: Supplementary Note 4 for more details.
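As an illustration of this sampling scheme, the R sketch below generates one realization of such a synthetic network; gene and TF labels are placeholders, and details such as the handling of duplicate targets may differ from the actual benchmark scripts.

```r
set.seed(1)
tfs  <- paste0("TF", 1:30)                # 30 TFs
pool <- paste0("G", 1:1000)               # pool of 1000 candidate target genes

edges <- do.call(rbind, lapply(tfs, function(tf) {
  rbind(
    # 20 target genes per TF, drawn with replacement from the gene pool
    data.frame(from = tf, to = sample(pool, 20, replace = TRUE),
               type = sample(c("activation", "inhibition"), 20, replace = TRUE)),
    # 2 TF-TF regulations per TF; signaling links change activity but not expression
    data.frame(from = tf, to = sample(setdiff(tfs, tf), 2),
               type = sample(c("activation", "inhibition", "signaling"),
                             2, replace = TRUE, prob = c(0.25, 0.25, 0.5)))
  )
}))
```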
Simulation of both gene expression and activity of a synthetic GRN. a The synthetic GRN consisting of 30 TFs and 447 target genes. An edge of transcriptional activation is shown as black line with an arrowhead; an edge of transcriptional inhibition as red line with a blunt head; an edge of signaling interaction as green line with an arrowhead. Transcription factor labeled as TF9 was selected for knockdown simulations. b The summary of the correlation analyses of the simulated expression and activity. The left, middle, and right columns represent the outcomes for TF and target activities, TF and target expressions, and TF activities and target expressions, respectively. For each category, the histograms of Spearman correlations are shown for non-interacting gene pairs (first row), interacting gene pairs (second row), gene pairs of excitatory transcriptional regulation (third row), gene pairs of excitatory signaling regulation (fourth row), and gene pairs of inhibitory transcriptional regulation (fifth row). Here, the target activity is set to be the same as the target expression for non-TF genes. c The histograms of Spearman correlations for gene pairs of target genes from the same TF. d Jaccard indices between the ground truth regulons of the synthetic GRN and the regulons inferred by ARACNe using either the simulated expression (red) or activity data (blue)
To simulate the gene expression of the TF-target network, we applied a generalized version of the mathematical modeling algorithm, RACIPE [37]. Using the network topology as the only input, RACIPE can generate an ensemble of random models, each corresponds to a set of randomly sampled parameters. Here, we used RACIPE to generate simulated data including gene expression and TF activity for the benchmark. Some previous studies have also adopted a similar modeling approach for benchmarking [54, 1]. To consider the effects of a signaling regulatory link, we generalized RACIPE to simulate both expression and activity for each TF. See Additional file 1: Supplementary Note 5 for more details.
In the benchmark test, we used RACIPE to simulate 100 models with randomly generated kinetic parameters. From these 100 models, we obtained 83 stable steady-state gene expression and activity profiles for the 477 genes. As expected, TF activity and target activity from a regulatory link are correlated (1st column, 2nd row in Fig. 3b), TF activity and target expression (3rd column, 2nd row in Fig. 3b) are correlated, and the expression of two target genes (Fig. 3c) are correlated. However, there is no strong correlation between TF expression and target expression (2nd column, 2nd row in Fig. 3b) and, for a signaling regulatory link, between TF activity and target expression (3rd column, 4th row in Fig. 3b). Next, we applied ARACNe to predict the regulon (i.e., the list of targeted genes by a specific TF) using either the simulated expression profiles or the simulated activity profiles. We found that the regulons predicted from the activity profiles are substantially more similar to the predefined ground truth regulons (measured by the Jaccard index [55]) than those predicted from the expression profiles (Fig. 3d). The results indicate the need of using the TF activity, instead of TF expression, to identify TF-target relationships.
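The regulon comparison in Fig. 3d reduces to computing a Jaccard index per TF; a minimal sketch is given below, assuming hypothetical named lists `truth` and `pred` that map each TF to its target genes.

```r
# Jaccard index between two gene sets
jaccard <- function(a, b) length(intersect(a, b)) / length(union(a, b))

# Per-TF Jaccard indices between ground-truth and inferred regulons,
# where `truth` and `pred` are named lists of target-gene vectors
regulon_jaccard <- function(truth, pred) {
  shared <- intersect(names(truth), names(pred))
  sapply(shared, function(tf) jaccard(truth[[tf]], pred[[tf]]))
}
```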
Next, we compared the performance of NetAct with several related algorithms, NCA, VIPER, and AUCell, in inferring TF activity using both the simulated expression profiles from the 83 models and a predefined regulon (i.e., the association of each TF with its target genes) (details for the implementation of these algorithms in Additional file 1: Supplementary Note 3). The predicted activity was then compared with the simulated activity (ground truth) to evaluate the performance. To mimic the real-life scenario where the target information may not be complete and accurate, we considered more challenging tests where the regulon data is randomly perturbed. Here, for a specific perturbation level, we generated 100 sets of regulon data by replacing a certain number of target genes for each TF with non-interacting genes. The numbers of replaced genes are 0 (0% level of perturbation), 5 (25%), 10 (50%), and 15 (75%) in different tests. We then evaluated the performance of NetAct, NCA, and VIPER. The AUCell protocol advises including only the positively regulated target genes in the regulons. To satisfy this criterion, we updated both the unperturbed and the perturbed regulons. For the unperturbed regulons, we retained only the positive interactions; for the perturbed regulons, we retained the positive target genes that were not replaced and a random half of the replaced target genes (assuming that half of the genes are positively regulated by the TF). We then evaluated AUCell performance using these updated regulons (denoted AUCell 1) and non-updated regulons (denoted AUCell 2). As shown in Fig. 4a (also Additional file 2: Figs. S3-S6), NetAct significantly outperforms each of the other methods in reproducing the simulated activity profiles at each perturbation level. As expected, the performance of NetAct decreases with increasing perturbation levels of the regulon data; however, NetAct still performs reasonably well even when only 25% of the actual target genes are kept in the regulon data. The results indicate that NetAct can robustly and accurately infer TF activity even with a noisy TF-target database.
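The regulon perturbation used in this test can be expressed compactly, as in the minimal sketch below; `regulons` (a named list of target vectors) and `all_genes` (the full gene pool) are placeholder objects.

```r
# Replace `n_replace` targets of every TF with randomly chosen non-interacting genes.
# `regulons` is a named list of target-gene vectors; `all_genes` is the full gene pool.
perturb_regulons <- function(regulons, all_genes, n_replace) {
  lapply(regulons, function(tgts) {
    n    <- min(n_replace, length(tgts))
    keep <- tgts[-sample(seq_along(tgts), n)]
    c(keep, sample(setdiff(all_genes, tgts), n))
  })
}

# Example: 25% perturbation of 20-target regulons (5 targets replaced per TF)
# pert25 <- perturb_regulons(regulons, all_genes, 5)
```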
The performance of activity and network inference from a simulation benchmark. a TF activity inference. TF activity was inferred by several methods using the gene expression data simulated from the synthetic TF-target gene regulatory network (GRN) and the corresponding regulons. For each TF, we computed Spearman correlations between the inferred activity and simulated activity (ground truth) for all the simulated models. Then, we calculated the average correlation values over all TFs. The plots show the median of average correlations for the cases where we used the original regulons defined by the TF-target network (0% perturbation), and the regulons where 5 (25% perturbation), 10 (50% perturbation), and 15 (75% perturbation) target genes are randomly replaced with non-interacting genes. The median values were computed over 100 repeats of random replacement for each perturbation level, and the values of the average correlations are reported for the case of zero perturbation. Shown are the results for NetAct (black), NCA (gray), VIPER (cyan), AUCELL 1 where regulons contain only positively associated target genes (orange), and AUCELL 2 where regulons contain all target genes (red). b–d Network inference. The panels show the performance of network inference algorithms from the simulation benchmark by the precision and recall for different link selection thresholds. b Network inference performance against all ground truth regulatory interactions. Tested methods are GENIE3, GRNBoost2, and PPCOR, using transcription factor (TF) expression; GENIE3 using TF activity inferred by AUCell; NetAct using its inferred TF activity. For the latter two methods, original (unperturbed) regulons obtained from the regulatory network were used. c Network inference performance of NetAct against all ground truth regulatory interactions using the regulons with 0% (the original), 25%, 50%, and 75% target perturbations. d Network inference performance of NetAct in discovering new regulatory interactions not existing in the regulons. NetAct was applied using the regulons at different perturbation levels (25%, 50%, and 75%). The benchmark results shown here are for the case of the untreated simulation. The results for the case of the knockdown simulation are shown in Additional file 2: Fig. S7
Furthermore, we tested another scenario where the test data contains simulated data from two experimental conditions, e.g., one representing an unperturbed condition and the other representing a perturbed condition. Here, we used the same synthetic network but compiled 40 expression and activity data from the abovementioned simulation (unperturbed condition), together with 43 expression and activity data from the simulations in which a specific TF (TF9) is knocked down (perturbed condition). We then performed a similar test as above and found that NetAct outperformed each of the other methods (Additional file 2: Fig. S2, Additional file 2: Fig. S7a). The notable performance gain of NetAct mainly emanates from the removal of incoherent (or noisy) targets of a TF before the activity calculation in NetAct (see the "Methods" section).
In addition, we performed a network construction benchmark of NetAct and a few other network construction algorithms using the in silico simulation dataset, as shown in Fig. 4b–d. NetAct, using the TF activity inferred from the original regulon database, outperforms not only network construction methods using gene expression, such as GENIE3 [56], GRNBoost2 [57], and ppcor [58, 59], but also GENIE3 using the TF activity inferred by AUCell (Fig. 4b). The last approach was included to mimic the popular method SCENIC. Moreover, we evaluated the performance of NetAct when using a perturbed regulon database. We found that NetAct continues to perform well when the perturbation level is as large as 50%, whether evaluated by all the ground truth interactions (Fig. 4c) or by those not present in the regulon database (Fig. 4d). The latter case was designed to evaluate the capability of NetAct in predicting novel interactions. We observed similar outcomes for the second scenario, in which the simulation data come from two conditions (Additional file 2: Fig. S7b-d, see Additional file 1: Supplementary Note 6 for details of the benchmark method). In summary, our in silico benchmark test demonstrates the high performance of NetAct over existing state-of-the-art methods in both inferring TF activity and gene regulatory networks.
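For reference, the precision-recall evaluation behind Fig. 4b-d amounts to thresholding scored link predictions against the ground-truth edge list; a generic sketch with placeholder inputs is shown below.

```r
# pred:  data.frame with columns from, to, score (e.g., mutual information)
# truth: data.frame with columns from, to (ground-truth regulatory links)
pr_curve <- function(pred, truth, thresholds) {
  key  <- function(d) paste(d$from, d$to)
  tset <- key(truth)
  t(sapply(thresholds, function(th) {
    called <- key(pred[pred$score >= th, ])
    tp <- sum(called %in% tset)
    c(threshold = th,
      precision = if (length(called) > 0) tp / length(called) else NA,
      recall    = tp / length(tset))
  }))
}
```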
Characterizing cellular state transitions by GRN construction and modeling
In the previous sections, we demonstrated the capability of NetAct in identifying the key TFs and predicting TF activity. With these data, NetAct further constructs a TF-based GRN using the mutual information (MI) of the activity from the identified TFs (details in the "Methods" section). We then applied RACIPE to the constructed network to check whether the simulated network dynamics are consistent with the experimental observations. Below, we show the utility of NetAct with two biological examples: epithelial-mesenchymal transition (EMT) and macrophage polarization.
In the first case (EMT), we analyzed a set of time-series microarray data on A549 epithelial cells undergoing TGF-β-induced epithelial-mesenchymal transition (EMT) (GEO: GSE17708) [60]. According to the overall structure of the transcriptomics profiles, we arranged the samples from different time points into three groups—early stage (time points 0 h, 0.5 h, and 1 h), middle stage (time points 2 h, 4 h, and 8 h), and late stage (time points 16 h, 24 h, and 72 h). We then performed three-way GSEA with our human literature-based TF-target database to identify the enriched TFs that are active between early-middle, early-late, and middle-late time points. Forty-one TFs (q-value cutoff 0.01) were identified including many major transcriptional master regulators, such as BRCA1, CTNNB1, MYC, TWIST1, TWIST2, and ZEB1, and factors that are directly associated with TGF-β signaling pathways, such as SMAD3 [61], FOS, and JUN [62]. The hierarchical clustering analysis (HCA) of the expression and activity profiles for these TFs is shown in Fig. 5a. While the expression profiles are quite noisy, the activities show a clear gradual transition from the epithelial (E) to mesenchymal (M) state. Note that the signs of the activity of a few non-DE TFs were flipped according to experimental evidence of protein-protein interactions and the nature of transcriptional regulation (see the "Methods" section for detailed procedures and Additional file 3: Table S3 for a list of the changes).
Network modeling of TGF-β-induced EMT. Application of NetAct to an EMT in human cell lines using time-series microarray data. a Experimental expression and activity of enriched transcription factors. b Inferred TF regulatory network. Blue lines and arrowheads represent the gene activation; red lines and blunt heads represent gene inhibition. c The relationship between SMAD3 gene activity and the first principal component of the activity of all network genes from RACIPE simulations. d Hierarchical clustering analysis of simulated gene activity (with Pearson correlation as the distance function and Ward.D2 linkage method). Colors at the top indicate the two clusters from the simulated gene activity. The blue cluster represents the mesenchymal state, and the yellow cluster represents the epithelial state. The color legend for the heatmap is at the bottom right. e Knockdown simulations of the TF regulatory network. The bar plot shows the proportion of RACIPE models in each state (epithelial or mesenchymal) for the conditions of the knockdown of every TF
We then constructed a TF regulatory network (Fig. 5b) and performed mathematical modeling to simulate the dynamical behavior of the network using RACIPE (Fig. 5c, d). We found that, consistent with the expression and activity profiles (Fig. 5a), the network clearly allows two distinct transcriptional clusters that can be associated with E (the yellow cluster in Fig. 5d) and M states (the blue cluster in Fig. 5d). To assess the role of TGF-β signaling in inducing EMT, we performed a global bifurcation analysis [29] in which the SMAD3 level is used as the control parameter (Fig. 5c). Here, SMAD3 was selected as it is the direct target of TGF-β signaling [61]. As shown in Fig. 5c, when SMAD3 level is either very low or high, the cells reside in E or M states. However, when SMAD3 is at the intermediate level, the cells could be driven into some rare hybrid phenotypes. These results are consistent with our previous studies on the hybrid states of EMT [32, 63]. Using RACIPE, we systematically performed perturbation analyses by knocking down every TF in the network. Our simulation results (Fig. 5e) suggest that knocking down TFs, such as RELA, SP1, EGR1, and CREBBP, has major effects in driving M to E transition (MET), while knocking down TFs, such as TP53, AR, and KLF4, has major effects in driving E to M transition (EMT). These predictions are all consistent with existing experimental evidence (Additional file 3: Table S4).
Compared to a previous model of the EMT network based on an extensive literature survey [19], the GRN constructed by NetAct identified some of the same regulators induced by the TGF-β pathway, such as SMAD3/4, TWIST2, ZEB1, CTNNB1, NFKB1, RELA, FOS, and EGR1. Because of the lack of microRNAs and protein-protein interactions in the database, NetAct did not identify factors like miR200 and signaling molecules like PI3K. Interestingly, the NetAct model identifies STAT1/3, which was connected to other signaling pathways, such as HGF, PDGF, IGF1, and FGR, but not TGF-β in the previous network model. In addition, the NetAct model identified regulators in other important pathways in TGF-β-induced EMT in cancer cells, e.g., cell cycle pathway (RB1 and E2F1) and DNA damage pathway (P53).
In the second case, we studied the macrophage polarization program in mouse bone marrow-derived macrophage cells using time series RNA-seq data (GEO: GSE84517) [64]. In this experiment, macrophage progenitor cells (denoted as UT condition) were treated with (1) IFNγ to induce a transition to the M1 state, (2) IL4 to induce a transition to the M2 state, and (3) both IFNγ and IL4 to induce a transition to a hybrid M state. Here, we reprocessed the raw counts of RNA-seq with a standard protocol (details in Additional file 1: Supplementary Note 2). From principal component analysis (PCA) on the whole transcriptomics (Fig. 6b), we found that the gene expression undergoes distinct trajectories when macrophage cells were treated with either IFNγ (M1 state) or IL4 (M2 state). When both IFNγ and IL4 were administered, the gene expression trajectories are in the middle of the previous two trajectories, suggesting that cells are in a hybrid state (hybrid M state). We aim to use NetAct to elucidate the crosstalk in transcriptional regulation downstream of cytokine-induced signaling pathways during macrophage polarization.
Network modeling of macrophage polarization. Application of NetAct to induced macrophage polarization via drug treatment in mice using RNA-seq data. a Experimental expression and activity of enriched TFs. b PCA projection of genome-wide gene expression profiles. Different point shapes indicate the time after treatment, and colors indicate treatment types c PCA projection of gene activity of enriched TFs. d Inferred TF regulatory network. Blue lines and arrowheads represent the gene activation; red lines and blunt heads represent the gene inhibition. e PCA projection of simulated gene activity of inferred network colored by mapping each model back to experimental data. f Hierarchical clustering analysis of simulated gene activity (with Pearson correlation as the distance function and Ward.D2 linkage method). Colors at the top indicate the mapped experimental conditions. The color legend of the heatmap is at the bottom
Here, we applied GSEA on six comparisons—untreated versus IFNγ-treated samples (one comparison between the untreated and the treated after 2 h, another between the untreated and the treated after 4 h, same for the other comparisons), untreated versus IL4-treated samples, and untreated versus IFNγ + IL4-treated samples. Using our mouse literature-based TF-target database, we identified 79 TFs (q-value cutoff 0.05 for UT vs IL4-2 h and 0.01 for all others). The expression and activity profiles of these TFs (Fig. 6a–c) capture the essential dynamics of transcriptional state transitions during macrophage polarization as follows. NetAct successfully identified important TFs in these processes, including Stat1, the major target of IFNγ, Stat2, Stat6, Cebpb, Nfkb family members, Hif1a, and Myc [65,66,67]. Myc is known to be induced by IL-4 at later phases of M2 activation and required for early phases of M1 activation [66]. Interestingly, we find Myc has high expression in both IL4 stimulation and its co-stimulation with IFN but its activity is high only in IL4 stimulation. We then constructed a TF regulatory network that connects 60 TFs (Fig. 6d) and simulated the network with RACIPE, from which we found that simulated gene expression (Fig. 6f) matches well with experimental gene expression data (Fig. 6a) (see Additional file 1: Supplementary Note 7). RACIPE simulations display disparate trajectories from UT to IL4 or IFNγ activation and stimulation with both IL4 and IFNγ. Strikingly, we found in the simulation that there is a spectrum of hybrid M states between M1 and M2 (Fig. 6e), which is consistent with the experimental observations of macrophage polarization [65]. Moreover, we also predict from our GRN modeling that the transition from UT to hybrid M is likely to first undergo a transition to either M1 or M2 before a second transition to hybrid M (Fig. 6e). This is because of our observation from the simulation data that there are fewer models connecting UT and hybrid M than any of the other two routes (i.e., UT to M1, and UT to M2) (Additional file 2: Fig. S10). Taken together, we showed that the NetAct-constructed GRN model captures the multiple cellular state transitions during macrophage polarization.
In conclusion, we show that NetAct can identify the core TF-based GRN using both the literature-based TF-target database and the gene expression data. We also demonstrate how RACIPE-based mathematical modeling complements NetAct-based GRN inference in elucidating the dynamical behaviors of the inferred GRNs. Together, these two methods can be applied to infer biologically relevant regulatory interactions and the dynamical behavior of biological processes.
In this study, we have developed NetAct—a computational platform for constructing and modeling core transcription factor (TF)-based regulatory networks. NetAct takes a data-driven approach to establish gene regulatory network (GRN) models directly from transcriptomics data and takes a mathematical modeling approach to characterize cellular state transitions driven by the inferred GRN. The method specifically integrates both literature-based TF-target databases and transcriptomics data of multiple experimental conditions to accurately infer TF transcriptional activity based on the expression of their target genes. Using the inferred TF activity, NetAct further constructs a TF-based GRN, whose dynamics can then be evaluated and explored by mathematical modeling. Our approach in combining top-down and bottom-up systems biology approaches will contribute to a better understanding of the gene regulatory mechanism of cellular decision-making.
One of the key components of NetAct is a pre-compiled TF-target gene set database. Here, we have evaluated different types of TF-target databases in identifying knocked-down TFs using publicly available transcriptomics datasets. In this test, we have considered databases derived from the literature, gene co-expression, cis-motif prediction, and TF-binding motif data. Our benchmark tests suggest that the literature-based database clearly outperformed the other databases. The literature-based database usually contains a small (~ 30) number of target genes for each TF, but these data have direct experimental evidence and are therefore more reliable than those from the other sources. However, the literature-based database inevitably misses some regulatory interactions, which may limit the overall performance of NetAct. One way to address this issue is to further update the literature-based database once new information becomes available. Another potential approach is to compile a database by combining different types of databases. However, this might be quite challenging, as different databases have data of very different sizes (the number of target genes) and quality. Future investigations in this direction can help to expand our knowledge of transcriptional regulation and, at the same time, improve the performance of the algorithm.
NetAct also has a unique approach to infer the TF activity from the gene expression of the target genes while accounting for the activating or inhibitory nature of each regulation. From our in silico benchmark tests, we found that NetAct outperforms major activity inference methods, owing to the design of the filtering step and the use of a high-quality TF-target database. NetAct is also robust against some inaccuracy in the TF-target database and noise in gene expression data, because of its capability to filter out irrelevant targets while retaining key targets.
One potential issue is the assignment of the sign of TF activity, as it is algorithmically assigned according to the correlation with TF expression. In the case where the TF expression is very noisy or the expression is completely unrelated to TF activity, the sign assignment might be inaccurate. To deal with this issue, we have devised a semi-manual approach that identifies the sign of TF activity according to the sign of other interacting TFs. Another potential issue is that some TFs from the same family may have very similar target genes; therefore, NetAct will have difficulty in identifying exactly which TF from the family is most relevant. Additional data resources, such as epigenomics [68], TF-binding data [35], and Hi-C data [69], will be helpful to address this problem. One of the future directions is to design methods to integrate these data resources.
Lastly, instead of constructing a global transcriptional regulatory network, NetAct focuses on modeling a core regulatory network with only interactions between key TFs. The underlying hypothesis is that these TFs and the associated regulatory interactions play major roles in controlling the gene expression of different cellular states and the patterns of state transitions. With the core network identified using NetAct, we can further perform simulations with mathematical modeling algorithms, such as RACIPE, to analyze the control mechanism of the core network. These simulations allow us to generate new hypotheses, which can be further tested experimentally. The validation data can further help to improve the model. Ideally, this needs to be an iterative process to refine a core network model, which is indeed another interesting future direction.
We developed NetAct, a computational platform for constructing and modeling core transcription factor regulatory networks using both transcriptomics data and literature-based transcription factor-target gene databases. Utilizing both types of resources allows us to identify regulatory genes and links specific to the data and fully take advantage of the existing knowledgebase of transcriptional regulation. Our method of combining top-down and bottom-up systems biology approaches contributes to a better understanding of the mechanism of gene regulation driving cellular state transitions.
Selecting enriched TFs
For a comparison between two experimental conditions, we obtained a ranked gene list quantified by the absolute value of the test statistics (t statistics in microarray and Wald test statistics in RNA-seq) from differential expression (DE) analysis [70], followed by gene set enrichment analysis (GSEA) [39] using our optimized transcription factor (TF)-target gene set database. Here, for each TF, the corresponding gene set consists of all its target genes. GSEA identifies important TFs whose targets are enriched in DE genes between the two conditions. The significance test is achieved through 10,000 permutations of the gene list names and TFs are kept for further analysis when the q-value is below a certain threshold cutoff (0.05 by default). A C++ implementation of this version of GSEA, specifically for gene name permutations, has been provided in NetAct for fast computation. For multiple comparisons, a set of enriched TFs are first identified from each pairwise comparison and then a union of the multiple sets of TFs is considered.
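The sketch below illustrates this enrichment test in plain R, assuming a named vector `stats` of absolute test statistics and a target gene set `gene_set`; it is a simplified stand-in for the optimized C++ GSEA implementation shipped with NetAct.

```r
# Weighted Kolmogorov-Smirnov-like enrichment score, as in standard GSEA
enrichment_score <- function(stats, gene_set, p = 1) {
  stats <- sort(stats, decreasing = TRUE)           # ranked gene list
  hit   <- names(stats) %in% gene_set
  up    <- abs(stats)^p * hit; up <- up / sum(up)   # step up at every hit
  down  <- (!hit) / sum(!hit)                       # step down at every miss
  walk  <- cumsum(up - down)
  walk[which.max(abs(walk))]
}

# Null distribution from permutations of gene names
gsea_pvalue <- function(stats, gene_set, n_perm = 10000) {
  es_obs  <- enrichment_score(stats, gene_set)
  es_null <- replicate(n_perm, {
    perm <- stats; names(perm) <- sample(names(stats))
    enrichment_score(perm, gene_set)
  })
  mean(abs(es_null) >= abs(es_obs))                 # empirical p-value
}
```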
In the database benchmark test, for each database, we computed the sensitivity and specificity values for different q-value cutoffs. Here, for each cutoff value, we defined the sensitivity as the proportion of datasets where the gene sets for the KD TFs were enriched with q-values below the cutoff value. We also defined specificity as the fraction of cases where the gene sets for the other TFs (non-KD TFs in the benchmark) were not enriched, i.e., their q-values were above the cutoff value. We then computed the area under the ROC curve (AUC) using the DescTools R package [71].
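Equivalently, the benchmark curve can be computed directly as sketched below (trapezoidal integration is used here in place of the DescTools call); `kd_q` and `other_q` are placeholder vectors of q-values.

```r
# kd_q:    q-values of the knocked-down TF in each benchmark dataset
# other_q: q-values of all non-knocked-down TFs across datasets
roc_auc <- function(kd_q, other_q) {
  cutoffs <- sort(unique(c(kd_q, other_q, 0, 1)))
  sens <- sapply(cutoffs, function(cv) mean(kd_q    <= cv))   # KD TF detected
  spec <- sapply(cutoffs, function(cv) mean(other_q >  cv))   # non-KD TF not called
  x <- 1 - spec; y <- sens
  ord <- order(x)
  sum(diff(x[ord]) * (head(y[ord], -1) + tail(y[ord], -1)) / 2)  # trapezoidal AUC
}
```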
Inferring TF activity
TF activity is inferred from the expression of target genes retrieved from the TF-target database. NetAct defines the activity of the selected TFs using two different schemes—one using only the expression of target genes and the other using the expression of both the TF and its target genes. The second scheme is only used for the situation of noisy target gene expression. For each TF, the algorithm selects the better scheme according to its performance, as described below.
Without directly using TF expression
For each TF, its downstream targets are first divided into two modules using Newman's community detection algorithm [72] on the pairwise Spearman correlation matrix of the target genes. Then, within each module, some less-correlated genes are filtered out to improve the quality of the inference. Here, the filtering step is achieved as follows: (1) each target gene is assigned a vector of correlations with the other target genes, and the distance between two genes is calculated as the sum of squared differences between their correlation vectors; (2) the k-means algorithm (k = 1) is performed within each cluster to determine the center vector; and (3) genes are filtered out if their distance from the center is larger than the average distance.
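A minimal sketch of this filtering step is given below, using hierarchical clustering as a stand-in for Newman's community detection and assuming `cc` is the Spearman correlation matrix of the TF's targets; it approximates, rather than reproduces, the NetAct implementation.

```r
# `cc` is the Spearman correlation matrix of all target genes of one TF.
# Hierarchical clustering stands in for Newman's community detection here.
modules <- cutree(hclust(as.dist(1 - cc)), k = 2)

filter_module <- function(cc, members) {
  prof <- cc[members, , drop = FALSE]        # correlation profile of each gene
  ctr  <- colMeans(prof)                     # k-means center with k = 1
  d    <- apply(prof, 1, function(v) sum((v - ctr)^2))
  members[d <= mean(d)]                      # drop genes far from the module center
}

kept <- lapply(split(names(modules), modules), function(m) filter_module(cc, m))
```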
This step outputs two groups of genes—genes in one group are supposed to be activated by the TF, while genes in the other group are inhibited by the TF. Note that, at this stage, the nature of activation/inhibition of the individual group is not yet determined. The activity of the TF is calculated as:
$$A\left(\textrm{TF}\right)=\frac{\sum_{i=1}^n{w}_i{g}_i{I}_i}{\sum_{i=1}^n{w}_i}$$
where gi is the standardized expression value of a target gene i, and wi is the weighting factor defined as a Hill function:
$${w}_i=1/\left[1+{\left(\frac{s_i}{s_0}\right)}^n\right]$$
where si is the adjusted p-value from the DE analysis for gene i, the threshold s0 is 0.05, and the Hill coefficient n is set to 1/5 for best performance (Additional file 2: Fig. S8). Ii is 1 if the corresponding gene belongs to the first group and − 1 if it belongs to the second group. If the calculated TF activity pattern is not consistent with the TF expression trend (evaluated by Spearman correlation), both the sign of the two groups and the sign of the activity are flipped. According to our in silico benchmark test (Additional file 2: Fig. S9), we found that the majority of the targets in one group are activated by the TF, and the majority of those in the other group are inhibited by the TF. For genes in the inhibition group, the higher the TF activity, the more the genes are suppressed. Thus, the formula in Eq. 1 captures well the activity of a TF with respect to both its activated and inhibited targets. We also explored a few other community detection algorithms [73,74,75] and found they produced similar results (Additional file 2: Fig. S1).
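Equations 1 and 2 translate directly into a few lines of R, as sketched below; all object names are placeholders, and the snippet omits the scheme-selection logic described in the following subsections.

```r
# g:       standardized expression matrix of retained target genes (genes x samples)
# s:       adjusted p-values of those genes from the DE analysis
# grp:     +1 / -1 group labels from the clustering step
# tf_expr: expression of the TF itself, used only to orient the sign
tf_activity <- function(g, s, grp, tf_expr, s0 = 0.05, hill_n = 1/5) {
  w   <- 1 / (1 + (s / s0)^hill_n)           # Eq. 2: Hill-function weights
  act <- colSums(w * grp * g) / sum(w)       # Eq. 1: weighted average per sample
  rho <- cor(act, tf_expr, method = "spearman")
  if (!is.na(rho) && rho < 0) act <- -act    # flip sign if inconsistent with TF expression
  act
}
```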
Using TF expression
For each TF, its downstream targets are first divided into two groups according to the sign of the Spearman correlation between the TF expression and the target expression. Similar to the previous scheme, in each group, target genes are filtered out if the correlation value is less than the average correlation of all the targets. The activity of the TF is also calculated using Eq. 1.
Sign assignment for DE TF
For any DE TF of interest (i.e., a TF whose expression differs significantly across cell type conditions), NetAct computes the activity values from both schemes (with or without the TF's expression) and selects the better scheme based on how well the activity values correlate with target expression. To this end, NetAct calculates the absolute value of the Spearman correlation between the TF activity and the expression of each target, and selects the scheme whose activity gives the larger average correlation.
Sign assignment for non-DE TF
If the expression patterns of the identified TFs fail to show the significant differences between cell type conditions, a semi-manual method to assign the sign of activity can be adopted. Putative interaction partners between DE and non-DE TFs in the inferred network are identified using Fisher's exact test between TF targets in the NetAct TF-target database. The most significant pairs are then cross-referenced with the STRING database (https://string-db.org) to identify instances of protein-protein interactions (PPIs). A literature search is then performed to identify the nature of the PPI, and the sign of the non-DE TF is adjusted based on the DE TF and the type of PPI. Note that the last step needs to be done manually for each modeling application. Additional file 3: Table S3 shows the details of TF sign flipping and supported experimental evidence for the two network modeling applications.
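The overlap test between a DE TF and a non-DE TF can be sketched as below, where the target lists and the gene universe of the database are placeholder inputs.

```r
# Over-representation of shared targets between a DE TF and a non-DE TF,
# relative to all genes in the TF-target database (`universe`).
target_overlap_test <- function(targets_a, targets_b, universe) {
  in_a <- universe %in% targets_a
  in_b <- universe %in% targets_b
  fisher.test(table(in_a, in_b), alternative = "greater")$p.value
}
```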
Network construction and mathematical modeling
NetAct constructs a TF regulatory network using both the TF-TF regulatory interactions from the TF-target database and the activity values. (1) The network is constructed using mutual information between the activity values of two TFs. (2) Interactions are filtered out if they cannot be found in the TF-target regulatory database (i.e., D1). (3) The sign of each link is determined by the sign of the Spearman correlation between the activity of two TFs. (4) We keep the interaction between two TFs if their mutual information is higher than a threshold cutoff. With different cutoff values for mutual information, NetAct establishes networks of different sizes. To identify the best network model capturing gene expression profiles, we apply mathematical modeling to each of the TF networks using RACIPE [29]. RACIPE takes network topology as the input and generates an ensemble of mathematical models with random kinetic parameters. By simulating the network, we expect to obtain multiple clusters of gene expression patterns that are constrained by the complex interactions in the network. RACIPE was also applied to generate simulated benchmark test sets for a synthetic TF-target network (Additional file 1: Supplementary Note 5).
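Steps (1) to (4) above can be sketched as follows; the mutual information estimator (equal-width binning) and the object names are illustrative assumptions, not necessarily those used inside NetAct.

```r
# Mutual information from equal-width binning of two activity profiles
mi_binned <- function(x, y, bins = 8) {
  p  <- table(cut(x, bins), cut(y, bins)) / length(x)
  px <- rowSums(p); py <- colSums(p)
  nz <- p > 0
  sum(p[nz] * log(p[nz] / outer(px, py)[nz]))
}

# activity: TFs-by-samples matrix of inferred activity; db_edges: TF-TF pairs
# from the TF-target database (columns `from`, `to`)
build_network <- function(activity, db_edges, mi_cutoff) {
  links <- db_edges[db_edges$from %in% rownames(activity) &
                    db_edges$to   %in% rownames(activity), ]        # step (2)
  links$mi <- apply(links, 1, function(e)
    mi_binned(activity[e[["from"]], ], activity[e[["to"]], ]))      # step (1)
  links$sign <- apply(links, 1, function(e)
    sign(cor(activity[e[["from"]], ], activity[e[["to"]], ],
             method = "spearman")))                                 # step (3)
  links[links$mi >= mi_cutoff, ]                                    # step (4)
}
```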
The information of the TF-target gene set databases is listed in Additional file 3: Table S1. The public gene expression datasets for algorithm optimization and benchmarking are listed in Additional file 3: Table S2. The datasets and computational scripts for the in silico benchmark; the network modeling scripts, including those for data processing, network construction, and network simulations; and the inferred network topology files are available in GitHub [76] and in Zenodo [77]. The NetAct software is available in GitHub [78] and in Zenodo [79]. NetAct is platform-independent and written in R, with portions of the code in C++ for improved performance. NetAct is licensed under the MIT License.
Margolin AA, et al. ARACNE: an algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context. BMC Bioinformatics. 2006;7:S7.
Alvarez MJ, et al. Functional characterization of somatic mutations in cancer using network-based inference of protein activity. Nat Genet. 2016;48:838.
Ament SA, et al. Transcriptional regulatory networks underlying gene expression changes in Huntington's disease. Mol Syst Biol. 2018;14:e7435.
Chan TE, Stumpf MPH, Babtie AC. Gene regulatory network inference from single-cell data using multivariate information measures. Cell Syst. 2017;5:251–267.e3.
Carré C, Mas A, Krouk G. Reverse engineering highlights potential principles of large gene regulatory network design and learning. Npj Syst Biol Appl. 2017;3:17.
Fiers MWEJ, et al. Mapping gene regulatory networks from single-cell omics data. Brief Funct Genom. https://doi.org/10.1093/bfgp/elx046.
Gérard C, Goldbeter A. Temporal self-organization of the cyclin/Cdk network driving the mammalian cell cycle. Proc Natl Acad Sci. 2009;106:21643–8.
Laub MT, McAdams HH, Feldblyum T, Fraser CM, Shapiro L. Global analysis of the genetic network controlling a bacterial cell cycle. Science. 2000;290:2144–8.
Li F, Long T, Lu Y, Ouyang Q, Tang C. The yeast cell-cycle network is robustly designed. Proc Natl Acad Sci. 2004;101:4781–6.
Nieto MA, Huang RY-J, Jackson RA, Thiery JP. EMT: 2016. Cell. 2016;166:21–45.
Kim J, Chu J, Shen X, Wang J, Orkin SH. An extended transcriptional network for pluripotency of embryonic stem cells. Cell. 2008;132:1049–61.
Loh Y-H, et al. The Oct4 and Nanog transcription network regulates pluripotency in mouse embryonic stem cells. Nat Genet. 2006;38:431.
Katebi A, Ramirez D, Lu M. Computational systems-biology approaches for modeling gene networks driving epithelial–mesenchymal transitions. Comput Syst Oncol. 2021;1:e1021.
Alon U. An introduction to systems biology: design principles of biological circuits. Chapman and Hall/CRC; 2006. https://doi.org/10.1201/9781420011432.
Kirk PDW, Babtie AC, Stumpf MPH. Systems biology (un)certainties. Science. 2015;350:386–8.
Chasman D, Roy S. Inference of cell type specific regulatory networks on mammalian lineages. Curr Opin Syst Biol. 2017;2:130–9.
Ben-Jacob E, Lu M, Schultz D, Onuchic JN. The physics of bacterial decision making. Front Cell Infect Microbiol. 2014;4:154.
Dutta P, Ma L, Ali Y, Sloot PMA, Zheng J. Boolean network modeling of β-cell apoptosis and insulin resistance in type 2 diabetes mellitus. BMC Syst Biol. 2019;13:36.
Steinway SN, et al. Network modeling of TGFβ signaling in hepatocellular carcinoma epithelial-to-mesenchymal transition reveals joint Sonic Hedgehog and Wnt pathway activation. Cancer Res. 2014;74:5963–77.
Zeigler AC, et al. Computational model predicts paracrine and intracellular drivers of fibroblast phenotype after myocardial infarction. Matrix Biol. 2020;91–92:136–51.
Kanehisa M, Furumichi M, Sato Y, Ishiguro-Watanabe M, Tanabe M. KEGG: integrating viruses and cellular organisms. Nucleic Acids Res. 2021;49:D545–51.
Krämer A, Green J, Pollard J, Tugendreich S. Causal analysis approaches in Ingenuity Pathway Analysis. Bioinforma Oxf Engl. 2014;30:523–30.
Ramirez D, Kohar V, Lu M. Toward modeling context-specific EMT regulatory networks using temporal single cell RNA-Seq data. Front Mol Biosci. 2020;7:54.
Dunn S, Li MA, Carbognin E, Smith A, Martello G. A common molecular logic determines embryonic stem cell self-renewal and reprogramming. EMBO J. 2019;38:e100003.
Wooten DJ, Gebru M, Wang H-G, Albert R. Data-driven math model of FLT3-ITD acute myeloid leukemia reveals potential therapeutic targets. J Pers Med. 2021;11:193.
Udyavar AR, et al. Novel hybrid phenotype revealed in small cell lung cancer by a transcription factor network model that can explain tumor heterogeneity. Cancer Res. 2017;77:1063–74.
Wooten DJ, et al. Systems-level network modeling of small cell lung cancer subtypes identifies master regulators and destabilizers. PLoS Comput Biol. 2019;15:e1007343.
Khan FM, et al. Unraveling a tumor type-specific regulatory core underlying E2F1-mediated epithelial-mesenchymal transition to predict receptor protein signatures. Nat Commun. 2017;8:198.
Kohar V, Lu M. Role of noise and parametric variation in the dynamics of gene regulatory circuits. Npj Syst Biol Appl. 2018;4:1–11.
Moignard V, et al. Decoding the regulatory network of early blood development from single-cell gene expression measurements. Nat Biotechnol. 2015;33:269–76.
Sha Y, Wang S, Zhou P, Nie Q. Inference and multiscale model of epithelial-to-mesenchymal transition via single-cell transcriptomic data. Nucleic Acids Res. 2020;48:9505–20.
Lu M, Jolly MK, Levine H, Onuchic JN, Ben-Jacob E. MicroRNA-based regulation of epithelial–hybrid–mesenchymal fate determination. Proc Natl Acad Sci. 2013;110:18144–9.
Jang S, et al. Dynamics of embryonic stem cell differentiation inferred from single-cell transcriptomics show a series of transitions through discrete cell states. eLife. 2017;6:e20487.
Liao JC, et al. Network component analysis: reconstruction of regulatory signals in biological systems. Proc Natl Acad Sci. 2003;100:15522–7.
Aibar S, et al. SCENIC: single-cell regulatory network inference and clustering. Nat Methods. 2017;14:1083–6.
Huang B, et al. Interrogating the topological robustness of gene regulatory circuits by randomization. PLoS Comput Biol. 2017;13:e1005456.
Katebi A, Kohar V, Lu M. Random parametric perturbations of gene regulatory circuit uncover state transitions in cell cycle. iScience. 2020;23:101150.
Huang B, et al. Decoding the mechanisms underlying cell-fate decision-making during stem cell differentiation by random circuit perturbation. J R Soc Interface. 2020;17:20200500.
Subramanian A, et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc Natl Acad Sci U S A. 2005;102:15545–50.
Han H, et al. TRRUST: a reference database of human transcriptional regulatory interactions. Sci Rep. 2015;5:11432.
Liu Z-P, Wu C, Miao H, Wu H. RegNetwork: an integrated database of transcriptional and post-transcriptional regulatory networks in human and mouse. Database. 2015;2015.
Essaghir A, Demoulin J-B. A minimal connected network of transcription factors regulated in human tumors and its application to the quest for universal cancer biomarkers. PLoS One. 2012;7:e39666.
Jiang C, Xuan Z, Zhao F, Zhang MQ. TRED: a transcriptional regulatory element database, new entries and other development. Nucleic Acids Res. 2007;35:D137–40.
Abugessaisa I, et al. FANTOM5 transcriptome catalog of cellular states based on Semantic MediaWiki. Database J Biol Databases Curation. 2016;2016.
Lachmann A, et al. ChEA: transcription factor regulation inferred from integrating genome-wide ChIP-X experiments. Bioinformatics. 2010;26:2438–44.
Wingender E, Dietze P, Karas H, Knüppel R. TRANSFAC: a database on transcription factors and their DNA binding sites. Nucleic Acids Res. 1996;24:238–41.
Sandelin A, Alkema W, Engström P, Wasserman WW, Lenhard B. JASPAR: an open-access database for eukaryotic transcription factor binding profiles. Nucleic Acids Res. 2004;32:D91–4.
Luo Y, et al. New developments on the Encyclopedia of DNA Elements (ENCODE) data portal. Nucleic Acids Res. 2020;48:D882–9.
Abugessaisa I, et al. FANTOM5 transcriptome catalog of cellular states based on Semantic MediaWiki. Database J Biol Databases Curation. 2016;2016:baw105.
Garcia-Alonso L, Ibrahim MM, Turei D, Saez-Rodriguez J. Benchmark and integration of resources for the estimation of human transcription factor activities. Genome Res. 2021;31(4):745.
Alvarez MJ, Sumazin P, Rajbhandari P, Califano A. Correlating measurements across samples improves accuracy of large-scale expression profile experiments. Genome Biol. 2009;10:R143.
Langfelder P, Horvath S. WGCNA: an R package for weighted correlation network analysis. BMC Bioinformatics. 2008;9:559.
Hu M, Qin ZS. Query large scale microarray compendium datasets using a model-based Bayesian approach with variable selection. PLoS One. 2009;4:e4495.
Schaffter T, Marbach D, Floreano D. GeneNetWeaver: in silico benchmark generation and performance profiling of network inference methods. Bioinformatics. 2011;27:2263–70.
Levandowsky M, Winter D. Distance between sets. Nature. 1971;234:34.
Huynh-Thu VA, Irrthum A, Wehenkel L, Geurts P. Inferring regulatory networks from expression data using tree-based methods. PLoS One. 2010;5:e12776.
Moerman T, et al. GRNBoost2 and Arboreto: efficient and scalable inference of gene regulatory networks. Bioinformatics. 2019;35:2159–61.
Kim S. ppcor: an R package for a fast calculation to semi-partial correlation coefficients. Commun Stat Appl Methods. 2015;22:665–74.
Pratapa A, Jalihal AP, Law JN, Bharadwaj A, Murali TM. Benchmarking algorithms for gene regulatory network inference from single-cell transcriptomic data. Nat Methods. 2020;17:147–54.
Sartor MA, et al. ConceptGen: a gene set enrichment and gene set relation mapping tool. Bioinformatics. 2010;26:456–63.
Schiffer M, Von Gersdorff G, Bitzer M, Susztak K, Böttinger EP. Smad proteins and transforming growth factor-β signaling. Kidney Int. 2000;58:S45–52.
Zhang Y, Feng X-H, Derynck R. Smad3 and Smad4 cooperate with c-Jun/c-Fos to mediate TGF-β-induced transcription. Nature. 1998;394:909–13.
Jolly MK, et al. Implications of the hybrid epithelial/mesenchymal phenotype in metastasis. Front Oncol. 2015;5.
Piccolo V, et al. Opposing macrophage polarization programs show extensive epigenomic and transcriptional cross-talk. Nat Immunol. 2017;18:530–40.
Mosser DM, Edwards JP. Exploring the full spectrum of macrophage activation. Nat Rev Immunol. 2008;8:958–69.
Bae S, et al. MYC-mediated early glycolysis negatively regulates proinflammatory responses by controlling IRF4 in inflammatory macrophages. Cell Rep. 2021;35:109264.
Hu X, Ivashkiv LB. Cross-regulation of signaling pathways by interferon-γ: implications for immune responses and autoimmune diseases. Immunity. 2009;31:539–50.
Pliner HA, et al. Cicero predicts cis-regulatory DNA interactions from single-cell chromatin accessibility data. Mol Cell. 2018;71:858–871.e8.
Malysheva V, Mendoza-Parra MA, Saleem M-AM, Gronemeyer H. Reconstruction of gene regulatory networks reveals chromatin remodelers and key transcription factors in tumorigenesis. Genome Med. 2016;8:57.
Ritchie ME, et al. limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res. 2015;43:e47.
Signorell A, et al. DescTools: tools for descriptive statistics. 2022.
Newman MEJ. Modularity and community structure in networks. Proc Natl Acad Sci. 2006;103:8577–82.
Reichardt J, Bornholdt S. Statistical mechanics of community detection. Phys Rev E. 2006;74:016110.
Newman MEJ. Analysis of weighted networks. Phys Rev E. 2004;70:056131.
Newman MEJ. Finding community structure in networks using the eigenvectors of matrices. Phys Rev E. 2006;74:036104.
Su K, Katebi A, Kohar V, Clauss B, Gordin D, Qin Z, et al. NetAct analysis code and data. GitHub. 2022. https://github.com/lusystemsbio/NetActAnalysis.
Su K, Katebi A, Kohar V, Clauss B, Gordin D, Qin Z, et al. NetAct analysis code and data. GitHub (Zenodo link). 2022. https://doi.org/10.5281/zenodo.7352281.
Su K, Katebi A, Kohar V, Clauss B, Gordin D, Qin Z, et al. NetAct R package. GitHub. 2022; https://github.com/lusystemsbio/NetAct.
Su K, Katebi A, Kohar V, Clauss B, Gordin D, Qin Z, et al. NetAct R package GitHub (Zenodo link); 2022. https://doi.org/10.5281/zenodo.7352299.
Wenjing She was the primary editor of this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.
The review history is available as Additional file 4.
The study is supported by startup funds from The Jackson Laboratory and Northeastern University, by the National Cancer Institute of the National Institutes of Health under Award Number P30CA034196, and by the National Institute of General Medical Sciences of the National Institutes of Health under Award Number R35GM128717.
Kenong Su and Ataur Katebi contributed equally to this work.
Department of Biomedical Informatics, Emory University, Atlanta, GA, 30322, USA
Kenong Su
Department of Bioengineering, Northeastern University, Boston, MA, 02115, USA
Ataur Katebi, Danya Gordin & Mingyang Lu
Center for Theoretical Biological Physics, Northeastern University, Boston, MA, 02115, USA
Ataur Katebi, Benjamin Clauss, Danya Gordin & Mingyang Lu
The Jackson Laboratory, Bar Harbor, ME, 04609, USA
Vivek Kohar & Mingyang Lu
Genetics Program, Graduate School of Biomedical Sciences, Tufts University, Boston, MA, 02111, USA
Benjamin Clauss & Mingyang Lu
Department of Biostatistics and Bioinformatics, Emory University, Atlanta, GA, 30322, USA
Zhaohui S. Qin
The Jackson Laboratory for Genomic Medicine, Farmington, CT, 06032, USA
R. Krishna M. Karuturi & Sheng Li
Department of Computer Science and Engineering, University of Connecticut, Storrs, CT, USA
Graduate School of Biological Sciences & Eng., University of Maine, Orono, ME, USA
R. Krishna M. Karuturi
Ataur Katebi
Vivek Kohar
Benjamin Clauss
Danya Gordin
Sheng Li
Mingyang Lu
M.L. conceived the study. K.S. developed and V.K. and A.K. improved the NetAct algorithm. A.K. constructed and performed the in silico benchmark. K.S. and V.K. performed the benchmark tests on public experimental gene expression data. B.C. and V.K. performed the network modeling. D.G. helped to refine the NetAct code. S.L., K.K., and Z.S.Q. provided conceptual input to the manuscript. K.S., A.K., V.K., and M.L. wrote the manuscript, with help from all other authors. The authors read and approved the final manuscript.
Correspondence to Mingyang Lu.
Additional file 1: Supplementary Note 1.
TF-target gene set databases. Supplementary Note 2. Processing transcriptomics data. Supplementary Note 3. In silico TF activity inference benchmark. Supplementary Note 4. Construction of the synthetic GRN. Supplementary Note 5. Simulation of activity and expression using RACIPE. Supplementary Note 6. In silico network construction benchmark. Supplementary Note 7. Applications of network modeling with NetAct.
Additional file 2: Fig. S1.
Comparison of the grouping schemes by Newman's method (NetAct) and other community detection algorithms. Fig. S2. Correlation structure in the simulated activities and expressions of the synthetic gene regulatory network with knockdown of transcription factor TF9. Fig. S3. Comparing stability of activity inference methods. Fig. S4. Null distribution of the average correlations for the four methods. Fig. S5. Activity levels of four transcription factors. Fig. S6. Scatter plot for activities of four transcription factors. Fig. S7. The performance of activity inference and network construction from a simulation benchmark. Fig. S8. Optimization of the Hill coefficient in the TF activity inference. Fig. S9. Comparison of NetAct grouping scheme for target genes with the synthetic gene regulatory network. Fig. S10. Analysis of 10,000 RACIPE-simulated gene expression profiles for the macrophage depolarization TF regulatory network.
Supplementary tables: Table S1. Summary of the transcription factor (TF)-target databases for both human and mouse genomes. Table S2. Summary of the publicly available gene expression data sets for benchmarking TF-target gene set databases. Table S3. Sign correction for network construction and modeling. Table S4. Predicted driver TFs from network modeling of the EMT network and experimental evidences from the literature.
Review history.
Su, K., Katebi, A., Kohar, V. et al. NetAct: a computational platform to construct core transcription factor regulatory networks using gene activity. Genome Biol 23, 270 (2022). https://doi.org/10.1186/s13059-022-02835-3
Accepted: 05 December 2022
Gene regulatory networks
Gene regulatory circuits
Cellular state transitions
Transcriptional activity
Epithelial-mesenchymal transition
Macrophage polarization
How can one derive Schrödinger equation?
The Schrödinger equation is the basis to understanding quantum mechanics, but how can one derive it? I asked my instructor but he told me that it came from the experience of Schrödinger and his experiments. My question is, can one derive the Schrödinger equation mathematically?
quantum-mechanics
schroedinger-equation
A.khalaf
$\begingroup$ Possible duplicate: physics.stackexchange.com/questions/135872/… $\endgroup$
– Bubble
$\begingroup$ @Bubble related, but not a duplicate IMO since that question is asking for a physical motivation, not a derivation. $\endgroup$
– David Z
$\begingroup$ Related physics.stackexchange.com/q/30537 $\endgroup$
Be aware that a "mathematical derivation" of a physical principle is, in general, not possible. Mathematics does not concern the real world, we always need empirical input to decide which mathematical frameworks correspond to the real world.
However, the Schrödinger equation can be seen arising naturally from classical mechanics through the process of quantization. More precisely, we can motivate quantum mechanics from classical mechanics purely through Lie theory, as is discussed here, yielding the quantization prescription
$$ \{\dot{},\dot{}\} \mapsto \frac{1}{\mathrm{i}\hbar}[\dot{},\dot{}]$$
for the classical Poisson bracket. Now, the classical evolution of observables on the phase space is
$$ \frac{\mathrm{d}}{\mathrm{d}t} f = \{f,H\} + \partial_t f$$
and so its quantization is the operator equation
$$ \frac{\mathrm{d}}{\mathrm{d}t} f = \frac{\mathrm{i}}{\hbar}[H,f] + \partial_t f$$
which is the equation of motion in the Heisenberg picture. Since the Heisenberg and Schrödinger picture are unitarily equivalent, this is a "derivation" of the Schrödinger equation from classical phase space mechanics.
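As a quick worked illustration of this prescription (my own addition, not part of the original answer): for the one-dimensional harmonic oscillator with $H = \frac{p^2}{2m} + \frac{1}{2}m\omega^2 x^2$, the classical bracket $\{x,p\}=1$ becomes $[\hat{x},\hat{p}]=\mathrm{i}\hbar$, and the quantized evolution equation gives

$$ \frac{\mathrm{d}\hat{x}}{\mathrm{d}t} = \frac{\mathrm{i}}{\hbar}[\hat{H},\hat{x}] = \frac{\hat{p}}{m}, \qquad \frac{\mathrm{d}\hat{p}}{\mathrm{d}t} = \frac{\mathrm{i}}{\hbar}[\hat{H},\hat{p}] = -m\omega^2\hat{x},$$

which reproduces the classical Hamilton equations $\dot{x}=\{x,H\}=p/m$ and $\dot{p}=\{p,H\}=-m\omega^2 x$, with operators in place of phase-space functions.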
ACuriousMind♦
$\begingroup$ What about the "derivation" via path integrals? $\endgroup$
– Physics_maths
$\begingroup$ @LoveLearning: It all depends on where you want to start. In my view, the most mysterious element of both the Schrödinger equation and the path integral is the appearance of $\mathrm{i}$. You can indeed derive the SE from the path integral (and vice versa), but then you have to explain why the heck you are integrating over $e^{iS/\hbar}$ in the first place. The procedure of geometric quantization at least gives a mathematical motivation for that, starting from classical mechanics. Of course, if you believe that we should not start from classical mechanics, then you'll not find this convincing. $\endgroup$
– ACuriousMind ♦
$\begingroup$ Derive Schrödinger equation via path integrals can be at most a "physical" derivation, but never a mathematical deivation, since path integrals in the sense of Feynman do not have a mathematical meaning. $\endgroup$
– Mateus Sampaio
$\begingroup$ @LoveLearning See my newly added answer for more clarifications. $\endgroup$
– Ellie
$\begingroup$ @ACuriousMind +1: I think I'm going to make a T-shirt that reads "It all depends on where you want to start" and start selling it. That way, you can just point to your chest next time. I will also benefit immensely from wearing it when I walk around campus. Can I mark you down for one order? $\endgroup$
– joshphysics
Small addition to ACuriousMind's great answer, in reply to some of the comments asking for a derivation of Schrödinger wave equation, using the results of Feynman's path integral formalism:
(Note: not all steps can be included here, it would be too long to remain in the context of a forum-discussion-answer.)
In the path integral formalism, each path is attributed a wavefunction $\Phi[x(t)]$, that contributes to the total amplitude, of let's say, to go from $a$ to $b.$ The $\Phi$'s have the same magnitude but have differing phases, which is just given by the classical action $S$ as was defined in the Lagrangian formalism of classical mechanics. So far we have: $$ S[x(t)]= \int_{t_a}^{t_b} L(\dot{x},x,t) dt $$ and $$\Phi[x(t)]=e^{(i/\hbar) S[x(t)]}$$
Denoting the total amplitude $K(a,b)$, given by: $$K(a,b) = \sum_{paths-a-to-b}\Phi[x(t)]$$
To approach the wave equation describing the wavefunction as a function of time, we should start by dividing the time interval between $a$ and $b$ into $N$ small intervals of length $\epsilon$; for better notation, let's use $x_k$ for a given point of a path between $a$ and $b$, and denote the full amplitude, including its time dependence, as $\psi(x_k,t)$ ($x_k$ taken over a region $R$):
$$\psi(x_k,t)=\lim_{\epsilon \to 0} \int_{R} \exp\left[\frac{i}{\hbar}\sum_{i=-\infty}^{+\infty}S(x_{i+1},x_i)\right]\frac{dx_{k-1}}{A} \frac{dx_{k-2}}{A}... \frac{dx_{k+1}}{A} \frac{dx_{k+2}}{A}... $$
Now consider the above equation if we want to know the amplitude at the next instant in time $t+\epsilon$:
$$\psi(x_{k+1},t+\epsilon)=\int_{R} \exp\left[\frac{i}{\hbar}\sum_{i=-\infty}^{k}S(x_{i+1},x_i)\right]\frac{dx_{k}}{A} \frac{dx_{k-1}}{A}... $$
The above is similar to the preceding equation, the difference being that the added factor $\exp[(i/\hbar)S(x_{k+1},x_k)]$ does not involve any of the terms $x_i$ with $i<k$, so the integration can be performed with all such terms factored out. All this reduces the last equation to:
$$\psi(x_{k+1},t+\epsilon)=\int_{R} \exp\left[\frac{i}{\hbar}\sum_{i=-\infty}^{k}S(x_{i+1},x_i)\right]\psi(x_k,t)\frac{dx_{k}}{A}$$
Now a quote from Feynman's original paper, regarding the above result:
This relation giving the development of $\psi$ with time will be shown, for simple examples, with suitable choice of $A$, to be equivalent to Schroedinger's equation. Actually, the above equation is not exact, but is only true in the limit $\epsilon \to 0$ and we shall derive the Schroedinger equation by assuming this equation is valid to first order in $\epsilon$. The above need only be true for small $\epsilon$ to the first order in $\epsilon.$
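For orientation (an addition to the quoted passage, not part of it): for a single particle of mass $m$ in one spatial dimension, the suitable choice of normalization that Feynman refers to is the standard one,

$$A = \left(\frac{2\pi \mathrm{i} \hbar \epsilon}{m}\right)^{1/2},$$

fixed by requiring that the short-time factor $\frac{1}{A}\exp[(i/\hbar)S(x_{k+1},x_k)]$ reproduce the free-particle propagator in the limit of vanishing potential.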
In his original paper, following up the calculations for 2 more pages, from where we left things, he then shows that:
Canceling $\psi(x,t)$ from both sides, and comparing terms to first order in $\epsilon$ and multiplying by $-\hbar/i$ one obtains
$$-\frac{\hbar}{i}\frac{\partial \psi}{\partial t}=\frac{1}{2m}\left(\frac{\hbar}{i}\frac{\partial}{\partial x}\right)^2 \psi + V(x) \psi$$ which is Schroedinger's equation.
I would strongly encourage you to read his original paper; don't worry, it is really well written and readable.
References: Space-Time Approach to Non-Relativistic Quantum Mechanics by R. P. Feynman, April 1948.
Feynman Path Integrals in Quantum Mechanics, by Christian Egli
Ellie
$\begingroup$ The Schroedinger Equation is simply the Hamiltonian ie. Kinetic + Potential energy as a function of momenta and coordinates alone, written with Quantum operators for momentum replacing the classical definition of momentum. Hamilton's equation is well known from Classical Physics, has been tested for ~2 Centuries, and is easy to use. The only 'new' idea is the Quantum operator for momentum, which isn't intuitive or obvious, but is used because it gives the correct answer. $\endgroup$
– Arif Burhan
$\begingroup$ Do you, per chance, have a link to Schroedinger's papers in English? $\endgroup$
– MadPhysicist
$\begingroup$ @MadPhysicist unfortunately I cannot find the very early ones in English, but at least there's his paper on "An Undulatory Theory of the Mechanics of Atoms and Molecules". Among the very first ones by Heisenberg and afterwards Schrödinger were "Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen." and "Quantisierung als Eigenwertproblem" respectively. Try to look for the English translation of these. What are you exactly interested in? Maybe I can recommend more modern material to you. $\endgroup$
$\begingroup$ I was particularly interested in his papers pertaining to expanding on the work of de Broglie and producing the Schroedinger equation. $\endgroup$
According to Richard Feynman in his lectures on Physics, volume 3, and paraphrased "The Schrodinger Equation Cannot be Derived". According to Feynman it was imagined by Schrodinger, and it just happens to provide the predictions of quantum behavior.
docscience
$\begingroup$ See also this $\endgroup$
– HDE 226868
$\begingroup$ That was in the 1960's. But I also found this 2006 paper in the American Journal of Physics: arxiv.org/abs/physics/0610121 which claims a derivation. $\endgroup$
– docscience
Fundamental laws of physics cannot be derived (turtles all the way down and all that).
However, they can be motivated in various ways. Direct experimental evidence aside, you can argue by analogy - in case of the Schrödinger equation, comparisons to Hamiltonian mechanics and the Hamilton-Jacobi equation, fluid dynamics, Brownian motion and optics have been made.
Another approach is arguing by mathematical 'beauty' or necessity: You can look at various ways to model the system and go with the most elegant approach consistent with constraints you imposed (ie reasoning in the vein of 'quantum mechanics is the only way to do X' for 'natural' or experimentally necessary values of X).
Christoph
While it is in general impossible to derive the laws of physics in the mathematical sense of the word, a strong motivation or rationale can be given most of the time. Such impossibility arises from the very nature of physical sciences which attempt to stretch the all-to-imperfect logic of the human mind onto the natural phenomena around us. In doing so, we often make connections or intuitive hunches which happen to be successful at explaining phenomena in question. However, if one had to point out which logical sequence was used in producing the hunch, he would be at a loss - more often than not such logical sequence simply does not exist.
"Derivation" of the Schroedinger equation and its successful performance at explaining various quantum phenomena is one of the best (read audacious, mind-boggling and successful) examples of the intuitive thinking and hypothesizing which lead to great success. What many people miss is that Schroedinger simply took the ideas of Luis de Broglie further to their bold conclusion.
In 1924 de Broglie suggested that every moving particle could have a wave phenomenon associated with it. Note that he didn't say that every particle was a wave or vice versa. Instead, he was simply trying to wrap his mind around the weird experimental results which were produced at the time. In many of these experiments, things which were typically expected to behave like particles also exhibited a wave behavior. It is this conundrum which lead de Broglie to produce his famous hypothesis of $\lambda = \frac{h}{p}$. In turn, Schroedinger used this hypothesis as well as the result from Planck and Einstein ($E = h\nu$) to produce his eponymous equation.
It is my understanding that Schroedinger originally worked using Hamilton-Jacobi formalism of classical mechanics to get his equation. In this, he followed de Broglie himself who also used this formalism to produce some of his results. If one knows this formalism, he can truly follow the steps of the original thinking. However, there is a simpler, more direct way to produce the equation.
Namely, consider a basic harmonic phenomenon:
$ y = A \sin (\omega t - \delta)$
for a particle moving along the $x$-axis,
$ y = A \sin\left[\frac{2\pi v}{\lambda} \left(t - \frac{x}{v}\right)\right] $
Suppose we have a particle moving along the $x$-axis. Let's call the wave function (similar to the electric field of a photon) associated with it $\psi (x,t)$. We know nothing about this function at the moment. We simply gave a name to the phenomenon which experimentalists were observing and are following de Broglie's hypothesis.
The most basic wave function has the following form: $\psi = A e^{-i\omega(t - \frac{x}{v})}$, where $v$ is the velocity of the particle associated with this wave phenomenon.
This function can be re-written as
$\psi = A e^{-i 2 \pi \nu (t - \frac{x}{\nu\lambda})} = A e^{-i 2 \pi (\nu t - \frac{x}{\lambda})}$, where $\nu$ is the frequency of oscillations and $E = h \nu$. We see that $\nu = \frac{E}{2 \pi \hbar}$. The latter is, of course, the result from Einstein and Planck.
Let's bring the de Broglie's result into this thought explicitly:
$\lambda = \frac{h}{p} = \frac{2\pi \hbar}{p}$
Let's substitute the values from de Broglie's and Einstein's results into the wave function formula.
$\psi = A e^{-i 2 \pi (\frac{E t}{2 \pi \hbar} - \frac{x p}{2 \pi \hbar})} = A e^{- \frac{i}{\hbar}(Et - xp)} (*)$
this is a wave function associated with the motion of an unrestricted particle of total energy $E$, momentum $p$ and moving along the positive $x$-direction.
We know from classical mechanics that the energy is the sum of kinetic and potential energies.
$E = K.E. + P.E. = \frac{m v^2}{2} + V = \frac{p^2}{2 m} + V$
Multiply the energy by the wave function to obtain the following:
$E\psi = \frac{p^2}{2m} \psi + V\psi$
Next, rationale is to obtain something resembling the wave equation from electrodynamics. Namely we need a combination of space and time derivatives which can be tied back into the expression for the energy.
Let's now differentiate $(*)$ with respect to $x$.
$\frac{\partial \psi}{\partial x} = A (\frac{ip}{\hbar}) e^{\frac{-i}{\hbar}(Et - xp)}$
$\frac{\partial^2 \psi}{\partial x^2} = -A (\frac{p^2}{\hbar^2}) e^{\frac{-i}{\hbar}(Et - xp)} = -\frac{p^2}{\hbar^2} \psi$
Hence, $p^2 \psi = -\hbar^2 \frac{\partial^2 \psi}{\partial x^2}$
The time derivative is as follows:
$\frac{\partial \psi}{\partial t} = - A \frac{iE}{\hbar} e^{\frac{-i}{\hbar}(Et - xp)} = \frac{-iE}{\hbar}\psi$
Hence, $E \psi = \frac{-\hbar}{i} \frac{\partial \psi}{\partial t}$
The expression for energy we obtained above was $E\psi = \frac{p^2}{2m} \psi + V\psi$
Substituting the results involving time and space derivatives into the energy expression, we obtain
$\frac{-\hbar}{i} \frac{\partial \psi}{\partial t} = \frac{- \hbar ^2}{2m} \frac{\partial ^2 \psi}{\partial x^2} + V\psi$, or equivalently $i\hbar \frac{\partial \psi}{\partial t} = \frac{- \hbar ^2}{2m} \frac{\partial ^2 \psi}{\partial x^2} + V\psi$
This, of course, became better known as the Schroedinger equation.
There are several interesting things in this "derivation." One is that both the Einstein's quantization and de Broglie's wave-matter hypothesis were used explicitly. Without them, it would be very tough to come to this equation intuitively in the manner of Schroedinger. What's more, the resulting equation differs in form from the standard wave equation so well-known from classical electrodynamics. It does because the orders of partial differentiation with respect to space and time variables are reversed. Had Schrodinger been trying to match the form of the classical wave equation, he would have probably gotten nowhere.
However, since he looked for something containing $p^2\psi$ and $E\psi$, the correct order of derivatives was essentially pre-determined for him.
Note: I am not claiming that this derivation follows Schroedinger's work. However, the spirit, thinking and the intuition of the times are more or less preserved.
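As a quick consistency check (my addition, not part of the derivation above), one can verify symbolically that the plane wave $(*)$, with $E=\frac{p^2}{2m}+V$ for a constant potential $V$, satisfies the equation just obtained. A minimal sympy sketch:

```python
import sympy as sp

x, t, p, m, hbar, V, A = sp.symbols('x t p m hbar V A', positive=True)
E = p**2 / (2*m) + V                            # total energy: kinetic plus constant potential
psi = A * sp.exp(-sp.I/hbar * (E*t - x*p))      # the plane wave (*) from the derivation above

lhs = sp.I * hbar * sp.diff(psi, t)                       # i*hbar * dpsi/dt
rhs = -hbar**2/(2*m) * sp.diff(psi, x, 2) + V * psi       # (-hbar^2/2m) d^2psi/dx^2 + V*psi
print(sp.simplify(lhs - rhs))                             # prints 0: psi solves the equation
```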
MadPhysicist
There are already a lot of good answers to this question. Of course, as already pointed out by many others, there is no "mathematical derivation" of Schrödinger's equation from first principles, since it is one of the axioms of quantum mechanics. However, there are many different ways to motivate this equation, many of which were already described by others here. One of them, which I personally like very much, is still missing, and hence I would like to add it at this point.
Let's take the other axioms of quantum mechanics for granted, like the description of states using unit rays in a separable Hilbert space and observables in terms of hermitian operators. Then there is a very nice way to motivate the Schrödinger equation from physical ideas and some very deep and beautiful mathematical theorems.
The starting point is the following: A physical state is represented by a vector $\vert\psi_{0}\rangle\in\mathcal{H}$ in some Hilbert space $\mathcal{H}$. Now, we would like to analyse the time-evolution of this state. For this, we take a mapping $U:\mathbb{R}_{\geq 0}\to\mathcal{B}(\mathcal{H})$, where $\mathcal{B}(\mathcal{H})$ is the set of bounded operators on $\mathcal{H}$, such that $$\vert\psi(t)\rangle=U(t)\vert\psi_{0}\rangle.$$ Now, by the probabilistic description of quantum mechanics, the state $\vert\psi_{0}\rangle$ should be normalized and, similarly, $\vert\psi(t)\rangle$ should also be normalized. As a consequence, we see that the operator $U(t)$ has to be unitary for all $t$. Furthermore, it is easy to see that we must have $$U(t+s)=U(t)U(s).$$ To sum up, we see that the time-evolution operator $\{U(t)\}_{t\in\mathbb{R}_{\geq 0}}$ has to have the following properties:
$U(t)$ is unitary for all $t$ and $U(0)=1_{\mathcal{H}}$.
The mapping $U:\mathbb{R}_{\geq 0}\to\mathcal{B}(\mathcal{H})$ is a group homomorphism, i.e. $U(t+s)=U(t)U(s)$ for all $t,s$.
From the mathematical side, we may also assume that the family of operators is "strongly continuous", i.e. $\lim_{t\to s}\Vert U(t)\vert\psi\rangle-U(s)\vert\psi\rangle\Vert=0$ for every $\vert\psi\rangle\in\mathcal{H}$.
A family of such operators $\{U(t)\}_{t\in\mathbb{R}_{\geq 0}}$ is called a "strongly-continuous unitary one-parameter group" in mathematics. This type of object is a special case of "strongly continuous one-parameter semigroups", which are well-studied in the mathematics literature (e.g., the Hille-Yosida theorem).
Now, how is this related to the Schrödinger equation? It turns out that the statement "the time-evolution operator defines a strongly-continuous unitary one-parameter group" is equivalent to saying "the state satisfies the Schrödinger equation" for a suitable operator $H$! Let me discuss this in the following: First of all, it is a general fact, which is not too hard to prove, that for every (possibly unbounded) self-adjoint operator $A:\mathcal{D}(A)\to\mathcal{H}$, where $\mathcal{D}(A)\subset\mathcal{H}$ is some domain, the family $$U(t):=e^{-itA}$$ defines a strongly-continuous unitary one-parameter group. Let me also stress that this expression is totally well-defined: In physics, one often just writes the exponential of an operator without discussing how to make sense of such expressions, but this can be done in total rigour. For bounded operators, one can just take the usual exponential series, which always converges. For unbounded operators (like differential operators) one can use the "spectral theorem", which provides a rigorous way to write expressions like $U(t):=e^{-itA}$ (and in fact reduces to the standard exponential series for bounded operators). Using all the properties of the functional calculus coming from the spectral theorem, one can also prove that the group $U(t)=e^{-itA}$ has the following properties:
The domain $\mathcal{D}(A)$ can be characterised as $$\mathcal{D}(A)=\bigg\{\vert\psi\rangle\in\mathcal{H}\,\bigg\vert\,\frac{\mathrm{d}}{\mathrm{d}t}\bigg\vert_{t=0} U(t)\vert\psi\rangle:=\lim_{t\to 0}\frac{U(t)\vert\psi\rangle -\vert\psi\rangle}{t}\, \text{exists} \bigg\}.$$
Every $\vert\psi\rangle\in\mathcal{D}(A)$ satisfies the following equation $$A\vert\psi\rangle=i\frac{\mathrm{d}}{\mathrm{d}t}\bigg\vert_{t=0} U(t)\vert\psi\rangle.$$
Now, the main point is that also the reverse is true: For every strongly-continuous unitary one-parameter group $\{U(t)\}_{t\in\mathbb{R}_{\geq 0}}$, there is a self-adjoint operator $A:\mathcal{D}(A)\to\mathcal{H}$, such that $U(t)=e^{-itA}$. This is the so-called Stone's theorem.
To sum up, we have seen that the fact that a time-evolution should preserve the norm of a state implies that the time-evolution is described by a strongly-continuous unitary one-parameter group $\{U(t)\}_{t\in\mathbb{R}_{\geq 0}}$. Furthermore, we have seen that (by Stone's theorem) there exists a self-adjoint operator $H:\mathcal{D}(H)\to\mathcal{H}$ such that $U(t)=e^{-itH}$. Defining a time-dependent state via $\vert\psi(t)\rangle=U(t)\vert\psi_{0}\rangle$, we have also seen that it satisfies the equation $$H\vert\psi(t)\rangle=i\frac{\mathrm{d}}{\mathrm{d}s}\bigg\vert_{s=0} U(s)\vert\psi(t)\rangle=i\frac{\mathrm{d}}{\mathrm{d}s}\bigg\vert_{s=0} \vert\psi(t+s)\rangle=i\frac{\mathrm{d}}{\mathrm{d}s}\bigg\vert_{s=t} \vert\psi(s)\rangle=i\frac{\mathrm{d}}{\mathrm{d}t} \vert\psi(t)\rangle,$$ but this is nothing other than Schrödinger's equation! Furthermore, the reverse is also true. Let us start from Schrödinger's equation $$H\vert\psi(t)\rangle=i\frac{\mathrm{d}}{\mathrm{d}t} \vert\psi(t)\rangle.$$ Choosing an initial value $\vert\psi(0)\rangle=\vert\psi_{0}\rangle$, one can show that this equation has a unique solution, which is given by $$\vert\psi(t)\rangle=e^{-itH}\vert\psi_{0}\rangle$$ and, as we have seen above, the family $U(t):=e^{-itH}$ is in fact a strongly-continuous unitary one-parameter group.
To sum up, the existence of a unitary time-evolution (which physically comes from the fact that we want to preserve the probability) is equivalent to the statement of Schrödinger's equation!
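A small finite-dimensional illustration of this equivalence may be helpful (added here; it is of course no substitute for the unbounded-operator statement of Stone's theorem). With a random Hermitian matrix standing in for $H$ and $\hbar=1$, one can check numerically that $U(t)=e^{-\mathrm{i}tH}$ is unitary, satisfies the group law, and generates states obeying the Schrödinger equation:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (M + M.conj().T) / 2                       # random Hermitian "Hamiltonian"
psi0 = rng.normal(size=n) + 1j * rng.normal(size=n)
psi0 /= np.linalg.norm(psi0)                   # normalized initial state

U = lambda s: expm(-1j * s * H)                # the one-parameter group U(s) = exp(-i s H)
t, s, dt = 0.7, 0.3, 1e-6

print(np.allclose(U(t).conj().T @ U(t), np.eye(n)))          # unitarity: U(t)^† U(t) = 1
print(np.allclose(U(t + s), U(t) @ U(s)))                     # group law: U(t+s) = U(t) U(s)
dpsi = (U(t + dt) @ psi0 - U(t) @ psi0) / dt                  # numerical d/dt of psi(t) = U(t) psi0
print(np.allclose(1j * dpsi, H @ (U(t) @ psi0), atol=1e-4))   # i d/dt psi(t) = H psi(t)
```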
Let me stress that the above arguments are only valid if one assumes that the Hamilton operator itself is time-independent. However, one can generalize many of the arguments above using "two-parameter unitary groups" $U(t,s)$. In this case, the story is mathematically more involved, but one can still prove a lot of nice results this way. For more details, I would recommend the second volume of Reed and Simon's book "Methods of Modern Mathematical Physics".
G. Blaickner
To my mind there are two senses in which we can "derive" a result in physics. New theories try to address the shortcomings of older ones by upgrading what we already have, giving new results. They also recover old results. I suppose we can call both derivations.
For example, the TISE and TDSE were first obtained because quantum mechanics said that, where classical mechanics would imply $f=0$, we should have $\hat{f}\left|\psi\right\rangle = 0$, with $\hat{f}$ the operator promotion of $f$, which in this case is $f=E-\frac{p^2}{2m}-V$ with operators $E=i\hbar\partial_t,\,\mathbf{p}=-i\hbar\boldsymbol{\nabla}$. (Some results become the weaker $\left\langle\psi\right|\hat{f}\left|\psi\right\rangle = 0$, e.g. with $f=\frac{d\mathbf{p}}{dt}+\boldsymbol{\nabla}V$, so I'm not being entirely honest here. But we expect $\hat{E}$-eigenstates are important because the probability distribution of $E$ is conserved.)
Note that the above paragraph summarises how Schrödinger was derived in the first sense, and its ending parenthesis hints at how Newton's second law was "derived" in my second sense. And everyone talking about path integrals is hinting at a type-2 derivation for both results (path integrals obtain a transition amplitude in terms of $e^{iS/\hbar}$ with $S$ the classical action now miraculously coming out of a hat, so technically our direct recovery is of Lagrangian mechanics rather than the equivalent Newtonian formulation).
I'll leave people to fight over which, if either, type of derivation is "valid" or "better", but physical insight requires frequent doses of both. I think it's worth distinguishing them in a discussion like this.
J.G.
It is common in books on path integrals to see the claim that we can derive the Schrodinger equation (indeed Ref. [2] below claims it can indeed be derived via path integrals), so it is surprising to see people say that it can't be derived.
I will say that of course there is a way to directly derive the Schrodinger equation simply (i.e. without the formal machinery of one-parameter groups and Stone's Theorem etc...) without path integrals and within the logic of (Copenhagen) quantum mechanics. It is given in (the absolutely canonical textbook of) reference [1]. Another derivation is given by Dirac in ([3], Sec. 27).
Assuming the existence of a wave function $$\Psi(q,t)$$ (where $q$ are generalized coordinates), and assuming the wave function completely describes the state of a physical quantum system (i.e. assuming we can obtain 'complete information' about the system and do not need density matrices, to which a slight modification of what follows still applies, see [1]), this implies that the state of the system at all future instants is determined (to within the obvious limits of quantum mechanics). This means that the time derivative $\frac{\partial \Psi}{\partial t}$ of the wave function must be determined by the value of $\Psi(q,t)$ at that instant, and the superposition principle implies this relationship has to be a linear relationship, so that $$\frac{\partial \Psi(q,t)}{\partial t} = \tilde{\hat{H}} \Psi(q,t)$$ should hold, for some linear operator $\tilde{\hat{H}}$ (notice the hat and the tilde on $\tilde{\hat{H}}$).
Further assuming the existence of classical mechanics as a limit of the theory (a fundamental unavoidable assumption at least in Copenhagen quantum mechanics), we have to describe how a quantum wave function reduces to give a classical picture (and in fact disappears in the classical limit).
The perspective taken in [1] is that this limiting procedure is done by requiring that the wave function $\Psi(q,t)$, which is a complex number, written in complex exponential form $$\Psi(q,t) = |\Psi(q,t)|e^{i \phi},$$ behaves in the same way that for example an electromagnetic field, which can also be written in complex exponential form (of course there we must still take the real part), behaves when we take the limit of geometric optics, where the notion of definite 'paths' can be defined, despite the wave-like nature of electromagnetism.
Since quantum mechanics says that paths don't exist, but that they should exist more and more accurately until they completely exist in a to-be-defined 'classical limit', this is a very natural assumption to make about the behavior of a wave function as we approach the 'classical limit'. What this means is that we should ignore the amplitude of $\Psi(q,t)$ like we ignore the amplitude in geometric optics approximation, and we should treat the argument of the exponential (the eikonal) like a function which determines 'classical paths'.
But there is obviously some function we know of that does precisely this, the 'classical action $S$'. Thus we assume in a 'quasi-classical' limit, where the wave function more and more accurately describes a classical system, should reduce to the form $$\Psi(q,t) = |\psi(q,t)| e^{i S/\hbar}$$ with $|\psi(q,t)|$ being a slowly varying amplitude that we can neglect the more classical we get, and at this stage $\hbar$ is literally just a constant with the dimensions of action $[S] = [\hbar] = [E][T]$ used to ensure the argument of the exponential is dimensionless.
We now see the classical limit is taken by sending $$\hbar \to 0$$ which means the argument of the exponential of the quasi-classical wave function will become infinite and make no sense unless the action $S$ is minimized in the $\hbar \to 0$ limit, which is exactly the condition needed for a classical action to completely describe the dynamics of a classical system.
Therefore in the quasi-classical limit, if we take a time derivative of $\Psi(q,t)$, we can neglect contributions from the amplitude, to find $$\frac{\partial \Psi(q,t)}{\partial t} = \frac{i}{\hbar} \frac{\partial S}{\partial t} \Psi(q,t).$$ From classical analytical mechanics we know that $$\frac{\partial S}{\partial t} = - H$$ so we now approximately have $$\frac{\partial \Psi(q,t)}{\partial t} = - \frac{i}{\hbar} H \Psi(q,t).$$ ignoring contributions from the amplitude, or rather $$i \hbar \frac{\partial \Psi(q,t)}{\partial t} = H \Psi(q,t).$$
We now see that the linear operator $\tilde{\hat{H}}$ must reduce to the classical Hamiltonian $H$ (multiplied by $- \frac{i}{\hbar}$) in the classical limit, and we refer to $$i \hbar \frac{\partial \Psi(q,t)}{\partial t} = \hat{H} \Psi(q,t)$$ (with $\hat{H}$ reducing to the classical Hamiltonian $H$ in the classical limit) as the Schrodinger equation.
The form of the operator $\hat{H}$ is still left arbitrary, indeed we have not yet invoked non-relativistic Galilean or Relativistic Poincare symmetry principles which are required to fix the form of $\hat{H}$.
We can similarly determine the momentum operator from the quasi-classical limit, noting from $$\nabla \Psi = \frac{i}{\hbar} (\nabla S) \Psi$$ and the classical analytical mechanics definition of momentum as $$\mathbf{p} = \nabla S$$ that the $\hat{\mathbf{p}}$ in $$\hat{\mathbf{p}} \Psi = - i \hbar \hat{\nabla} \Psi$$ is the operator reducing to the classical momentum in the classical limit.
One immediate consequence of this perspective is by differentiating the expected value of some operator $\hat{f}$ $$ \overline{\hat{f}} = \int dq \Psi^* \hat{f} \Psi$$ we can easily derive that the time derivative of $\hat{f}$ is \begin{align*} \frac{d \hat{f}}{dt} &= \frac{\partial \hat{f}}{\partial t} + \frac{i}{\hbar}(\hat{H} \hat{f} - \hat{f}\hat{H}) \\ &= \frac{\partial \hat{f}}{\partial t} + \frac{i}{\hbar}[\hat{H},\hat{f}]. \end{align*} But we also have the classical result that some function $f$ of the canonical variables satisfies \begin{align*} \frac{d f}{dt} &= \frac{\partial f}{\partial t} + (\frac{\partial f}{\partial q_i} \frac{d q_i}{dt} + \frac{\partial f}{\partial p_i} \frac{d p_i}{dt}) \\ &= \frac{\partial f}{\partial t} + (\frac{\partial f}{\partial q_i} \frac{\partial H}{\partial p_i} - \frac{\partial f}{\partial p_i} \frac{\partial H}{\partial q_i}) \\ &= \frac{\partial f}{\partial t} - \{H,f \}_{P.B.} \end{align*} Thus the time derivative of the quantum version of a classical function $f$ should reduce to these equations of motion in the classical limit, showing that in the quasi-classical limit the commutator $$[\hat{H},\hat{f}] = \hat{H} \hat{f} - \hat{f}\hat{H}$$ reduces to $$[\hat{H},\hat{f}] = i \hbar \{H,f\}_{P.B.} + \mathcal{O}(\hbar^2),$$ justifying the fundamental quantization rule $$\{\dot{},\dot{}\}_{P.B.} \mapsto \frac{1}{\mathrm{i}\hbar}[\dot{},\dot{}]$$ for operators. In other words, to set up the basic principles of the theory, at a fundamental level it ultimately reduces to invoking this classical limit, which is why people take so seriously the existence of a classical limit and stress that (at least Copenhagen) quantum mechanics does not exist without this limit.
To fix the form of $\hat{H}$ we note, e.g. if we assume that we are dealing with a non-relativistic quantum system, then on taking the quasi-classical limit of $\Psi(q,t)$ we see the action $S$ in the quasi-classical limit should reduce to the non-relativistic classical Hamiltonian.
This is the fundamental idea behind why an approach such as the Schrodinger functional picture will apply in quantum field theory, we instead just assume the Hamiltonian obeys relativistic invariance principles (and something else important which I'll leave as a mystery) and depending on the fields involved in the action/Hamiltonian we then just apply the above thinking to get the Schrodinger functional equation and solve it (easier said than done).
If one thinks about it, they will notice some of this looks similar to the path integral approach (e.g. the quasi-classical limit). Indeed the above is constructed on the basis of assuming 'no paths' existing. What do we do when setting up path integrals? We instead assume all possible paths affect our results. Reference [2] makes explicit how assuming the existence of $e^{iS/\hbar}$ (with $S$ evaluated on all possible paths between two points) is a postulate in setting up quantum mechanics from a path integral perspective.
This is the logically consistent (within the principles of quantum mechanics) way to justify those rules of sending classical quantities to their operator analogues. It is completely inconsistent to just take a classical equation and insert the operator forms without at least going through something equivalent to this dance, but with the above provisos it's as consistent as one can expect. This is of course why books claim the Schrodinger equation can be derived from a path integral perspective, i.e. within that logical framework books like [2] say it can be derived, so of course it can also be derived within the canonical Copenhagen framework.
Landau and Lifshitz, "Quantum Mechanics", 3rd Ed.
Kaku, "Quantum Field Theory: A Modern Introduction", Ch. 8.
Dirac, "Principles of Quantum Mechanics", 4th Ed.
bolbteppa
In Mathematics you derive theorems from axioms and the existing theorems.
In Physics you derive laws and models from existing laws, models and observations.
In this case we can start from the observations of the photoelectric effect to get the relation between photon energy and frequency. Then continue with special relativity, where we observed that the speed of light is constant in all reference frames. From this, when generalizing the kinetic energy, we can get the mass-energy equivalence. Combining the two, we can assign a mass to the photon, and consequently we can get the momentum of a photon as a function of the wavenumber.
Generalizing the energy-frequency and momentum-wavenumber relations, we have the de Broglie relations, which are applicable to any particle.
Assume that a particle has zero energy when it stands still (you can do this; it doesn't cause too much trouble if you leave the constant rest-energy term there, since in the later phases you can simply move it to the left side of the equation), so we can deal with the kinetic energy alone. Substituting the non-relativistic kinetic energy into the relation and reordering, we have the following dispersion relation:
$$\omega = \frac{\hbar k^2}{2m}$$
The wave equation can be derived from the dispersion relation of the matter waves using the way I mentioned in that answer.
In this case we will need the Laplacian and the first time derivative:
$$\nabla^2 \Psi + \partial_t \Psi = -k^2\Psi - \frac{i \hbar k^2}{2m}\Psi$$
Multiplying the time derivative with $-\frac{2m}{i\hbar}$, we can zero the right side:
$$\nabla^2 \Psi - \frac{2m}{i\hbar} \partial_t \Psi = -k^2\Psi + k^2\Psi = 0$$
We can reorder it to obtain the time dependent schrödinger equation of a free particle:
$$ \partial_t \Psi = \frac{i\hbar}{2m} \nabla^2 \Psi$$
Calmarius
A new physical theory is based on a new physical principle that allows one to formulate useful models. In particular, differential equations like the Schrödinger equation are based on first principles. In quantum mechanics the founding principle is the uncertainty principle, which, mixed with probabilistic classical mechanics, allows one to derive the quantum dynamics that gives the practical theory. For a first-principle derivation of the Schrödinger equation, see e.g. Another look through Heisenberg's microscope.
Hulkster
$\begingroup$ You're confusing "revolutionary" with "wrong". This answer is being downvoted because it is incorrect, plain and simple. $\endgroup$
– Emilio Pisanty
How to derive Schrödinger equation?
A mathematical step done by Schrödinger in his solution to the hydrogen atom I need clarification on
Is there a proof of the time-dependent Schrödinger equation?
Is it true that Schrödinger wrote his Schrödinger Wave Equation from his mind?
Why quantum mechanics?
Why can't the Schrödinger equation be derived?
Is the Schrödinger equation derived or postulated?
What is the meaning of the wavefunction?
Schrödinger equation derivation and Diffusion equation
What does the Schrodinger Equation really mean?
What inspired Schrödinger to derive his equation?
Can we derive the Schrödinger equation from the Klein-Gordon equation?
How did Planck derive his formula $E=hf$?
How to derive the Schrödinger Equation from Heisenberg's matrix mechanics and vice-versa?
Can one derive the Schrödinger equation from probability density arguments?
Schrödinger Equation as a limit of von Neumann equation
Which finite difference better approximates $uu'$?
I want to approximate $uu'$ with a finite difference. On the one hand, it seems to be $$(uu')_i=u_i\frac{u_{i+1}-u_{i-1}}{2\Delta x}=\frac{u_iu_{i+1}-u_iu_{i-1}}{2\Delta x}$$ On the other hand, $$(uu')_i=\left(\frac{d}{dx}\frac{u^2}{2}\right)_i=\frac{u^2_{i+1}-u^2_{i-1}}{4\Delta x}$$ I might be wrong but I think that they both have truncation error $\mathcal{O}(\Delta x^2)$. Which of these finite differences should I use?
finite-difference differential-equations
Vladislav Gladkikh
$\begingroup$ Unfortunately this is going to be somewhat dependent on the context. Do you have a particular problem in mind? $\endgroup$ – Kyle Mandli Jan 10 at 14:26
$\begingroup$ @KyleMandli E.g. Kuramoto-Sivashinsky $u_t=-uu_x-u_{xx}-u_{xxxx}$, $x\in[0,L_x]$, periodic bc but I would like to know how to approach this in general to be able to deal with other non-linear pde $\endgroup$ – Vladislav Gladkikh Jan 11 at 1:28
$\begingroup$ I don't think both formulas have the same truncation error. In fact, in your first formula you assume that $u$ is constant in the range $[x-\Delta x,x+\Delta x]$ which is not necessarily a good approximation. In fact, this approximation is just a zeroth order approximation. I think better approximation is using trapezoidal rule to take $u$ equal to $\frac{u_{i-1}+u_{i+1}}{2}$ in that range which brings you to your second formula. My final conclusion is that your second formula is more accurate than the first formula and has lower truncation error or its order of accuracy is higher. $\endgroup$ – Alone Programmer Jan 11 at 3:23
$\begingroup$ Both are second order, but truncation errors will be different. Second one is conservative, use that if you think conservation is important for your problem. $\endgroup$ – cfdlab Jan 11 at 3:44
$\begingroup$ @cfdlab has this right: You have to think about what the term means and then discretize accordingly. Here, you are probably thinking of it as a flux, so you will want to choose a conservative way to discretize. $\endgroup$ – Wolfgang Bangerth Jan 11 at 16:29
There is really no such thing as a good finite difference equivalent to an operator. In the earliest days of scientific computing, the thought was that each differential operator would be replaced by some finite difference expression, and that finite difference operator would be the most accurate one available, usually a central difference.
I believe that the issue was first stated properly by Brian Spalding
"There is no best formulation for a first-derivative or second-derivative expression in isolation; it is the combination of the first and second order derivatives, as they appear in a particular differential equation, that require to be represented by an algebraic expression" (Spalding, D.B., 1972. A novel finite difference formulation for differential expressions involving both first and second derivatives. International Journal for Numerical Methods in Engineering, 4(4), pp.551-559.)
This is far more general than just in the context of advection-diffusion equations in which it was made. It applies to ODEs and PDEs of any kind. What matters is not the error in each term, but the error in the entire combination. This is sometimes referred to as error cancellation. It is surprising that, almost fifty years later, this is still not recognised by some communities.
It must also be stressed that accuracy is not the only property required. Stability, conservation, upwinding and behavior at boundaries must also be taken into account, and may sometimes be in conflict. Efficiency (in various senses) and ease of coding are also important.
There is a shortage of books dealing with this, because most authors write from within their own specialization and its own traditions. However, techniques for investigating this kind of question do exist, such as dispersion analysis and "equivalent equation" analysis. But in the broadest context this is still a research problem.
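To make the accuracy point in the question concrete (a small check added here, not part of the original answer): for smooth $u$ both formulas are indeed second-order accurate and differ only in their error constants, so the choice between them has to rest on the other properties mentioned above, conservation in particular.

```python
import numpy as np

u = np.sin
exact = lambda x: np.sin(x) * np.cos(x)        # u u' for u(x) = sin(x)
x0 = 0.7

for dx in [0.1, 0.05, 0.025, 0.0125]:
    um, up = u(x0 - dx), u(x0 + dx)
    f1 = u(x0) * (up - um) / (2*dx)            # advective form: u_i (u_{i+1} - u_{i-1}) / (2 dx)
    f2 = (up**2 - um**2) / (4*dx)              # conservative form: (u_{i+1}^2 - u_{i-1}^2) / (4 dx)
    print(dx, abs(f1 - exact(x0)), abs(f2 - exact(x0)))
# Halving dx reduces both errors by roughly a factor of 4, i.e. both are O(dx^2),
# but with different leading error constants.
```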
Philip Roe
How do you improve the accuracy of a finite difference method for finding the eigensystem of a singular linear ODE
Impact of irregularity at the boundary on the error analysis
Finite Difference Method Stability
Implementing temperature depending viscosity in a finite-difference scheme
How to implement finite difference method for one dimensional Navier-Stokes PDEs
What method of Finite difference is this?
Modified Equation and Stability for Centred Finite Differences for Wave Equation
Parity for artificial dissipation term in a finite-difference solution
|
CommonCrawl
|
Experimental and theoretical investigation of heat transfer characteristics of cylindrical heat pipe using Al2O3–SiO2/W-EG hybrid nanofluids by RSM modeling approach
R. Vidhya1,
T. Balakrishnan2 &
B. Suresh Kumar3
A Publisher Correction to this article was published on 28 November 2021
This article has been updated
Nanofluids are emerging two-phase thermal fluids that play a vital part in heat exchangers owing to their heat transfer features. Ceramic nanoparticles of aluminium oxide (Al2O3) and silicon dioxide (SiO2) were produced by the sol-gel technique and characterized by powder X-ray diffraction and scanning electron microscopy. Subsequently, hybrid Al2O3–SiO2 nanofluids at several volume concentrations (0.0125–0.1%) were formulated by dispersing both ceramic nanoparticles, taken at a 50:50 ratio, into a base fluid of 60% distilled water (W) and 40% ethylene glycol (EG) using an ultrasonic-assisted two-step method. Thermal resistance and heat transfer coefficient were examined with a cylindrical mesh heat pipe; the results reveal that increasing the power input decreases the thermal resistance by about 5.54% and increases the heat transfer coefficient by about 43.16%. Response surface methodology (RSM) was employed to investigate the heat pipe experimental data. The significant factors affecting the various convective heat transfer mechanisms were identified using the analysis of variance (ANOVA) tool. Finally, empirical models were developed by regression analysis to forecast the heat transfer mechanisms and were validated against the experimental data, showing that the models agree well with the experimental results.
Nanofluid is an efficient working medium for heat transfer applications in automobile industries, solar collectors, air conditioning, nuclear reactors, microelectronics, computers, and cooling of electronic devices [1]. The effectiveness of heat exchanging systems is primarily connected with the thermal transport properties of the working liquids. Among heat exchangers, the heat pipe is a simple heat exchanging device that transfers a large amount of heat energy owing to its principle of phase change and capillary action [2]. Generally, distilled water, engine oils, and ethylene glycol (EG) have been used as heat transfer fluids in heat exchangers, but these exhibit limited effectiveness in heat transfer processes [3, 4]. In this regard, many researchers focused their attention on enriching the heat transport behavior of conventional working solutions by dispersing solid nanosized particles (1–100 nm) into conventional fluids, producing so-called nanofluids [5,6,7,8]. Choi et al. [9] first successfully prepared and analyzed a dispersion of solid nanosized particles in a conventional fluid and demonstrated the efficiency of nanofluids in thermal conduction.
Hsin-Tang Chien et al. [10] were the first to evaluate the efficiency of heat pipes charged with nanofluids. Later, many experimental studies were conducted to assess the thermal transfer efficiency of a variety of heat pipes with single-component fluids [11,12,13,14,15]. Solomon et al. [16] observed that a heat pipe charged with 0.1% Cu-water nanofluid increases the effective thermal conductivity of the wick structure and enhances the heat transfer capability of the heat pipe by 20%. Akbari et al. [17] analyzed the thermal performance and flow regimes of a pulsating heat pipe with graphene/water and titania/water nanofluids, and reported that the more stable nanofluid showed better thermal performance and flow regimes at 70 W. Salehi et al. [18] studied the thermal characteristics of a two-phase closed thermosyphon with Ag/water nanofluid as the working fluid under an applied magnetic field. The experimental results showed that the thermal resistance of the thermosyphon decreases with nanoparticle concentration as well as magnetic field strength. Ghanbarpour et al. [19] performed an experiment to investigate the thermal performance of wick-structured cylindrical heat pipes using SiC/water nanofluid; thermal resistance reductions of 11%, 21%, and 30% were observed with SiC nanofluids at 0.35%, 0.7%, and 1.0% volume concentrations. Saeed Zeinali Heris et al. [20] showed that the thermal efficiency of a thermosyphon was improved, by analyzing the heat transfer performance of a two-phase closed thermosyphon with oxidized CNT/water nanofluids. Zeinali Heris et al. [21] experimentally studied the heat transfer performance of a car radiator with CuO/ethylene glycol-water as a coolant, and their results showed that nanofluids clearly enhanced the heat transfer coefficient by about 55% compared to the base fluid.
Previously, researchers examined single-component nanofluids of CuO, Fe2O3, Al2O3, and SiO2 [6], which alter the specific heat capacity, density, thermal conductivity, and viscosity of the fluids. It is therefore understood that thermophysical properties essentially influence the thermal transfer ability [22, 23]. It is important to indicate that all combinations of metal-metal, metal-ceramic, and ceramic-ceramic hybrid nanofluids revealed extensive enhancement in thermal conductivity and other thermal characteristics [24,25,26]. Studies in the literature revealed that the thermal transfer features of nanoparticles in aqueous ethylene glycol mixtures show substantially better results than in common base fluids [25, 27,28,29,30,31]. In our work, the base fluid mixture ratio of 60:40 was chosen after several trials, because nanoparticles in a 60:40 W/EG mixture show significantly intensified thermophysical properties, stability, and thermal conductivity compared with either fluid alone [25, 28, 29].
Among ceramic nanoparticles, Al2O3 differs from metallic nanoparticles in that it is easily incorporated into fluids. Although the thermal conductivity of Al2O3 is not outstanding compared with metal oxides like ZnO and CuO, it is still higher than that of conventional heat transfer fluids [32]. Experiments on Al2O3 nanoparticles showed higher hardness, higher insulation, and higher stability. In particular, stable suspensions of aluminium oxide nanoparticles were obtained with the particles suspended in the W/EG solvent [28, 33]. Mashaei et al. [34] analyzed Al2O3/water nanofluid at 0 vol%, 2 vol%, 4 vol%, and 8 vol% in a cylindrical heat pipe with multiple evaporators; it was found that increasing the heat load and nanoparticle concentration resulted in a higher heat transfer coefficient. Keshavarz Moraveji and Razvarz [35] tested alumina/water nanofluid in a sintered-wick heat pipe and found augmented heat transfer performance for a 3% volume concentration of the fluid. Using Al2O3/water nanofluids, Noie et al. [36] studied the thermal efficiency of a two-phase closed thermosyphon (TPCT); their results indicated that thermal efficiency improved with increasing volume fraction of nanoparticles and power. Similarly, SiO2 nanofluids also show enhanced thermal performance. Nabil et al. [25] experimentally tested the thermal conductivity of TiO2-SiO2/W-EG nanofluids and observed a maximum enhancement of 22.8% at 3% volume concentration with increasing volume concentration and temperature. Using SiO2–G/EG nanofluid, Akilu et al. [28] reported enhancements in thermal conductivity and viscosity of 26.9% and 1.15 times, respectively, relative to the base fluid alone. Yıldız et al. [1] compared theoretical and experimental thermal conductivity models and evaluated the efficiency of aqueous-based Al2O3/SiO2 nanofluids with a heat pipe; they observed no significant heat transfer improvement. A thorough survey of the literature shows that work on heat pipe performance using hybrid nanofluids is limited. In addition, no work has been reported in the literature on heat transfer analysis in a cylindrical mesh heat pipe charged with W/EG-based hybrid Al2O3–SiO2 nanofluids.
For better understanding and prediction of the experimental process, models such as artificial neural networks (ANN) and response surface methodology (RSM) are powerful tools that are established from experimental data. Recently, researchers proposed RSM statistical modeling to estimate various thermal properties of nanofluids based on input parameters such as temperature and volume concentration. They analyzed the efficiency and performance of heat transfer liquids in detail and showed that the proposed models agreed with the experimental data [37,38,39,40]. Hemmat et al. [41] designed an ANN model for the thermal conductivity of Al2O3 nanoparticles in W/EG solution using experimental results. No other work has emphasized model development using RSM for this combination of nanoparticles, especially in a W/EG mixture. Hence, we focused on the development of regression models and the ANOVA technique with RSM to estimate the correlation between the empirical heat transfer performance data and the models.
This work investigates the heat transfer performance of a cylindrical screen mesh heat pipe charged with W/EG-based hybrid Al2O3–SiO2 nanofluids at volume concentrations of 0%, 0.0125%, 0.025%, 0.05%, 0.075%, and 0.1% and moderate heat inputs of 30, 60, and 90 W. Response surface methodology and the ANOVA tool were employed to investigate the relationship between dependent and independent variables. New regression correlations were established from the experimental results for the assessment of thermal resistance and heat transfer coefficient, which has not been widely reported in the literature.
All the chemicals used in this work were purchased from Sigma-Aldrich in analytical grade and used for experimentation without further purification.
Production of Al2O3 and SiO2 nanoparticles and their nanofluids
The sol-gel procedure was employed for the synthesis of the Al2O3 and SiO2 ceramic nanoparticles. In a typical production of Al2O3 nanoparticles, 10 g of aluminium nitrate was mixed with 104 ml of ethanol, 70 ml of water, and 10 ml of polyethylene glycol (PEG) under constant stirring in a magnetic stirrer for 30 min. Then 16 ml of ammonia solution was added dropwise to the above mixture and stirred for 3 h at 80 °C, resulting in the formation of a white gel. The gel was kept at room temperature overnight for aging. The white precipitate obtained after centrifugation was washed three times with water and ethanol solutions to eliminate residuals and impurities. Finally, the particles were calcined at 750 °C in a muffle furnace and then ground with a mortar and pestle. SiO2 nanoparticles were prepared with TEOS, water, ethanol, PEG, and ammonia solution in quantities of 20 ml, 35 ml, 100 ml, 35 ml, and 10 ml, respectively, under vigorous stirring in the magnetic stirrer. Finally, the obtained white slurry gel was dried and calcined at 750 °C for 5 h in a muffle furnace. The synthesized white agglomerates were powdered using a mortar and pestle.
The production of hybrid nanofluids was carried out by the well-known two-step method with the synthesized Al2O3 and SiO2 nanoparticles, which allows bulk nanofluids to be produced at low cost. In the first step of preparing the Al2O3–SiO2 hybrid nanofluid, the two types of nanoparticles were taken at a 50:50 weight ratio and dispersed in 60% water and 40% ethylene glycol. To improve stability, 2 g of CTAB surfactant was added to the above combination [34, 36]. Subsequently, ultrasonic vibration using an ultrasonicator (170–270 V AC, 50 Hz, operating frequency 50 kHz, single-phase model, Labline make, Mumbai, Maharashtra) was applied for 30 min to each sample to break up clustering of the solid particles and ensure uniform scattering of the particles in the base fluid [28, 40]. Thus, the hybrid Al2O3–SiO2 nanofluids at several volume concentrations (φ) were prepared at room temperature according to eq. (1):
$$ \phi =\frac{w_{np}/\rho_{np}}{w_{np}/\rho_{np}+w_{bf}/\rho_{bf}}\times 100 $$
where wnp is the weight of both nanoparticles, wbf is the weight of the W/EG solution, ρnp is the nanoparticle density, and ρbf is the W/EG solution density.
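As an illustration only (not from the paper), the following Python sketch evaluates eq. (1); the density values in the example call are assumed, not the authors' data.

```python
def volume_concentration(w_np, rho_np, w_bf, rho_bf):
    """Eq. (1): volume concentration (phi) in %, from weights and densities."""
    v_np = w_np / rho_np    # volume of the nanoparticles
    v_bf = w_bf / rho_bf    # volume of the W/EG base fluid
    return 100.0 * v_np / (v_np + v_bf)

# Example call with assumed values: 0.5 g of the 50:50 Al2O3-SiO2 mix
# (effective density taken as ~3.2 g/cm^3) in 500 g of 60:40 W/EG (~1.05 g/cm^3)
print(volume_concentration(w_np=0.5, rho_np=3.2, w_bf=500.0, rho_bf=1.05))
```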
Cylindrical heat pipe experimental setup
The schematic of the cylindrical screen mesh heat pipe apparatus employed for examining the heat transfer performance is shown in Fig. 1. The heat pipe works on the phenomenon of phase change with capillary action. Initially, the cylindrical copper tube and zinc wire screen mesh wick are cut to the desired size. Proper cleaning and assembly procedures eliminate the formation of gas inside the heat pipe during construction. The cylindrical heat pipe is fabricated from copper. It contains evaporator (300 mm), adiabatic (400 mm), and condenser (300 mm) sections. The inner diameter is 10.9 mm, the outer diameter is 12.7 mm, and the length is about 1000 mm. Its interior is lined with a metal (zinc) wire mesh. One end of the heat pipe is closed with a metal lid and the other end is left open for charging nanofluids. Three K-type thermocouples (± 2% accuracy) are inserted on the surface of the heat pipe through small holes, and three temperature indicators are joined to the thermocouples. The input heating system consists of a Nichrome (Ni-Cr) heating coil connected to a 230 V, 50 Hz AC power input, an autotransformer (0–230 V), an ammeter (0–1 A), and a voltmeter (0–150 V) for heating the evaporator section. This input heating system is attached to the heat pipe. Atmospheric air cools the condenser section. Glass wool of 20 mm thickness is wound around the evaporator and adiabatic regions to minimize heat loss. The power applied to the heat pipe is estimated from the voltage and current, measured with ± 0.5% accuracy.
Schematic of experimental heat pipe system
Al2O3–SiO2 hybrid nanofluids of six volume concentrations (0%, 0.0125%, 0.025%, 0.05%, 0.075%, and 0.1%) were used for the experimental investigation. Initially, the experiment was performed with 0% volume concentration (base fluid alone), and subsequently with the Al2O3, SiO2, and Al2O3–SiO2 nanofluid volume concentrations. The orientation of the heat pipe was maintained at three angles: 30°, 45°, and 60°. The temperatures in the evaporator and condenser sections were measured for 30, 60, and 90 W input heat powers. Five sets of values were recorded for each particle volume concentration to ensure accuracy, and the values were noted each time after the system reached steady state. From the experimental outcomes, the thermal resistance (R) and heat transfer coefficient (h) of the cylindrical pipe were calculated.
Data reduction and uncertainty examination
From the experimental data, R and h are estimated through the following equations.
The thermal resistance is calculated by the eq. (2)
$$ R=\left[{T}_e-{T}_c\right]/Q $$
The heat transfer coefficient (h) was calculated using eq. (3) from the literature [21, 39, 42] as follows.
$$ h=\frac{Q}{A\left({T}_e-{T}_c\right)} $$
where the evaporator temperature is denoted Te and the condenser temperature Tc at steady state, Q denotes the heat power (Q = I × V), and A indicates the interior heat pipe area, equal to πdl (d is the heat pipe interior diameter (m) and l its length (m)).
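A minimal Python sketch of eqs. (2) and (3) is given below; the temperatures in the example call are assumed values for illustration, while the pipe geometry (d = 10.9 mm, l = 1000 mm) is taken from the text.

```python
import math

def thermal_resistance(T_e, T_c, Q):
    """Eq. (2): R = (T_e - T_c) / Q, in K/W."""
    return (T_e - T_c) / Q

def heat_transfer_coefficient(T_e, T_c, Q, d, l):
    """Eq. (3): h = Q / (A (T_e - T_c)), with A = pi * d * l, in W/m^2 K."""
    A = math.pi * d * l
    return Q / (A * (T_e - T_c))

# Geometry from the text: d = 10.9 mm, l = 1000 mm; temperatures are assumed
Q = 90.0                      # heat input, W
T_e, T_c = 78.0, 36.0         # assumed steady-state temperatures, deg C
d, l = 0.0109, 1.0            # m
print(thermal_resistance(T_e, T_c, Q))                 # K/W
print(heat_transfer_coefficient(T_e, T_c, Q, d, l))    # W/m^2 K
```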
The uncertainties in the heat pipe experimental data were evaluated by the standard method [42]. For any calculated quantity Rv, the uncertainty contribution from an independently measured variable (xi) can be calculated using eq. (4):
$$ {U}_{R_i}=\frac{x_i}{R_v}\frac{\partial {R}_v}{\partial {x}_i}{U}_{x_i} $$
where URi is the uncertainty in the result (i.e., the approximate possible error introduced in calculating one parameter), Rv is a parameter calculated from measurable quantities, and Uxi is the measurement error of the experimentally measured variable, determined by dividing the measurement precision by the minimum measured value. The uncertainties of the various measurements are calculated below:
$$ \mathrm{The}\ \mathrm{uncertainty}\ \mathrm{in}\ \mathrm{voltage}\ \left({U}_v\right)=\pm \frac{0.01}{49.5}=\pm 2.53\times {10}^{-4}\approx \pm 0.8\% $$
$$ \mathrm{The}\ \mathrm{uncertainty}\ \mathrm{in}\ \mathrm{current}\ \left({U}_c\right)=\pm \frac{0.01}{0.5}=\pm 0.02\approx \pm 2\% $$
$$ \mathrm{The}\ \mathrm{uncertainty}\ \mathrm{in}\ \mathrm{area}\ \left({U}_A\right)=\pm \frac{0.02}{24}=\pm 8.3\times {10}^{-4}\approx \pm 0.8\% $$
The maximum uncertainty in R can be calculated for all values of xi (i = 1, 2, 3. . . n) by the eq. (5)
$$ {U}_{R_i}=\left[\left(\frac{x_1}{R_v}\frac{\partial {R}_v}{\partial {x}_1}{U}_{x_1}\right)+\left(\frac{x_2}{R_v}\frac{\partial {R}_v}{\partial {x}_2}{U}_{x_2}\right)+..\dots +\left(\frac{x_n}{R_v}\frac{\partial {R}_v}{\partial {x}_n}{U}_n\right)\right] $$
The maximum uncertainties in the power (Q), R, and h data were computed through relations (6), (7), and (8). The uncertainties in Q, R, and h are found to be ± 2.76%, ± 2.76%, and ± 2.91%, respectively, which are within acceptable limits.
$$ \frac{\Delta Q}{Q}=\sqrt{{\left(\frac{\Delta V}{V}\right)}^2+{\left(\frac{\Delta I}{I}\right)}^2}\kern0.5em =\pm 2.76\% $$
$$ \frac{\Delta R}{R}=\sqrt{{\left(\frac{\Delta Q}{Q}\right)}^2+{\left(\frac{\Delta \left(\Delta {T}_{e/c}\right)}{\Delta {T}_{e/c}}\right)}^2}\kern0.5em =\pm 2.76\% $$
$$ \frac{\Delta h}{h}=\sqrt{{\left(\frac{\Delta Q}{Q}\right)}^2+{\left(\frac{\Delta {A}_{e/c}}{A_{e/c}}\right)}^2+{\left(\frac{\Delta \left(\Delta {T}_{e/c}\right)}{\Delta {T}_{e/c}}\right)}^2}\kern0.5em =\pm 2.91\% $$
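The root-sum-square combinations in eqs. (6)–(8) can be sketched as follows; the relative uncertainty of the temperature difference is an assumed placeholder (it is not quoted in the text), so the printed values will not exactly reproduce the ± 2.76% and ± 2.91% figures above.

```python
import math

def combined_relative_uncertainty(*rel_uncertainties):
    """Root-sum-square combination of relative uncertainties, as in eqs. (6)-(8)."""
    return math.sqrt(sum(u ** 2 for u in rel_uncertainties))

u_V, u_I, u_A = 0.008, 0.02, 0.008   # relative uncertainties quoted in the text
u_dT = 0.02                          # assumed value for the temperature difference

u_Q = combined_relative_uncertainty(u_V, u_I)         # eq. (6)
u_R = combined_relative_uncertainty(u_Q, u_dT)        # eq. (7)
u_h = combined_relative_uncertainty(u_Q, u_A, u_dT)   # eq. (8)
print(f"U_Q = {100*u_Q:.2f}%  U_R = {100*u_R:.2f}%  U_h = {100*u_h:.2f}%")
```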
Response surface methodology
Response surface methodology (RSM) is one of the design of experiments (DOE) methods used in engineering applications. It is a technique used to identify the influence of input parameters on the considered responses using statistical and mathematical techniques [35]. The input parameters and the performance measure are commonly referred to as independent variables and response, respectively [29]. The focus of the present work is the development of approximate regression models from the empirical data on thermal resistance and heat transfer coefficient to forecast the response variables using RSM. The ability and reliability of the developed regression models were examined through the analysis of variance (ANOVA) statistical technique. The sum of squares quantifies each source of variance. The degrees of freedom equal the total number of levels considered in the experiment minus one. The ratio between the sum of squares of each element and its respective degrees of freedom is the mean square, and the ratio between the mean square of each factor and the mean square of the residual is the F ratio. The ANOVA method was used to analyze components such as the coefficient of determination (R-squared), Fisher's test (F test), and probability value (P value), which are essential for assessing the appropriateness of the models developed by regression analysis.
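The quantities defined above (sum of squares, degrees of freedom, mean square, and F ratio) can be computed for a fitted model as in the following sketch; the data in the example are made up for illustration and do not come from the paper.

```python
import numpy as np

def anova_f_ratio(y, y_hat, n_model_terms):
    """F ratio following the definitions above: MS = SS / df, F = MS_model / MS_residual."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = y.size
    ss_total = np.sum((y - y.mean()) ** 2)
    ss_resid = np.sum((y - y_hat) ** 2)
    ss_model = ss_total - ss_resid
    df_model = n_model_terms              # model terms excluding the intercept
    df_resid = n - n_model_terms - 1
    return (ss_model / df_model) / (ss_resid / df_resid)

# Toy usage: 8 made-up observations and a 5-term model (as in the quadratic models)
y     = [3.10, 2.85, 2.40, 2.05, 1.70, 1.35, 1.10, 0.95]
y_hat = [3.05, 2.90, 2.35, 2.10, 1.65, 1.40, 1.05, 1.00]
print(anova_f_ratio(y, y_hat, n_model_terms=5))
```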
In the first section, the structural and surface characterization (powder XRD and SEM with EDS) of the prepared nanoparticles is presented. In the second section, the heat transfer performance of the hybrid nanofluids is explained, with a comparison between hybrid and single Al2O3 and SiO2 nanofluids. The third section analyzes the RSM approach with statistical techniques and validation.
Al2O3 and SiO2 nanoparticles characterization studies
Powder X-ray diffraction method analysis
Figure 2a and b illustrates the XRD patterns of the aluminium oxide and silicon dioxide nanoparticles made by the sol-gel method. The phase and crystal structures of the Al2O3 and SiO2 nanoparticles were determined using an X-ray diffractometer (X'pert-Pro). The 2θ scan was performed from 10° to 80° with a step of 0.05°. The strong peaks in Fig. 2a at scattering angles of 36.64°, 45.69°, 60.24°, and 66.07° correspond to the Miller indices (hkl) (010), (121), (123), and (118), respectively. The Miller index planes corresponding to the Bragg peaks indicate the orthorhombic structure of the Al2O3 nanoparticles, well matched with JCPDS card no. 46-1215. Similarly, the diffraction peaks in Fig. 2b at scattering angles of 22.22°, 44.6°, 65.42°, and 77.55° are associated with the (101), (202), (204), and (401) Miller index planes, respectively, corresponding to the tetragonal structure of the SiO2 solid particles. The peak at 2θ = 22° indicates that the SiO2 particles were formed from small nanocrystals, and the slight broadening of the peak is due to the smaller grain size. The XRD pattern of the SiO2 particles is in good agreement with JCPDS card no. 76-0939. Previous results indicated that SiO2 particles are amorphous in nature with particle sizes above 50 nm [13, 15]. The increase in the crystalline nature of the SiO2 particles and the smaller nanoparticle size were due to the addition of the polymer surfactant PEG. The mean crystallite size (D) is assessed by means of the Debye–Scherrer equation (9):
Powder XRD structure of a Al2O3 and b SiO2 samples
$$ D= \frac{K\lambda }{\beta \cos \theta } $$
where K indicates the constant shape factor (1.9), λ denotes the incident X-ray wavelength (1.5405 Å), β specifies the full width at half maximum, and θ stands for the scattering angle. From the obtained XRD values, the mean crystallite sizes of the Al2O3 and SiO2 particles were found to be about 6 nm and 25 nm, respectively. Smaller nanoparticles exhibit better performance in thermal conductivity and heat transfer properties [30].
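A small sketch of eq. (9) is shown below, using the K and λ values quoted in the text; the peak width (FWHM) in the example call is an assumed value, since the measured widths are not listed.

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, K=1.9, wavelength=1.5405):
    """Eq. (9): crystallite size D in the wavelength's units (here Angstrom).
    K and the Cu K-alpha wavelength are the values quoted in the text;
    theta is taken as half of the 2-theta peak position."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)          # FWHM converted to radians
    return K * wavelength / (beta * math.cos(theta))

# Example: the 2-theta = 36.64 deg Al2O3 peak with an assumed FWHM of 1.5 deg
D_angstrom = scherrer_size(36.64, 1.5)
print(f"D = {D_angstrom / 10:.1f} nm")     # convert Angstrom to nm
```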
Scanning electron microscope analysis
Scanning electron microscopy of nanoparticles is used to represent the surface structure of the samples through high-resolution images. A JEOL JSM 6390 instrument recorded the scanning electron microscope (SEM) pictures of the Al2O3 and SiO2 nanoparticles. The SEM results for the Al2O3 nanoparticles with the histogram in Fig. 3 reveal that the Al2O3 nanoparticles are mostly spherical in shape with slight clustering of the particles. These results are in good agreement with previously reported work [10, 25]. In addition, the size distribution of the Al2O3 nanoparticles is displayed in a histogram. It can be observed that the nanoparticles are distributed in the diameter range of 25–125 nm and most of them fall around 60 nm in diameter. A Gaussian fit over the distribution indicates a mean Al2O3 particle size of 62.338 nm. The reduced size of the as-synthesized aluminium oxide nanoparticles is due to the addition of the polymer surfactant polyethylene glycol.
SEM with histogram pattern of Al2O3 nanoparticles
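The Gaussian fit of the particle-size histogram can be reproduced along the following lines; the bin centres and counts below are illustrative assumptions, since the actual histogram data are only shown graphically in Fig. 3.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(d, a, mu, sigma):
    return a * np.exp(-(d - mu) ** 2 / (2 * sigma ** 2))

# Assumed histogram of Al2O3 particle diameters: bin centres (nm) and counts
centres = np.array([30., 40., 50., 60., 70., 80., 90., 100., 110.])
counts  = np.array([ 3.,  8., 18., 27., 22., 12.,  6.,   3.,   1.])

popt, _ = curve_fit(gaussian, centres, counts, p0=[counts.max(), 60.0, 15.0])
print(f"fitted mean diameter = {popt[1]:.1f} nm, sigma = {abs(popt[2]):.1f} nm")
```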
The SEM images of the SiO2 nanoparticles with the histogram in Fig. 4 indicate that the spherical SiO2 nanoparticles show small aggregation with neighboring nanoparticles. The aggregation of the SiO2 nanoparticles is attributed to the existence of a weak covalent bond between the atoms. From the histogram, it is confirmed that the SiO2 nanoparticles have a size of around 61 nm, since most of the particles lie within this diameter range; the size distribution of the SiO2 particles ranges from 40 to 100 nm. The obtained morphology of the SiO2 nanoparticles shows better results than previous reports [13, 15] owing to the addition of the surfactant PEG. Control of the size of both nanoparticles was accomplished as a consequence of balanced nucleation and growth. In addition, the size and morphology could be precisely tuned at the nanometric scale by controlling the saturation rate during the second stage at the optimum concentration.
SEM with histogram pattern of SiO2 nanoparticles
Heat transfer performance analysis of heat pipe
The thermal performance of a heat pipe is influenced not only by the working fluid concentration but also by the heat pipe orientation, heat input power, and gravity. Therefore, it is necessary to analyze the thermal performance with respect to the above-mentioned factors.
Effect of heat pipe orientation upon thermal resistivity
The thermal resistance determined from the Al2O3–SiO2 nanofluid experiments in the cylindrical heat pipe at various inclination angles (30°, 45°, and 60°) is plotted in Fig. 5a–c. The thermal resistance decreases as the heat pipe orientation changes from 30° to 45° and increases again from 45° to 60° for all heat input powers of 30, 60, and 90 W. This is due to the strong influence of gravitational force between the condenser and evaporator regions. Usually, the vapor movement within the heat pipe is driven by the change in fluid density. The combined action of gravity and the capillary force in the wick structure returns the condensed fluid to the evaporator area. Beyond an inclination angle of 45°, the heat provided at the evaporator region is insufficiently transferred; hence, radial heat transfer decreases and the thermal resistance increases. Likewise, below an inclination angle of 30°, the heat transfer characteristics decrease. Other researchers have also reported that this range of inclination angles yields better heat transfer performance than the horizontal (0°) and vertical (90°) positions [13, 27]. Hence, the orientation of the heat pipe was maintained between 30° and 60°.
The effect of heat pipe orientation on thermal resistance at a 30 W, b 60 W, and c 90 W heat input power
Effect of power and volume concentrations on thermal resistivity
Figure 6 plots the thermal resistance against the volume concentration of nanofluids for three different power inputs (30, 60, and 90 W). From the graph, it is clear that the addition of nanoparticles to the base solution diminishes the thermal resistance dramatically. Owing to the increase in surface wettability, the resistance with the hybrid nanofluids decreases. This is caused by the settlement of Al2O3–SiO2 nanoparticles on the wick structure while heating takes place in the evaporation zone. Moreover, the thermal resistance decreases with increasing heat power input; at high heat input, the formation of a fluid layer on the wick structure due to the base fluid is eliminated [13]. The maximum reduction in thermal resistance, to 3.5 K/W, is noticed at 90 W heat input for the 0.1% volume concentration of hybrid Al2O3–SiO2 nanofluid; this is 5.54% lower than with the W/EG fluid. This decrease in thermal resistance at higher nanofluid concentrations demonstrates the improved efficiency of nanofluids in the heat pipe.
Effect of volume concentration and heat power input on thermal resistance
Comparison on thermal resistance of single and hybrid nanofluids
Figure 7 displays the comparison of thermal resistance decrement between the single-component (Al2O3 and SiO2) and hybrid (Al2O3–SiO2) nanofluids at three heat power inputs and the maximum concentration of 0.1%. The comparative analysis of the single and hybrid nanofluids was performed at the optimum heat pipe orientation of 45°. The decrement percentage is assessed from the ratio of the variation in thermal resistance between the nanofluid and the base fluid; a high decrement percentage indicates a low thermal resistance and vice versa. The figure confirms that the thermal resistance decrement percentage for the Al2O3–SiO2 hybrid nanofluid is the largest, denoting a lower thermal resistance than for the Al2O3 and SiO2 single nanofluids. It shows a maximum reduction of 5.54% at 90 W for the 0.1% volume concentration compared with the base fluid, while the single nanofluids exhibited lower decrement percentages at all power inputs.
Comparison on thermal resistance of single and hybrid nanofluids for 0.1% volume concentration at 45° orientation of heat pipe
Heat transfer coefficient
Figure 8 plots the heat transfer coefficient against volume concentration at the 45° inclination angle. The incorporation of 0.1% nanoparticles at 90 W power gives the maximum heat transfer coefficient of 62.01 W/m2 K, which is 43.16% higher than for the W/EG fluid mixture. The higher heat transfer values of this hybrid fluid are achieved through the low thermal resistance of the liquid layer. The minimum and maximum heat transfer coefficient values of 14.88 W/m2 K and 62.01 W/m2 K were observed at the low and high volume concentrations (0.0125% and 0.1%) with low and high powers of 30 and 90 W, respectively.
Heat transfer coefficient of Al2O3–SiO2 nanofluids as a function of heat input and volume concentrations
The heat transfer coefficient increases by 43.16% at the 0.1% fraction of the hybrid Al2O3–SiO2 fluid with respect to the base solution at 90 W. As the temperature increases, the thermal conductivity of the nanofluid increases markedly and its viscosity decreases relative to the base fluid, particularly at higher particle concentrations. The augmented heat transfer coefficient obtained with the hybrid Al2O3–SiO2 nanofluid is superior to that of other nanofluids [13, 14, 43,44,45]. The comparison of heat transfer performance of various hybrid nanofluids with the present hybrid Al2O3–SiO2 nanofluid in Table 1 confirms the superiority of the hybrid Al2O3–SiO2 nanofluid in the cylindrical mesh heat pipe over other conventional fluids, especially the W/EG solution.
Table 1 Comparison in heat transfer performance of screen mesh type wick structure cylindrical heat pipe using nanofluids with present work
Figure 9 presents the comparison of the heat transfer coefficient increment between the single-component (Al2O3 and SiO2) and hybrid (Al2O3–SiO2) nanofluids at three heat power inputs and the maximum concentration of 0.1%. The comparative analysis shows the results for the single and hybrid nanofluids at the optimum heat pipe orientation of 45°. The increment percentage is calculated from the difference in heat transfer coefficient between the nanofluid and the base fluid; a high increment percentage indicates a higher heat transfer ability. From the graph, it is established that the heat transfer coefficient increment percentage for the Al2O3–SiO2 hybrid nanofluid is the largest, denoting augmented heat transfer performance compared with the Al2O3 and SiO2 single nanofluids. It shows a maximum augmentation of 43.16% at 90 W for the 0.1% volume concentration relative to the base fluid, whereas the single nanofluids exhibited lower increment percentages at all heat power inputs.
Comparison on heat transfer coefficient of single and hybrid nanofluids for 0.1% volume concentration at 45° orientation of heat pipe
RSM approach with statistical technique
The empirical data collected from experimental setup are given as input to the response surface methodology. Design-Expert software is adopted to analyze the experimental data. The user-defined data format is used for statistical analysis.
ANOVA is a standard statistical tool employed to estimate the magnitude of variance within an experimental data set. Table 2 presents the model summary of the thermal resistance and heat transfer coefficient from the RSM statistical analysis. The values in Table 2 show that the quadratic formula best predicts the thermal resistance and heat transfer coefficient values. Good matching between the predicted and developed models is possible only if the R-squared value is close to 1 [40]. Four different models, namely linear, two-factor interaction (2FI), quadratic, and cubic, were developed based on regression analysis. The quadratic model provides the best correlation with the experimental data, since R2 is close to 1 for both the thermal resistance and heat transfer coefficient results. Also, the "Predicted R2" values of 0.9988 and 0.9556 are close to the "Adjusted R2" values of 0.9994 and 0.9761 for thermal resistance and heat transfer coefficient, respectively, which shows the accuracy of both models developed in this work. The signal-to-noise ratio is measured by the "Adequate Precision" value, and an adequate precision larger than 4 is desirable for navigating the design space [46]. The adequate precision values of 167.941 and 32.201 for thermal resistance and heat transfer coefficient indicate that the models can be used to navigate the design space. The thermal resistance and heat transfer coefficient response values obtained in this work are very close to the experimental values, denoting the accuracy of the developed RSM models in predicting the thermal resistance and heat transfer coefficient of the cylindrical mesh heat pipe. Generally, RSM offers a capability that can be used in the thermal engineering field, particularly for nanofluids employed in heat and mass transfer applications such as solar collectors and heat exchangers [37,38,39,40].
Table 2 Model summary of responses
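For reference, R2 and adjusted R2, which underlie the model summary in Table 2, can be computed as in the sketch below; the response values in the example are illustrative only and are not the paper's data.

```python
import numpy as np

def r_squared(y, y_hat):
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def adjusted_r_squared(r2, n_obs, n_model_terms):
    """Adjusted R^2, penalizing the number of model terms (excluding the intercept)."""
    return 1.0 - (1.0 - r2) * (n_obs - 1) / (n_obs - n_model_terms - 1)

# Illustrative response values only (8 observations, 5 quadratic model terms)
y     = [0.95, 1.20, 1.55, 1.90, 2.30, 2.70, 3.05, 3.45]
y_hat = [1.00, 1.15, 1.60, 1.85, 2.35, 2.65, 3.10, 3.40]
r2 = r_squared(y, y_hat)
print(r2, adjusted_r_squared(r2, n_obs=len(y), n_model_terms=5))
```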
The significant model terms are identified using the ANOVA table. Table 3 shows the ANOVA results of the thermal resistance and heat transfer coefficient response surface quadratic models for the hybrid Al2O3–SiO2 nanofluids. From Table 3, the model F values for thermal resistance and heat transfer coefficient are found to be 5572.90 and 139.72, which implies that both regression models are significant. The probability values (Prob values) are used to examine the importance of each term: model terms are considered significant if "Prob > F" is below 0.05, whereas "Prob > F" above 0.1000 implies that the model terms are not significant. In this respect, the two models developed in this work show that both volume concentration and heat power are statistically significant model terms.
Table 3 Analysis of variance of all responses
Response surface plot analysis
Figures 10 and 11 show the interaction plots for thermal resistance and heat transfer coefficient. The thermal resistance interaction and response plots in Fig. 10a, b show that a high volume concentration combined with a high heat input power gives the minimum thermal resistance. The heat transfer coefficient interaction and response plots in Fig. 11a, b illustrate that a high volume concentration combined with a high power produces the maximum heat transfer coefficient.
a Interaction effect of thermal resistance of hybrid nanofluid. b Response surface plot
a Interaction effect of heat transfer coefficient for hybrid nanofluid. b Response surface plot
Regression analysis
The relationship between the input factors and the response requires the specification of a statistical model. As seen in Figs. 10a and 11a, the interaction plots of thermal resistance and heat transfer coefficient do not follow a straight line; hence, polynomial regression models were adopted. Equations (10) and (11) give the quadratic formulae for the thermal resistance (R) and heat transfer coefficient (h).
$$ R=19.72167-11.24794\,\phi -0.35201\,P+0.04563\,\phi P+59.40435\,{\phi}^2+1.93076\times {10}^{-3}\,{P}^2 $$
$$ h=2.86346+191.95901\,\phi +0.3368\,P+1.46458\,\phi P-1894.44815\,{\phi}^2+1.92138\times {10}^{-3}\,{P}^2 $$
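Equations (10) and (11) can be evaluated directly, as in the following sketch; the example call at φ = 0.1% and P = 90 W falls within the fitted parameter range, and the experimental value used for the deviation is the one quoted in the text.

```python
def predict_R(phi, P):
    """Quadratic RSM model for thermal resistance, eq. (10); phi in %, P in W."""
    return (19.72167 - 11.24794 * phi - 0.35201 * P + 0.04563 * phi * P
            + 59.40435 * phi ** 2 + 1.93076e-3 * P ** 2)

def predict_h(phi, P):
    """Quadratic RSM model for heat transfer coefficient, eq. (11)."""
    return (2.86346 + 191.95901 * phi + 0.3368 * P + 1.46458 * phi * P
            - 1894.44815 * phi ** 2 + 1.92138e-3 * P ** 2)

# Example: 0.1% volume concentration at 90 W (within the fitted range)
phi, P = 0.1, 90.0
R_pred, h_pred = predict_R(phi, P), predict_h(phi, P)
print(f"R = {R_pred:.2f} K/W, h = {h_pred:.2f} W/m^2 K")

# Percentage deviation from the experimental value quoted in the text (62.01 W/m^2 K)
h_exp = 62.01
print(f"deviation in h = {100 * (h_exp - h_pred) / h_exp:.2f} %")
```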
Validation of RSM models
Figure 12a, b shows the validation of the RSM models against the experimental results for thermal resistance and heat transfer coefficient. The average deviations between the experimental results and the RSM models are − 2.36% and − 12.74% for thermal resistance and heat transfer coefficient, respectively. The maximum deviations, − 5.15% and − 33.46%, are observed for the 0.1% volume concentration at 60 W heat input power. Table 4 displays the percentage deviation between the experimental and RSM model results. The small deviation between the predicted and experimental values implies the high accuracy of the modeling approach. Thus, eqs. (10) and (11) can be used to predict the thermal resistance and heat transfer coefficient of the heat pipe with Al2O3–SiO2 hybrid nanofluids within the range of input parameters considered in this experiment.
Validation of RSM results with experimental data of a thermal resistance and b heat transfer coefficient
Table 4 Comparison between experimental and RSM model results
This work deals with the enhancement of heat transfer characteristics of cylindrical screen mesh heat pipe using hybrid Al2O3 and SiO2 nanoparticles with W/EG binary mixture. The experimental results showed that the heat transfer capacity increases with volume concentrations of hybrid nanofluids and heat input power. The following main conclusions were obtained from the present study:
Al2O3 and SiO2 nanoparticles were synthesized using the sol-gel method. XRD analysis revealed the orthorhombic and tetragonal structures of the nanoparticles with average crystallite sizes of 6 and 25 nm, respectively. From SEM analysis, the observed structures of both the Al2O3 and SiO2 nanoparticles were spherical, with particle sizes of around 62 nm and 61 nm, respectively.
The thermal resistance of the heat pipe with Al2O3–SiO2 hybrid nanofluids is reduced by 5.54% and the heat transfer coefficient is enhanced by 43.16% compared with the W/EG base fluid mixture at the higher volume concentration (0.1%) and high power (90 W).
The superior enhancement in heat transfer characteristics of heat pipe suggested this novel hybrid Al2O3–SiO2 nanofluid could be a substitute for heat transfer applications in various devices.
The parameter influences and empirical models were established based on the statistical techniques of response surface methodology and regression analysis. Based on the ANOVA table, the influencing independent parameters on thermal resistance and heat transfer coefficient are volume concentration and power, which are considered the significant input parameters. The thermal resistance and heat transfer coefficient response values obtained are very close to the experimental values, denoting the accuracy of the developed RSM models.
Validation of the results showed that the developed quadratic models predict the thermal resistance and heat transfer coefficient of the cylindrical heat pipe well, with average percentage deviations of − 2.36% and − 12.74% against the experimental results for the heat power range 30–90 W at an inclination angle of 45° and volume concentrations from 0.0125 to 0.1%. Hence, it is emphasized that these proposed mathematical models can be utilized for the prediction of the heat transfer coefficient and thermal resistance of Al2O3–SiO2 hybrid nanofluids in a cylindrical screen mesh heat pipe.
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
A Correction to this paper has been published: https://doi.org/10.1186/s44147-021-00048-2
Al2O3 :
Aluminium oxide
SiO2 :
Silicon dioxide
CuO:
Copper oxide
Fe2O3 :
Ferrous oxide
TiO2 :
Titanium dioxide
ZnO:
Zinc oxide
TEOS:
Tetra ethyl ortho silicate
PEG:
Polyethylene glycol
CTAB:
Cationic surfactant
EG:
Ethylene glycol
eq.:
Equation
eqs.:
Equations
Ni-Cr:
Nichrome
Si:
Silicon
RSM:
Response surface methodology
F test:
Fisher's test
prob:
Probability
ANN:
Artificial Neural Network
w np :
Weight of both nanoparticles
w bf :
Weight of W/EG solution
ρnp :
Nanoparticles density
ρbf :
W/EG solution density
R :
Thermal resistance
h :
Heat transfer coefficient
T e :
Evaporator temperature
T c :
Condenser temperature
Q :
Heat power
P :
Heat input power
A :
Interior heat pipe area
d :
Heat pipe interior diameter
l :
Heat pipe length
EDS:
Energy dispersive X-ray spectroscopy
D :
Mean crystallite size
K :
Constant shape factor
λ:
Incident X-ray's wavelength
β:
Full width at half maximum
θ:
Scattering angle
U Ri :
The uncertainty in the result
R v :
Parameter calculated using computable quantities
U xi :
Measurement error for the experimental measured variable
U v :
The uncertainty in voltage
U c :
The uncertainty in current
U A :
The uncertainty in area
Yıldız C, Arıcı M, Karabay H (2019) Comparison of a theoretical and experimental thermal conductivity model on the heat transfer performance of Al2O3-SiO2/water hybrid-nanofluid. Int J Heat Mass Transf 140:598–605. https://doi.org/10.1016/j.ijheatmasstransfer.2019.06.028
Lee HS (2011) Thermal Design, Heat Sinks, Thermo electrics heat pipes, compact heat exchangers and solar cells. Wiley, Newyork. https://doi.org/10.1002/9780470949979
Rahimi-Gorji M, Pourmehran O, Hatami M, Ganji DD (2015) Statistical optimization of microchannel heat sink (MCHS) geometry cooled by different nanofluids using RSM analysis. Eur Phys J Plus 130(2):22. https://doi.org/10.1140/epjp/i2015-15022-8
Żyła G (2016) Thermophysical properties of ethylene glycol based yttrium aluminum garnet (Y3Al5O12–EG) nanofluids. Int J Heat Mass Transf 92:751–756. https://doi.org/10.1016/j.ijheatmasstransfer.2015.09.045
Che Sidik NA, Mahmud Jamil M, Aziz Japar WMA, Muhammad Adamu I (2017) A review on preparation methods, stability and applications of hybrid nanofluids. Renew Sust Energ Rev 80:1112–1122. https://doi.org/10.1016/j.rser.2017.05.221
Dhinesh Kumar D, Valan Arasu A (2018) A comprehensive review of preparation, characterization, properties and stability of hybrid nanofluids. Renew Sust Energ Rev 81:1669–1689. https://doi.org/10.1016/j.rser.2017.05.257
Mehrali M, Sadeghinezhad E, Rosen MA, Akhiani AR, Tahan Latibari S, Metselaar HSC (2015) Heat transfer and entropy generation for laminar forced convection flow of graphene nanoplatelets nanofluids in a horizontal tube. Int Commun Heat Mass Transf 66:23–31. https://doi.org/10.1016/j.icheatmasstransfer.2015.05.007
Zyła G, Cholewa M, Witek A (2012) Dependence of viscosity of suspensions of ceramic nanopowders in ethyl alcohol on concentration and temperature. Nanoscale Res Lett 7(1):412. https://doi.org/10.1186/1556-276X-7-412
Choi SUS (1995) Enhancing thermal conductivity of fluids with nanoparticles developments and applications of non-Newtonian flows. In: Siginer DA, Wang HP (eds) FED-Vol. 231/MD, vol 66. ASME, New York, pp 99–103
Chien H-T, Tsai C-I, Chen P-H, Chen P-Y (2003) Improvement on thermal performance of a disk-shaped miniature heat pipe with nanofluid, Fifth International Conference on Electronic Packaging Technology. Proceedings - Shanghai, China. Fifth International Conference on Electronic Packaging Technology Proceedings, pp 389–391
Alijani H, Çetin B, Akkus Y, Dursunkaya Z (2018) Effect of design and operating parameters on the thermal performance of aluminum flat grooved heat pipes. Appl Therm Eng 132:174–187. https://doi.org/10.1016/j.applthermaleng.2017.12.085
Poplaski LM, Benn SP, Faghr A (2017) Thermal performance of heat pipes using nanofluids. Int J Heat Mass Transf 107:358–371. https://doi.org/10.1016/j.ijheatmasstransfer.2016.10.111
Venkatachalapathy S, Kumaresan G, Suresh S (2015) Performance analysis of cylindrical heat pipe using nanofluids – an experimental study. Int J Multiphase Flow 72:188–197. https://doi.org/10.1016/j.ijmultiphaseflow.2015.02.006
Kim KM, Bang IC (2016) Effects of graphene oxide nanofluids on heat pipe performance and capillary limits. Int J Therm Sci 100:346–356. https://doi.org/10.1016/j.ijthermalsci.2015.10.015
Wan Z, Deng J, Li B, Xu Y, Wang X, Tang Y (2015) Thermal performance of a miniature loop heat pipe using water-copper nanofluid. Appl Therm Eng 78:712–719. https://doi.org/10.1016/j.applthermaleng.2014.11.010
Solomon AB, Ramachandran K, Asirvatham LG, Pillai BC (2014) Numerical analysis of a screen mesh wick heat pipe with Cu/water nanofluid. Int J Heat Mass Transf 75:523–533. https://doi.org/10.1016/j.ijheatmasstransfer.2014.04.007
Akbari A, Saidi MH (2018) Experimental investigation of nanofluid stability on thermal performance and flow regimes in pulsating heat pipe. J Therm Anal Calorim 135(3):1835–1847. https://doi.org/10.1007/s10973-018-7388-3
Salehi H, Zeinali Heris S, Koolivand Salooki M, Noei SH (2011) Designing a neural network for closed thermosyphon with nanofluid using a genetic algorithm. Braz J Chem Eng 28(1):157–168. https://doi.org/10.1590/S0104-66322011000100017
Ghanbarpour M, Nikkam N, Khodabandeh R, Toprak MS (2015) Improvement of heat transfer characteristics of cylindrical heat pipe by using SiC nanofluids. Appl Therm Eng 90:127–135. https://doi.org/10.1016/j.applthermaleng.2015.07.004
Heris SZ, Fallahi M, Shanbedi M, Amiri A (2016) Heat transfer performance of two-phase closed thermosyphon with oxidized CNT/water nanofluids. Heat Mass Transf 52(1):85–93. https://doi.org/10.1007/s00231-015-1548-9
Zeinali Heris S, Shokrgozar M, Poorpharhang S, Shanbedi M, Noie SH (2014) Experimental study of heat transfer of a car radiator with CuO/ethylene glycol-water as a coolant. J Dispers Sci Technol 35(5):677–684. https://doi.org/10.1080/01932691.2013.805301
Ilyas SU, Narahari M, Theng JTY, Pendyala (2019) Experimental evaluation of dispersion behavior, rheology and thermal analysis of functionalized zinc oxide-paraffin oil nanofluids. J Mol Liq 294:111613. https://doi.org/10.1016/j.molliq.2019.111613
Ranga Babu JA, Kumar KK, Srinivasa Rao S (2017) State-of-art review on hybrid nanofluids. Renew Sust Energ Rev 77:551–565. https://doi.org/10.1016/j.rser.2017.04.040
Esfe MH, Arani AAA, Rezaie M, Yan WM, Karimipour A (2015) Experimental determination of thermal conductivity and dynamic viscosity of Ag–MgO/water hybrid nanofluid. Int Commun Heat Mass Transf 66:189–195. https://doi.org/10.1016/j.icheatmasstransfer.2015.06.003
Nabil MF, Azmi WH, Abdul Hamid K, Mamat R, Hagos FY (2017) An experimental study on the thermal conductivity and dynamic viscosity of TiO2-SiO2 nanofluids in water: ethylene glycol mixture. Int Commun Heat Mass Transf 86:181–189. https://doi.org/10.1016/j.icheatmasstransfer.2017.05.024
Paul G, Philip J, Raj B, Das PK, Manna I (2011) Synthesis, characterization, and thermal property measurement of nano-Al95Zn05 dispersed nanofluid prepared by a two-step process. Int J Heat Mass Transf 54(15-16):3783–3788. https://doi.org/10.1016/j.ijheatmasstransfer.2011.02.044
Wang P-Y, Chen X-J, Liu Z-H, Liu Y-P (2012) Application of nanofluid in an inclined mesh wicked heat pipes. Thermochim Acta 339:100–108. https://doi.org/10.1016/j.tca.2012.04.011
Akilu S, Baheta AT, Said MAM, Minea AA, Sharma KV (2018) Properties of glycerol and ethylene glycol mixture based SiO2-CuO2/C hybrid nanofluid for enhanced solar energy transport. Solar Energy Mat and Solar Cells 179:118–128. https://doi.org/10.1016/j.solmat.2017.10.027
Esfe MH, Wongwises S, Naderi A, Asadi V, Safaei MR, Rostamian H et al (2015) Thermal conductivity of Cu/TiO2–water/EG hybrid nanofluid: experimental data and modeling using artificial neural network and correlation. Int Commun Heat Mass Transf 66:100–104. https://doi.org/10.1016/j.icheatmasstransfer.2015.05.014
Shukla KN, Brusly Solomon A, Pillai BC, Jacob Ruba Singh B, Saravana Kumar S (2012) Thermal performance of heat pipe with suspended nano-particles. Heat Mass Transf 48(11):1913–1920. https://doi.org/10.1007/s00231-012-1028-4
Yiamsawas T, Mahian O, Dalkilic AS, Kaewnai S, Wongwises S (2013) Experimental studies on the viscosity of TiO2 and Al2O3 nanoparticles suspended in a mixture of ethylene glycol and water for high temperature applications. Appl Energy 111:40–45. https://doi.org/10.1016/j.apenergy.2013.04.068
Timofeeva EV (2011) Nanofluids for heat transfer—POTENTIAL AND ENGINEERING STRATEGIES. In: Ahsan A (ed) Two Phase Flow, Phase Change and Numerical Modelling. InTech, Croatia, pp 435–450
Xu J, Bandyopadhyay K, Jung D (2016) Experimental investigation on the correlation between nano-fluid characteristics and thermal properties of Al2O3 nano-particles dispersed in ethylene glycol–water mixture. Int J Heat Mass Transf 94:262–268. https://doi.org/10.1016/j.ijheatmasstransfer.2015.11.056
Mashaei PR, Shahryari M, Madani S (2016) Analytical study of multiple evaporator heat pipe with nanofluid; a smart material for satellite equipment cooling application. Aerosp Sci Technol 59:112–121. https://doi.org/10.1016/j.ast.2016.10.018
Keshavarz Moraveji M, Razvarz S (2012) Experimental investigation of aluminum oxide nanofluid on heat pipe thermal performance. Int Commun Heat Mass Transf 39(9):1444–1448. https://doi.org/10.1016/j.icheatmasstransfer.2012.07.024
Noie SH, Heris SZ, Kahani M, Nowee SM (2009) Heat transfer enhancement using Al2O3–water nanofluid in a two-phase closed thermosyphon. Int J Heat Fluid Flow 30(4):700–705. https://doi.org/10.1016/j.ijheatfluidflow.2009.03.001
Esfe MH, Hajmohammad MH (2017) Thermal conductivity and viscosity optimization of nanodiamond-Co3O4/EG (40:60) aqueous nanofluid using NSGA-II coupled with RSM. J Mol Liq 238:545–552. https://doi.org/10.1016/j.molliq.2017.04.056
Hemmat Esfe M, Abbasian Arani AA, Shafiei Badi R, Rejvani M (2018) ANN modeling, cost performance and sensitivity analyzing of thermal conductivity of DWCNT–SiO2/EG hybrid nanofluid for higher heat transfer. J Therm Anal Calorim 131(3):2381–2393. https://doi.org/10.1007/s10973-017-6744-z
Shanbedi M, Zeinali Heris S, Maskooki A, Eshghi H (2015) Statistical analysis of laminar convective heat transfer of MWCNT-deionized water nanofluid using the response surface methodology. Num Heat Transf Part A 68:454–469.
Vajjha RS, Das DK (2015) An experimental determination of the viscosity of propylene glycol/water based nanofluids and development of new correlations. Int J Mat Sci Eng 356-387:98–128
Hemmat EM, Hassani AMR, Toghraie D, Hajmohammad MH, Rostamian H, Tourang H (2016) Designing artificial neural network on thermal conductivity of Al2O3–water–EG (60–40%) nanofluid using experimental data. J Therm Anal Calorim 126(2):837–843. https://doi.org/10.1007/s10973-016-5469-8
Heris SZ, Edalati Z, Noie SH, Mahian O (2014) Experimental investigation of Al2O3/water nanofluid through equilateral triangular duct with constant wall heat flux in laminar flow. Heat Transf Eng 35(13):1173–1182. https://doi.org/10.1080/01457632.2013.870002
Wang PY, Chen XJ, Liu ZH, Liu YP (2012) Application of nanofluid in an inclined mesh wicked heat pipes. Thermochim Acta 539:100–108. https://doi.org/10.1016/j.tca.2012.04.011
Solomon AB, Ramachandran K, Pillai BC (2012) Thermal performance of a heat pipe with nanoparticles coated wick. Appl Therm Eng 36(1):106–112. https://doi.org/10.1016/j.applthermaleng.2011.12.004
Vijayakumar M, Navaneethakrishnan P, Kumaresan G (2016) Thermal characteristics studies on sintered wick heat pipe using CuO and Al2O3 nanofluids. Exp Thermal Fluid Sci 79:25–35. https://doi.org/10.1016/j.expthermflusci.2016.06.021
Suresh Kumar B, Baskar N (2013) Integration of fuzzy logic with response surface methodology for thrust force and surface roughness modeling of drilling on titanium alloy. Int J Adv Manuf Technol 65(9-12):1501–1514. https://doi.org/10.1007/s00170-012-4275-0
No funds, grants, or other support was received.
PG & Research Department of Physics, Bishop Heber College (Affiliated to Bharathidasan University), Tiruchirappalli, Tamilnadu, 620017, India
R. Vidhya
Crystal Growth Laboratory, PG and Research Department of Physics, Periyar E.V.R College (Affiliated to Bharathidasan University), Tiruchirappalli, Tamilnadu, 620023, India
T. Balakrishnan
K. Ramakrishnan College of Technology (Affiliated to Anna University), Samayapuram, Tiruchirappalli, Tamilnadu, 621112, India
B. Suresh Kumar
All authors contributed to the study conception and design. RV investigated the experimental data and prepared the original draft. TB reviewed, edited and validated the results and manuscript. BS Conceptualized and analyzed the results. All authors have read and approved the manuscript.
Correspondence to T. Balakrishnan.
Vidhya, R., Balakrishnan, T. & Kumar, B.S. Experimental and theoretical investigation of heat transfer characteristics of cylindrical heat pipe using Al2O3–SiO2/W-EG hybrid nanofluids by RSM modeling approach. J. Eng. Appl. Sci. 68, 32 (2021). https://doi.org/10.1186/s44147-021-00034-8
Nanofluid
Heat pipe
Heat transfer enhancement
|
CommonCrawl
|
Waves of chromatin modifications in mouse dendritic cells in response to LPS stimulation
Alexis Vandenbon ORCID: orcid.org/0000-0003-2180-57321,2 na1,
Yutaro Kumagai3,4 na1,
Mengjie Lin5,
Yutaka Suzuki5 &
Kenta Nakai6
The importance of transcription factors (TFs) and epigenetic modifications in the control of gene expression is widely accepted. However, causal relationships between changes in TF binding, histone modifications, and gene expression during the response to extracellular stimuli are not well understood. Here, we analyze the ordering of these events on a genome-wide scale in dendritic cells in response to lipopolysaccharide (LPS) stimulation.
Using a ChIP-seq time series dataset, we find that the LPS-induced accumulation of different histone modifications follows clearly distinct patterns. Increases in H3K4me3 appear to coincide with transcriptional activation. In contrast, H3K9K14ac accumulates early after stimulation, and H3K36me3 at later time points. Integrative analysis with TF binding data reveals potential links between TF activation and dynamics in histone modifications. Especially, LPS-induced increases in H3K9K14ac and H3K4me3 are associated with binding by STAT1/2 and were severely impaired in Stat1−/− cells.
While the timing of short-term changes of some histone modifications coincides with changes in transcriptional activity, this is not the case for others. In the latter case, dynamics in modifications more likely reflect strict regulation by stimulus-induced TFs and their interactions with chromatin modifiers.
Epigenetic features, such as histone modifications and DNA methylation, are thought to play a crucial role in controlling the accessibility of DNA to RNA polymerases. Associations have been found between histone modifications and both long-term and short-term cellular processes, including development, heritability of cell type identity, DNA repair, and transcriptional control [1, 2]. For cells of the hematopoietic lineage, cell type-defining enhancers are established during differentiation by priming with the H3K4me1 marker [3, 4]. After differentiation, signals from the surrounding tissue environment or from pathogens induce changes in histone modifications reflecting the changes in activity of enhancers and promoters, including the de novo establishment of latent enhancers [5,6,7,8,9].
Transcription factors (TFs) are key regulators in the control of epigenetic changes [10, 11]. During the long-term process of differentiation, closed chromatin is first bound by pioneer TFs, which results in structural changes that make it accessible to other TFs and RNA polymerase II (Pol2) [6, 12]. Similarly, short-term changes in gene expression following stimulation of immune cells are regulated by TFs. This regulation is thought to involve TF binding, induction of changes in histone modifications, and recruitment of Pol2 [13,14,15,16]. However, details of the temporal ordering and causal relationships between these events remain poorly understood [17, 18]. Especially, it is unclear whether certain histone modifications are a requirement for, or a result of, TF binding and transcription [19,20,21].
As sentinel cells of the innate immune system, dendritic cells (DCs) are well equipped for detecting the presence of pathogens. Lipopolysaccharide (LPS), a component of the cell wall of Gram-negative bacteria, is recognized by DCs through the membrane-bound Toll-like receptor 4 (TLR4), resulting in the activation of two downstream signaling pathways [22]. One pathway is dependent on the adaptor protein MyD88 and leads to the activation of the TF NF-κB, which induces expression of proinflammatory cytokines. The other pathway involves the receptor protein TRIF, whose activation induces phosphorylation of the TF IRF3 by TBK1 kinase. The activated IRF3 induces expression of type I interferon, which in turn activates the JAK-STAT signaling pathway, by binding to the type I IFN receptor (IFNR) [23].
Here, we present a large-scale study of short-term changes in histone modifications in mouse DCs during the response to LPS. We focused on the timing of increases in histone modifications at promoters and enhancers, relative to the induction of transcription and to TF binding events. We observed that LPS stimulation induced increased levels of H3K9K14ac, H3K27ac, H3K4me3, and H3K36me3 at LPS-induced promoters and enhancers. Surprisingly, we observed clearly distinct patterns: accumulation of H3K9K14ac was early (between 0.5 and 2 h after stimulation), regardless of the timing of transcriptional induction of genes. Accumulation of H3K36me3 was late and spreads from the 3′ end of gene bodies towards the 5′ end, reaching promoters at later time points (between 8 and 24 h). H3K4me3 accumulation was later than that of H3K9K14ac (between 1 and 4 h) and was more correlated with transcriptional induction times. Integrated analysis with genome-wide binding data for 24 TFs revealed possible associations between increases in H3K9K14ac and H3K4me3 and binding by RelA, Irf1, and especially STAT1/2. LPS-induced accumulation of H3K9K14ac and H3K4me3 was severely impaired in Stat1−/− cells. Together, these results suggest that stimulus-induced dynamics in a subset of histone modifications reflect the timing of activation of stimulus-dependent TFs, while others are more closely associated with transcriptional activity.
Genome-wide measurement of histone modifications at promoter and enhancer regions
To elucidate the temporal ordering of stimulus-induced changes in transcription and chromatin structure, we performed chromatin immunoprecipitation experiments followed by high-throughput sequencing (ChIP-seq) for the following histone modifications in mouse DCs before and after LPS stimulation: H3K4me1, H3K4me3, H3K9K14ac, H3K9me3, H3K27ac, H3K27me3, and H3K36me3, as well as for Pol2 (Additional file 1: Figure S1), at ten time points (0 h, 0.5 h, 1 h, 2 h, 3 h, 4 h, 6 h, 8 h, 16 h, 24 h). We integrated these data with publicly available whole-genome transcription start site (TSS) data (TSS-seq) [24]. All data originated from the same cell type, treated with the same stimulus, with samples taken at the same time points. Snapshots of the data for a selection of features at four promoters are shown in Additional file 1: Figure S2.
Using this data collection, we defined 24,416 promoters (based on TSS-seq data and Refseq annotations) and 34,072 enhancers (based on H3K4me1-high/H3K4me3-low signals) (see the "Methods" section). For this genome-wide set of promoters and enhancers, we estimated the levels of histone modifications, Pol2 binding, and RNA reads over time (see the "Methods" section).
Epigenetic changes at inducible promoters and their enhancers
Recent studies using the same cell type and stimulus showed that most changes in gene expression patterns were controlled at the transcriptional level, without widespread changes in RNA degradation rates [25, 26]. We therefore defined 1413 LPS-induced promoters based on increases in TSS-seq reads after LPS stimulation. Similarly, for both promoters and enhancers, we defined significant increases in histone modifications and Pol2 binding by comparison to pre-stimulation levels. Our analysis suggested that changes were in general rare; only 0.7 to 5.3% of all promoters (Fig. 1a) and 0.2 to 11.0% of all enhancers (Fig. 1b) experienced significant increases in histone modifications and Pol2 binding. However, changes were frequent at LPS-induced promoters, especially for markers of activity such as Pol2 binding, H3K4me3, H3K27ac, and H3K9K14ac, as well as for H3K36me3 (Fig. 1a). For example, while only 957 promoters (out of a total of 24,416 promoters; 3.9%) experienced significant increases in H3K9K14ac, this included 27.6% of the LPS-induced promoters (390 out of 1413 promoters). To a lesser extent, we observed the same tendency at associated enhancers (Fig. 1b). The smaller differences at enhancers are likely to be caused by imperfect assignments of enhancers to LPS-induced promoters (i.e., we naively assigned enhancers to their most proximal promoter). Analysis of an independent ChIP-seq dataset originating from LPS-treated macrophages [6] revealed a high consistency between DCs and macrophages in LPS-induced increases in Pol2 binding, H3K27ac, and H3K4me3 at promoters and enhancers (see Additional file 1: section "Analysis of histone modification changes in LPS-treated macrophages" and Figure S3). The overlap in increases in H3K4me1 at enhancers was lower, though still statistically significant (p < 1e−4, based on 10,000 randomizations), possibly reflecting differences between DC- and macrophage-specific enhancers and the molecular processes that define these cell types.
Frequencies of induction of features at LPS-induced promoters. a The fraction of promoters (y axis) with increases in features (x axis) is shown for the genome-wide set of promoters (green) and for the LPS-induced promoters (orange). Increases in H3K4me3, H3K9K14ac, H3K27ac, H3K36me3, RNA, and Pol2 binding are observed frequently at LPS-induced promoters. Significance of differences was estimated using Fisher's exact test; *p < 1e−4; **p < 1e−6; ***p < 1e−10. b Same as a, for enhancers. c, d Heatmaps indicating the overlap in induction of pairs of features. Colors represent p values (− log10) of Fisher's exact test. White, low overlap; red, high overlap. Plots are shown for promoters (c) and enhancers (d)
LPS-induced promoters were less frequently associated with CpG islands (57%) than stably expressed promoters (87%, Additional file 1: Figure S4A) [27]. Non-CpG promoters more frequently had lower basal levels (i.e., levels at 0 h, before stimulation) of activation-associated histone modifications, such as H3K27ac, H3K9K14ac, and H3K4me3, and similarly lower levels of Pol2 binding and pre-stimulation gene expression (Additional file 1: Figure S4B). This partly explains the higher frequency of significant increases in histone modifications at LPS-induced promoters (Fig. 1a) and the higher fold-induction of genes associated with non-CpG promoters (Additional file 1: Figure S4C).
Previous studies have reported only limited combinatorial complexity between histone modifications, i.e., subsets of modifications are highly correlated in their occurrence [28, 29]. In our data too, basal levels of activation markers at promoters and, to a lesser degree, at enhancers were highly correlated (Additional file 1: Figure S5). Stimulus-induced accumulations of histone modifications and Pol2 binding at promoters and enhancers further support this view: increases in H3K9K14ac, H3K4me3, H3K36me3, H3K27ac, Pol2 binding, and transcription often occurred at the same promoters (Fig. 1c). Similarly, increases in H3K9K14ac, H3K27ac, Pol2 binding, and transcription often coincided at enhancer regions (Fig. 1d). In general, activated regions experienced increases in several activation markers.
Several histone modifications are induced at a specific time after stimulation
Previous studies have reported considerable dynamics in histone modifications in response to environmental stimuli (see the "Background" section) based on the analysis of small numbers of time points. Our dataset, however, allows the analysis of the order and timing of changes over an extended time period after stimulation. To this end, we analyzed the induction times of transcription activity, Pol2 binding, and histone modifications.
First, we inferred the transcriptional induction time of the 1413 LPS-induced genes (see the "Methods" section and Fig. 2a). In addition, we defined a set of 772 promoters with highly stable activity over the entire time course.
Induction times of transcription, Pol2 binding, and histone modifications at promoters as a function of their transcriptional induction times. a Heatmap showing the changes (white, no change; red, induction; blue, repression) in transcriptional activity of 1413 LPS-induced promoters and stably expressed promoters, relative to time point 0 h. At the right, induction times and the number of promoters induced at each time point are indicated. b–g Timing of increases in RNA-seq reads (b), Pol2 binding (c), H3K27ac (d), H3K9K14ac (e), H3K4me3 (f), and H3K36me3 (g) at LPS-induced promoters is shown as a function of their transcriptional induction times. The count of promoters with increases is indicated. Note that the sum of the counts differs between panels. Colors represent the fraction of promoters per transcriptional induction time
As a proof of concept, using an independent time series of RNA-seq samples, we confirmed that significant increases in RNA were seen at LPS-induced promoters in a consistent temporal order (Fig. 2b). For example, promoters with early induction of transcription initiation (TSS-seq) showed early induction of mapped RNA-seq reads, while those with later induction showed later induction of mapped reads. Stably expressed genes lacked induction of mapped RNA reads at their promoters. Significant increases in Pol2 binding were less frequent but followed a similar pattern (Fig. 2c).
However, the accumulation of histone modifications showed more varied patterns (Fig. 2d–g). Increases in H3K9K14ac were in general early, between 0.5 and 2 h after stimulation (Fig. 2e), and promoters with early induction of transcription (0.5 h, 1 h, 2 h) tended to have the earliest increases in H3K9K14ac (at 0.5 h). Even genes with transcriptional induction between 3 and 6 h had increases in H3K9K14ac between 0.5 and 2 h after stimulation. Therefore, the increases in acetylation at these promoters preceded the induction of transcription. Significant increases later than 3 h after stimulation were rare. In addition, increases were rare at promoters with late induction (16 h, 24 h) or at stably active promoters.
Increases in H3K4me3 were concentrated between 1 and 4 h after stimulation (Fig. 2f). In contrast with H3K9K14ac, increases in H3K4me3 were rare at time point 0.5 h. Accumulation of H3K4me3 was frequent at promoters with transcriptional induction between 1 and 4 h, but—in contrast with H3K9K14ac—it was rare at immediate-early promoters.
Finally, H3K36me3 was only induced at later time points (between 8 and 24 h), regardless of transcriptional induction times of promoters (Fig. 2g). In contrast with H3K9K14ac and H3K4me3, H3K36me3 is located within gene bodies and peaks towards their 3′ end (Additional file 1: Figure S6) [30]. Upon stimulation, H3K36me3 gradually accumulated within the gene bodies of LPS-induced genes, spreading towards the 5′ end, and reached the promoter region at the later time points in our time series (Additional file 1: Figure S6A). Stably expressed genes had on average high basal levels of H3K36me3, with only limited changes over time. However, interestingly, even for stably expressed genes, an accumulation of H3K36me3 was observed towards their 5′ end at time points 16–24 h (Additional file 1: Figure S6B).
Remarkably, the induction times of H3K9K14ac, H3K4me3, and H3K36me3 at promoters did not change depending on their basal levels (Additional file 1: Figure S7); regardless of their pre-stimulus levels, increases in H3K9K14ac were early, followed by H3K4me3, and H3K36me3 accumulation was late. This might indicate that a common mechanism is regulating these accumulations, regardless of basal levels. No differences in the accumulation times were observed between non-CpG promoters and CpG island-associated promoters (Additional file 1: Figure S8).
Compared to H3K9K14ac, H3K4me3, and H3K36me3, significant increases in H3K27ac appeared to be less frequent at promoters (Fig. 1a), and their timing coincided with the induction of transcription (Fig. 2d). Increases in H3K9me3, H3K27me3, and H3K4me1 were rare at promoters (Additional file 1: Figure S9).
The early accumulation of H3K9K14ac, followed by H3K4me3, was confirmed using an independent replicate TSS-seq and ChIP-seq time series dataset with lower temporal resolution (time points 0 h, 1 h, 2 h, and 4 h; Additional file 1: Figure S10). Although accumulation of both modifications was earlier in the duplicate data than in the original time series, their relative ordering was preserved: increases in H3K9K14ac at 1 h were more frequent than increases in H3K4me3 in the original data (58% vs 30% of loci), as well as in the replicate data (89% vs 71% of loci). Additional replication was performed using RT-qPCR and ChIP-qPCR measuring RNA, H3K9K14ac, and H3K4me3 (see wild-type (WT) data in Additional file 1: Figure S11), as well as H3K36me3 (Additional file 1: Figure S12), at the promoters of nine LPS-induced genes. Here too, accumulation of H3K9K14ac and H3K4me3 occurred early, with H3K9K14ac preceding H3K4me3, while accumulation of H3K36me3 occurred later. The early (between 0 and 4 h) timing of LPS-induced increases in H3K27ac and H3K4me3 was further supported by the analysis of an independent ChIP-seq time series dataset (0, 4, and 24 h) originating from LPS-treated macrophages [6] (see Additional file 1: section "Analysis of histone modification changes in LPS-treated macrophages" and Figure S13).
Correlation between LPS-induced TF binding and increases in epigenetic features
To reveal potential regulatory mechanisms underlying the epigenetic changes induced by LPS, we performed an integrative analysis of our histone modification data with TF binding data. For this, we used a publicly available ChIP-seq dataset for 24 TFs with high expression in mouse DCs [31], before and after treatment with LPS (typical time points include 0 h, 0.5 h, 1 h, and 2 h; see the "Methods" section).
Initial analysis confirmed the known widespread binding of promoters by PU.1 and C/EBPβ, and to a lesser degree by IRF4, JUNB, and ATF3 [31] (Additional file 1: Figure S14A), and the known association between H3K4me1 and binding by PU.1 and C/EBPβ (Additional file 1: Figure S15A,B) [12, 15]. LPS-induced promoters were frequently bound by TFs controlling the response to LPS, such as NF-κB (subunits NFKB1, REL, and RELA) and STAT family members (Additional file 1: Figure S14B).
Focusing on the overlap between LPS-induced TF binding at promoters and enhancers and the accumulation of epigenetic features, we found that binding of promoters by RelA, IRF1, STAT1, and STAT2 was especially associated with increases in H3K9K14ac, H3K4me3, H3K36me3, transcription, and, to a lesser degree, Pol2 binding and H3K27ac (Fig. 3, left; Fisher's exact test). For example, of the 418 promoter regions that became newly bound by STAT1 after stimulation, 223 (53.3%) experienced increases in H3K9K14ac (vs 3.0% of promoters not bound by STAT1; p = 8.3e−205). LPS-induced binding by the same four TFs was also strongly associated with increases in H3K9K14ac and H3K27ac at enhancers (Fig. 3, right). Combinations of these four TFs often bind to the same promoters and enhancers (Additional file 1: Figure S14C,D), and STAT1 functions both as a homodimer and as a heterodimer with STAT2 [32]. LPS-induced TFs, including NF-κB and STAT family members, have been shown to bind preferentially at loci that are pre-bound by PU.1, C/EBPβ, IRF4, JUNB, and ATF3 [31]. Accordingly, histone modifications were also more frequently observed at regions that were pre-bound by these five TFs (Additional file 1: Figure S16).
Associations between LPS-induced TF binding at promoters (left) and enhancers (right) and increases in histone modifications, Pol2 binding, and transcription at the newly bound regions. Colors in the heatmap represent the degree of co-incidence (Fisher's exact test, − log10 p values) between new TF binding events (rows) and increases (columns). TFs (rows) have been grouped through hierarchical clustering by similarity
Weaker associations were found for LPS-induced binding by other NF-κB subunits (NFKB1, REL, and RELB), TFs with pervasive binding even before stimulation (C/EBPβ, ATF3, JUNB, and IRF4), and E2F1, which has been shown to be recruited by NF-κB through interaction with RelA [33].
Together, these results suggest a strong correlation between increases in activation-associated histone modifications and LPS-induced binding by RelA, IRF1, STAT1, and STAT2.
STAT1 and STAT2 binding coincides with accumulation of H3K9K14ac and precedes accumulation of H3K4me3
The relative timing of LPS-induced TF binding events and increases in histone modifications can reflect potential causal relationships. Particularly, many LPS-induced promoters show increases in H3K9K14ac between 0.5 and 2 h after LPS stimulation (Fig. 2e), and we found a strong overlap between increases in H3K9K14ac and binding by STAT1 (Fig. 3). STAT1 is not active before stimulation, and its activity is only induced about 2 h after LPS stimulation [34], resulting in a strong increase in STAT1-bound loci (from 56 STAT1-bound loci at 0 h to 1740 loci at 2 h; Additional file 1: Figure S14B).
We observed a particularly strong coincidence in timing between STAT1 binding and increases in H3K9K14ac (Fig. 4a): genomic regions that become bound by STAT1 at 2 h show a coinciding sharp increase in H3K9K14ac around the STAT1 binding sites. At promoters and enhancers that became bound by STAT1 at 2 h, the induction of H3K9K14ac was particularly frequent (Fig. 4b, c). Of the 407 promoters and 378 enhancers that become bound by STAT1 at 2 h after stimulation, 222 (54%) and 214 (57%) have an increase in H3K9K14ac (vs only 3.0% of promoters and 3.3% of enhancers lacking STAT1 binding). These increases were especially frequent at the 2-h time point (Fig. 4b, c).
Interaction between STAT1 binding and accumulation of H3K9K14ac (a–c) and H3K4me3 (d, e). a For all genomic regions bound by STAT1 at 2 h after LPS stimulation, mean H3K9K14ac signals are shown over time. Left: profile of mean values (y axis) over time in bins of 100 bps as a function of distance (x axis) to the TF binding site. Right: mean values (y axis) summed over the region − 2 to + 2 kb over all bound regions, over time (x axis). The red arrow indicates the time at which these regions become bound by STAT1. b The fraction of promoters with increases in H3K9K14ac at each time point after stimulation (x axis). Blue, the 409 promoters bound by STAT1 at time 2 h; red, 23,964 promoters not bound by STAT1 at any time point. c As in b, for 378 enhancer regions bound and 33,693 not bound by STAT1. d As in a, for H3K4me3 at the genomic regions bound by STAT1 2 h after LPS stimulation. e As in b, for increases in H3K4me3 at 409 promoters bound by STAT1 and 23,964 promoters not bound by STAT1
Similar to H3K9K14ac, we observed a general increase in H3K4me3 around STAT1 binding sites (Fig. 4d), between 2 and 4 h after stimulation. Accordingly, only 21 STAT1-bound promoters (out of 409; 5.1%) had significant increases between 0.5 and 1 h, but an additional 140 promoters (34%) experienced increases at the following time points (2–4 h; Fig. 4e). As noted above, H3K4me3 was in general absent at enhancers.
Similar patterns were observed for enhancers and promoters bound by STAT2 2 h after stimulation (Additional file 1: Figure S17). In contrast, regions bound by RelA (Additional file 1: Figure S18) and IRF1 (Additional file 1: Figure S19) showed increased levels of H3K27ac and to a lesser degree H3K9K14ac at earlier time points. Associations with H3K9K14ac induction after 2 h were weak compared to STAT1/2. Average increases in H3K4me3 at RelA- and IRF1-bound regions were only modest (Additional file 1: Figure S18G-I and S19G-I), suggesting that the association between RelA- and IRF1-binding and H3K4me3 as seen in Fig. 3 is mostly through co-binding at STAT1/2-bound regions. Associations between histone modifications and binding by other TFs were in general weak (not shown; see Fig. 3). No changes were observed in H3K4me1 at STAT1/2-bound regions (Additional file 1: Figure S20A). Although there was a tendency for STAT1/2-bound loci to have increases in H3K27ac, binding seemed to slightly lag behind H3K27ac induction (Additional file 1: Figure S20B). Finally, although STAT1/2-bound regions tended to experience increases in H3K36me3, there was a large time lag between binding and increases in this modification (Additional file 1: Figure S20C). This is also true for other TFs, such as RelA and IRF1, and even PU.1 and C/EBPβ, regardless of the timing of TF binding (Additional file 1: Figure S15C-F).
These results suggest possible causal relationships between STAT1/2 binding and the accumulation of H3K9K14ac and H3K4me3. The specific timing of increases in these modifications might reflect the timing of activation of these TFs, resulting in the recruitment of acetyl transferases and methyl transferases to specific promoter and enhancer regions.
LPS-induced changes in H3K9K14ac and H3K4me3 are strongly affected in Stat1−/− cells
We decided to further investigate the role of STAT1 in controlling the changes in histone modifications. In Trif−/− knockout (KO) cells, LPS-induced type I IFN production, activation of the JAK-STAT pathway, and activation of STAT1 and STAT2 target genes are severely impaired [23]. Using Trif−/− and MyD88−/− DCs, we defined a set of TRIF-dependent genes (Additional file 1: Figure S21A) and confirmed that they were frequently bound by STAT1/2 (Additional file 1: Figure S21B). We observed that promoters of TRIF-dependent and STAT1/2-bound genes frequently had LPS-induced increases in H3K9K14ac and H3K4me3 (Additional file 1: Figure S21C,D).
RT-qPCR and ChIP-qPCR experiments in WT, Trif−/−, Irf3−/−, and Ifnar1−/− cells showed that a subset of TRIF-dependent and STAT1/2-bound genes (in particular Ifit1 and Rsad2) showed increases in H3K9K14ac and H3K4me3 in WT but not in KO cells (Additional file 1: Figure S11 and section "A Subset of STAT1/2 Target Genes lack Induction of H3K9K14ac and H3K4me3 in Trif−/−, Irf3−/−, and Ifnar1−/− cells").
Furthermore, stimulation of WT cells using IFN-β induced expression of Ifit1 and Rsad2, and accumulation of H3K9K14ac and H3K4me3 at their promoters (Additional file 1: Figure S22). In this system, the activation of the IFNR signaling pathway and of STAT1/2 is independent of TRIF. Accordingly, this IFN-β-induced accumulation of H3K9K14ac and H3K4me3 was not affected in Trif−/− cells, further supporting a role for STAT1/2 in the control of these modifications at these genes.
Finally, we performed new ChIP-seq analysis of H3K9K14ac and H3K4me3 in WT and Stat1−/− DCs (Fig. 5). Genomic regions bound by STAT1 showed a sharp increase in H3K9K14ac (Fig. 5a) and H3K4me3 (Fig. 5d) in WT cells, reproducing the observations from our first time series data (Fig. 4a, d). However, this increase was completely abrogated in Stat1−/− cells (Fig. 5a, d). Focusing on promoter sequences, we noted 321 promoters that had increases in H3K9K14ac in WT but not in KO (Fig. 5b). These promoters were frequently bound by STAT1/2, IRF1, and NF-κB in WT cells (Fig. 5c). On the other hand, 184 promoters had increases in H3K9K14ac in the Stat1−/− cells but not in WT (Fig. 5b). These promoters lacked binding by STAT1/2 in WT (Fig. 5c), and the KO-specific increase in H3K9K14ac might be the result of a different set of TFs recruiting histone modifiers to these promoters, in the absence of functional STAT1. One such TF might be HIF1A, which binds a subset of these promoters but not promoters with H3K9K14ac increases in WT (Fig. 5c) and has been reported to be repressed by STAT1 [35]. Similar observations were made for H3K4me3 induction in WT and Stat1−/− cells (Fig. 5e, f).
Dynamics of H3K9K14ac and H3K4me3 in Stat1−/− cells. a For all genomic regions bound by STAT1 at 2 h after LPS stimulation, mean H3K9K14ac signals in bins of 100 bps (y axis) are shown over time in WT and in Stat1−/− KO cells, as a function of distance (x axis) to the STAT1 binding site in WT cells. b A Venn diagram showing the counts of promoters with significant increases in H3K9K14ac in WT, Stat1−/− KO, and both. c For promoters with increases in H3K9K14ac in WT and/or KO, the fraction bound by a selection of TFs is shown. d–f Same data for H3K4me3 in WT and Stat1−/− KO cells
The concept of active genes being in an open chromatin conformation was introduced several decades ago [36], but the contribution of histone modifications to the control of gene activity remains controversial [17]. In contrast, the contribution of TFs to regulating gene expression is widely recognized [37], and several studies have identified important crosstalk between TFs and histone modifiers in the regulation of the response to immune stimuli [6, 7, 38,39,40,41,42]. Nevertheless, our understanding of the causal relationships between TF binding, changes in histone modifications, and changes in the transcriptional activity of genes in response to stimuli remains limited.
Analysis of the ordering of events over time can reveal insights into possible causal relationships or independence between them. Here, we have presented an integrative study of the timing and ordering of changes in histone modifications as a function of transcriptional induction in response to an immune stimulus. Our results suggest that, while the dynamics of some histone modifications are closely associated with transcriptional activity, other modifications appear to be induced at specific time frames after stimulation. For a subset of modifications (e.g., H3K9K14ac and H3K36me3), these time frames appear to be non-coincident with the timing of induction of transcription.
In our dataset, we observed roughly three patterns of modifications. The first was early induction of H3K9K14ac, which occurs mainly in the first 2 h after stimulation. A second pattern consisted of increases in H3K4me3 and H3K27ac, roughly coinciding with induction of transcription. Finally, a third pattern consisted of changes in H3K36me3, occurring only around 8–24 h after stimulation. Although H3K4me3 is widely used as a marker for active genes, the functional role of this modification is still unclear. For example, the deletion of Set1, the only H3K4 methyltransferase in yeast, resulted in slower growth than in wild type but otherwise appeared to have only limited effects on transcription [19]. Other studies too have reported a lack of a direct effect of H3K4me3 on transcription [20, 21]. Several experiments by Cano-Rodriguez and colleagues illustrate that transcription can be transiently induced in the absence of H3K4me3 and that loci-specific induction of H3K4me3 had no or limited effect on transcription [43]. Another study showed that the H3K4 methyltransferase Wbp7/MLL4 directly controls expression of only a small fraction of genes [44]. In contrast, fluorescence microscopy experiments have shown that H3K27ac levels can alter Pol2 kinetics by up to 50% [21].
Since the induction of remodeling appears to occur specifically at LPS-induced genes, it is likely that histone modifiers are recruited by one or more LPS-activated TFs to specific target regions in the genome defined by the binding specificity of the TFs. In this model, primary response regulators could control immediate stimulus-induced changes in transcription and histone modifications, while the later "waves" could depend to different degrees on (1) the process of transcription itself, (2) subsequent activation of secondary regulators, and (3) the presence of other histone modifications (Fig. 6). This fits well with our observations for STAT1/2 and the induction of H3K9K14ac and H3K4me3, within specific time frames and mostly restricted to LPS-induced promoters, and the later establishment of H3K36me3. Other studies have reported associations between STAT1 binding and changes in epigenetic markers following environmental stimulation, including the activation of latent enhancers [6] and histone acetylation [40, 45]. Moreover, epigenetic priming by histone acetylation through STAT1 binding to promoters and enhancers of Tnf, Il6, and Il12b has been reported, resulting in enhanced TF and Pol2 recruitment after subsequent TLR4 activation [46]. These primed regions were reported to have sustained binding by STAT1 and IRF1 and prolonged associations with CBP/p300 and constitute a stable, stimulus-induced chromatin state. The step-by-step establishment of histone modifications could reflect one way of regulating this process, with combinations of regulators deciding whether a locus will reach a stably active/poised state or whether it will return to the basal inactive state (Fig. 6). Since TFs such as STAT1 are also known to induce gene expression, one might expect the timing of increases in histone modifications to co-occur with induction of expression. However, as we described here, and as supported by the above studies, this is not necessarily the case. Gene expression is known to be regulated by combinations of TFs, and in this study too, we noticed that LPS-activated TFs such as NF-κB, IRF1, and STATs often bound to the same loci (Additional file 1: Figure S14), which were moreover often pre-bound by several other TFs, including PU.1 and C/EBPβ. Discrepancies between timing of expression induction and accumulation of histone modifications could be caused by different requirements for combinatorial binding. This could also explain widely reported "non-functional" TF binding, where TF binding does not seem to affect the activity of nearby genes [47]. Such "non-functional" TF binding might instead trigger changes in histone modifications that remain unnoticed and affect gene activity in more subtle ways.
Model for the different patterns of stimulus-induced histone modifications. (i) Stimulation induces the binding of primary TFs and their interacting histone modifiers (orange) at regions pre-defined by lineage-defining TFs (gray), leading to early increases in histone modifications. (ii) Secondary regulators, Pol2, and interacting histone modifiers (green) establish additional modifications at specific time points. (iii) Downstream regulators and existing histone modifications lead to further recruitment of histone modifiers (purple), establishing a stably active chromatin state
Although many studies have compared histone modifications before and after stimulation, most lack sufficient time points and resolution to allow analysis of temporal ordering of changes. One recent study in yeast reported results that are partly similar to ours [48]: specific modifications (especially, but not only, acetylation) occurred at earlier time frames during the response of yeast to diamide stress, while others occurred at later time points. Another study in yeast showed that H3K9ac deposition appeared before the passing of the replication fork during DNA replication, while tri-methylations took more time to be established [49]. Interestingly, in these studies, typical time frames for changes in histone modifications (including H3K36me3) are less than 1 h after stimulation or replication. In contrast, changes in H3K36me3 in our data appeared 8–24 h after stimulation. Thus, time scales of stimulus-induced epigenetic changes in multicellular, higher mammalian systems might be considerably longer. Interestingly, increases in H3K36me3 around 16–24 h often coincide with a decrease in histone acetylation towards pre-stimulation levels at LPS-induced promoters. A study in yeast suggested that H3K36me3 plays a role in the activation of a histone deacetylase [50] and might therefore play a role in the return to a basal state of histone modifications and terminating the response to stimulus.
Our time series ChIP-seq data and analysis present a first genome-wide view of the timing and order of accumulation of histone modifications during a stress response in mammalian immune cells. The stimulus-induced accumulation of H3K9K14ac, H3K4me3, H3K27ac, and H3K36me3 followed distinct patterns over time. Integrative analysis suggests a role for STAT1/2 in triggering increases in H3K9K14ac and H3K4me3 at stimulus-dependent promoters and enhancers. Differences in interactions between histone modifiers, TFs, and the transcriptional machinery are likely causes for the different patterns of dynamics in histone modifications.
Reagents, cells, and mice
Bone marrow cells were prepared from C57BL/6 female mice and cultured in RPMI 1640 supplemented with 10% fetal bovine serum in the presence of murine granulocyte/monocyte colony-stimulating factor (GM-CSF, purchased from Peprotech) at a concentration of 10 ng/mL. Floating cells were harvested as bone marrow-derived dendritic cells (BM-DCs) after 6 days of culture, with the medium changed every 2 days. The cells were stimulated with LPS (Salmonella minnesota Re595, purchased from Sigma) at a concentration of 100 ng/mL for 0, 0.5, 1, 2, 3, 4, 6, 8, 16, and 24 h and were subjected to RNA extraction or fixation. Murine IFN-β was purchased from Pestka Biomedical Laboratories and was used to stimulate the cells at a concentration of 1 × 10^2 units/mL. TRIF-, IRF3-, or IFNR-deficient mice have been described previously [51,52,53]. Stat1-deficient mice have been described previously [54].
ChIP-seq experiments
For each time point, thirty million BM-DCs were stimulated with LPS and subjected to fixation by the addition of 1/10 volume of fixation buffer (11% formaldehyde, 50 mM HEPES pH 7.3, 100 mM NaCl, 1 mM EDTA pH 8.0, 0.5 mM EGTA pH 8.0). The cells were fixed for 10 min at room temperature and immediately washed three times with PBS. ChIP and sequencing were performed as described (Kanai et al., DNA Res, 2011). Fifty microliters of lysate after sonication was aliquoted as a "whole cell extract" (WCE) control for each IP sample. Antibodies used were Pol2 (05-623, Millipore), H3K4me3 (ab1012, Abcam), H3K9K14ac (06-599, Millipore), H3K36me3 (ab9050, Abcam), H3K9me3 (ab8898, Abcam), H3K27me3 (07-449, Millipore), H3K4me1 (ab8895, Abcam), and H3K27ac (ab4729, Abcam).
RNA extraction and RT-qPCR
One million BM-DCs were stimulated with LPS for the indicated times and subjected to RNA extraction using TRIzol (Invitrogen) according to the manufacturer's instructions. RNAs were reverse transcribed using ReverTra Ace (Toyobo). The resulting cDNAs were used for qPCR with Thunderbird SYBR master mix (Toyobo) and custom primer sets (Additional file 1: Table S1). qPCR was performed on a LightCycler Nano (Roche).
ChIP-qPCR
ChIP was performed as described above, except that 4 × 10^6 cells were used. The resulting ChIP DNAs were subjected to qPCR as in the RT-qPCR procedure, using custom primer sets (Additional file 1: Table S2).
Peak calling and processing of ChIP-seq data
For each histone modification and for Pol2 binding data, we aligned reads to the genome and conducted peak calling and further processing as follows.
We mapped sequenced reads of ChIP-seq IP and control (WCE) samples against the mm10 version of the mouse genome using Bowtie2 (version 2.0.2) with the "--very-sensitive" parameter [55]. Processing of alignment results, including filtering out low-MAPQ alignments (MAPQ score < 30), was performed using samtools [56].
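For orientation only, the mapping and filtering steps can be scripted roughly as follows; this is a minimal sketch in which the FASTQ file name, the Bowtie2 index prefix, and the output names are hypothetical placeholders rather than part of the original pipeline.

```python
import subprocess

sample = "H3K9K14ac_2h"  # hypothetical sample name

# Align reads with Bowtie2 in --very-sensitive mode against an mm10 index.
subprocess.run(
    f"bowtie2 --very-sensitive -x mm10 -U {sample}.fastq.gz -S {sample}.sam",
    shell=True, check=True)

# Keep alignments with MAPQ >= 30, then sort and index the result with samtools.
subprocess.run(
    f"samtools view -b -q 30 {sample}.sam | samtools sort -o {sample}.bam -",
    shell=True, check=True)
subprocess.run(f"samtools index {sample}.bam", shell=True, check=True)
```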
We predicted peaks for each time point using MACS (version 1.4.2) [57], using each IP sample as input and its corresponding WCE sample as control. To improve the detection of both narrow and broad peaks, peak calling was performed using default settings and also using the "nomodel" parameter with "shiftsize" set to 73. Negative control peaks were also predicted in the control sample using the IP sample as reference. Using the predicted peaks and negative control peaks, we set a threshold score corresponding to a false discovery rate (FDR) of 0.01 (number of negative control peaks vs true peaks), for each time point separately. All genomic regions with predicted peaks were collected over all 10 time points, and overlapping peak regions between time points were merged together. Moreover, we merged together peak regions separated by less than 500 bps. This gave us a collection of all genomic regions associated with a peak region in at least one sample of the time series.
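The collection and merging of peak regions across time points (overlapping regions, plus regions separated by less than 500 bps) can be illustrated with a short sketch; the function name and the example coordinates are ours and serve only to make the merging rule concrete.

```python
def merge_peak_regions(regions, max_gap=500):
    """Merge overlapping peak regions, plus regions separated by < max_gap bps.

    regions: list of (chrom, start, end) tuples collected over all time points.
    Returns a list of merged (chrom, start, end) tuples.
    """
    merged = []
    for chrom, start, end in sorted(regions):
        if merged and merged[-1][0] == chrom and start - merged[-1][2] < max_gap:
            # Extend the previous region instead of starting a new one.
            prev = merged[-1]
            merged[-1] = (chrom, prev[1], max(prev[2], end))
        else:
            merged.append((chrom, start, end))
    return merged

# Example: peaks collected from two time points on the same chromosome.
peaks = [("chr1", 1000, 1400), ("chr1", 1300, 1800), ("chr1", 2100, 2500), ("chr1", 10000, 10300)]
print(merge_peak_regions(peaks))
# [('chr1', 1000, 2500), ('chr1', 10000, 10300)]
```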
Next, we counted the number of reads mapped to each region at each time point for both the IP samples and the WCE control samples. Using these counts, we performed a read count correction as described by Lee et al. [58]. Briefly, this method subtracts from the number of IP sample reads aligned to each peak region the expected number of non-specific reads, given the number of reads aligned to the region in the corresponding WCE sample. The resulting corrected read count is an estimate of the number of IP reads in a region that would remain if no WCE reads were present [58]. This correction is necessary for the quantitative comparison of ChIP signals over time in the downstream analysis.
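A simplified sketch of this correction is shown below; the scaling of WCE counts by relative library size and the clipping at zero are our own assumptions, and the exact procedure is the one described by Lee et al. [58].

```python
import numpy as np

def correct_read_counts(ip_counts, wce_counts, total_ip, total_wce):
    """Subtract the expected number of non-specific reads (estimated from the
    WCE control) from the IP read count of each peak region.

    Simplified sketch: WCE counts are scaled by relative library size and the
    corrected counts are clipped at zero (both are assumptions of this sketch).
    """
    expected_nonspecific = wce_counts * (total_ip / total_wce)
    return np.maximum(ip_counts - expected_nonspecific, 0)

ip = np.array([120, 40, 300])    # IP reads per region (hypothetical)
wce = np.array([30, 35, 20])     # WCE reads per region (hypothetical)
print(correct_read_counts(ip, wce, total_ip=2e7, total_wce=2.5e7))
```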
Finally, the corrected read counts were converted to reads per kilobase per million reads (RPKM) values (using read counts and the lengths of each region) and normalized using quantile normalization, under the assumption that their genome-wide distribution does not change substantially during each time series. The normalized RPKM values were converted to reads per million reads (ppm) values.
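The conversion and normalization steps might look as follows in a minimal sketch (library sizes and example values are hypothetical; the final conversion of the normalized values to ppm is not shown).

```python
import numpy as np

def rpkm(counts, region_lengths_bp, total_reads):
    """Convert corrected read counts to reads per kilobase per million reads."""
    return counts / (region_lengths_bp / 1e3) / (total_reads / 1e6)

def quantile_normalize(matrix):
    """Quantile-normalize the columns (time points) of a regions x time points matrix."""
    ranks = np.argsort(np.argsort(matrix, axis=0), axis=0)   # rank of each value within its column
    mean_sorted = np.sort(matrix, axis=0).mean(axis=1)       # mean value at each rank
    return mean_sorted[ranks]

counts = np.array([[10., 0., 40.], [5., 8., 2.]])            # 2 regions x 3 time points (hypothetical)
lengths = np.array([2000., 500.])                            # region lengths in bp
lib_sizes = np.array([2e7, 1.8e7, 2.2e7])                    # total reads per sample
values = rpkm(counts, lengths[:, None], lib_sizes[None, :])
print(quantile_normalize(values))
```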
TSS-seq data processing and promoter definition
TSS-seq data for BM-DCs before and after stimulation with LPS was obtained from the study by Liang et al. [24] (DDBJ accession number DRA001234). TSS-seq data reflects transcriptional activity and also allows for the detection of TSSs on a genome-wide scale at 1-base resolution [59]. Mapping of TSS-seq samples was done using Bowtie2, as for the ChIP-seq data. The location (5′ base) of the alignment of TSS-seq reads to the genome indicates the nucleotide at which transcription was started. In many promoters, transcription is initiated preferentially at one or a few bases. Because of this particular distribution of TSS-seq reads mapped to the genome, default peak calling approaches cannot be applied. Instead, we used the following scanning window approach for defining regions with a significantly high number of aligned TSS-seq reads.
The number of TSS-seq reads mapped to the genome in windows of size 1, 10, 50, 100, 500, and 1000 bases was counted in a strand-specific way, in steps of 1, 1, 5, 10, 50, and 100 bases, respectively. As a control, a large number of sequences was randomly selected from the mouse genome and mapped using the same strategy, until the same number of alignments as in the true data was obtained. For these random regions too, the number of reads was counted using the same scanning window approach. The distributions of actual read counts and control read counts were used to define an FDR-based threshold (FDR: 0.001) for each window size. For overlapping regions with significantly high read counts, the region with the lowest associated FDR was retained.
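The scanning window counting and FDR thresholding might be sketched as follows for a single window size on one strand; the counting by 5′ read position and the exact form of the FDR ratio are our reading of the text, not the original implementation.

```python
import numpy as np

def window_counts(read_positions, window, step, chrom_len):
    """Count reads (by their 5' position) in sliding windows along one chromosome strand."""
    read_positions = np.sort(np.asarray(read_positions))
    starts = np.arange(0, max(chrom_len - window, 0) + 1, step)
    # For each window, count reads with start <= position < start + window.
    left = np.searchsorted(read_positions, starts, side="left")
    right = np.searchsorted(read_positions, starts + window, side="left")
    return starts, right - left

def fdr_threshold(true_counts, control_counts, fdr=0.001):
    """Smallest count t at which (# control windows >= t) / (# true windows >= t) <= fdr."""
    for t in range(1, int(true_counts.max()) + 1):
        n_true = np.sum(true_counts >= t)
        n_ctrl = np.sum(control_counts >= t)
        if n_true > 0 and n_ctrl / n_true <= fdr:
            return t
    return None
```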
To remove potentially noisy TSSs, we discarded TSSs located within 3′ UTRs and TSSs located > 50 kb upstream of any known gene. For the remaining TSSs, we used a simple model (see Additional file 1: section Supplementary Methods) (1) to decide the representative TSS location in case a promoter region contained several candidate main TSSs, (2) to remove TSS-seq hits lacking typical features of promoters (e.g., presence of only TSS-seq reads in absence of histone modifications and Pol2 binding), and (3) to decide the main promoter of a gene in case there were multiple candidates. Finally, we obtained 9964 remaining high-confidence TSSs, each assigned to a single Refseq gene.
These TSS-seq-based TSSs were supplemented with 14,453 non-overlapping Refseq-based TSSs for all Refseq genes which did not have an assigned high-confidence TSS-seq-based TSS. Most of the genes associated with these TSSs had lower expression in our RNA-seq data (mostly RPKM is 0 or < 1; not shown). Together, TSS-seq-based TSSs and Refseq-based TSSs resulted in a total of 24,416 promoter regions.
CpG-associated promoters were defined as those having a predicted CpG island (from the UCSC Genome Browser Database) in the region − 1 to + 1 kb surrounding the TSS [60]. Other promoters were considered to be non-CpG promoters.
Definition of enhancers
Enhancers were defined based on the signals of H3K4me1 and H3K4me3. First, we collected all genomic regions with significantly high levels of H3K4me1 (see the "Peak calling and processing of ChIP-seq data" section) in at least one of the ten time points. Regions located proximally (< 2 kb distance) to promoter regions and exons were removed, because they are likely to be weak H3K4me1 peaks observed around promoters, as were H3K4me1-positive regions of excessively large size (> 10 kb). Finally, we removed regions where the H3K4me1 signal was lower than five times the H3K4me3 signal, resulting in 34,072 remaining enhancers.
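These filters can be summarized as a simple predicate applied to each H3K4me1-positive region; in this sketch, the distance to the nearest promoter and the exon-overlap flag are assumed to be precomputed, and the function name is ours.

```python
def is_enhancer(h3k4me1, h3k4me3, size_bp, dist_to_nearest_promoter_bp, overlaps_exon):
    """Apply the enhancer filters described above to a single H3K4me1-positive region.

    h3k4me1, h3k4me3: signal levels of the region (e.g., ppm values).
    """
    if dist_to_nearest_promoter_bp < 2_000 or overlaps_exon:
        return False                 # likely a weak H3K4me1 signal around a promoter or exon
    if size_bp > 10_000:
        return False                 # excessively large regions are discarded
    return h3k4me1 >= h3k4me3 * 5    # require an H3K4me1-high / H3K4me3-low signal

print(is_enhancer(h3k4me1=4.2, h3k4me3=0.3, size_bp=1800,
                  dist_to_nearest_promoter_bp=35_000, overlaps_exon=False))  # True
```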
Enhancers were naively assigned to the nearest promoter (TSS-seq-based or Refseq-based) that was separated from it by < 150 kb (center-to-center). For 30,448 enhancers (89%), a promoter could be assigned.
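The naive nearest-promoter assignment (within 150 kb, center-to-center) could look like the sketch below; coordinates on a single chromosome are assumed for brevity, and unassigned enhancers are marked with −1.

```python
import numpy as np

def assign_enhancers_to_promoters(enhancer_centers, promoter_centers, max_dist=150_000):
    """Assign each enhancer to its nearest promoter (center-to-center distance),
    provided the distance is below max_dist; otherwise mark it as unassigned (-1)."""
    promoter_centers = np.asarray(promoter_centers)
    assignments = []
    for c in enhancer_centers:
        d = np.abs(promoter_centers - c)
        i = int(np.argmin(d))
        assignments.append(i if d[i] < max_dist else -1)
    return assignments

print(assign_enhancers_to_promoters([5_000, 400_000], [1_000, 200_000]))  # [0, -1]
```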
Public ChIP-seq data for TFs
Genome-wide binding data (ChIP-seq) are available for mouse DCs before and after stimulation with LPS, for a set of 24 TFs with a known important role and/or high expression in DCs [31] (GEO accession number GSE36104). TFs (or TF subunits) included in this dataset are Ahr, ATF3, C/EBPβ, CTCF, E2F1, E2F4, EGR1, EGR2, ETS2, HIF1a, IRF1, IRF2, IRF4, JUNB, MafF, NFKB1, PU.1, Rel, RelA, RelB, RUNX1, STAT1, STAT2, and STAT3. Typically, time points in this dataset are 0 h, 0.5 h, 1 h, and 2 h following LPS stimulation (some TFs lack one or more time points). We used the ChIP-seq-based peak scores and score thresholds provided by the original study as indicators of significant TF binding.
Promoters (region − 1 to + 1 kb around TSS) and enhancers (entire enhancer region or region − 1 to + 1 kb around the enhancer center for enhancers < 2 kb in size) were considered to be bound by a TF if they overlapped a ChIP-seq peak with a significantly high peak score. New binding events by a TF at a region were defined as time points with a significantly high score where all previous time points lacked significant binding.
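To make the definition of new binding events concrete, a small sketch follows; the dictionary-based representation of peak scores and the example threshold are ours.

```python
def new_binding_times(peak_scores, threshold):
    """Return time points at which a region becomes newly bound by a TF.

    peak_scores: dict mapping time point (h) -> peak score (None if no peak was called).
    A new binding event is a time point with a significantly high score where all
    previous time points lacked significant binding.
    """
    events = []
    bound_before = False
    for t in sorted(peak_scores):
        significant = peak_scores[t] is not None and peak_scores[t] >= threshold
        if significant and not bound_before:
            events.append(t)
        bound_before = bound_before or significant
    return events

print(new_binding_times({0: None, 0.5: 12, 1: 40, 2: 55}, threshold=30))  # [1]
```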
Definition of induction of histone modifications and Pol2 binding
In order to analyze induction times of increases in histone modifications and Pol2 binding, we defined the induction time of a feature as the first time point at which a significant increase was observed compared to its original basal level (at 0 h). Significant increases were defined using an approach similar to that of methods such as DESeq and voom [61, 62], which evaluate changes between samples while taking into account the expected variance or dispersion in read counts as a function of mean read counts. This approach is necessary because regions with low read counts typically experience high fold-changes because of statistical noise in the data. Here, we modified this approach to be applicable to our data (10 time points without replicates; ppm values per promoter/enhancer region).
The values of all histone modifications, Pol2, RNA-seq, and TSS-seq reads (ppms, for each time point) were collected for all promoters (region − 1 to + 1 kb) and enhancers (entire enhancer region or region − 1 to + 1 kb around the enhancer center for enhancers < 2 kb in size). For each feature (all histone modifications and Pol2 binding), we calculated the median and standard deviation in ppm values for each region, over the 10 time points. Dispersion was defined as follows:
$$ d_{x,f} = \left( \frac{s_{x,f}}{m_{x,f}} \right)^2 $$
where $d_{x,f}$, $s_{x,f}$, and $m_{x,f}$ represent the dispersion, standard deviation, and median of feature $f$ in region $x$ over the 10 time points of the time series, respectively. Fitting a second-order polynomial function to $\log(d_{x,f})$ as a function of $\log(m_{x,f})$ for all promoter and enhancer regions, we obtained expected dispersion values as a function of the median ppm value (see for example Additional file 1: Figure S23 for H3K9K14ac). From the fitted dispersion values, fitted standard deviation values $s_{x,f,\mathrm{fitted}}$ were calculated (see Eq. 1), and 0-h-based Z-scores were calculated as follows:
$$ Z_{x,f,t} = \frac{\mathrm{ppm}_{x,f,t} - \mathrm{ppm}_{x,f,0\mathrm{h}}}{s_{x,f,\mathrm{fitted}}} $$
where $Z_{x,f,t}$ is the Z-score of feature $f$ in region $x$ at time point $t$, and $\mathrm{ppm}_{x,f,t}$ is the ppm value of feature $f$ in region $x$ at time point $t$. A region $x$ was defined to have a significant induction of feature $f$ if there was at least 1 time point $t$ where $Z_{x,f,t} \geq 4$. To further exclude low-signal regions, we added an additional threshold: the region should have a ppm value ≥ the 25th percentile of non-zero values in at least 1 time point. For regions with a significant induction, the induction time was defined as the first time point $t$ where $Z_{x,f,t} \geq 2$. We used a similar approach to define LPS-induced promoters using TSS-seq data (see below).
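Putting the above together, a condensed sketch of the induction-time calls is shown below; the pseudocount used to avoid log(0) is our own choice, and the additional 25th-percentile low-signal filter is omitted for brevity.

```python
import numpy as np

def induction_times(ppm, time_points, z_detect=4.0, z_time=2.0):
    """Call significant inductions and induction times from a regions x time points ppm matrix.

    A second-order polynomial is fitted to log dispersion as a function of log median ppm,
    fitted standard deviations are derived from it, and Z-scores are computed relative to 0 h.
    """
    med = np.median(ppm, axis=1)
    sd = np.std(ppm, axis=1)
    eps = 1e-6                                    # pseudocount to avoid log(0) (assumption)
    disp = (sd / (med + eps)) ** 2
    coef = np.polyfit(np.log(med + eps), np.log(disp + eps), deg=2)
    disp_fit = np.exp(np.polyval(coef, np.log(med + eps)))
    sd_fit = np.sqrt(disp_fit) * (med + eps)      # invert the dispersion definition (Eq. 1)
    z = (ppm - ppm[:, [0]]) / sd_fit[:, None]     # Z-scores relative to the 0-h time point
    induced = (z >= z_detect).any(axis=1)         # significant induction: Z >= 4 at some time point
    first_idx = np.argmax(z >= z_time, axis=1)    # induction time: first time point with Z >= 2
    times = np.asarray(time_points, dtype=float)
    return induced, np.where(induced, times[first_idx], np.nan)

# Hypothetical usage with a regions x time points matrix of ppm values:
# induced, t_ind = induction_times(ppm_matrix, [0, 0.5, 1, 2, 3, 4, 6, 8, 16, 24])
```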
For the analysis of induction times of H3K9K14ac, H3K4me3, and H3K36me3 at promoters as a function of their pre-stimulation basal levels (Additional file 1: Figure S7), we divided promoters into three classes according to their basal level of each modification, as follows: promoters lacking a modification altogether (0 tag reads after the correction described above) were considered as one class ("absent"). The remaining promoters were sorted according to their basal level of the modification and were divided into two classes ("low basal level" and "high basal level") containing the same number of promoters.
Definition of LPS-induced promoters and unchanged promoters
LPS-induced promoters were defined using TSS-seq ppm values. LPS-induced promoters should have $Z_{x,\mathrm{TSS-seq},t} \geq 4$ for at least 1 time point and a TSS-seq ppm ≥ 1 in at least 1 time point. Only TSS-seq reads aligned in the sense orientation were considered (i.e., they should fit the orientation of the associated gene). For each of the 1413 LPS-induced promoters thus obtained, the transcription induction time was defined as the first time point for which $Z_{x,\mathrm{TSS-seq},t} \geq 2$ was observed. Unchanged promoters were defined as promoters with absolute values of $Z_{x,\mathrm{TSS-seq},t} < 1$ for all time points, leading to 772 promoters.
RNA-seq data processing for wild-type, Trif−/−, and Myd88−/− cells
RNA-seq data for mouse BM-DCs treated with LPS were obtained from the study by Patil et al. [63] (DDBJ accession number DRA001131). This data includes time series data for WT, as well as Trif−/− mice and Myd88−/− mice.
Mapping of RNA-seq data was performed using TopHat (version 2.0.6) and Bowtie2 (version 2.0.2) [55, 64]. Mapped reads were converted to RPKM values [65] using gene annotation data provided by TopHat. RNA-seq data obtained from the Myd88−/− and Trif−/− mice was processed in the same way. RPKM values were subjected to quantile normalization over all 10 time points.
For genes corresponding to the LPS-induced promoters, the maximum fold-induction was calculated in the WT RNA-seq data. The same was done in the Trif−/− RNA-seq data and in the Myd88−/− RNA-seq data. TRIF-dependent genes were defined as genes for which the fold-induction was more than five times lower in the Trif−/− data than in WT, leading to 141 TRIF-dependent genes (Additional file 1: Figure S21A). Similarly, 66 MyD88-dependent genes (not shown) were defined as having more than five times lower induction in the Myd88−/− than in WT.
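A minimal sketch of this fold-induction comparison is given below; the pseudocount added before computing ratios is our assumption.

```python
import numpy as np

def dependent_genes(rpkm_wt, rpkm_ko, ratio=5.0):
    """Flag genes whose maximum fold-induction is more than `ratio` times lower in KO than in WT.

    rpkm_wt, rpkm_ko: genes x time points matrices of normalized RPKM values,
    with the pre-stimulation (0 h) sample in the first column.
    """
    eps = 0.1  # pseudocount to avoid division by zero (assumption)
    fold_wt = (rpkm_wt + eps).max(axis=1) / (rpkm_wt[:, 0] + eps)
    fold_ko = (rpkm_ko + eps).max(axis=1) / (rpkm_ko[:, 0] + eps)
    return fold_wt / fold_ko > ratio

wt = np.array([[1.0, 5.0, 40.0], [2.0, 3.0, 4.0]])   # hypothetical WT RPKM values
ko = np.array([[1.0, 1.2, 1.5], [2.0, 2.8, 3.9]])    # hypothetical KO RPKM values
print(dependent_genes(wt, ko))                        # [ True False]
```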
Duplicate ChIP-seq data and Stat1−/− data
We generated an independent duplicate time series for dendritic cells (DCs) treated with LPS (0, 1, 2, and 4 h), including TSS-seq and ChIP-seq (H3K9K14ac and H3K4me3) data, as described above. Data was processed in the same way as the original time series dataset (see the "Methods" section), and the induction times of H3K9K14ac and H3K4me3 at LPS-induced promoters were estimated. To facilitate the comparison between the duplicate data and the original (longer) time series, we also re-analyzed the original data using only time points 0, 1, 2, and 4 h (the same time points as the duplicate data). Stat1 KO DCs were treated with LPS for 0 or 4 h along with wild-type DCs, and ChIP-seq (H3K9K14ac and H3K4me3) data were obtained as described above.
Fisher's exact test
We used Fisher's exact test to evaluate the significance of differences between induced and non-induced promoters and enhancers (Fig. 1a, b), the significance of associations between changes of pairs of features (Fig. 1c, d), and the association between TF binding and increases in histone modifications, Pol2 binding, and transcription (Fig. 3 and Additional file 1: Figure S16).
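For illustration, the 2 × 2 table below is reconstructed from the STAT1/H3K9K14ac counts quoted in the Results section (418 newly STAT1-bound promoters, 223 of which gain H3K9K14ac, out of 24,416 promoters with 957 gains in total); scipy is used for the test, and the exact p value may differ from the one reported above depending on the test direction used.

```python
from scipy.stats import fisher_exact

# 2x2 contingency table:
# rows: promoters newly bound / not bound by STAT1
# columns: with / without an LPS-induced increase in H3K9K14ac
table = [[223, 418 - 223],
         [957 - 223, 24416 - 418 - (957 - 223)]]
odds_ratio, p_value = fisher_exact(table)
print(odds_ratio, p_value)
```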
BM-DC: Bone marrow-derived dendritic cells
FDR: False discovery rate
GM-CSF: Granulocyte/monocyte colony-stimulating factor
IFN: Interferon
IFNR: Interferon receptor
Pol2: RNA polymerase II
ppm: Reads per million reads
RPKM: Reads per kilobase per million reads
TF: Transcription factor
TLR: Toll-like receptor
TSS: Transcription start site
WCE: Whole cell extract
WT: Wild type
Henikoff S. Nucleosome destabilization in the epigenetic regulation of gene expression. Nat Rev Genet. 2008;9:15–26.
Greer EL, Shi Y. Histone methylation: a dynamic mark in health, disease and inheritance. Nat Rev Genet. 2012;13:343–57.
Mercer EM, Lin YC, Benner C, Jhunjhunwala S, Dutkowski J, Flores M, et al. Multilineage priming of enhancer repertoires precedes commitment to the B and myeloid cell lineages in hematopoietic progenitors. Immunity. 2011;35:413–25.
Winter DR, Amit I. The role of chromatin dynamics in immune cell development. Immunol Rev. 2014;261:9–22.
Creyghton MP, Cheng AW, Welstead GG, Kooistra T, Carey BW, Steine EJ, et al. Histone H3K27ac separates active from poised enhancers and predicts developmental state. Proc Natl Acad Sci U S A. 2010;107:21931–6.
Ostuni R, Piccolo V, Barozzi I, Polletti S, Termanini A, Bonifacio S, et al. Latent enhancers activated by stimulation in differentiated cells. Cell. 2013;152:157–71.
Kaikkonen MU, Spann NJ, Heinz S, Romanoski CE, Allison KA, Stender JD, et al. Remodeling of the enhancer landscape during macrophage activation is coupled to enhancer transcription. Mol Cell. 2013;51:310–25.
Lavin Y, Winter D, Blecher-Gonen R, David E, Keren-Shaul H, Merad M, et al. Tissue-resident macrophage enhancer landscapes are shaped by the local microenvironment. Cell. 2014;159:1312–26.
Gosselin D, Link VM, Romanoski CE, Fonseca GJ, Eichenfield DZ, Spann NJ, et al. Environment drives selection and function of enhancers controlling tissue-specific macrophage identities. Cell. 2014;159:1327–40.
Voss TC, Hager GL. Dynamic regulation of transcriptional states by chromatin and transcription factors. Nat Rev Genet. 2014;15:69–81.
Álvarez-Errico D, Vento-Tormo R, Sieweke M, Ballestar E. Epigenetic control of myeloid cell differentiation, identity and function. Nat Rev Immunol. 2014;15:7–17.
Heinz S, Benner C, Spann N, Bertolino E, Lin YC, Laslo P, et al. Simple combinations of lineage-determining transcription factors prime cis-regulatory elements required for macrophage and B cell identities. Mol Cell. 2010;38:576–89.
Foster SL, Hargreaves DC, Medzhitov R. Gene-specific control of inflammation by TLR-induced chromatin modifications. Nature. 2007;447:972–8.
Smale ST, Tarakhovsky A, Natoli G. Chromatin contributions to the regulation of innate immunity. Annu Rev Immunol. 2014;32:489–511.
Ghisletti S, Barozzi I, Mietton F, Polletti S, De Santa F, Venturini E, et al. Identification and characterization of enhancers controlling the inflammatory gene expression program in macrophages. Immunity. 2010;32:317–28.
Natoli G. Control of NF-κB-dependent transcriptional responses by chromatin organization. Cold Spring Harb Perspect Biol. 2009;1:1–11.
Henikoff S, Shilatifard A. Histone modification: cause or cog? Trends Genet. 2011;27:389–96.
Ivashkiv LB, Park SHO. Epigenetic regulation of myeloid cells. Microbiol Spectr. 2016;4:MCHD-0010-2015. http://www.asmscience.org/content/journal/microbiolspec/10.1128/microbiolspec.MCHD-0010-2015.
Miller T, Krogan NJ, Dover J, Tempst P, Johnston M, Greenblatt JF, et al. COMPASS: a complex of proteins associated with a trithorax-related SET domain protein. Proc Natl Acad Sci U S A. 2001;98:12902–7.
Pavri R, Zhu B, Li G, Trojer P, Mandal S, Shilatifard A, et al. Histone H2B monoubiquitination functions cooperatively with FACT to regulate elongation by RNA polymerase II. Cell. 2006;125:703–17.
Stasevich TJ, Hayashi-Takanaka Y, Sato Y, Maehara K, Ohkawa Y, Sakata-Sogawa K, et al. Regulation of RNA polymerase II activation by histone acetylation in single living cells. Nature. 2014;516:272–5.
Kawai T, Akira S. The role of pattern-recognition receptors in innate immunity: update on Toll-like receptors. Nat Immunol. 2010;11:373–84.
Hoshino K, Kaisho T, Iwabe T, Takeuchi O, Akira S. Differential involvement of IFN-β in Toll-like receptor-stimulated dendritic cell activation. Int Immunol. 2002;14:1225–31.
Liang K, Suzuki Y, Kumagai Y, Nakai K. Analysis of changes in transcription start site distribution by a classification approach. Gene. 2014;537:29–40.
Rabani M, Raychowdhury R, Jovanovic M, Rooney M, Stumpo DJ, Pauli A. Resource high-resolution sequencing and modeling identifies distinct dynamic RNA regulatory strategies. Cell. 2014;159:1698–710.
Kumagai Y, Vandenbon A, Teraguchi S, Akira S, Suzuki Y. Genome-wide map of RNA degradation kinetics patterns in dendritic cells after LPS stimulation facilitates identification of primary sequence and secondary structure motifs in mRNAs. BMC Genomics. 2016;17:1032.
Illingworth RS, Bird AP. CpG islands--'a rough guide'. FEBS Lett. 2009;583:1713–20.
Schübeler D, MacAlpine DM, Scalzo D, Wirbelauer C, Kooperberg C, Van Leeuwen F, et al. The histone modification pattern of active genes revealed through genome-wide chromatin analysis of a higher eukaryote. Genes Dev. 2004;18:1263–71.
Ernst J, Kheradpour P, Mikkelsen TS, Shoresh N, Ward LD, Epstein CB, et al. Mapping and analysis of chromatin state dynamics in nine human cell types. Nature. 2011;473:43–9.
Barski A, Cuddapah S, Cui K, Roh T-Y, Schones DE, Wang Z, et al. High-resolution profiling of histone methylations in the human genome. Cell. 2007;129:823–37.
Garber M, Yosef N, Goren A, Raychowdhury R, Thielke A, Guttman M, et al. A high-throughput chromatin Immunoprecipitation approach reveals principles of dynamic gene regulation in mammals. Mol Cell. 2012;47:810–22.
Ramana CV, Chatterjee-Kishore M, Nguyen H, Stark GR. Complex roles of Stat1 in regulating gene expression. Oncogene. 2000;19:2619–27.
Lim CA, Yao F, Wong JJY, George J, Xu H, Chiu KP, et al. Genome-wide mapping of RELA(p65) binding identifies E2F1 as a transcriptional activator recruited by NF-κB upon TLR4 activation. Mol Cell. 2007;27:622–35.
Toshchakov V, Jones BW, Perera P-Y, Thomas K, Cody MJ, Zhang S, et al. TLR4, but not TLR2, mediates IFN-beta-induced STAT1alpha/beta-dependent gene expression in macrophages. Nat Immunol. 2002;3:392–8.
Hiroi M, Mori K, Sakaeda Y, Shimada J, Ohmori Y. STAT1 represses hypoxia-inducible factor-1-mediated transcription. Biochem Biophys Res Commun. 2009;387:806–10.
Weintraub H, Groudine M. Chromosomal subunits in active genes have an altered conformation. Science. 1976;193:848–56.
Lenhard B, Sandelin A, Carninci P. Metazoan promoters: emerging characteristics and insights into transcriptional regulation. Nat Rev Genet. 2012;13:233–45.
Kayama H, Ramirez-Carrozzi VR, Yamamoto M, Mizutani T, Kuwata H, Iba H, et al. Class-specific regulation of pro-inflammatory genes by MyD88 pathways and IκBζ. J Biol Chem. 2008;283:12468–77.
Tartey S, Matsushita K, Vandenbon A, Ori D, Imamura T, Mino T, et al. Akirin 2 is critical for inducing inflammatory genes by bridging IκB-ζ and the SWI / SNF complex. EMBO J. 2014;33:2332–48.
Chen X, Barozzi I, Termanini A, Prosperini E, Recchiuti A, Dalli J, et al. Requirement for the histone deacetylase Hdac3 for the inflammatory gene expression program in macrophages. Proc Natl Acad Sci U S A. 2012;109:2865–74.
Zhu Y, van Essen D, Saccani S. Cell-type-specific control of enhancer activity by H3K9 trimethylation. Mol Cell. 2012;46:408–23.
Stender JD, Pascual G, Liu W, Kaikkonen MU, Do K, Spann NJ, et al. Control of proinflammatory gene programs by regulated trimethylation and demethylation of histone H4K20. Mol Cell. 2012;1:1–11.
Cano-Rodriguez D, Gjaltema RAF, Jilderda LJ, Jellema P, Dokter-Fokkens J, Ruiters MHJ, et al. Writing of H3K4Me3 overcomes epigenetic silencing in a sustained but context-dependent manner. Nat Commun. 2016;7:12284.
Austenaa L, Barozzi I, Chronowska A, Termanini A, Ostuni R, Prosperini E, et al. The histone methyltransferase Wbp7 controls macrophage function through GPI glycolipid anchor synthesis. Immunity. 2012;36:572–85.
Vahedi G, Takahashi H, Nakayamada S, Sun H, Sartorelli V, Kanno Y, et al. STATs shape the active enhancer landscape of T cell populations. Cell. 2012;151:981–93.
Qiao Y, Giannopoulou EG, Chan CH, Park S-H, Gong S, Chen J, et al. Synergistic activation of inflammatory cytokine genes by interferon-γ-induced chromatin remodeling and toll-like receptor signaling. Immunity. 2013;39:454–69.
MacQuarrie KL, Fong AP, Morse RH, Tapscott SJ. Genome-wide transcription factor binding: beyond direct target regulation. Trends Genet. 2011;27:141–8.
Weiner A, Hsieh T-HS, Appleboim A, Chen HV, Rahat A, Amit I, et al. High-resolution chromatin dynamics during a yeast stress response. Mol Cell. 2015;58:371–86.
Bar-ziv R, Voichek Y, Barkai N. Chromatin dynamics during DNA replication. Genome Res. 2016;26:1245–56.
Drouin S, Laramée L, Jacques P-E, Forest A, Bergeron M, Robert F. DSIF and RNA polymerase II CTD phosphorylation coordinate the recruitment of Rpd3S to actively transcribed genes. PLoS Genet. 2010;6:1–12.
Yamamoto M, Sato S, Hemmi H. Role of adaptor TRIF in the MyD88-independent toll-like receptor signaling pathway. Science. 2003;301:640–3.
Saitoh T, Satoh T, Yamamoto N, Uematsu S, Takeuchi O, Kawai T, et al. Antiviral protein viperin promotes toll-like receptor 7- and toll-like receptor 9-mediated type I interferon production in plasmacytoid dendritic cells. Immunity. 2011;34:352–63.
Hemmi H, Kaisho T, Takeda K, Akira S. The roles of toll-like receptor 9, MyD88, and DNA-dependent protein kinase catalytic subunit in the effects of two distinct CpG DNAs on dendritic cell subsets. J Immunol. 2003;170:3059–64.
Masuda K, Kimura A, Hanieh H, Nguyen NT, Nakahama T, Chinen I, et al. Aryl hydrocarbon receptor negatively regulates LPS-induced IL-6 production through suppression of histamine production in macrophages. Int Immunol. 2011;23:637–45.
Langmead B, Salzberg SL. Fast gapped-read alignment with Bowtie 2. Nat Methods. 2012;9:357–9.
Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, et al. The sequence alignment/map format and SAMtools. Bioinformatics. 2009;25:2078–9.
Feng J, Liu T, Qin B, Zhang Y, Liu XS. Identifying ChIP-seq enrichment using MACS. Nat Protoc. 2012;7:1728–40.
Lee B, Bhinge AA, Battenhouse A, Song L, Zhang Z, Grasfeder LL, et al. Cell-type specific and combinatorial usage of diverse transcription factors revealed by genome-wide binding studies in multiple human cells. Genome Res. 2012;22:9–24.
Tsuchihara K, Suzuki Y, Wakaguri H, Irie T, Tanimoto K, Hashimoto S, et al. Massive transcriptional start site analysis of human genes in hypoxia cells. Nucleic Acids Res. 2009;37:2249–63.
Meyer LR, Zweig AS, Hinrichs AS, Karolchik D, Kuhn RM, Wong M, et al. The UCSC Genome Browser database: extensions and updates 2013. Nucleic Acids Res. 2013;41:D64–9.
Anders S, Huber W. Differential expression analysis for sequence count data. Genome Biol. 2010;11:R106.
Law CW, Chen Y, Shi W, Smyth GK. voom: precision weights unlock linear model analysis tools for RNA-seq read counts. Genome Biol. 2014;15:R29. https://genomebiology.biomedcentral.com/articles/10.1186/gb-2014-15-2-r29.
Patil A, Kumagai Y, Liang K-C, Suzuki Y, Nakai K. Linking transcriptional changes over time in stimulated dendritic cells to identify gene networks activated during the innate immune response. PLoS Comput Biol. 2013;9:e1003323.
Kim D, Pertea G, Trapnell C, Pimentel H, Kelley R, Salzberg SL. TopHat2: accurate alignment of transcriptomes in the presence of insertions, deletions and gene fusions. Genome Biol. 2013;14:R36.
Mortazavi A, Williams BA, McCue K, Schaeffer L, Wold B. Mapping and quantifying mammalian transcriptomes by RNA-Seq. Nat Methods. 2008;5:621–8.
Vandenbon A, Kumagai Y, Lin M, Suzuki Y, Nakai K. Waves of chromatin modifications in mouse dendritic cells in response to LPS stimulation. DDBJ Repository DRA004881. 2018. http://ddbj.nig.ac.jp/DRASearch/submission?acc=DRA004881.
Vandenbon A, Kumagai Y, Lin M, Suzuki Y, Nakai K. Waves of chromatin modifications in mouse dendritic cells in response to LPS stimulation. DDBJ Repository DRA006555. 2018. http://ddbj.nig.ac.jp/DRASearch/submission?acc=DRA006555.
We thank the members of the Quantitative Immunology Research Unit for helpful discussions and advice; A. Yoshimura, E. Kurumatani, Y. Kimura, A. Yamashita, K. Imamura, K. Abe, and T. Horiuchi for technical assistance; and M. Ogawa for secretarial assistance. Computational time was provided by the computer cluster of the IFReC Laboratory of Systems Immunology and in part by the NIG supercomputer at ROIS National Institute of Genetics.
This work was supported by the Japan Society for the Promotion of Science (JSPS) through the "Funding Program for World-Leading Innovative R&D on Science and Technology (FIRST Program)," initiated by the Council for Science and Technology Policy (CSTP), by a grant from the Cell Science Research Foundation (to YK) and by a Kakenhi Grant-in-Aid for Scientific Research (JP23710234) from the Japan Society for the Promotion of Science.
The ChIP-seq datasets generated and analyzed during the current study are available in the DDBJ repository, accession numbers DRA004881 and DRA006555 [66, 67]. Previously published TSS-seq data and RNA-seq data are available under accession numbers DRA001234 and DRA001131, respectively [24, 63]. ChIP-seq data for a set of 24 TFs is available in GEO, accession number GSE36104 [32].
Alexis Vandenbon and Yutaro Kumagai contributed equally to this work.
Laboratory of Infection and Prevention, Institute for Frontier Life and Medical Sciences, Kyoto University, Kyoto, 606-8507, Japan
Alexis Vandenbon
Institute for Liberal Arts and Sciences, Kyoto University, Kyoto, 606-8507, Japan
Quantitative Immunology Research Unit, Immunology Frontier Research Center (IFReC), Osaka University, Suita, 565-0871, Japan
Yutaro Kumagai
Biotechnology Research Institute for Drug Discovery, Department of Life Science and Biotechnology, National Institute of Advanced Industrial Science and Technology, Tsukuba, Ibaraki, 305-8565, Japan
Department of Computational Biology and Medical Sciences, Graduate School of Frontier Sciences, The University of Tokyo, Kashiwa, 277-8561, Japan
Mengjie Lin
& Yutaka Suzuki
Laboratory of Functional Analysis in silico, The Institute of Medical Science, The University of Tokyo, Minato-ku, Tokyo, 108-8639, Japan
Kenta Nakai
AV, YK, YS, and KN designed the project. YK, ML, and YS conducted ChIP-seq experiments, and YK additional experiments. AV and YK performed data analysis. All authors contributed to the interpretation of the data. AV and YK wrote the manuscript. All authors read and approved the final manuscript.
Correspondence to Alexis Vandenbon or Kenta Nakai.
All animal experiments were approved by the Animal Care and Use Committee of the Research Institute for Microbial Diseases, Osaka University, Japan (IFReC-AP-H26-0-1-0).
Contains supplementary results, supplementary methods, supplementary tables, and supplementary figures. (PDF 5904 kb)
Vandenbon, A., Kumagai, Y., Lin, M. et al. Waves of chromatin modifications in mouse dendritic cells in response to LPS stimulation. Genome Biol 19, 138 (2018). https://doi.org/10.1186/s13059-018-1524-z
Calculus (3rd Edition)
by Rogawski, Jon; Adams, Colin
Published by W. H. Freeman
Chapter 9 - Further Applications of the Integral and Taylor Polynomials - 9.1 Arc Length and Surface Area - Exercises - Page 468: 16
$$2.5065$$
The arc length is \begin{aligned} s &=\int_{a}^{b} \sqrt{1+f'^{2}(x)}\, d x =\int_{0}^{2} \sqrt{1+[-\sin x]^{2}}\, d x =\int_{0}^{2} \sqrt{1+\sin ^{2} x}\, d x. \end{aligned} With $$ \Delta x= \frac{2-0}{8}=\frac{1}{4},$$ the Trapezoidal Rule applied to the integrand $g(x)=\sqrt{1+\sin^{2}x}$ gives \begin{aligned} T_{8} &=\frac{1}{2} \cdot \frac{1}{4}\left[g(0)+2 \sum_{j=1}^{7} g\left(\frac{j}{4}\right)+g(2)\right] \\ &=\frac{1}{8}\left[\sqrt{1+\sin ^{2} 0}+2\left(\sqrt{1+\sin ^{2}\left(\tfrac{1}{4}\right)}+\sqrt{1+\sin ^{2}\left(\tfrac{2}{4}\right)}+\sqrt{1+\sin ^{2}\left(\tfrac{3}{4}\right)}+\sqrt{1+\sin ^{2}(1)}+\sqrt{1+\sin ^{2}\left(\tfrac{5}{4}\right)}+\sqrt{1+\sin ^{2}\left(\tfrac{6}{4}\right)}+\sqrt{1+\sin ^{2}\left(\tfrac{7}{4}\right)}\right)+\sqrt{1+\sin ^{2} 2}\right] \\ & \approx 2.5065 \end{aligned}
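As a quick numerical check of this value, the Trapezoidal Rule sum can be reproduced in a few lines of Python (a sketch, not part of the textbook's solution; the integrand and interval are taken from the work above):

```python
import math

# Integrand of the arc-length integral for f'(x) = -sin x
def g(x):
    return math.sqrt(1.0 + math.sin(x) ** 2)

a, b, n = 0.0, 2.0, 8              # interval [0, 2] split into n = 8 subintervals
dx = (b - a) / n                   # step size Delta x = 1/4

# Trapezoidal Rule: T_n = (dx/2) * [g(x_0) + 2*sum_{j=1}^{n-1} g(x_j) + g(x_n)]
T8 = dx / 2 * (g(a) + 2 * sum(g(a + j * dx) for j in range(1, n)) + g(b))
print(round(T8, 4))                # prints 2.5065
```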
A new clustering routing method based on PECE for WSN
De-gan Zhang1,2,
Xiang Wang2,
Xiao-dong Song2,
Ting Zhang3 &
Ya-nan Zhu2
A new clustering routing method based on predictive energy consumption efficiency (PECE) for a wireless sensor network (WSN) is presented in this paper. It consists of two stages: cluster formation and stable data transfer. In the cluster formation stage, we design an energy-saving clustering routing algorithm based on the node degree, the relative distance between nodes, and the remaining energy of nodes. Because the node degree and the relative distance between nodes are fully considered when selecting the cluster head, the selected cluster head not only provides better coverage but also has a short average distance to the other member nodes in the formed cluster, so the cost of intra-cluster communication is small. In the stable data transfer stage, using bee colony optimization (BCO), we design a PECE strategy for data transmission. On the basis of the predicted energy consumption, the hop count, and the propagation delay of each route, this strategy gives a precise definition of the route yield and uses two types of bee agent to predict the yield of each routing path from the source node to the sink node. Through this optimized design, the algorithm improves the quality of the clusters and thereby the overall network performance, reduces and balances the energy consumption of the whole network, and prolongs the survival time of the network.
A wireless sensor network (WSN) is composed of a large number of low-cost, low-power wireless sensor nodes. The nodes are deployed in a specified area and form a wireless network by self-organization. They can operate normally in harsh or special environments that people cannot approach. This technology has been widely used in industrial, military, environmental monitoring, medical, and other fields. The nodes can be deployed easily, but they are battery-powered and replacing their batteries is difficult, so how to prolong the life cycle of the whole network is one of the hot research topics in WSNs [1, 2].
The WSN routing protocol is responsible for finding a data transfer path in the network layer. Data packets from the source node are forwarded to the data-receiving node by multi-hop communication along this path. The quality of the routing protocol directly determines the energy consumed by the sensor nodes when transmitting data and the survival time of the network [3]. Given the characteristics of network nodes, such as random deployment, limited energy, self-organization, and frequent changes in network topology, hierarchical routing algorithms based on clustering offer better adaptability and energy efficiency than flat routing algorithms [4]. A clustering algorithm divides the sensor network nodes into different clusters. Each cluster has a cluster head node; the other member nodes send information to the cluster head node, which fuses and forwards these data. Cluster head selection is the key step of a clustering algorithm, and research on how to lower node energy consumption through cluster head selection and thus form high-quality clusters is of great significance.
Swarm intelligence (SI) is a class of computational intelligence algorithms based on the study of the collective behavior of social insects in decentralized, self-organizing systems, as described in [5]. Ant colony optimization [6] and bee colony optimization [7] have been widely used in network swarm intelligence technology.
Swarm intelligence solves a given problem through the collective actions of autonomous agents that interact with each other in a distributed environment in order to find a global solution to the problem, as defined in [8]. The design of swarm intelligence algorithms is inspired by the collective behavior of insects (such as bees, termites, and ants) and other animal societies that exist in decentralized, self-organizing systems [9–15]. These insects live in hostile and dynamic environments and survive through coordination and cooperation. They communicate with each other directly or indirectly via the environment in order to complete their basic tasks, such as foraging, division of labor, nest building, or brood sorting [15–20].
The bee colony optimization (BCO) model is a new paradigm of swarm intelligence. It mainly requires two types of agents for routing [21–26]: the scouts find new routes to the sink node as required, and the foragers transmit the data packets and evaluate the quality of the discovered routes according to the predicted energy consumption of the routing path and the end-to-end delay [27–36]. The foragers perceive the network state and evaluate the different routes in the WSN according to the measured indicators, and then choose a suitable path for routing data packets with the intent of maximizing the survival time of the network [37–45].
We propose a novel clustering routing approach based on predictive energy consumption efficiency (PECE) for wireless sensor networks in this paper. First, in the cluster formation stage, from the perspective of cluster head selection, we design an energy-saving clustering routing algorithm based on the node degree, the relative distance between nodes, and the remaining energy of nodes. This algorithm ensures a uniform distribution of cluster heads and balanced cluster sizes, and high-energy nodes have priority to become cluster heads. Then, in the stable data transfer stage, inspired by the process by which bees search for food, we design a new strategy for predictive energy-efficient data transmission, in order to find all possible paths from a source node to the sink node and select the optimal path among them. This data transmission strategy is based on two basic parameters, the energy consumption of the routing path and the end-to-end delay, which together determine the yield assigned to each path by the forager agent.
LEACH method for clustering routing
Among current typical clustering routing algorithms, LEACH (Low-Energy Adaptive Clustering Hierarchy), proposed in [9], is one of the most representative. It is an adaptive topology algorithm that executes the cluster reconstruction process periodically. In the cluster formation process, all nodes in the WSN have the same probability of acting as the cluster head. Because cluster head selection is random, the energy consumption of nodes is balanced. However, the random selection mechanism does not consider the remaining energy of nodes and cannot guarantee a reasonable number or a uniform distribution of cluster head nodes [10]. On this basis, some researchers have proposed the LEACH-ED protocol [11], which is based on the distance from the source node to the cluster head node and the remaining energy, and the LEACH-T protocol [12], which uses an interval instead of a random number and a threshold for cluster head selection. However, these two improved protocols still do not consider the rational distribution of cluster head nodes.
PEGASIS protocol
The PEGASIS (Power-Efficient GAthering in Sensor Information Systems) protocol [13], proposed by Lindsey et al., uses a greedy algorithm to link all the nodes in the WSN into a chain whose total length is minimal. Each node in the chain sends and receives data only once per round and transmits with minimal power. In each round, one node is selected to communicate with the base station, which reduces the data traffic. Experiments show that PEGASIS achieves nearly double the network lifetime of LEACH. However, it suffers from a routing cavity problem, as shown in Fig. 1.
Example of routing cavity
TEEN and EEUC methods
TEEN (Threshold-Sensitive Energy-Efficient Sensor Network Protocol) [14] adopts a sink-searching and message-forecast path, as shown in Fig. 2. It must set both hard and soft thresholds, so it is complex. EEUC (Energy-Efficient Unequal Clustering) [14] adopts multi-hop wireless communication; its energy consumption includes both inter-cluster and intra-cluster components, and the larger the number of hops, the larger the energy consumption.
Sink searching and message forecast path
Other existing schemes
Many previous research efforts have tried to achieve trade-offs among delay, energy cost, and load balancing for such data collection tasks [15–20]. Many recent studies have addressed open vehicle routing (OVR) problems [21–26], which are active areas in operations research and are based on assumptions and constraints similar to those of sensor networks. Several new insights have motivated researchers [27–37] to adapt these schemes to solve certain challenging problems in WSN applications, such as the data collection protocol called EDAL (Energy-efficient Delay-aware Lifetime-balancing data collection) [38–45]. For example, the authors in [38] propose an online multi-objective optimization (MO) algorithm to efficiently schedule the nodes of a WSN and achieve maximum lifetime. Instead of dealing with traditional grid or uniform coverage, they focus on differentiated or probabilistic coverage, where different regions require different levels of sensing. The MO scheme helps attain a better trade-off among energy consumption, lifetime, and coverage, and it can be re-run whenever a node fails due to battery depletion so that the network is rescheduled. This is a good scheme. Due to the length limit of the paper, we omit comments on the other papers.
Principle and model of the approach
By analyzing the shortcomings of typical protocols for clustering routing in WSN mentioned above, combined with the BCO model, this paper proposes a new method, PECE. Compared with the typical protocols, the new protocol improves the quality of clusters, reduces and balances the whole network energy consumption, and prolongs the survival time of the network. Our PECE-based clustering routing approach will adopt routing setup and communication process (including step (a) to step (f)) as illustrated in Fig. 3.
Routing setup and communication process
The BCO model is a new generic swarm intelligence (SI) optimization technology, which achieves effective division of labor and efficient energy use through a distributed multi-agent model. The ant colony optimization (ACO) model mainly imitates a natural insect behavior, foraging, to find the shortest path between the colony and the food source. Unlike ACO, BCO mainly adopts two natural behaviors from the social life of bees: mating behavior and foraging behavior. Mating behavior has been used as a powerful SI optimization clustering technique that can compete with other typical clustering algorithms and other SI optimization models, particularly ACO. The foraging behavior used in this study is based on how bees in nature search for food sources, aiming to find the food source of the highest quality. Inside the hive, the bees are divided into five kinds plus the queen. The bees living in the hive are the "food packagers" and "caregivers"; both are responsible for feeding the queen and the brood larvae. The other three kinds of bees take part in the food-searching process: "scouts", "foragers", and "worker bees". Our proposed routing approach takes advantage of the foraging process. There are mainly three types of bees participating in this process:
Scouts are responsible for discovering all possible food sources (all paths). They then direct foragers from the hive to these sources through the "waggle dance", which indicates the direction of the food.
Spectator (or forager)
They are responsible for evaluating the discovered food sources (based on nectar quantity and quality) and for recruiting worker bees, directing them to the locations of these sources.
Worker bees (or hired bees)
Worker bees gather nectar at the evaluated food source following the forager. After giving up a food source, a worker bee can transform into a scout bee and find the next potential food sources.
Inspired by this foraging behavior and based on the artificial bee colony (ABC) algorithm [15], Karaboga defined clustering as the process of identifying natural groupings, or swarms, in multi-dimensional data. Distance measurement is commonly used to assess the similarity between patterns. Given N objects, each object is assigned to one of K clusters, and the sum of the squared Euclidean distances from each object to its cluster center can be calculated. The objective of clustering is to minimize Eq. (1):
$$ J\left(w,z\right)={\displaystyle \sum_{i=1}^N{\displaystyle \sum_{j=1}^K{w}_{ij}{\left\Vert {X}_i-{Z}_j\right\Vert}^2}} $$
In the formula above, \( w_{ij} \) is the association weight of pattern \( X_i \) with cluster j, \( X_i \) (i = 1, ⋯, N) is the position of the ith pattern, and \( Z_j \) (j = 1, ⋯, K) is the center of the jth cluster, given by Eq. (2):
$$ {Z}_j=\frac{1}{N_j}{\displaystyle \sum_{i=1}^N{w}_{ij}{x}_i} $$
\( N_j \) is the number of patterns in the jth cluster, and \( w_{ij} \) is the association weight of pattern \( X_i \) with cluster j; its value is 1 or 0 (if pattern i is assigned to cluster j, \( w_{ij} \) is 1; otherwise, it is 0).
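As an illustration of Eqs. (1) and (2), the objective and the center update can be written in a few lines of NumPy (a sketch with our own variable names; hard assignment of each pattern to its nearest center is assumed):

```python
import numpy as np

def clustering_objective(X, Z):
    """Eq. (1): sum of squared Euclidean distances from each pattern X_i
    to the center Z_j of the cluster it is assigned to (hard assignment)."""
    # dists[i, j] = ||X_i - Z_j||^2 for every pattern/center pair
    dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1)            # w_ij = 1 for the nearest center, else 0
    J = dists[np.arange(len(X)), labels].sum()
    return J, labels

def update_centers(X, labels, K):
    """Eq. (2): Z_j is the mean of the patterns currently assigned to cluster j."""
    return np.array([X[labels == j].mean(axis=0) for j in range(K)])

# Example: six 2-dimensional patterns grouped into K = 2 clusters
X = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
              [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]])
Z = np.array([[0.0, 0.0], [5.0, 5.0]])
J, labels = clustering_objective(X, Z)
Z_new = update_centers(X, labels, K=2)
```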
Clustering proceeds by minimizing the sum of the squared Euclidean distances between the instances \( X_j \) and the cluster centers \( Z_j \) in the N-dimensional space. Each solution \( z_i \) is a D-dimensional vector, where i = 1, 2, ⋯, SN and D is the product of the input dimension and the number of clusters for each data set.
After initialization, the population of positions (solutions) undergoes repeated cycles. An employed bee generates a modification of the position (solution) in its memory according to local (visual) information and tests the nectar amount (fitness value) of the new food source (new solution). If the new food source has more nectar than the previous one, the bee memorizes the new position and forgets the old one; otherwise, it retains the old position. Equation (3) gives the cost function \( f_i \) of solution i:
$$ {f}_i=\frac{1}{D_{\mathrm{Train}}}{\displaystyle \sum_{j=1}^{D_{\mathrm{Train}}}d\left({x}_j,{P}_i^{C{L}_{\mathrm{Known}\left({x}_j\right)}}\right)} $$
Here, \( D_{\mathrm{Train}} \) is the number of training patterns, which normalizes the sum so that the result is limited to the range [0.0, 1.0]. \( \left({P}_i^{C{L}_{\mathrm{Known}\left({x}_j\right)}}\right) \) denotes the center, in solution i, of the class to which instance \( x_j \) is known to belong according to the database. The position of a food source represents a possible solution to the optimization problem, and the nectar amount of a food source corresponds to the quality (fitness) of the corresponding solution, calculated by Eq. (4):
$$ {\mathrm{fit}}_i=\frac{1}{1+{f}_i} $$
An artificial onlooker (spectator) bee selects a food source according to the associated probability value \( P_i \), calculated by Eq. (5):
$$ {P}_i=\frac{{\mathrm{fit}}_i}{{\displaystyle \sum_{n=1}^{\mathrm{SN}}{\mathrm{fit}}_n}} $$
Here, SN is the number of food sources, which equals the number of employed bees. \( \mathrm{fit}_i \) is the fitness of solution i given by Eq. (4); it decreases as the cost \( f_i \) given by Eq. (3) increases. To produce a candidate food position from the old one stored in memory, the artificial bee colony algorithm uses Eq. (6):
$$ {V}_{ij}={Z}_{ij}+{\phi}_{ij}\left({Z}_{ij}-{Z}_{kj}\right) $$
Here, k ∈ {1, 2, ⋯, SN} and j ∈ {1, 2, ⋯, D} are randomly selected indexes, where D is the dimension of a solution as defined above. Although k is determined randomly, it must be different from i. \( \phi_{ij} \) is a random number in [−1, 1]; it controls the generation of neighbor food sources around \( Z_{ij} \) and represents a comparison of two food positions visible to a bee. It can be seen from Eq. (6) that as the difference between \( Z_{ij} \) and \( Z_{kj} \) decreases, the perturbation of position \( Z_{ij} \) also decreases. Therefore, as the search approaches the optimal solution in the search space, the step size is adaptively reduced. Food sources that are abandoned are replaced with new food sources found by scouts.
The artificial bee colony algorithm randomly generates a position to replace an abandoned one. In this algorithm, if a position cannot be improved further within a predetermined number of cycles, the food source is assumed to be abandoned. This predetermined number of cycles is an important control parameter of the algorithm, called the "limit" for abandonment.
Suppose the source \( Z_i \) has been abandoned and j ∈ {1, 2, ⋯, D}; then a new food source found by the scouts replaces \( Z_i \), as shown in Eq. (7). This operation can be defined as
$$ {Z}_i^j={Z}_{\min}^j+\mathrm{rand}\left(0,1\right)\left({Z}_{\max}^j-{Z}_{\min}^j\right) $$
Each candidate source position \( V_{ij} \) generated in this way is evaluated by the artificial bees, and its performance is compared with that of the old position.
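The operators in Eqs. (4)–(7) can be sketched as follows (our illustrative reading, not the authors' implementation; solutions are treated as plain D-dimensional lists and the `random` module supplies the stochastic choices):

```python
import random

def fitness(f_i):
    """Eq. (4): fitness derived from the cost f_i."""
    return 1.0 / (1.0 + f_i)

def selection_probability(fits):
    """Eq. (5): probability of an onlooker choosing each food source."""
    total = sum(fits)
    return [fit / total for fit in fits]

def neighbor_solution(Z, i, k):
    """Eq. (6): candidate V_i built by perturbing Z_i toward/away from a
    randomly chosen partner Z_k in one random dimension j (k must differ from i)."""
    D = len(Z[i])
    j = random.randrange(D)
    phi = random.uniform(-1.0, 1.0)
    V = list(Z[i])
    V[j] = Z[i][j] + phi * (Z[i][j] - Z[k][j])
    return V

def scout_solution(z_min, z_max):
    """Eq. (7): a scout replaces an abandoned source with a random position
    drawn uniformly within the per-dimension bounds."""
    return [lo + random.random() * (hi - lo) for lo, hi in zip(z_min, z_max)]
```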
Design of the algorithm
The network model is adopted as follows:
We assume that all nodes have the same initial energy and do not move after deployment.
Sensor nodes are randomly distributed, and each node has a unique ID code in the whole network.
The location of all nodes in the network is unknown, and there is no need to use the positioning system or positioning algorithm to learn the location of nodes.
The transmit power of a node is fixed, and the approximate distance between a node and the sending node can be calculated from the received signal strength indication (RSSI).
The node energy consumption model
The algorithm is based on the wireless communication energy consumption model proposed in the literature [2]. The calculated formulas of energy consumption are as follows:
The energy consumption of sending data
$$ {E}_t\left(k,d\right)=\left\{\begin{array}{l}{E}_{\mathrm{elec}}\cdot k+{\varepsilon}_{fs}\cdot k\cdot {d}^2,\ d<{d}_0\\ {}{E}_{\mathrm{elec}}\cdot k+{\varepsilon}_{mp}\cdot k\cdot {d}^4,\ d\ge {d}_0\end{array}\right. $$
$$ {d}_0=\sqrt{{\varepsilon}_{fs}/{\varepsilon}_{mp}} $$
The energy consumption of receiving data
$$ {E}_r(k)={E}_{\mathrm{elec}}\cdot k $$
When a node overhears packets exchanged within its range in monitoring mode, its energy consumption is
$$ {E}_0(k)={E}_{\mathrm{elec}}\cdot k $$
In the formulas above, k is the number of bytes in the transmitted packet and d is the transmission distance. When the transmission distance is less than the threshold \( d_0 \), the power amplifier uses the free-space propagation model; otherwise, the multi-path fading model is used. \( E_{\mathrm{elec}} \) (in J/bit) is the radio electronics energy consumption factor, and \( \varepsilon_{fs} \) and \( \varepsilon_{mp} \) are the amplifier energy consumption factors of the two models, respectively.
The distance d from this node to the sending node can be calculated by the RSSI. The formulas are as follows [17, 18]:
$$ d={10}^{\frac{\left|\mathrm{RSSI}-A\right|}{10\times n}} $$
In Eq. (12), A is the received signal strength at a position 1 m away from the sender, RSSI is the received signal strength indication, and n is the path-fading factor, which generally ranges from 2 to 5.
Thus, the total energy consumption of a node \( n_i \) can be calculated by Eq. (13):
$$ E\left({n}_i\right)={E}_t\left({k}_{n_i},d\right)+{E}_r\left({k}_{n_i}\right)+{E}_0\left({k}_{n_i}\right) $$
Since the energy consumed during transmission and reception is associated with the communication process, it can be considered effective energy consumption. In contrast, the energy consumed while overhearing exchanged packets or in idle mode depletes the node batteries without contributing to communication. This energy consumption should therefore be minimized, which can be achieved by setting a short "idle" time; after the idle time, the node automatically enters a "sleep" mode to save its battery. In addition, as the number of nodes in a WSN increases, the number of neighbors of each node also increases, which leads to more energy consumed in overhearing and forwarding.
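Under the notation above, the radio model of Eqs. (8)–(13) can be written as a small helper (a sketch only: the numeric constants are typical first-order radio-model values rather than necessarily those of Table 1, and the packet size k is taken in bits here to match the J/bit unit of E_elec):

```python
import math

E_ELEC = 50e-9        # J/bit, radio electronics energy (illustrative value)
EPS_FS = 10e-12       # J/bit/m^2, free-space amplifier factor (illustrative)
EPS_MP = 0.0013e-12   # J/bit/m^4, multi-path amplifier factor (illustrative)
D0 = math.sqrt(EPS_FS / EPS_MP)   # Eq. (9): distance threshold between the two models

def e_tx(k, d):
    """Eq. (8): energy to transmit k bits over distance d."""
    if d < D0:
        return E_ELEC * k + EPS_FS * k * d ** 2
    return E_ELEC * k + EPS_MP * k * d ** 4

def e_rx(k):
    """Eqs. (10)/(11): energy to receive (or overhear) k bits."""
    return E_ELEC * k

def rssi_distance(rssi, A, n):
    """Eq. (12): distance estimated from RSSI, with path-fading factor n."""
    return 10 ** (abs(rssi - A) / (10 * n))

def node_energy(k, d):
    """Eq. (13): total energy of a node that transmits, receives and overhears k bits."""
    return e_tx(k, d) + e_rx(k) + e_rx(k)
```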
Like other clustering algorithms, the proposed algorithm operates in rounds. Each round is divided into two parts: cluster formation and stable data transfer. We first introduce several definitions used in this study. M is the set of all nodes in the network.
Definition 1
Neighbor nodes: In M, d(i, j) represents the Euclidean distance between any two nodes i and j. When d(i, j) ≤ \( R_c \), node j is called a neighbor node of node i. Here, \( R_c \) is the broadcast radius of the node, and \( \mathrm{Neighbor}_i \) denotes the set of neighbor nodes of node i.
Node degree: In M, the number of nodes contained within the \( R_c \) range of any node i is called the node degree, denoted \( \mathrm{Number}_i \). A higher \( \mathrm{Number}_i \) means that more nodes surround node i, so a cluster with node i as its head has better coverage performance.
The relative distance between nodes: In M, node i receives all the broadcast messages sent by the nodes within its \( R_c \) range, and \( \mathrm{SS}_j \) denotes the signal strength received from node j. \( I_i \) is defined as follows:
$$ {I}_i={\displaystyle \sum_{j\in {\mathrm{Neighbor}}_i}{\mathrm{SS}}_j/{\mathrm{Number}}_i} $$
A higher \( I_i \) means that the average distance between node i and its surrounding nodes is shorter, so less energy is consumed when they communicate.
Member nodes: After cluster head selection, if node j is in the coverage area of cluster head node i, in other words the distance between j and i is less than \( R_c \), then node j is called a member node of node i, provided that any node can become a member node of only one cluster.
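For illustration, the per-node quantities Neighbor_i, Number_i, and I_i could be computed from the overheard broadcast messages roughly as follows (the data structures and the example values are ours; SS_j is the received signal strength from node j):

```python
def neighbor_stats(received, R_c, rssi_to_distance):
    """Compute Neighbor_i, Number_i and I_i for one node from the broadcast
    messages it overheard: `received` maps sender id j -> signal strength SS_j."""
    neighbors = {j: ss for j, ss in received.items()
                 if rssi_to_distance(ss) <= R_c}       # keep senders with d(i, j) <= R_c
    number_i = len(neighbors)                          # node degree Number_i
    # I_i: average received signal strength over the neighbors (larger = closer on average)
    I_i = sum(neighbors.values()) / number_i if number_i else 0.0
    return set(neighbors), number_i, I_i

# Example usage with the RSSI-to-distance mapping of Eq. (12), using A = -45 dBm and n = 3
d_of = lambda rssi: 10 ** (abs(rssi - (-45)) / (10 * 3))
nbrs, deg, I_i = neighbor_stats({2: -60.0, 5: -55.0, 9: -90.0}, R_c=30.0,
                                rssi_to_distance=d_of)
```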
The stage of cluster formation
In the initial stage, assuming that all nodes in the network have the same clock, specific implementation steps are as follows:
Information broadcasting stage: All nodes broadcast their own ID information. During this process, node i counts its \( \mathrm{Neighbor}_i \), \( \mathrm{Number}_i \), and \( I_i \) from the received information.
Role determining stage: All nodes execute the following operations:
Each node calculates its own timer t according to Eq. (14) and starts timing after receiving the message sent from the base station. If node i does not receive a message from any cluster head within the time \( t_i \), it declares itself a cluster head and informs its neighbor nodes.
If it receives a message from a cluster head, the node chooses to be a member node and stops its timer.
If it receives messages from multiple cluster heads, it chooses to join the cluster whose message arrived last.
In the above, t i is calculated by the following formula:
$$ {t}_i=\alpha \times {e}^{-{w}_i} $$
In this formula, α is a scale factor that determines the length of the delay, and \( w_i \) is defined as follows:
$$ {w}_i=100\times \left({C}_1\cdot 1/d(i)+{C}_2\cdot \left(1-1/{\mathrm{Number}}_i\right)+{C}_3\cdot {e}^{E_i}\right) $$
In the above formula, \( E_i \) represents the current remaining energy of node i; \( C_1 + C_2 + C_3 = 1 \); and \( d(i)={10}^{\left|{\mathrm{RSSI}}_i-A\right|/\left(10\times n\right)} \), as in Eq. (12).
Equations (14) and (15) show that nodes with a higher degree, a shorter average distance to their surrounding nodes, and more remaining energy have a shorter waiting time t and a greater chance of becoming the cluster head, which ensures a reasonable cluster head selection.
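A sketch of the back-off timer of Eqs. (14) and (15) as we read them (the weights C_1–C_3, the scale factor α, and d(i) follow the formulas above; the default numeric values are placeholders):

```python
import math

def election_weight(d_i, number_i, E_i, C1=0.4, C2=0.3, C3=0.3):
    """w_i of Eq. (15): larger for nodes that are close to their neighbors (small d(i)),
    have a high node degree, and have more remaining energy E_i."""
    return 100.0 * (C1 * (1.0 / d_i) + C2 * (1.0 - 1.0 / number_i) + C3 * math.exp(E_i))

def election_timer(w_i, alpha=1.0):
    """t_i = alpha * e^{-w_i} of Eq. (14): a larger weight gives a shorter back-off,
    so better-placed, higher-energy nodes announce themselves as cluster heads first."""
    return alpha * math.exp(-w_i)
```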
When a member node receives messages from multiple cluster heads, it chooses to join the cluster whose message arrived last, because the cluster head selection approach in this paper tends to make the preferentially formed cluster smaller than the later-formed one, so choosing to join the later-formed cluster helps balance the size of each cluster.
Stable data transfer stage
After cluster building is completed, the cluster head node creates a TDMA schedule based on the number of member nodes and informs each member node of its time slot for sending data. Member nodes send data only in their allocated time slots and remain in a dormant state for the rest of the time to conserve energy. The data sent by the member nodes are aggregated at the cluster head node and finally forwarded by the cluster head to the sink node.
The path discovery process uses bee agents. Sending data packets from one source node to the sink node may exhaust the energy of the nodes along the path. If the optimal path from a source node to the sink node is chosen only by the number of hops, without considering the battery energy of the nodes, packets may be lost during transmission, leading to longer delays while data packets are re-routed. In addition, when overloaded nodes stop running, the unfair distribution of network traffic can lead to network partitioning.
Therefore, a fault-tolerant and efficient routing protocol should consider the energy consumption information of the candidate routes before choosing a path to transmit data. To dispatch artificial bee agents from the source nodes to the sink node, this study uses the BCO model. The bee agents travel along all possible paths to collect energy information from the nodes on each path, predict the routing energy consumption, and select the optimal path. The energy information of a path includes the following:
Remaining battery power of each node: If it is less than a predetermined threshold value, then the path cannot be selected to transmit packets.
The total energy consumption of path nodes: In order to route packets on the path which consumes less energy, this parameter will show the efficiency of the path from the angle of energy. The path which consumes less energy tends to have the minimum number of hops, because it will go through the least number of nodes.
Selecting the path that consumes less energy saves the batteries of the nodes along the path and thus prolongs the network lifetime by extending the battery life of the nodes.
In Fig. 4, there are three possible paths from the source node "A" to the sink node "S": path R1: A, B, E, S; path R2: A, C, F, S; and path R3: A, D, G, I, S.
An example of path finding by a bee agent
The selection of the optimal path among these three depends on the predicted energy consumption along each path during communication. This energy information is collected by the bee agent during its path-finding journey and mainly includes two essential metrics: the remaining energy of the nodes, which is used to estimate the expected lifetime of the nodes and hence their effectiveness in forwarding packets along the path, and the predicted total energy consumption of the path nodes. For each path, the latter is expressed as \( E(R_j) \), where j is the index of the path.
Equation (16) gives the method for calculating the total energy consumed by all the nodes along each path in the process of receiving; the formula can be applied to the three possible paths in the previous example, R1, R2, and R3.
$$ E\left(R{}_j\right)=h\left(R{}_j\right)\times {E}_r(k) $$
In this formula, \( h(R_j) \) is the number of hops on path \( R_j \), and \( E_r(k) \) is the energy consumed to receive a packet of k bytes.
If \( E(R_1) < E(R_2) < E(R_3) \), then \( R_1 \) is the routing path with the lowest energy consumption. If it also satisfies the other threshold constraints required for reliable packet transmission along the path, such as the remaining battery power P(n) of each node and the minimum propagation delay \( D(R_j) \), then R1: A, B, E, S will be selected as the optimal path from source node A to sink node S. The path yield \( g(R_j) \) is therefore a rating assigned to each potential path from A to S that combines the predicted total energy consumption of the path nodes, the hop count, and the propagation delay. The end-to-end propagation delay can be calculated by Eq. (17).
$$ D\left({R}_j\right)={\displaystyle \sum_{i=1}^Nd\left({n}_{ji},{n}_{ji+1}\right)} $$
In the formula, N is the number of nodes on path \( R_j \), and \( d(n_{ji}, n_{ji+1}) \) is the propagation delay from node i to node i + 1 on the same path j. Finally, the yield \( g(R_j) \) of path \( R_j \) among the M paths can be represented by Eq. (18):
$$ g\left({R}_j\right)=\frac{E\left({R}_j\right)\times D\left({R}_j\right)}{{\displaystyle \sum_{j=1}^ME\left({R}_j\right)\times D\left({R}_j\right)}} $$
Therefore, to reflect its energy consumption and propagation delay, each potential path from the source node to the sink node is assigned a yield. The optimal path \( R_0 \) is the path with the highest yield, given by Eq. (19):
$$ {R}_0= \max \left(g\left({R}_j\right)\right) $$
It is worth mentioning that the queuing delay and transmission delay can be represented by a constant, because this process belongs to the route discovery stage rather than the stage of routing data on the selected path, and therefore does not involve delays caused by queuing and transmission.
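To make the path-selection step concrete, Eqs. (16) and (17) can be evaluated as below. Note that Eq. (18) as printed assigns a larger yield to a larger energy-delay product; since the surrounding text selects the path with the lowest predicted energy consumption and delay, this sketch ranks candidates by the smallest product, which is our hedged reading rather than a literal transcription of Eqs. (18)–(19):

```python
def path_energy(hops, E_r_k):
    """Eq. (16): predicted receive energy along a path with the given hop count."""
    return hops * E_r_k

def path_delay(link_delays):
    """Eq. (17): end-to-end propagation delay as the sum of per-link delays."""
    return sum(link_delays)

def best_path(paths, E_r_k):
    """Score every candidate path by its energy-delay product and return the one
    with the smallest product (interpreted here as the highest 'yield')."""
    scores = {}
    for name, (hops, link_delays) in paths.items():
        scores[name] = path_energy(hops, E_r_k) * path_delay(link_delays)
    return min(scores, key=scores.get), scores

# Example with the three paths of Fig. 4 (per-link delays are made-up placeholders)
paths = {
    "R1": (3, [0.01, 0.02, 0.01]),       # A-B-E-S
    "R2": (3, [0.02, 0.02, 0.02]),       # A-C-F-S
    "R3": (4, [0.01, 0.01, 0.02, 0.02])  # A-D-G-I-S
}
best, scores = best_path(paths, E_r_k=2.0e-4)   # selects "R1" in this example
```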
Inspired by the SI and BCO models, the predictive energy-efficient model proposed in this paper is an optimization model oriented toward WSNs. It can be summarized as shown in Fig. 5. In the flow diagram, the process of optimal path discovery from the source node \( n_s \) to the sink node S is as follows:
For each source node \( n_s \), in order to route its data packets to the sink node S efficiently, the node starts the routing path discovery process to select the optimal path among all M possible paths that can reach the sink node S.
For the path discovery, each node sends the bee agent (via beacon message) associated with a TTL (time to live, predefined to prevent long delays and increased routing overhead) to all the neighbor nodes in the M potential paths. Bee agents (scouts) will collect and store all the required routing information, including the remaining battery power which is considered a key indicator, and other information, such as queuing delay.
If the TTL of the packet expires, the bee agent's packet reports the failure to the source node and the path is rejected.
When a bee agent reaches the sink node S, having collected all the necessary routing information, it returns to its source node \( n_s \) along the same recorded path. The bee agent (now a forager) then reveals the path information it has found for the nodes of every potential path: the remaining battery power \( P(n_i) \), the number of hops \( h(P_j) \), and the end-to-end delay \( D(P_j) \) of each path j.
At the source node, the energy that will be consumed is calculated as \( h(P_j) \) (the number of hops) multiplied by the per-node energy consumption on the path given by Eq. (13).
Energy consumption E(R j ) will be calculated by Eq. (16), and the propagation delay D(R j ) is calculated using Eq. (17).
Finally, the yield \( g(R_j) \) of each path is inferred by the foragers according to Eq. (18) to determine the optimal path \( R_0 \). The optimal path is the path with the highest yield, given by Eq. (19) on the basis of the predicted energy consumption, the hop count, and the end-to-end delay; data packets are then routed from the source node \( n_s \) to the sink node S along this path. During transmission, if any problems or faults occur on the optimal path, other potential routing paths can also be adopted, according to their yields.
Strategy for data transmission based on the BCO model
Test and analysis of the results
Settings of simulation parameters
To verify the performance of the algorithm, we used Crossbow MicaZ and Iris hardware and the TOSSIM, OMNeT++, and ATEMU simulators, all of which are routinely used by our research group. Here, we report the simulations performed with OMNeT++. We randomly distribute 100 wireless sensor nodes in a 100 m × 100 m area, with the sink node at the center of the area [19]. The simulation parameters are shown in Table 1. We present only the analysis of the relationship between the number of clusters and the survival time, the analysis of energy efficiency and survival time, and the comparative analysis of energy consumption balance; the other metrics and factors considered are omitted because of the length limit of the paper.
Table 1 Experimental parameters
Analysis of the relationship between the number of clusters and the survival time
According to our analysis, the optimal value of the broadcast radius of a sensor node is \( R_c \) ∈ [16, 54]. Here, we set \( R_c \) = 30 m for the simulation. Our analysis shows that, with \( R_c \) = 30 m, the ideal number of clusters is n = 6. The test of the relationship between the number of clusters and the survival time (Fig. 6) shows that, as the number of clusters increases, the survival time of the network rises rapidly, and when the number of clusters is 6, the survival time reaches a peak of 1732 rounds. This is consistent with the theoretical derivation and thus verifies the correctness of the algorithm in this paper.
Analysis of relationship between the number of clusters and the survival time
Beyond this peak, the survival time begins to decline; when the number of clusters increases to 75, the survival time levels off and is maintained at about 150 rounds. The reason for this result is as follows: when the number of clusters is less than 6, although both the amount of inter-cluster communication and the number of nodes involved in it are reduced, the communication radius between neighboring cluster heads becomes longer, which increases the energy consumption of inter-cluster communication. Conversely, when the number of clusters is more than 6, although the communication radius between neighboring clusters becomes smaller, both the amount of inter-cluster communication and the number of nodes involved in it increase.
When the number of clusters in the entire network is 6, the chain length within clusters accounts for about 17 % of the total number of nodes, and the optimal performance is achieved. When the number of clusters reaches 75, the average number of nodes within a cluster is 1 to 2, and the survival time of the network approaches that obtained under a flat routing algorithm.
Analysis of the energy efficiency and the survival time
The network lifetime and the balance of node energy consumption reflect the overall performance of the network. In the comparison of network performance between PECE (shown as PEECR in Figs. 7 and 8), LEACH, and PEGASIS, the number of clusters is set to 6. As shown in Fig. 7, the first dead node appears at the 343rd round with LEACH and at the 679th round with PEGASIS, whereas with PECE the first dead node does not appear until the 1589th round, an extension of 1246 rounds over LEACH and 910 rounds over PEGASIS. The network lifetime thus increases substantially. All nodes are dead when the LEACH and PEGASIS networks run to the 574th and 1541st rounds, respectively, so the intervals from the first dead node to the last are 233 and 862 rounds. In the PECE network, all nodes are dead at the 1732nd round, so the node-death process lasts only 143 rounds. It can be seen that the node-death process with PECE takes much less time than with LEACH and PEGASIS, and the network energy decreases evenly.
Contrast of network lifetime
Contrast of remaining energy in the network
In addition, with the PECE algorithm (Fig. 8), the network energy runs out at about the 1700th round; PECE maintains energy longer and balances energy consumption better than LEACH and PEGASIS, so the nodes are more energy-efficient.
Comparative analysis of energy consumption balance
Based on our experiments, with the LEACH protocol the first 683 rounds constitute the network lifetime before any dead node appears. Although the energy consumption of each round fluctuates around an average of 0.23 J, the difference in energy consumption between rounds is very large: the highest, 0.34552 J, appears at the 554th round and the lowest, 0.0407 J, at the 533rd round. In other words, the network energy consumption of each data communication cycle fluctuates over a wide range between 0.0407 and 0.34552 J, indicating that the LEACH protocol makes the network energy consumption very non-uniform; after running for only 683 rounds, nodes begin to die one after another. After the 967th round, much residual energy remains in the network, but the network is almost unable to communicate effectively. Relative to the LEACH protocol, the network using PEGASIS has a better balance of energy consumption: the average energy consumption in the first 1214 rounds is 0.13156 J, fluctuating between 0.096673 and 0.214323 J.
Because its balance of energy consumption is better than that of the LEACH protocol, the network using the PEGASIS protocol can serve for a longer time. The PECE routing protocol proposed in this paper shows the best performance in the test: over the first 1649 rounds of network operation, the fluctuations in per-round energy consumption are small, with only a few exceptions, and the values stay close to the average energy consumption of 0.08809 J. The PECE routing protocol, which is based on node degree, the relative distance between nodes, and the residual energy of nodes, performs well among energy-efficient clustering algorithms. Moreover, the strategy for predictive energy-efficient data transmission based on the BCO model saves a considerable amount of energy for data communication and performs excellently.
We have presented a new clustering routing method based on PECE for WSNs in this paper. It consists of two stages: cluster formation and stable data transfer. In the first stage, the method preferentially chooses nodes that have more neighbor nodes and a shorter relative distance to those neighbors to be the cluster heads, while also taking the residual energy of nodes into account so that low-energy nodes are unlikely to become cluster heads. With the clusters formed in this way, not only are the number of cluster heads and the cluster sizes controlled, but the cost of communication between member nodes and cluster head nodes is also reduced, improving the quality of the clusters and thereby the performance of the overall network. In the second stage, using bee colony optimization, we design a new strategy for predictive energy-efficient data transmission. On the basis of the predicted energy consumption, the hop count, and the propagation delay of each route, this strategy gives a precise definition of the route yield and uses two types of bee agent to predict the yield of each routing path from the source node to the sink node. The experimental results show that the PECE algorithm not only significantly reduces the energy consumption of the network but also improves the balance of network energy consumption and efficiently extends the lifetime of the network; compared with traditional clustering routing algorithms, the method has obvious advantages.
DG Zhang et al., Design and implementation of embedded un-interruptible power supply system (EUPSS) for web-based mobile application. Enterprise Inform Syst 6(4), 473–489 (2012)
DG Zhang et al., A novel multicast routing method with minimum transmission for WSN of cloud computing service. Soft Computing (2014). doi:10.1007/s00500-014-1366-x
GQ Zheng et al., The research process of MAC protocol of WSN. Acta Automatic Sinica 34(3), 305–316 (2008)
S Yi et al., PEACH: power-efficient and adaptive clustering hierarchy protocol for wireless sensor networks. Comput. Commun. 30, 2842–2852 (2007)
J Kennedy et al., Particle swarm optimization. Proceeding of IEEE International Conference Neural Networks 4(1), 1942–1947 (1995)
GG Llinas et al., Network and QoS-based selection of complementary services. IEEE Trans. Serv. Comput. 8(1), 79–91 (2015)
DG Zhang, A new approach and system for attentive mobile learning based on seamless migration. Appl. Intell. 36(1), 75–89 (2012)
S Stanislava et al., Cluster head election techniques for coverage preservation in wireless sensor networks. Ad Hoc Netw. 5(7), 955–972 (2009)
I Fahmy, et al., Evaluating Energy Consumption Efficiency of the Zone-Based Routing Protocol. Proceedings of the 46th conference for statistics, computer sciences and operations research, December 2012, Giza, Egypt (Springer, Heidelberg, Germany).
DG Zhang et al., Novel Quick Start (QS) Method for Optimization of TCP. Wirel. Netw 5, 110–119 (2015). doi:10.1007/s11276-015-0968-2
CY Li et al., The study and improvement of LEACH in WSN. Chin J Sensors Actuators 23, 1163–1167 (2010)
S Lindsey et al., PEGASIS: power efficient gathering in sensor information systems. IEEE Aero Conf 3, 1125–1130 (2002)
F Huang et al., Energy consumption balanced WSN routing protocol based on GASA. Chin J Sensors Actuators 4, 586–592 (2009)
YA Ridhawi et al., Decentralized plan-free semantic-based service composition in mobile networks. IEEE Trans. Serv. Comput. 8(1), 17–31 (2015)
D Karaboga et al., A novel clustering approach: artificial bee colony (ABC) algorithm. Appl. Soft Comput. 2011(11), 652–657 (2011)
DG Zhang et al., A new method of constructing topology based on local-world weighted networks for WSN. Comp Math App 64(5), 1044–1055 (2012)
DG Zhang et al., A kind of novel method of service-aware computing for uncertain mobile applications. Math. Comput. Model. 57(3–4), 344–356 (2013)
IK Samaras et al., A modified DPWS protocol stack for 6LoWPAN-based wireless sensor networks. IEEE Trans Ind Inf 9(1), 209–217 (2013)
DG Zhang et al., An energy-balanced routing method based on forward-aware factor for wireless sensor network. IEEE Trans Ind Inf 10(1), 766–773 (2014)
DG Zhang et al., A novel approach to mapped correlation of ID for RFID anti-collision. IEEE Trans Serv Comp 7(4), 741–748 (2014)
M Li et al., A survey on topology control in wireless sensor networks: taxonomy, comparative study, and open issues. Proc. IEEE 101(12), 2538–2557 (2013)
DG Zhang et al., A novel image de-noising method based on spherical coordinates system. EURASIP J Ad Sign Proc 1, 110 (2012). doi:10.1186/1687-6180-2012-110
YJ Yao et al., EDAL: An Energy-Efficient, Delay-Aware, and Lifetime-Balancing Data Collection Protocol for Wireless Sensor Networks. MASS 1(1), 182–190 (2013)
YJ Yao et al., EDAL: An Energy-Efficient, Delay-Aware, and Lifetime-Balancing Data Collection Protocol for Heterogeneous Wireless Sensor Networks. IEEE/ACM Trans Networking (2014). doi:10.1109/TNET.2014.2306592
YY Zeng et al., Real-time data report and task execution in wireless sensor and actuator networks using self-aware mobile actuators. Comput. Commun. 36(9), 988–997 (2013)
DG Zhang et al., A new anti-collision algorithm for RFID tag. Int. J. Commun. Syst. 27(11), 3312–3322 (2014)
DJ He et al., ReTrust: attack-resistant and lightweight trust management for medical sensor networks. IEEE Trans. Inf. Technol. Biomed. 16(4), 623–632 (2012)
DJ He et al., A distributed trust evaluation model and its application scenarios for medical sensor networks. IEEE Trans. Inf. Technol. Biomed. 16(6), 1164–1175 (2012)
YY Zeng et al., Directional routing and scheduling for green vehicular delay tolerant networks. Wirel. Netw 19(2), 161–173 (2013)
YN Song et al., A biology-based algorithm to minimal exposure problem of wireless sensor networks. IEEE Trans. Netw. Serv. Manag. 11(3), 417–430 (2014)
YS Yen et al., Flooding-limited and multi-constrained QoS multicast routing based on the genetic algorithm for MANETs. Math. Comput. Model. 53(11–12), 2238–2250 (2011)
Z Sheng et al., A survey on the ietf protocol suite for the internet of things: standards, challenges, and opportunities. IEEE Wirel. Commun. 20(6), 91–98 (2012)
G Acampora et al., A survey on ambient intelligence in healthcare. Proc. IEEE 101(12), 2470–2494 (2013)
AV Vasilakos et al., Delay Tolerant Networks: Protocols and Applications. CRC Press (Taylor & Francis Group, London, England, 2012)
Y Xiao et al., Tight performance bounds of multihop fair access for MAC protocols in wireless sensor networks and underwater sensor networks. IEEE Trans. Mob. Comput. 11(10), 1538–1554 (2012)
X Liu et al., Compressed data aggregation for energy efficient wireless sensor networks. SECON 1(2), 46–54 (2011)
HJ Cheng et al., Nodes organization for channel assignment with topology preservation in multi-radio wireless mesh networks. Ad Hoc Netw. 10(5), 760–773 (2012)
S Sengupta et al., An evolutionary multiobjective sleep-scheduling scheme for differentiated coverage in wireless sensor networks. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 42(6), 1093–1102 (2012)
GY Wei et al., Prediction-based data aggregation in wireless sensor networks: combining grey model and Kalman filter. Comput. Commun. 34(6), 793–802 (2011)
M Chen et al., Body area networks: a survey. Mobile Net App 16(2), 171–193 (2011)
XF Wang et al., A survey of green mobile networks: opportunities and challenges. Mobile Net App 17(1), 4–20 (2012)
P Li et al., CodePipe: an opportunistic feeding and routing protocol for reliable multicast with pipelined network coding. INFOCOM 1(1), 100–108 (2012)
L Liu et al., Physarum optimization: a biology-inspired algorithm for the Steiner tree problem in networks. IEEE Trans. Comput. 64(3), 819–832 (2015)
Y Liu et al., Multi-layer clustering routing algorithm for wireless vehicular sensor networks. IET Commun. 4(7), 810–816 (2012)
C Busch et al., Approximating congestion + dilation in networks via "quality of routing" games. IEEE Trans Comp 61(9), 1270–1283 (2012)
Lisa: inferring transcriptional regulators through integrative modeling of public chromatin accessibility and ChIP-seq data
Qian Qin1,2,
Jingyu Fan1,
Rongbin Zheng1,
Changxin Wan1,
Shenglin Mei1,
Qiu Wu1,
Hanfei Sun1,
Myles Brown5,6,
Jing Zhang3,
Clifford A. Meyer (ORCID: orcid.org/0000-0002-8321-2839)4,6 &
X. Shirley Liu4,6
Genome Biology volume 21, Article number: 32 (2020)
We developed Lisa (http://lisa.cistrome.org/) to predict the transcriptional regulators (TRs) of differentially expressed or co-expressed gene sets. Based on the input gene sets, Lisa first uses histone mark ChIP-seq and chromatin accessibility profiles to construct a chromatin model related to the regulation of these genes. Using TR ChIP-seq peaks or imputed TR binding sites, Lisa probes the chromatin models using in silico deletion to find the most relevant TRs. Applied to gene sets derived from targeted TF perturbation experiments, Lisa boosted the performance of imputed TR cistromes and outperformed alternative methods in identifying the perturbed TRs.
Transcriptional regulators (TRs), which include transcription factors (TFs) and chromatin regulators (CRs), play essential roles in controlling normal biological processes and are frequently implicated in disease [1,2,3,4]. The genomic landscape of TF binding sites and histone modifications collectively shape the transcriptional regulatory environments of genes [5,6,7,8]. ChIP-seq has been widely used to map the genome-wide set of cis-elements bound by trans-acting factors such as TFs and CRs, which we henceforth refer to as "cistromes" [9]. There are approximately 1500 transcription factors in humans and mice [10, 11], regulating a wide variety of biological processes in constitutive or cell-type-specific manners, and tens of thousands of ChIP-seq and DNase-seq experiments have been performed in humans and mice. We previously developed the Cistrome Data Browser (DB) [12], a collection of uniformly processed TF ChIP-seq (~ 11,000) and chromatin profiles (~ 12,000 histone mark ChIP-seq and DNase-seq) in humans and mice.
The question we address in this paper is how to effectively use these data to infer the TRs that regulate a query gene set derived from differential or correlated gene expression analyses in humans or mice. TR ChIP-seq data, when available, is the most accurate data type representing TR binding. Even with large contributions from projects such as ENCODE [13], however, ChIP-seq coverage of TRs and cell types remains sparse owing to the limited availability of specific antibodies. Although advances have been made in TR cistrome mapping with the introduction of technologies such as CETCh-seq [14] and CUT & RUN [15], the difficulties in acquiring TR ChIP-seq data for new factors limit the TR-by-cell-type coverage of high-quality TR ChIP-seq data. Chromatin accessibility data, including DNase-seq [16, 17] and ATAC-seq [18], is available for hundreds of cell types and provides maps of the regions in which TRs are likely to be bound in the represented cell types. The H3K27ac histone modification, associated with active enhancers and with the promoters of actively transcribed genes, has been widely profiled using ChIP-seq in many cell types [5, 19]. When TF ChIP-seq data is not available, TF binding motifs, used in combination with chromatin accessibility data or H3K27ac ChIP-seq data, might be used to infer TF binding sites [7, 20, 21]. Machine learning approaches that transfer models learned from TF ChIP-seq peaks, motifs, and DNase-seq data between cell types are promising ways of imputing TF cistromes, although imputation of TF binding sites on a large scale remains to be implemented [22,23,24,25,26,27]. Computationally imputed TF binding data is expected to represent TF binding sites less accurately than TF ChIP-seq experimental data, so we sought to develop a TR prediction method that could use imputed TF cistromes effectively, along with ChIP-seq-derived ones.
We previously developed MARGE to characterize the regulatory association between H3K27ac ChIP-seq and differential gene expression in terms of a regulatory potential (RP) model [28]. The RP model provides a summary statistic of the cis-regulatory influence of the many cis-regulatory elements that might influence a gene's transcription rate. MARGE builds a classifier based on H3K27ac ChIP-seq RPs from the Cistrome DB to discriminate the genes in a query differentially expressed gene set from a set of background genes. One of the functions of MARGE is to predict the cis-regulatory elements (i.e., genomic intervals) that regulate a gene set. BART [29] extends MARGE, to predict the TRs that regulate the query gene set through an analysis of the predicted cis-regulatory elements. Here, we describe Lisa (epigenetic Landscape In Silico deletion Analysis and the second descendent of MARGE), a more accurate method of integrating H3K27ac ChIP-seq and DNase-seq with TR ChIP-seq or imputed TR binding sites to predict the TRs that regulate a query gene set. Unlike BART, Lisa does not carry out an enrichment analysis of the cis-regulatory elements predicted by MARGE. Instead, Lisa analyses the relationship between TR binding and the gene set using RP models and RP model perturbations. We assessed the performance of Lisa and other TR identification methods, BART [29], i-cisTarget [30], and Enrichr [31], using differentially expressed gene sets derived from experiments in which the activities of specific TFs were perturbed by knockdown, knockout, overexpression, stimulation, or inhibition.
Regulatory TR prediction based on Cistrome DB ChIP-seq peaks
High-quality TR ChIP-seq data, when available, accurately characterizes genome-wide TR binding sites, which can be used to infer the regulated genes in particular cell types. Estimating the effect of TR binding on gene expression is not trivial because: (1) there is no accurate map linking enhancers to the genes they regulate [32]; (2) multiple enhancers can regulate the same gene [33], and a single enhancer can regulate multiple genes [34]; and (3) not all TR binding sites are functional enhancers [19]. A model is therefore needed to quantify the likelihood of a gene being regulated by a TR cistrome. The "peak-RP" model [35, 36] is based on TR ChIP-seq peaks, serving as a proxy for TR binding sites, without the use of DNase-seq or H3K27ac ChIP-seq data. In the peak-RP model (Fig. 1a), the effect a TR binding site has on the expression of a gene is assumed to decay exponentially with the genomic distance between the TR binding site and the transcription start site (TSS), and the contribution of multiple binding sites is assumed to be additive [36]. Accounting for the number of TR binding sites and for the distances of these sites from the TSS has been shown to be more accurate than alternative TR target assignment methods [37]. While it is possible that enhancers could modulate each other in non-additive ways [32], data on these types of behavior are too scarce to incorporate in a TR prediction model.
Illustration of the Lisa framework. a The peak-RP score models the effect of TR binding sites on the regulation of a gene. TR binding sites are binary values, and peaks nearer to the gene's TSS have a greater influence than ones further away. b The chrom-RP score summarizes the effect of the DNase-seq or H3K27ac chromatin environment on a gene. The chrom-RP score is based on a continuous rather than a binary signal quantification. c Overview of the Lisa framework. (1) H3K27ac ChIP-seq or DNase-seq data from the Cistrome DB is summarized using the chrom-RP score for each gene. (2) H3K27ac ChIP-seq or DNase-seq samples that can discriminate between the query gene set and the background gene set are selected, and the regression parameters define a chrom-RP model. (3) Each TR cistrome from the Cistrome DB is evaluated as a putative regulator of the query gene set through in silico deletion, which involves the elimination of H3K27ac ChIP-seq or DNase-seq signal at the binding sites of the putative regulator. (4) The chrom-RP model, based on in silico deletion signal, is compared to the model without deletion for each gene in the query and background gene sets. A p value is calculated using the Wilcoxon rank test comparison of the query and background ΔRPs. (5) The peak-RP based on TR ChIP-seq peaks is calculated for the putative regulatory cistrome, and the statistical significance of peak-RP distributions from the query and background gene sets is calculated. (6) p values from the H3K27ac ChIP-seq, DNase-seq, and peak-RP analysis are combined using the Cauchy combination test. TR cistromes are ranked based on the combined p value
We use the peak-RP model to identify TFs that are likely regulators of a target gene set by searching for Cistrome DB [12] cistromes that produce higher peak-RPs for the query gene set than for a set of background genes (Additional file 1: Figure S1, Additional file 2: Table S1). Statistical significance is calculated using the one-sided Wilcoxon rank-sum test statistic comparing the peak-RPs for the query gene set with the background. The TRs with the most significant p values are considered to be the candidate regulators. Lisa uses TR ChIP-seq within the peak-RP model, along with the chromatin landscape models described below to infer the TRs of a gene set.
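To make the peak-RP ranking concrete, here is a minimal Python sketch of the idea (not Lisa's implementation); the coordinates, the tiny gene sets, and the half-decay interpretation of the decay parameter are illustrative assumptions.

```python
import numpy as np
from scipy.stats import mannwhitneyu  # Wilcoxon rank-sum / Mann-Whitney U test

def peak_rp(tss, peak_summits, L=100_000, delta=10_000):
    """Sum exponentially decaying contributions of peak summits within +/-L of the TSS."""
    d = np.abs(np.asarray(peak_summits) - tss)
    d = d[d <= L] / L                        # normalized distances inside the window
    mu = np.log(3) * L / delta               # assumption: weight falls to 1/2 at the decay distance
    return float(np.sum(2 * np.exp(-mu * d) / (1 + np.exp(-mu * d))))

# Hypothetical peak summits and TSSs (bp); real inputs come from Cistrome DB peaks and RefSeq genes
peaks = np.array([1_005_000, 1_040_000, 1_220_000, 2_500_000])
query_rp = [peak_rp(t, peaks) for t in (1_000_000, 1_200_000)]                  # query genes
background_rp = [peak_rp(t, peaks) for t in (2_000_000, 3_000_000, 4_000_000)]  # background genes

# One-sided test: are peak-RPs of the query genes larger than those of the background genes?
stat, p = mannwhitneyu(query_rp, background_rp, alternative="greater")
print(f"peak-RP p value = {p:.3g}")
```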
Regulatory TR prediction using a chromatin landscape model
While TR ChIP-seq data provides accurate information about TR cistromes in specific cell types, the Cistrome DB TR by cell type coverage is skewed towards a few TRs, such as CTCF, which are represented in many cell types, and towards cell types such as K562 (Additional file 1: Figure S1b-c), in which many TRs have been characterized (Additional file 1: Figure S1d). H3K27ac ChIP-seq [19] and DNase-seq [16], available in a large number and variety of cell types, can be used to infer cell-type-specific regulatory regions. These types of data could enhance the use of TR ChIP-seq data as well as imputed TF binding data, which may not accurately represent TF binding sites in different cell contexts.
To boost the performance of TF ChIP-seq or imputed TF binding data in the identification of regulatory TRs, we developed Lisa chromatin landscape models, which use H3K27ac ChIP-seq and DNase-seq chromatin profiles (Fig. 1b, Additional file 3: Table S2; see the "Methods" section) to model the regulatory importance of different genomic loci. As differential gene expression experiments are not always carried out in parallel with chromatin profiling experiments, Lisa does not require the corresponding user-generated chromatin profiles but instead uses the DNase-seq and H3K27ac ChIP-seq data that is available in the Cistrome DB to help identify cis-regulatory elements controlling a differential expression gene set. To this end, Lisa models chromatin landscapes through chromatin RPs (chrom-RPs, Fig. 1b), which are defined in a similar way to the peak-RP with one small difference: genome-wide read signals instead of peak calls are used in the calculation of the chrom-RP [28]. Changes in H3K27ac ChIP-seq and DNase-seq associated with cell state perturbations are often a matter of degree rather than switch-like; therefore, we base the chrom-RP on reads rather than peaks. The chrom-RP is pre-calculated for each gene (Fig. 1c (1)) and for each H3K27ac ChIP-seq/DNase-seq profile in the Cistrome DB (Additional file 1: Figure S1a, Additional file 3: Table S2). These chrom-RPs quantify the cis-regulatory activities that influence each gene under cell-type-specific conditions.
Given the query gene set, Lisa identifies a small number of Cistrome DB DNase-seq and H3K27ac ChIP-seq samples that are informative about the regulation of these genes. Lisa does this by using the pre-calculated H3K27ac/DNase-seq chrom-RPs to discriminate between the query gene set and a background gene set. Using L1-regularized logistic regression, Lisa assigns a weight to each selected sample so that the weighted sum of the chrom-RPs on the genes best separates the query and the background gene sets (Fig. 1c (2)). This step is carried out separately for H3K27ac ChIP-seq and DNase-seq, yielding a chrom-RP model based on H3K27ac ChIP-seq and another model based on DNase-seq.
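A minimal sketch of this sample-selection step with scikit-learn, assuming a precomputed genes-by-profiles chrom-RP matrix; the penalty value, matrix, and labels below are placeholders rather than Lisa's tuned settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(3500, 200))   # normalized chrom-RPs: 3500 genes x 200 chromatin profiles (placeholder)
y = np.zeros(3500, dtype=int)
y[:500] = 1                        # 1 = query gene, 0 = background gene

# L1 regularization zeroes out most profile weights, keeping a small informative subset
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X, y)
selected = np.flatnonzero(model.coef_[0])
print(f"{selected.size} chromatin profiles selected")

# Weighted sum of chrom-RPs over the selected profiles = model-RP for each gene
model_rp = X[:, selected] @ model.coef_[0, selected]
```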
Next, by a process of in silico deletion (ISD), Lisa evaluates the effect deleting each TR cistrome has on the chromatin landscape model (Fig. 1c (3)). ISD of a TR cistrome involves setting DNase-seq or H3K27ac ChIP-seq chromatin signal to 0 in the 1-kb intervals containing the peaks in that cistrome and evaluating the effect on the predictions made by the chromatin landscape models. The difference of the model scores before ISD and after ISD quantifies the impact that the deleted TR cistrome is predicted to have on the query and background gene sets. Lisa does not make a prediction of cis-regulatory elements, the approach taken by MARGE and BART. Instead, Lisa probes the effects of deleting putative regulatory TR cistromes on the chrom-RP model. Whereas the chrom-RP integrates data over 200-kb intervals, the scale of individual cis-regulatory elements is of the order of 1 kb. The ISD approach mitigates the difficulties in transferring information contained in the chrom-RP model from the chrom-RP (200 kb) scale to the cis-regulatory element (1 kb) scale.
Finally, to prioritize the candidate TRs, Lisa compares the predicted effects on the query and background gene sets using the one-sided Wilcoxon rank-sum test (Fig. 1c (4)). A one-sided test is used because Lisa assumes that the in silico deletion of a true regulatory factor will decrease, not increase, the model's ability to discriminate between query and background gene sets. To utilize the power of predictions based on H3K27ac-ChIP-seq and DNase-seq ISD models, and TF ChIP-seq peak-only models (Fig. 1c (5)), the results are combined using the Cauchy combination test [38] (Fig. 1c (6)). Whereas MARGE [28] predicts cis-regulatory elements (but does not analyze TRs), and BART [29] carries out an enrichment analysis of predicted cis-elements to discover TRs, Lisa uses the chromatin landscape model in a different way. In combination with ChIP-seq-derived or computationally imputed TR binding, Lisa probes the effects of TRs on the chromatin RP models of query and background gene sets.
Demonstration of chromatin landscape models in a GATA6 knockdown study
We demonstrate Lisa chromatin landscapes and in silico deletion using a query gene set defined as the downregulated genes in a GATA6 knockdown experiment in the KATO-III stomach cancer cell line [39] (Fig. 2). Lisa identifies DNase-seq and H3K27ac ChIP-seq chromatin landscape models (Fig. 2a, Fig. 1c (2)), which include several gastro-intestinal samples (Additional file 1: Figure S2b,d) whose chrom-RPs can discriminate between the query and background gene sets (Additional file 1: Figure S2a, DNase-seq ROC AUC = 0.816, Additional file 1: Figure S2c, H3K27ac ROC AUC = 0.821). In silico deletion (Fig. 1c (3)) of GATA6 binding sites produces larger DNase-seq and H3K27ac ∆RPs (DNase ∆RP, 1.05; H3K27ac ∆RP, 0.25) for an example downregulated gene, LINC01133 [40], than for a background gene, ZC3H12A (DNase ∆RP, 0.06; H3K27ac ∆RP, 0.01) (Fig. 2b). In silico deletion of CTCF binding sites, in contrast, has a smaller effect on the chromatin landscapes surrounding LINC01133 (DNase ∆RP, 0.02; H3K27ac ∆RP, 0.01), resulting in ∆RPs that are more similar to the ∆RPs for ZC3H12A (Fig. 2b) (DNase ∆RP, 0.004; H3K27ac ∆RP, 0.001). Statistical analysis is carried out comparing all the query gene ∆RPs with all the background gene ∆RPs (Fig. 1c (4)), producing significant p values for GATA4 (DNase p < 10⁻¹⁰, H3K27ac p < 10⁻⁵) and GATA6 (DNase p < 10⁻¹³, H3K27ac p < 10⁻⁷). After this analysis is conducted for all TR ChIP-seq samples in the Cistrome DB and the results are combined and compared, GATA6 and GATA4 ChIP-seq from intestinal and gastric tissues have the most significant p values (Fig. 2c, d).
A downregulated gene set from a GATA6 knockdown experiment in gastric cancer KATO-III cells is used as a case study to demonstrate the Lisa framework. a Heatmap of regulatory potentials used to discriminate downregulated genes from non-regulated background genes. b In silico deletion analysis using GATA6 and CTCF cistromes to probe chromatin landscape models near an illustrative downregulated gene, LINC01133, and a background gene, ZC3H12A. Only the H3K27ac ChIP-seq and DNase-seq chromatin profiles with the largest positive coefficients are shown, although other samples contribute to the respective H3K27ac ChIP-seq and DNase-seq chromatin models. c Comparison of ΔRPs indicates GATA6 and GATA4 cistromes have a large impact on the chromatin landscapes near downregulated genes and are therefore likely to be regulators of the query gene set. CTCF does not influence the chromatin landscape of the downregulated genes and is not likely to regulate the query gene set. d The rank statistics for the Lisa analysis of the downregulated gene set in the GATA6 knockdown experiment were combined to get overall TR ranks. The top eight and the bottom eight TRs for all TR ChIP-seq samples are shown
Lisa identification of regulatory TF ChIP-seq sample clusters
To investigate whether a TF ChIP-seq cistrome derived from one cell type can be informative about other cell types, we first clustered all the human TR cistromes in the Cistrome DB based on the pairwise Pearson correlation of peak-RP scores as a heatmap (Fig. 3). We then applied Lisa to differentially expressed gene sets defined by perturbations of individual TFs and examined the TR cistromes predicted to be the key regulators of these gene sets. In the analysis of upregulated genes on androgen receptor (AR) activation in the LNCaP prostate cancer cell line, Lisa identified a tight cluster of significant cistromes for AR and its known collaborator FOXA1 (Fig. 3 (a)). All samples in this cluster were derived from prostate cancer cell lines. In the analysis of the GATA6 knockdown in the gastric cancer cell line (KATO-III), Lisa found the GATA6 and FOXA2 cistromes in the stomach and colon samples to be the most significant. FOXA2 is an important pioneer TF which has been reported to collaborate with GATA6 in gut development to regulate Wnt6 [41] and Wnt7b [42] (Fig. 3 (b)). The identification of GATA6 cistromes in colon cancer cell lines, in addition to gastric cancer cell lines, shows that cistromes derived from cell types that are of related lineages can be used to inform the identification of the relevant regulators, even if the cell types are not the same. In the third example involving glucocorticoid receptor (GR) activation in the lung cancer cell line A549, Lisa correctly identified GR in A549 as a likely regulator and also identified GR in a different cell type HeLa (Fig. 3 (c)). AR, a member of the same nuclear receptor family as GR, is also implicated by Lisa even though the AR cistrome samples do not cluster with GR cistrome samples and have less statistical significance.
Lisa predicts key transcriptional regulators and assigns significance to each Cistrome DB cistrome. The large heatmap shows the hierarchical clustering of 8471 human Cistrome DB ChIP-seq cistromes based on peak-RP, with color representing Pearson correlation coefficients between peak-RPs. The three bars to the left of the heatmap display Lisa significance scores for differentially expressed genes sets derived from GR activation in the A549 cell line (upregulated), GATA6 knockdown in gastric cancer (downregulated), and AR activation in the LNCaP cell line (upregulated). Small heatmaps show details of the global heatmap relevant to (a) AR activation, (b) GATA6 knockdown, and (c) GR activation gene sets. In each case, the most significant cistromes are derived from the same cell type or lineage
We carried out an analysis of the effects of removing ChIP-seq and DNase-seq data on Lisa's accuracy. In particular, we tested Lisa's performance on three upregulated gene sets: (1) GR-activated genes in breast cancer (MCF7), (2) GR-activated genes in lung cancer (A549), and (3) estrogen receptor (ER)-activated genes in MCF7 (Additional file 4: Table S3). In these analyses, we assessed the effect of removing all relevant cell-line-specific (MCF7 or A549) H3K27ac ChIP-seq and DNase-seq data, or cell-line-specific TR ChIP-seq data (ER or GR). We also removed cell-line-specific TR ChIP-seq data together with H3K27ac ChIP-seq and DNase-seq data. We repeated the same analysis, removing similar data on the basis of tissue (breast and lung) instead of cell line (MCF7 and A549). When MCF7 ER ChIP-seq data are excluded, an ER sample from another breast cancer cell line (H3396) predicts the importance of ER (rank 6) as a regulator of the estrogen-activated gene set. When all ER breast ChIP-seq samples are excluded, Lisa can still identify ER (rank 18) from ER ChIP-seq in the VCaP prostate cancer cell line. For the GR-activated gene set in MCF7, when GR ChIP-seq data is unavailable in MCF7, Lisa can identify GR as a key regulator (rank 2) using GR ChIP-seq from the lung (A549). For the GR-activated gene set in the lung, Lisa identified GR as the key regulator (rank 1) using GR ChIP-seq data from the breast (MDA-MB-231). Together, these observations indicate that although TRs often bind in cell-type-specific ways, ChIP-seq-derived TR cistromes can be informative about the gene sets that TRs regulate in some other cell types.
Lisa identification of TF-associated cofactors in addition to TFs
To illustrate Lisa's capacity to find cofactors that interact with the regulatory TFs, we examined the Lisa analyses of four differentially expressed gene sets derived from experiments involving the activation of GR [43] and the knockdown/out of BCL6 [44], MYC [45], and SOX2 [46]. Lisa analysis of GR activation in lung cancer ranked GR itself as the most significant TR for the upregulated gene sets (Fig. 4a) and highly ranked pioneer TFs FOSL2 and CEBPB, which were downregulated after GR activation (Fig. 3c). BCL6, a predominantly repressive TF, is a driver of diffuse large B cell lymphoma (DLBCL) [47]. Lisa analysis of the upregulated genes in a BCL6 knockdown experiment in a DLBCL cell line ranked BCL6 as the most significant TR for this gene set (Fig. 4b). Lisa also identified NCOR1 and NCOR2, which are transcriptional BCL6 corepressors involved in the regulation of the germinal center [48,49,50]. SPI1, which recruits BCL6 [51], and BCOR, another BCL6 corepressor [52], were ranked among the top TRs for the upregulated gene set. In a MYC knockdown experiment in medulloblastoma, MYC and its dimerization partner, MAX [53], were among the top predicted regulators of the downregulated genes (Fig. 4c). The histone methyltransferase, KDM2B, known to physically interact with MYC and to augment MYC-regulated transcription [54] was also detected among the top regulators. In the SOX2 knockout experiment [2], NANOG, SOX2, and POU5F1, the key regulators of pluripotency, were the top predicted regulators of the downregulated genes (Fig. 4d). Lisa also discovered a similar set of TRs for the gene set derived from a POU5F1 knockdown experiment in embryonic stem cells (Additional file 1: Figure S3,4a). In addition, β-catenin (CTNNB1), which interacts with SOX2 and is oncogenic in SOX2+ cells [55], also ranked high for the downregulated genes. The predicted regulators of the upregulated genes in this experiment include FOXA1 and EOMES. FOXA1 is involved in early embryonic development [56] and has been observed to repress NANOG directly [57]. FOXA1 has been shown through co-immunoprecipitation to physically interact with SOX2 [58]. SOX2, known to bind to an enhancer regulating EOMES in human ESCs, when knocked down triggers EOMES expression and induces endoderm and trophectoderm differentiation [59]. Thus, in many cases, the known interactors are highly ranked along with the target activator or repressor. This suggests that even though the available TF ChIP-seq data in different cell types are sparse (Additional file 1: Figure S1d), Lisa can provide insights on possible regulatory TFs since transcriptional machinery tends to be organized in modules of interacting factors [60] (Additional file 1: Figure S4d).
Lisa can accurately identify key transcriptional regulators and co-regulators using Cistrome DB cistromes. Lisa analyses of up- and downregulated gene sets from a GR overexpression, b BCL6 knockdown, c MYC knockdown, and d SOX2 knockout experiments. The scatter plots show negative log10 Lisa p values of 1316 unique transcriptional regulators for up- and downregulated gene sets. Colors indicate log2 fold changes of the TF gene expression between treatment and control conditions in the gene expression experiments. Dots outlined with a circle denote transcriptional regulators that physically interact with the TF perturbed in the experiment, which is marked with a cross
Systematic evaluation of regulator prediction
To systematically evaluate Lisa, we compiled a benchmark panel of 122 differentially expressed gene sets from 61 studies involving the knockdown, knockout, activation, or overexpression of 27 unique human target TFs. In addition, we compiled 112 differentially expressed gene sets derived from 56 studies with 25 unique TF perturbations in mice (Additional file 5: Table S4, see "galleries" at http://lisa.cistrome.org). The full Lisa model was separately applied to the upregulated and downregulated gene sets in each experiment. We also carried out analyses of these gene sets using subcomponents of Lisa: the peak-RP method, as well as H3K27ac ChIP-seq- and DNase-seq-assisted ISD analyses. The putative regulatory cistromes were defined using either ChIP-seq peaks or from TF motif occurrence in the inferred chromatin models. The results allowed us to compare the effectiveness of DNase-seq and H3K27ac ChIP-seq in scenarios where the TF cistromes are well estimated (by ChIP-seq) or less well estimated (by motif). We measured the performance based on their ranking of the perturbed target TF (Fig. 5, Additional file 1: Figure S5).
Systematic evaluation of regulator prediction performance for humans using Cistrome DB ChIP-seq and DNA motif-derived cistromes. a Heatmap showing Lisa's performance in the analysis of human TF perturbation experiments. Each column represents a TF activation/overexpression or knockdown/out experiment with similar experiment types grouped together. Rows represent the methods based on cistromes from TR ChIP-seq data or imputed from motifs. The upper left red triangles represent the rank of the target TFs based on the analysis of the upregulated gene sets; the lower right blue triangles represent the analysis of downregulated gene sets. The heatmap includes non-redundant human experiments for the same TF. See Additional file 1: Figure S5 for the complete list of human and mouse experiments. b Boxplot showing the target TF rankings comparing Lisa ChIP-seq-based methods and the baseline model based on TF peak counts in gene promoter regions to analyze up- and downregulated gene sets in overexpression/activation (OE) and knockdown/out experiments (KD/KO). c Boxplot showing target TF rankings using Lisa motif-based methods and the baseline model based on motif hits in promoter regions
We compared the performance of methods that use TF ChIP-seq data and TF motifs, on up- and downregulated gene sets, and on overexpression/activation and knockdown/knockout samples (Fig. 5a). In overexpression studies, the prediction performance of all methods tended to be better for the upregulated gene sets than for the downregulated ones. The reverse is evident in the knockout and knockdown studies, for which the prediction performances are better for the downregulated gene sets (Fig. 5b, c). This suggests that most of the TFs included in the study have a predominant activating role in the regulation of their target genes, under the conditions of the gene expression experiments, allowing these TFs to be more readily identified with the corresponding direction of primary gene expression response. Similar performance patterns were observed in the mouse benchmark datasets (Additional file 1: Figure S5). The performance of Lisa using ISD of TR ChIP-seq peaks from chromatin landscapes was similar to that of the TR ChIP-seq peak-RP method, but both outperformed motif-based methods by large margins.
To determine whether differences between the up- and downregulated gene sets could be explained by direct or indirect modes of TR recruitment, we studied two experiments involving ER and GR activation in greater detail. We defined "direct" ER and GR binding sites as ER/GR ChIP-seq peaks on genomic intervals containing the cognate DNA sequence elements and "indirect" ER and GR binding sites as ER/GR ChIP-seq peaks without the sequence elements. Comparing direct and indirect binding sites in the respective ER and GR activation experiments (Additional file 1: Figure S6), we found that the upregulated gene sets were more significantly associated with the direct binding sites (ER p value 1.5 × 10⁻¹⁵, GR p value 1.5 × 10⁻¹⁸) than with the indirect ones (ER p value 3.8 × 10⁻⁴, GR p value 1.4 × 10⁻¹²). The downregulated gene sets were more significantly associated with the indirect binding sites (ER p value 1.5 × 10⁻¹⁵, GR p value 1.5 × 10⁻¹¹) than with the direct ones (ER p value 4.6 × 10⁻², GR p value 3.0 × 10⁻³).
In some cases, the perturbation of a TR may trigger stress, immune, or cell cycle checkpoint responses that are not directly related to the initial perturbation. In the Lisa analysis of upregulated genes after 24 h of estradiol stimulation (GSE26834), for example, E2F4 is the top-ranked TR, followed by ER. Estrogen is known to stimulate the proliferation of breast cancer cells via a pathway involving E2F4, a key regulator of the G1/S cell cycle checkpoint [61]. In this case, Lisa might be correctly detecting a secondary response to the primary TR perturbation.
Comparison of Lisa with published methods
We next compared Lisa with other approaches, including BART [29], i-cisTarget [30], and Enrichr [31], which can use either TR ChIP-seq data or motifs. We also included a baseline method that ranks TRs by comparing query and background gene sets based on the number of TR binding sites within 5 kb of the TSS. Lisa outperformed BART, i-cisTarget, and Enrichr in terms of the percentage of experiments in which the target TR was identified within the top ten, using either TF binding sites from ChIP-seq data or motif hits (Fig. 6a, b). Lisa uses a model based on chromatin data to give more weight to the loci that are more likely to influence the expression of the query gene set. In this way, Lisa improves the performance of TR inference with noisy cistrome profiles such as those imputed from DNA sequence motifs. In addition to being more accurate than other methods in terms of TR prediction, the Lisa web server (lisa.cistrome.org) has several unique features that allow investigators to explore relevant ChIP-seq data in ways that are not available in other applications.
Lisa's performance surpasses published models. Lisa's performance is compared with alternative published methods for a upregulated genes in overexpression/activation experiments and b downregulated genes in knockdown/out experiments
Lisa web site and gallery of Lisa's benchmark data
The Lisa web site (lisa.cistrome.org) displays two tables of results for each query gene set. The first summarizes the Lisa analysis based on TR ChIP-seq data, and the second displays the Lisa analysis of TF binding sites imputed from DNA binding motifs. The ChIP-seq data table displays up to five ChIP-seq samples for each TR. Users can sort results by p value and inspect metadata and quality control statistics for each of the ChIP-seq samples to understand whether the predictive samples may be derived from particular cell types or experimental conditions. Lisa provides quality control metrics, metadata, publication, and read data repository links for the ChIP-seq data of putative regulatory TRs. Through Lisa, the ChIP-seq signal tracks can be viewed on the WashU Epigenome Browser [62]. Although the motif imputation-based analysis tends to be less accurate than the ChIP-seq based analysis, motifs can indicate roles for regulatory TRs for which ChIP-seq data is not widely available. Lisa's analysis of all the benchmark gene sets is also viewable on the Lisa web site. Users can explore these analyses to understand the "typical" results of the analysis. Robust methods combined with visualization and data exploration features make Lisa a valuable tool for analyzing gene regulation in humans and mice.
In this study, we describe an approach for using publicly available ChIP-seq and DNase-seq data to identify the regulators of differentially expressed gene sets in humans and mice. On the basis of a series of benchmarks, we demonstrate the effectiveness of our method and report recurrent patterns in the TRs predicted by these methods. We find the regulators of the upregulated genes and the downregulated ones are often different from each other; therefore, in any analysis of differential gene expression, up- and downregulated gene sets ought to be distinguished. Our results show that many TFs have a preferred directionality of effect, indicative of a predominant repressive or activating function. It is well known that many TFs can recruit both activating and repressive complexes [63], so the observed direction may be related to the stoichiometry and affinity of the activating or repressive cofactors. We also observe differences between ChIP-seq-based analysis and motif-based ones, suggesting differences in the TF activity depending on whether a TF interacts directly with DNA or whether it is recruited via another TF [64]. When a TF is recruited by another TF, it is likely that the enhancer has been already established by other TFs and protein complexes. Thus, the co-binding enhancer information of multiple TFs allows Lisa to identify both the DNA-bound TFs and their partners which might not directly bind DNA.
Lisa's accuracy in predicting the regulatory TRs of a gene set depends on the perturbation used in the production of the differential gene expression data; the quality of the gene expression data; the availability and quality of the DNase-seq, H3K27ac, and TR ChIP-seq data sets; the degree to which binding is dependent on a DNA sequence motif; and the validity of the model assumptions. Although we evaluate Lisa using differential gene expression data associated with a TR perturbation, the perturbed TR might not be the main regulator of the gene set. For example, perturbation of a TR may trigger a stress response [65] or secondary transcriptional effects that are not directly related to the primary TR [66].
The modeling approach used in Lisa facilitates the prediction of regulatory TRs using available ChIP-seq and DNase-seq data. DNase-seq and H3K27ac ChIP-seq are available in a broad variety of cell types, and these data are informative about cis-regulatory events mediated by many TRs. Although H3K27ac is considered to be a histone modification associated with gene activation, Lisa can still identify TRs, such as BCL6 and EZH2, with predominantly repressive functions. Although Lisa uses the correlation between H3K27ac or chromatin accessibility and gene expression to predict regulatory TRs, we do not assume that H3K27ac or chromatin accessibility causes the transcriptional changes. Other genomics data types that are predictive of general cis-regulatory activity, when available in quantity, variety, and quality, might improve Lisa's performance. More importantly, high-quality TR-specific binding data, generated by ChIP-seq or alternative technologies, like CETCh-seq [14] or CUT & RUN [15], will be needed to improve Lisa's accuracy in predicting TRs that are not yet well represented in Cistrome DB. TR imputation methods might fill in some gaps in TR binding data; however, families of TRs such as homeobox and forkhead factors, which have similar DNA-binding motifs, can be hard to discriminate based on DNA sequence analysis.
Although Lisa aims to identify the regulators of any differentially expressed gene set in humans or mice, no matter the contrast, in practice, the query gene sets should be derived from biologically meaningful differential expression or co-regulation analyses. In this study, we based the method evaluation on data from available TR perturbation experiments, which are biased towards well-studied systems. For this reason, the reliability of methods based on TR ChIP-seq data may be overestimated relative to imputation-based methods because the available TR ChIP-seq data tends to be derived from similar cell types and for the same factors used in the gene perturbation experiments. When the relevant cell-type-specific TR ChIP-seq data is available, the performance of the peak-RP method and ISD methods are similar, but when TR ChIP-seq data is not available, methods based on imputed TR cistromes are obligatory. The value of imputed cistromes relative to ChIP-seq derived ones will depend on the quantity, variety, and quality of available ChIP-seq data; the accuracy of the imputed cistromes; the degree of commonality of the genes that are regulated by the same TR in different cell types; and the number of TRs recognizing similar DNA sequence elements. Lisa provides invaluable information about the regulation of gene sets derived from both bulk and single-cell expression profiles [67] and will become more accurate over time with greater coverage of TF ChIP-seq augmented by computationally imputed TF cistromes.
Preprocessing of chromatin profiles
Using the BigWig format signal tracks of human and mouse H3K27ac ChIP-seq and DNase-seq from Cistrome DB, we precomputed the chromatin profile regulatory potential (chrom-RP) of each RefSeq gene and also summarized the signal in 1-kb windows genome-wide. The chrom-RP for gene k in sample j is defined as \( R_{jk} = \sum_{i \in [t_k - L,\, t_k + L]} w_i\, s_{ji} \) (as defined in the MARGE algorithm [28]). L is set to 100 kb, and \( w_i \) is a weight representing the regulatory influence of a locus at position i on the TSS of gene k at genomic position \( t_k \): \( w_i = \frac{2e^{-\mu d}}{1 + e^{-\mu d}} \), where \( d = |i - t_k|/L \) and i stands for the ith nucleotide position within the \( [-L, L] \) genomic interval centered on the TSS at \( t_k \). \( s_{ji} \) is the signal of chromatin profile j at position i. \( \mu \) is the parameter that determines the decay rate of the weight; it is set so that the weight falls to one half at the decay distance \( \Delta \), i.e., \( \mu = \ln(3)\, L/\Delta \). For DNase-seq and H3K27ac ChIP-seq, the decay distance \( \Delta \) is set to 10 kb. The genome-wide read counts on 1-kb windows were calculated using the UCSC utility bigWigAverageOverBed [68]. The chrom-RP matrix for chromatin profiles was normalized across RefSeq genes within one chromatin profile by \( R'_{jk} = \log(R_{jk} + 1) - \frac{1}{K}\sum_{k=1}^{K} \log(R_{jk} + 1) \), where K is the number of genes.
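As an illustration of the chrom-RP computation just described, the sketch below evaluates decay weights in 1-kb bins over the ±100-kb window and sums placeholder signal; the closed form for μ reflects the half-decay reading of the decay distance and is an assumption.

```python
import numpy as np

L, delta, bin_size = 100_000, 10_000, 1_000
mu = np.log(3) * L / delta                       # assumption: weight = 1/2 at d = delta / L

# Distances of 1-kb bin centers from the TSS, normalized by L
offsets = np.arange(-L + bin_size // 2, L, bin_size)
d = np.abs(offsets) / L
w = 2 * np.exp(-mu * d) / (1 + np.exp(-mu * d))  # exponential-decay weights

signal = np.random.default_rng(1).random(offsets.size)   # placeholder 1-kb DNase/H3K27ac signal
chrom_rp = float(np.sum(w * signal))                      # R_jk for one gene in one profile

# Normalization across genes within one profile: R'_jk = log(R_jk + 1) - mean_k log(R_jk + 1)
rps = np.array([chrom_rp, 3.1, 0.7, 12.4])                # chrom-RPs of a few genes (placeholder)
norm_rps = np.log(rps + 1) - np.log(rps + 1).mean()
```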
Preprocessing of cistromes
We converted TR ChIP-seq peaks from the Cistrome Data Browser (v.1) BED files into binary values to represent binding within 100-bp resolution genomic intervals. DNA sequence scores were calculated from Cistrome DB position weight matrices, a redundant collection of 1,061 PWMs from TRANSFAC [69], JASPAR [70], and Cistrome DB ChIP-seq, representing 675 unique TFs in human and mouse. The peak-based regulatory potential (peak-RP) of a TR cistrome is defined in the same way as the chrom-RP except that \( s_{ji} \) represents the presence (\( s_{ji} = 1 \)) or absence (\( s_{ji} = 0 \)) of a peak summit within 100 kb upstream or downstream of the TSS. Genome-wide motif scores were scanned at a 100-bp window size with the seqpos2 library (https://github.com/qinqian/seqpos2) [71], and motif hits were defined by thresholding at the 99th percentile and then mapped to the 1-kb windows. The genome-wide 1-kb windows in which the TR peak summits are located were determined using Bedtools [72]. All of the peak-RPs, TR binding, and motif hit data were deposited into HDF5 format files.
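A small sketch of the motif-hit preprocessing, assuming motif scores have already been scanned in 100-bp windows; the 10:1 reshape simply maps ten consecutive 100-bp windows onto each 1-kb window.

```python
import numpy as np

# Placeholder motif scores for one PWM, one score per 100-bp window along a chromosome
scores = np.random.default_rng(2).random(1_000_000)

threshold = np.percentile(scores, 99)        # hits = top 1% of genome-wide scores
hits_100bp = scores >= threshold

# Collapse to 1-kb windows: a 1-kb window is a hit if any of its ten 100-bp windows is a hit
hits_1kb = hits_100bp.reshape(-1, 10).any(axis=1)
```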
Lisa framework
Chromatin landscape model
Lisa selects 3000 background genes by proportionally sampling from non-query genes with a range of different TAD and promoter activities based on compendia of Cistrome DB H3K4me3 and H3K27ac ChIP-seq signals. There is no gene ontology enrichment in the background gene set. Lisa then uses L1-regularized logistic regression to select an optimum sample set for H3K27ac ChIP-seq or DNase-seq samples based on \( {R}_{jk}^{\prime } \). The L1 penalty parameter is determined by binary search to constrain the number of selected chromatin profiles to be small but sufficient to capture the information (different sample sizes were explored, and 10 was used in all the benchmark cases [28]). Lisa trains a final logistic regression model to predict the target gene set and obtains a weight αj for each candidate chromatin profile j, from which the weighted sum of chrom-RP is the model regulatory potential (model-RP).
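One way such a binary search over the L1 penalty could look is sketched below (scikit-learn's C is the inverse penalty strength); the target of ten selected profiles and the search bounds are illustrative assumptions, not Lisa's exact settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_chromatin_model(X, y, target=10, lo=1e-4, hi=10.0, iters=30):
    """Binary-search the inverse penalty C so that roughly `target` profiles keep nonzero weights."""
    for _ in range(iters):
        C = float(np.sqrt(lo * hi))                    # midpoint on a log scale
        model = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
        n_kept = int(np.count_nonzero(model.coef_[0]))
        if n_kept > target:
            hi = C                                     # too many profiles kept -> stronger penalty
        elif n_kept < target:
            lo = C                                     # too few -> weaker penalty
        else:
            break
    return model, np.flatnonzero(model.coef_[0])

# X: genes x chromatin-profile chrom-RP matrix; y: 1 for query genes, 0 for sampled background genes
```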
In silico deletion method
The rationale for the ISD method is that the peaks of the true regulatory TFs should align with the high chromatin accessibility signals from the corresponding tissue or cell type. Therefore, the computational deletion of the chromatin signals on the peaks of regulatory cistromes would result in a more substantial effect on the model-RP for query genes than for background genes. The regulatory potentials are recalculated after erasing the signal in all 1-kb windows containing at least one peak from a putative regulatory cistrome i: \( \tilde{R}_{ijk} = R_{jk} - \sum_{m \in M_{ik}} l\, w_m s_{jm} \), where \( M_{ik} \) is the set of 1-kb windows containing at least one peak in cistrome i for gene k; l is the window size, which is set to 1 kb for this study; \( w_m \) is the exponential decay weight for the distance between the mth window center and the TSS, using the same weight function as the chrom-RP; and \( s_{jm} \) is the jth average chromatin profile signal on the mth window. These RPs are then normalized using the same normalization factors as the original RPs: \( \tilde{R}'_{ijk} = \log(\tilde{R}_{ijk} + 1) - \frac{1}{K}\sum_{k=1}^{K} \log(R_{jk} + 1) \).
After deletion, the model-RPs are recalculated using the weights from the logistic regression model from chromatin profile feature selection without refitting and subtracted from the non-deletion model-RP, producing a ΔRP value for each gene, defined as the linear combination of differences in regulatory potentials: \( \Delta R'_{ik} = \sum_j \alpha_j \left( R'_{jk} - \tilde{R}'_{ijk} \right) \).
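The ΔRP bookkeeping can be sketched as follows; note that the per-gene normalization constants cancel in the difference, so the sketch works directly with log-transformed RPs (array names are illustrative, not Lisa's internals).

```python
import numpy as np

def delta_rp(signal_1kb, weights_1kb, peak_windows, alpha):
    """Delta-RP of one gene for one probed cistrome.

    signal_1kb:   (profiles x windows) chromatin signal in 1-kb windows around the TSS
    weights_1kb:  (windows,) exponential-decay weights
    peak_windows: (windows,) boolean mask of windows containing >= 1 peak of the probed cistrome
    alpha:        (profiles,) logistic-regression coefficients of the selected profiles
    """
    rp = signal_1kb @ weights_1kb                             # R_jk for each selected profile
    rp_deleted = (signal_1kb * ~peak_windows) @ weights_1kb   # signal erased in peak-bearing windows
    # The normalization terms cancel when the two normalized RPs are subtracted
    return float(alpha @ (np.log1p(rp) - np.log1p(rp_deleted)))
```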
Combined statistics method for TR ranking
The peak-RPs or ΔRPs of the query gene set are compared with those of the background gene set through the one-sided Wilcoxon rank-sum test. For ChIP-seq-based methods, the peak-RP, DNase-seq chrom-RP, and H3K27ac chrom-RP p values are combined to get a robust prediction of the TRs. For motif-based methods, the DNase-seq and H3K27ac ΔRP p values are combined to get the final inference of TRs. Both combinations of statistics follow the Cauchy combination test [38], in which the combined statistic for each TR is \( t_j = \sum_{i=1}^{d} w_i \tan\left\{ (0.5 - p_i)\pi \right\} \), where j represents one TR, i represents the ith method within the ChIP-seq-based or motif-based methods, \( p_i \) is the corresponding p value, and \( w_i \) is set to 1/d, where d is 3 for the ChIP-seq-based method or 2 for the motif-based method. The combined p value for TR j is computed as \( p_j = 1/2 - \arctan(t_j)/\pi \).
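A direct transcription of the combination formula, assuming equal weights across methods as stated above:

```python
import numpy as np

def cauchy_combine(p_values):
    """Combine per-method p values for one TR with the Cauchy combination test (equal weights)."""
    p = np.asarray(p_values, dtype=float)
    t = np.sum((1.0 / p.size) * np.tan((0.5 - p) * np.pi))
    return 0.5 - np.arctan(t) / np.pi

# e.g., peak-RP, DNase-seq ISD, and H3K27ac ISD p values for one TR
combined_p = cauchy_combine([1e-6, 3e-4, 0.02])
```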
Baseline method
The baseline method, "peaks in promoter" for the ChIP-seq-based comparison and "hits in promoter" for the motif-based comparison, is implemented by counting the number of TF ChIP-seq binding summits or motif hits within the genomic interval from 5 kb upstream to 5 kb downstream of the TSS. The peak or motif counts in the promoters of the target gene set are compared with those of the background gene set using the one-sided Wilcoxon rank-sum test.
Comparison of "direct" and "indirect" binding sites
For up- and downregulated gene sets from the same experiment, the peaks of the target TR ChIP-seq samples with the most significant p values are defined as "direct" or "indirect" binding sites based on the target TR motif scores. Peak-RPs of "direct" or "indirect" binding sites are calculated and normalized to percentiles. Statistical significance between query and background gene sets was calculated by the one-sided Wilcoxon rank-sum test.
All up- and downregulated gene sets in Lisa's benchmark dataset were also used to test other published methods: BART (http://bartweb.org/), i-cisTarget (https://gbiomed.kuleuven.be/apps/lcb/i-cisTarget/?ref=labworm), and Enrichr (http://amp.pharm.mssm.edu/Enrichr/). BART and i-cisTarget were run manually through their online websites with the default settings, and Enrichr was run using its API. When comparing the motif-based methods, PWMs from species other than humans or mice were removed since they are not included in the Lisa framework.
Lisa pipeline
The Lisa pipeline is implemented with Snakemake [73]. Lisa contains an interface to process FASTQ format files to BigWig format files and to generate hdf5 files containing the chrom-RP matrices and 1-kb resolution data required by the Lisa model module.
Lisa online application
We have implemented the online version of Lisa (http://lisa.cistrome.org) using the Flask Python web development framework, along with process control software Celery to queue numerous queries. The analysis result of the target gene set is closely linked to the Cistrome DB. The scatterplot comparing TR ranking results from a pair of query gene sets such as up- and downregulated gene sets are implemented in Plot.ly.
Lisa is available under the MIT open source license at https://github.com/liulab-dfci/lisa [74] and at Zenodo [75]. All TR ChIP-seq, DNase-seq, and H3K27ac ChIP-seq data are from the Cistrome Data Browser (http://cistrome.org/db) [71]. Gene expression profiles used for benchmarking the method were accessed at Gene Expression Omnibus (https://www.ncbi.nlm.nih.gov/geo/). The lists of all the data used in this study are available in the additional files. The processed gene lists and Lisa results are available at the gallery of the Lisa server (http://lisa.cistrome.org).
Hanahan D, Weinberg RA. Hallmarks of cancer: the next generation. Cell. 2011;144:646–74.
Takahashi K, Yamanaka S. Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors. Cell. 2006;126:663–76.
Thurman RE, et al. The accessible chromatin landscape of the human genome. Nature. 2012;489:75–82.
Gerstein MB, et al. Architecture of the human regulatory network derived from ENCODE data. Nature. 2012;488:91–100.
Creyghton MP, et al. Histone H3K27ac separates active from poised enhancers and predicts developmental state. Proc. Natl. Acad. Sci. U. S. A. 2010;107:21931–6.
Heinz S, et al. Simple combinations of lineage-determining transcription factors prime cis-regulatory elements required for macrophage and B cell identities. Mol. Cell. 2010;38:576–89.
He HH, et al. Nucleosome dynamics define transcriptional enhancers. Nat. Genet. 2010;42:343–7.
Mikkelsen TS, et al. Genome-wide maps of chromatin state in pluripotent and lineage-committed cells. Nature. 2007;448:553–60.
Johnson DS, Mortazavi A, Myers RM, Wold B. Genome-wide mapping of in vivo protein-DNA interactions. Science. 2007;316:1497–502.
Lambert SA, et al. The human transcription factors. Cell. 2018;172:650–65.
Fulton DL, et al. TFCat: the curated catalog of mouse and human transcription factors. Genome Biol. 2009;10:R29.
Mei S, et al. Cistrome Data Browser: a data portal for ChIP-Seq and chromatin accessibility data in human and mouse. Nucleic Acids Res. 2017;45:D658–62.
ENCODE. An integrated encyclopedia of DNA elements in the human genome. Nature. 2012;489:57–74.
Savic D, et al. CETCh-seq: CRISPR epitope tagging ChIP-seq of DNA-binding proteins. Genome Res. 2015;25:1581–9.
Skene PJ, Henikoff S. An efficient targeted nuclease strategy for high-resolution mapping of DNA binding sites. Elife. 2017. https://doi.org/10.7554/eLife.21856.
Hesselberth JR, et al. Global mapping of protein-DNA interactions in vivo by digital genomic footprinting. Nature Methods. 2009;6:283–9.
Boyle AP, et al. High-resolution genome-wide in vivo footprinting of diverse transcription factors in human cells. Genome Res. 2011;21:456–64.
Buenrostro JD, Giresi PG, Zaba LC, Chang HY, Greenleaf WJ. Transposition of native chromatin for fast and sensitive epigenomic profiling of open chromatin, DNA-binding proteins and nucleosome position. Nat. Methods. 2013:1–8. https://doi.org/10.1038/nmeth.2688.
Rada-Iglesias A, et al. A unique chromatin signature uncovers early developmental enhancers in humans. Nature. 2011;470:279–83.
Meyer CA, He HH, Brown M, Liu XS. BINOCh: Binding inference from nucleosome occupancy changes. Bioinformatics. 2011;27:1867–8.
He HH, et al. Differential DNase I hypersensitivity reveals factor-dependent chromatin dynamics. Genome Res. 2012;22:1015–25.
Keilwagen J, Posch S, Grau J. Accurate prediction of cell type-specific transcription factor binding. Genome Biol. 2019;20:1–17.
Schreiber J, Bilmes J, Noble WS. Completing the ENCODE3 compendium yields accurate imputations across a variety of assays and human biosamples; 2019. p. 1–20.
Qin Q, Feng J. Imputation for transcription factor binding predictions based on deep learning. PLOS Comput. Biol. 2017;13:e1005403.
Li H, Quang D, Guan Y. Anchor: trans-cell type prediction of transcription factor binding sites; 2019. p. 281–92. https://doi.org/10.1101/gr.237156.118.29.
Karimzadeh M, Hoffman MM. Virtual ChIP-seq: predicting transcription factor binding by learning from the transcriptome; 2018.
Quang D, Xie X. FactorNet: a deep learning framework for predicting cell type specific transcription factor binding from nucleotide-resolution sequential data; 2017. p. 1–27.
Wang S, et al. Modeling cis-regulation with a compendium of genome-wide histone H3K27ac profiles. Genome Res. 2016;26:1417–29.
Wang Z, et al. BART: a transcription factor prediction tool with query gene sets or epigenetic profiles. Bioinformatics. 2018. https://doi.org/10.1093/bioinformatics/bty194.
Imrichova H, Hulselmans G, Atak ZK, Potier D, Aerts S. I-cisTarget 2015 update: generalized cis-regulatory enrichment analysis in human, mouse and fly. Nucleic Acids Res. 2015;43:W57–64.
Kuleshov MV, et al. Enrichr: a comprehensive gene set enrichment analysis web server 2016 update. Nucleic Acids Res. 2016;44:W90–7.
Long HK, Prescott SL, Wysocka J. Ever-changing landscapes: transcriptional enhancers in development and evolution. Cell. 2016;167:1170–87.
Osterwalder M, et al. Enhancer redundancy provides phenotypic robustness in mammalian development. Nature. 2018. https://doi.org/10.1038/nature25461.
Fukaya T, Lim B, Levine M. Enhancer control of transcriptional bursting. Cell. 2016;166:358–68.
Ouyang Z, Zhou Q, Hung W. ChIP-Seq of transcription factors predicts absolute and differential gene expression in embryonic stem cells; 2009.
Wang S, et al. Target analysis by integration of transcriptome and ChIP-seq data with BETA. Nat. Protoc. 2013;8:2502–15.
Sikora-Wohlfeld W, Ackermann M, Christodoulou EG, Singaravelu K, Beyer A. Assessing computational methods for transcription factor target gene identification based on ChIP-seq data. PLoS Comput. Biol. 2013;9:e1003342.
Liu Y, Xie J. Cauchy combination test: a powerful test with analytic p-value calculation under arbitrary dependency structures; 2018.
Chia NY, et al. Regulatory crosstalk between lineage-survival oncogenes KLF5, GATA4 and GATA6 cooperatively promotes gastric cancer development. Gut. 2015. https://doi.org/10.1136/gutjnl-2013-306596.
Yang XZ, et al. LINC01133 as ceRNA inhibits gastric cancer progression by sponging miR-106a-3p to regulate APC expression and the Wnt/β-catenin pathway. Mol. Cancer. 2018. https://doi.org/10.1186/s12943-018-0874-1.
Hwang JTK, Kelly GM. GATA6 and FOXA2 regulate Wnt6 expression during extraembryonic endoderm formation. Stem Cells Dev. 2012;21:3220–32.
Weidenfeld J, Shu W, Zhang L, Millar SE, Morrisey EE. The WNT7b promoter is regulated by TTF-1, GATA6, and Foxa2 in lung epithelium. J. Biol. Chem. 2002;277:21061–70.
Muzikar KA, Nickols NG, Dervan PB. Repression of DNA-binding dependent glucocorticoid receptor-mediated gene expression; 2009. p. 2009.
Alvarez MJ, et al. Functional characterization of somatic mutations in cancer using network-based inference of protein activity. Nat. Genet. 2016. https://doi.org/10.1038/ng.3593.
Fiaschetti G, et al. Bone morphogenetic protein-7 is a MYC target with prosurvival functions in childhood medulloblastoma. Oncogene. 2011. https://doi.org/10.1038/onc.2011.10.
Vencken SF, et al. An integrated analysis of the SOX2 microRNA response program in human pluripotent and nullipotent stem cell lines. BMC Genomics. 2014. https://doi.org/10.1186/1471-2164-15-711.
Ci W, et al. The BCL6 transcriptional program features repression of multiple oncogenes in primary B cells and is deregulated in DLBCL. Blood. 2009. https://doi.org/10.1182/blood-2008-12-193037.
Parekh S, et al. BCL6 programs lymphoma cells for survival and differentiation through distinct biochemical mechanisms. Blood. 2007;110:2067–74.
Huynh KD, Bardwell VJ. The BCL-6 POZ domain and other POZ domains interact with the co-repressors N-CoR and SMRT. Oncogene. 1998;17:2473–84.
Cui J, et al. FBI-1 functions as a novel AR co-repressor in prostate cancer cells. Cell. Mol. Life Sci. 2011. https://doi.org/10.1007/s00018-010-0511-7.
Wei F, Zaprazna K, Wang J, Atchison ML. PU.1 can recruit BCL6 to DNA to repress gene expression in germinal center B cells. Mol. Cell. Biol. 2009;29:4612–22.
Huynh KD, Fischle W, Verdin E, Bardwell VJ. BCoR, a novel corepressor involved in BCL-6 repression. Genes Dev. 2000. https://doi.org/10.1111/j.1754-7121.1984.tb00653.x.
Grandori C, Cowley SM, James LP, Eisenman RN. The Myc/Max/Mad network and the transcriptional control of cell behavior. Annu. Rev. Cell Dev. Biol. 2000. https://doi.org/10.1146/annurev.cellbio.16.1.653.
Tzatsos A, et al. KDM2B promotes pancreatic cancer via Polycomb-dependent and -independent transcriptional programs. J. Clin. Invest. 2013. https://doi.org/10.1172/JCI64535.
Andoniadou CL, et al. Sox2+stem/progenitor cells in the adult mouse pituitary support organ homeostasis and have tumor-inducing potential. Cell Stem Cell. 2013;13:433–45.
Friedman JR, Kaestner KH. The Foxa family of transcription factors in development and metabolism. Cell Molr Life Sci. 2006. https://doi.org/10.1007/s00018-006-6095-6.
Chen T, et al. Foxa1 contributes to the repression of Nanog expression by recruiting Grg3 during the differentiation of pluripotent P19 embryonal carcinoma cells; 2014. p. 6.
Hagey DW, et al. SOX2 regulates common and specific stem cell features in the CNS and endoderm derived organs. PLoS Genet. 2018. https://doi.org/10.1371/journal.pgen.1007224.
Teo AKK, et al. Pluripotency factors regulate definitive endoderm specification through eomesodermin. Genes Dev. 2011. https://doi.org/10.1101/gad.607311.
Segal E, et al. Module networks: identify regulatory modules and their condition-specific regulators from gene expression data. Nat. Genet. 2003;34:166–76.
CAS PubMed Article PubMed Central Google Scholar
Carroll JS, Prall OWJ, Musgrove EA, Sutherland RL. A pure estrogen antagonist inhibits cyclin E-Cdk2 activity in MCF-7 breast cancer cells and induces accumulation of p130-E2F4 complexes characteristic of quiescence. J. Biol. Chem. 2000;275:38221–9.
Li D, Hsu S, Purushotham D, Sears RL, Wang T. WashU Epigenome Browser update 2019. Nucleic Acids Res. 2019. https://doi.org/10.1093/nar/gkz348.
Shang Y, Hu X, DiRenzo J, Lazar MA, Brown M. Cofactor Dynamics and sufficiency in estrogen receptor–regulated transcription. Cell. 2000;103:843–52.
Vockley CM, et al. Direct GR binding sites potentiate clusters of TF binding across the human genome. Cell. 2016;166:1269–81.e19.
Crow M, Lim N, Ballouz S, Pavlidis P, Gillis J. Predictability of human differential gene expression. Proc. Natl. Acad. Sci. U. S. A. 2019;116:6491–500.
Muhar M, et al. SLAM-seq defines direct gene-regulatory functions of the BRD4-MYC axis. Science (80-. ). 2018;360:800–5.
Aibar S, et al. SCENIC: single-cell regulatory network inference and clustering; 2017. p. 14.
Kent WJ, Zweig AS, Barber G, Hinrichs AS, Karolchik D. BigWig and BigBed: enabling browsing of large distributed datasets. Bioinformatics. 2010;26:2204–7.
Matys V, et al. TRANSFAC®: transcriptional regulation, from patterns to profiles. Nucleic Acids Res. 2003;31:374–8.
Mathelier A. et al. JASPAR 2016: a major expansion and update of the open-access database of transcription factor binding. Nucleic Acids Res. 2016;44(D1):110–5.
Liu T, et al. Cistrome: an integrative platform for transcriptional regulation studies. Genome Biol. 2011;12:R83.
Quinlan AR, Hall IM. BEDTools: a flexible suite of utilities for comparing genomic features. Bioinformatics. 2010;26:841–2.
Köster J, Rahmann S. Snakemake--a scalable bioinformatics workflow engine. Bioinformatics. 2012;28:2520–2.
Qin Q, et al. Lisa: inferring transcriptional regulators through integrative modeling of public chromatin accessibility and ChIP-seq data. Github. 2019; https://github.com/liulab-dfci/lisa.
Qin Q, et al. Lisa: inferring transcriptional regulators through integrative modeling of public chromatin accessibility and ChIP-seq data. Zenodo. 2019; https://zenodo.org/record/3583466#.XhjmQlVKhaQ.
Kevin Pang was the primary editor of this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.
Review history
The review history is available as Additional file 6.
This work was supported by grants from the NIH (U24 HG009446 to XSL, U24 CA237617 to XSL and CM), National Natural Science Foundation of China (31801110 to SM), and the Shanghai Sailing Program (18YF1402500 to QQ).
Qian Qin and Jingyu Fan contributed equally to this work and can interchangeably be ordered as co-first authors.
Clinical Translational Research Center, Shanghai Pulmonary Hospital, School of Life Science and Technology, Tongji University, Shanghai, 200433, China
Qian Qin, Jingyu Fan, Rongbin Zheng, Changxin Wan, Shenglin Mei, Qiu Wu & Hanfei Sun
Center of Molecular Medicine, Children's Hospital of Fudan University, Shanghai, 201102, China
Stem Cell Translational Research Center, Tongji Hospital, School of Life Science and Technology, Tongji University, Shanghai, 200065, China
Jing Zhang
Center for Functional Cancer Epigenetics, Dana-Farber Cancer Institute, Boston, MA, 02215, USA
Clifford A. Meyer & X. Shirley Liu
Department of Medical Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, 02215, USA
Myles Brown
Department of Data Sciences, Dana-Farber Cancer Institute and Harvard T.H. Chan School of Public Health, Boston, MA, 02215, USA
Myles Brown, Clifford A. Meyer & X. Shirley Liu
CM, XSL, and MB conceived the concept and initiated the project. CM, QQ, and JF developed the Lisa algorithm. QQ implemented the Lisa software and website. JF and QQ evaluated Lisa's performance and carried out the analysis of the results. JF collected and processed the gene expression data. JF, QQ, RZ, CW, QW, and HS collected and processed the ChIP-seq and DNase-seq data. CM, XSL, and JZ supervised the project. CM, XSL, QQ, JF, and JZ wrote the paper. All authors read and approved the final manuscript.
Correspondence to Jing Zhang or Clifford A. Meyer or X. Shirley Liu.
MB receives sponsored research support from Novartis. MB serves on the SAB of Kronos Bio and is a consultant to H3 Biomedicine. XSL is a cofounder and board member of GV20 Oncotherapy, SAB of 3DMed Care, consultant for Genentech, and stockholder of BMY, TMO, WBA, ABT, ABBV, and JNJ. All other authors declare that they have no competing interests.
Figure S1-S6.
Additional file 2:
Table S1. Cistrome profile annotation table including TR ChIP-seq and TF motifs.
Table S2. DNase-seq and H3K27ac sample annotation table for mouse and human.
Table S3. Analysis of Lisa predictions of GR and ER regulated genes using data which does not match the specific cell type. The cell line and cell type of the highest ranked Lisa predicted target TR sample are shown in parentheses in each case.
Table S4. TF perturbation DNA microarray meta table for benchmarking the peak-RP and Lisa methods.
Review history.
Qin, Q., Fan, J., Zheng, R. et al. Lisa: inferring transcriptional regulators through integrative modeling of public chromatin accessibility and ChIP-seq data. Genome Biol 21, 32 (2020). https://doi.org/10.1186/s13059-020-1934-6
Accepted: 13 January 2020
Gene regulation
Chromatin accessibility
DNase-seq
H3K27ac ChIP-seq
Differential gene expression
Gene set analysis
Sample records for Ae star V380
Chemical evolution of high-mass stars in close binaries. II. The evolved component of the eclipsing binary V380 Cygni
Pavlovski, K.; Tamajo, E.; Koubský, Pavel; Southworth, J.; Yang, S.; Kolbas, V.
Vol. 400, No. 2 (2009), pp. 791-804. ISSN 0035-8711. Institutional research plan: CEZ:AV0Z10030501. Keywords: binaries: stars * eclipsing * fundamental parameters. Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics. Impact factor: 5.103, year: 2009
HDE 323771: a new Herbig Ae star
Persi, P.; Polcaro, V.F.; Viotti, R.
From an analysis of the blue and red spectrum, and a study of the energy distribution from the optical up to the far infrared, we identify HDE 323771 as a new PMS Herbig Ae star. The P Cygni line profiles observed in the Balmer and Fe II lines indicate the presence of a stellar wind with a velocity of about 250-350 km s^-1. An upper limit of mass loss rate is derived from the observed upper limit for the Br γ luminosity. The near-IR images and the IR energy distribution indicate the presence of an extended circumstellar dust envelope with a temperature of about 1500 K. (author)
Pulsational stability of the SX Phe star AE UMa
Pena, J. H.; Renteria, A.; Villarreal, C.; Pina, D. S.; Soni, A. A.; Guillen, J.; Vargas, K.; Trejo, O.
From newly determined times of maxima of the SX Phe star AE UMa and a compilation of previous times of maxima, we were able to determine the nature of this star. With uv photometry we determined its physical parameters.
On the polarization of Herbig Ae/Be star radiation
Petrova, N I; Shevchenko, V S
Results of multicolor UBVRI polarimetry of 14 Herbig Ae/Be stars, including 7 stars for which polarization observations have been made for the first time, are presented. Six bright Herbig Ae/Be stars (AS 441, AS 442, LkHα 134, LkHα 135, LkHα 169 and V517 Cyg), which belong to the star-formation region connected with IC 5070, show polarization from 1 to 4.5 per cent with similar theta (approx. 180 deg), basically of interstellar nature. The polarimetric variability of BD+46° 3471, BD+65° 1637, HD 200775 and LkHα 234 is confirmed. Mechanisms of polarization in Herbig Ae/Be stars in circumstellar formations are discussed.
A statistical spectropolarimetric study of Herbig Ae/Be stars
Ababakr, K. M.; Oudmaijer, R. D.; Vink, J. S.
We present Hα linear spectropolarimetry of a large sample of Herbig Ae/Be stars. Together with newly obtained data for 17 objects, the sample contains 56 objects, the largest such sample to date. A change in linear polarization across the Hα line is detected in 42 (75 per cent) objects, which confirms the previous finding that the circumstellar environment around these stars on small spatial scales has an asymmetric structure, which is typically identified with a disc. A second outcome of this research is that we confirm that Herbig Ae stars are similar to T Tauri stars in displaying a line polarization effect, while depolarization is more common among Herbig Be stars. This finding had been suggested previously to indicate that Herbig Ae stars form in the same manner as T Tauri stars through magnetospheric accretion. It appears that the transition between these two differing polarization line effects occurs around the B7-B8 spectral type. This would in turn not only suggest that Herbig Ae stars accrete in a similar fashion to lower-mass stars, but also that this accretion mechanism switches to a different type of accretion for Herbig Be stars. We report that the magnitude of the line effect caused by electron scattering close to the stars does not exceed 2 per cent. Only a very weak correlation is found between the magnitude of the line effect and the spectral type or the strength of the Hα line. This indicates that the detection of a line effect only relies on the geometry of the line-forming region and the geometry of the scattering electrons.
Magnetic fields of Herbig Ae/Be stars
Hubrig S.
We report on the status of our spectropolarimetric studies of Herbig Ae/Be stars carried out in recent years. The magnetic field geometries of these stars, investigated with spectropolarimetric time series, can likely be described by centred dipoles with polar magnetic field strengths of several hundred gauss. A number of Herbig Ae/Be stars with detected magnetic fields have recently been observed with X-shooter in the visible and the near-IR, as well as with the high-resolution near-IR spectrograph CRIRES. These observations are of great importance for understanding the relation between the magnetic field topology and the physics of the accretion flow and the accretion disk gas emission.
Monitoring of V380 Oph requested in support of HST observations
Waagen, Elizabeth O.
On behalf of a large Hubble Space Telescope consortium of which they are members, Dr. Joseph Patterson (Columbia University, Center for Backyard Astrophysics) and Dr. Arne Henden (AAVSO) requested observations from the amateur astronomer community in support of upcoming HST observations of the novalike VY Scl-type cataclysmic variable V380 Oph. The HST observations will likely take place in September but nightly visual observations are needed beginning immediately and continuing through at least October 2012. The astronomers plan to observe V380 Oph while it is in its current low state. Observations beginning now are needed to determine the behavior of this system at minimum and to ensure that the system is not in its high state at the time of the HST observations. V380 Oph is very faint in its low state: magnitude 17 to 19 and perhaps even fainter. Nightly snapshot observations, not time series, are requested, as is whatever technique - adding frames, lengthening exposures, etc. - necessary to measure the magnitude. It is not known whether V380 Oph is relatively inactive at minimum or has flares of one to two magnitudes; it is this behavior that is essential to learn in order to safely execute the HST observations. Finder charts with sequence may be created using the AAVSO Variable Star Plotter (http://www.aavso.org/vsp). Observations should be submitted to the AAVSO International Database. See full Alert Notice for more details. NOTE: This campaign was subsequently cancelled when it was learned V380 Oph was not truly in its low state. See AAVSO Alert Notice 468 for details.
Velocity Curve Studies of Spectroscopic Binary Stars V380 Cygni ...
via the method introduced by Karami & Mohebi (2007) and Karami & Teimoorinia (2007). Our numerical results are ... One of the usual methods to analyze the velocity curve is the method of Lehmann-Filhés, .... (1987). Figure 5. Same as Fig. 1, but for V2388 Oph. The observational data belong to Rucinski et al. (2002a, b).
Water Depletion in the Disk Atmosphere of Herbig AeBe Stars
Fedele, D.; Pascucci, I.; Brittain, S.; Kamp, I.; Woitke, P.; Williams, J. P.; Dent, W. R. F.; Thi, W. -F.
We present high-resolution (R ∼ 100,000) L-band spectroscopy of 11 Herbig AeBe stars with circumstellar disks. The observations were obtained with the VLT/CRIRES to detect hot water and hydroxyl radical emission lines previously detected in disks around T Tauri stars. OH emission lines are
The excess infrared emission of Herbig Ae/Be stars - Disks or envelopes?
Hartmann, Lee; Kenyon, Scott J.; Calvet, Nuria
It is suggested that the near-IR emission in many Herbig Ae/Be stars arises in surrounding dusty envelopes, rather than circumstellar disks. It is shown that disks around Ae/Be stars are likely to remain optically thick at the required accretion rates. It is proposed that the IR excesses of many Ae/Be stars originate in surrounding dust nebulae instead of circumstellar disks. It is suggested that the near-IR emission of the envelope is enhanced by the same processes that produce anomalously strong continuum emission at temperatures of about 1000 K in reflection nebulae surrounding hot stars. This near-IR emission could be due to small grains transiently heated by UV photons. The dust envelopes could be associated with the primary star or a nearby companion star. Some Ae/Be stars show evidence for the 3.3-6.3-micron emission features seen in reflection nebulae around hot stars, which lends further support to this suggestion.
Chemical spots on the surface of the strongly magnetic Herbig Ae star HD 101412
Järvinen, S. P.; Hubrig, S.; Schöller, M.
Due to the knowledge of the rotation period and the presence of a rather strong surface magnetic field, the sharp-lined young Herbig Ae star HD 101412 with a rotation period of 42 d has become one of the most well-studied targets among the Herbig Ae stars. High-resolution HARPS polarimetric spectra...... that is opposite to the behaviour of the other elements studied. Since classical Ap stars usually show a relationship between the magnetic field geometry and the distribution of element spots, we used in our magnetic field measurements different line samples belonging to the three elements with the most numerous...
THE RADIO JET ASSOCIATED WITH THE MULTIPLE V380 ORI SYSTEM
Rodríguez, Luis F.; Yam, J. Omar; Carrasco-González, Carlos [Instituto de Radioastronomía y Astrofísica, UNAM, Apdo. Postal 3-72 (Xangari), 58089 Morelia, Michoacán, México (Mexico); Anglada, Guillem [Instituto de Astrofísica de Andalucía, CSIC, Glorieta de la Astronomía, s/n, E-18008, Granada (Spain); Trejo, Alfonso, E-mail: [email protected] [Academia Sinica Institute of Astronomy and Astrophysics, P.O. Box 23-141, Taipei 10617, Taiwan (China)
The giant Herbig-Haro object 222 extends over ∼6′ in the plane of the sky, with a bow shock morphology. The identification of its exciting source has remained uncertain over the years. A non-thermal radio source located at the core of the shock structure was proposed to be the exciting source. However, Very Large Array studies showed that the radio source has the clear morphology of a radio galaxy and a lack of flux variations or proper motions, favoring an extragalactic origin. Recently, an optical-IR study proposed that this giant HH object is driven by the multiple stellar system V380 Ori, located about 23′ to the SE of HH 222. The exciting sources of HH systems are usually detected as weak free-free emitters at centimeter wavelengths. Here, we report the detection of an elongated radio source associated with the Herbig Be star or with its close infrared companion in the multiple V380 Ori system. This radio source has the characteristics of a thermal radio jet and is aligned with the direction of the giant outflow defined by HH 222 and its suggested counterpart to the SE, HH 1041. We propose that this radio jet traces the origin of the large scale HH outflow. Assuming that the jet arises from the Herbig Be star, the radio luminosity is a few times smaller than the value expected from the radio-bolometric correlation for radio jets, confirming that this is a more evolved object than those used to establish the correlation.
Spatially and spectrally resolved 10 mu m emission in Herbig Ae/Be stars
van Boekel, R; Waters, LBFM; Dominik, C; Dullemond, CP; Tielens, AGGM; de Koter, A
We present new mid-infrared spectroscopy of the emission from warm circumstellar dust grains in the Herbig Ae stars HD 100546, HD 97048 and HD 104237, with a spatial resolution of approximately 0.″9. We find that the emission in the UIR bands at 8.6, 11.3 and (HD 97048 only) 12.7 μm is extended
HH 222: A GIANT HERBIG-HARO FLOW FROM THE QUADRUPLE SYSTEM V380 ORI
Reipurth, Bo; Aspin, Colin; Connelley, M. S. [Institute for Astronomy, University of Hawaii at Manoa, 640 North Aohoku Place, Hilo, HI 96720 (United States); Bally, John [Center for Astrophysics and Space Astronomy, University of Colorado, Boulder, CO 80309 (United States); Geballe, T. R. [Gemini Observatory, 670 North Aohoku Place, Hilo, HI 96720 (United States); Kraus, Stefan [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, MS-78, Cambridge, MA 02138 (United States); Appenzeller, Immo [Landessternwarte Heidelberg, Königstuhl 12, D-69117 Heidelberg (Germany); Burgasser, Adam, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Center for Astrophysics and Space Science, University of California San Diego, La Jolla, CA 92093 (United States)
HH 222 is a giant shocked region in the L1641 cloud, and is popularly known as the Orion Streamers or "the waterfall" on account of its unusual structure. At the center of these streamers are two infrared sources coincident with a nonthermal radio jet aligned along the principal streamer. The unique morphology of HH 222 has long been associated with this radio jet. However, new infrared images show that the two sources are distant elliptical galaxies, indicating that the radio jet is merely an improbable line-of-sight coincidence. Accurate proper motion measurements of HH 222 reveal that the shock structure is a giant bow shock moving directly away from the well-known, very young, Herbig Be star V380 Ori. The already known Herbig-Haro object HH 35 forms part of this flow. A new Herbig-Haro object, HH 1041, is found precisely in the opposite direction of HH 222 and is likely to form part of a counterflow. The total projected extent of this HH complex is 5.3 pc, making it among the largest HH flows known. A second outflow episode from V380 Ori is identified as a pair of HH objects, HH 1031 to the northwest and the already known HH 130 to the southeast, along an axis that deviates from that of HH 222/HH 1041 by only 3.7°. V380 Ori is a hierarchical quadruple system, including a faint companion of spectral type M5 or M6, which at an age of ∼1 Myr corresponds to an object straddling the stellar-to-brown dwarf boundary. We suggest that the HH 222 giant bow shock is a direct result of the dynamical interactions that led to the conversion from an initial non-hierarchical multiple system into a hierarchical configuration. This event occurred no more than 28,000 yr ago, as derived from the proper motions of the HH 222 giant bow shock.
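A minimal sketch of the kinematic-age arithmetic implied here: a tangential velocity follows from a proper motion via v = 4.74 μ d, and the travel time is simply the angular offset divided by the proper motion, so the distance cancels. The ~23 arcmin offset of HH 222 from V380 Ori is quoted in an earlier record; the distance and bow-shock proper motion below are illustrative placeholders, not the measured values of Reipurth et al.

```python
# Kinematic age of a bow shock from its proper motion. All numerical inputs
# are illustrative placeholders, not the published measurements.

def tangential_velocity_kms(mu_arcsec_per_yr, distance_pc):
    """Tangential velocity in km/s from a proper motion and a distance (v = 4.74 mu d)."""
    return 4.74 * mu_arcsec_per_yr * distance_pc

def kinematic_age_yr(separation_arcsec, mu_arcsec_per_yr):
    """Travel time assuming constant proper motion; independent of distance."""
    return separation_arcsec / mu_arcsec_per_yr

if __name__ == "__main__":
    distance_pc = 400.0        # assumed distance to the L1641 cloud (placeholder)
    separation_arcmin = 23.0   # angular offset of HH 222 from V380 Ori
    mu = 0.05                  # hypothetical bow-shock proper motion, arcsec/yr

    sep_arcsec = separation_arcmin * 60.0
    print(f"tangential velocity ~ {tangential_velocity_kms(mu, distance_pc):.0f} km/s")
    print(f"kinematic age       ~ {kinematic_age_yr(sep_arcsec, mu):.0f} yr")
```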
Einstein X-ray observations of Herbig Ae/Be stars
Damiani, F.; Micela, G.; Sciortino, S.; Harnden, F. R., Jr.
We have investigated the X-ray emission from Herbig Ae/Be stars, using the full set of Einstein Imaging Proportional Counter (IPC) observations. Of a total of 31 observed Herbig stars, 11 are confidently identified with X-ray sources, with four additional dubious identifications. We have used maximum likelihood luminosity functions to study the distribution of X-ray luminosity, and we find that Be stars are significantly brighter in X-rays than Ae stars and that their X-ray luminosity is independent of projected rotational velocity v sin i. The X-ray emission is instead correlated with stellar bolometric luminosity and with effective temperature, and also with the kinetic luminosity of the stellar wind. These results seem to exclude a solar-like origin for the X-ray emission, a possibility suggested by the most recent models of Herbig stars' structure, and suggest an analogy with the X-ray emission of O (and early B) stars. We also observe correlations between X-ray luminosity and the emission at 2.2 microns (K band) and 25 microns, which strengthen the case for X-ray emission of Herbig stars originating in their circumstellar envelopes.
A high-resolution spectropolarimetric survey of Herbig Ae/Be stars - II. Rotation
Alecian, E.; Wade, G. A.; Catala, C.; Grunhut, J. H.; Landstreet, J. D.; Böhm, T.; Folsom, C. P.; Marsden, S.
We report the analysis of the rotational properties of our sample of Herbig Ae/Be (HAeBe) and related stars for which we have obtained high-resolution spectropolarimetric observations. Using the projected rotational velocities measured at the surface of the stars, we have calculated the angular momentum of the sample and plotted it as a function of age. We have then compared the angular momentum and the v sin i distributions of the magnetic to the non-magnetic HAeBe stars. Finally, we have predicted v sin i of the non-magnetic, non-binary ('normal') stars in our sample when they reach the zero-age main sequence (ZAMS), and compared them to various catalogues of v sin i of main-sequence stars. First, we observe that magnetic HAeBe stars are much slower rotators than normal stars, indicating that they have been more efficiently braked than the normal stars. In fact, the magnetic stars have already lost most of their angular momentum, despite their young ages (lower than 1 Myr for some of them). Secondly, our analysis suggests that the low-mass (1.5 5 M⊙) are losing angular momentum. We propose that winds, which are expected to be stronger in massive stars, are at the origin of this phenomenon.
NuSTAR and Swift observations of the fast rotating magnetized white dwarf AE Aquarii
Kitaguchi, Takao; An, Hongjun; Beloborodov, Andrei M.
AE Aquarii is a cataclysmic variable with the fastest known rotating magnetized white dwarf (P_spin = 33.08 s). Compared to many intermediate polars, AE Aquarii shows a soft X-ray spectrum with a very low luminosity (L_X ∼ 10^31 erg s^-1). We have analyzed overlapping observations of this system with the NuSTAR and the Swift X-ray observatories in 2012 September. We find the 0.5-30 keV spectra to be well fitted by either an optically thin thermal plasma model with three temperatures of 0.75 (+0.18/-0.45), 2.29 (+0.96/-0.82), and 9.33 (+6.07/-2.18) keV, or an optically thin thermal plasma...
A search for strong, ordered magnetic fields in Herbig Ae/Be stars
Wade, G. A.; Bagnulo, S.; Drouin, D.; Landstreet, J. D.; Monin, D.
The origin of magnetic fields in intermediate- and high-mass stars is fundamentally a mystery. Clues towards solving this basic astrophysical problem can likely be found at the pre-main-sequence (PMS) evolutionary stage. With this work, we perform the largest and most sensitive search for magnetic fields in PMS Herbig Ae/Be (HAeBe) stars. We seek to determine whether strong, ordered magnetic fields, similar to those of main-sequence Ap/Bp stars, can be detected in these objects, and if so, to determine the intensities, geometrical characteristics, and statistical incidence of such fields. 68 observations of 50 HAeBe stars have been obtained in circularly polarized light using the FORS1 spectropolarimeter at the ESO VLT. An analysis of both Balmer and metallic lines reveals the possible presence of weak longitudinal magnetic fields in photospheric lines of two HAeBe stars, HD 101412 and BF Ori. Results for two additional stars, CPD-53 295 and HD 36112, are suggestive of the presence of magnetic fields, but no firm conclusions can be drawn based on the available data. The intensity of the longitudinal fields detected in HD 101412 and BF Ori suggests that they correspond to globally ordered magnetic fields with surface intensities of order 1 kG. On the other hand, no magnetic field is detected in 4 other HAeBe stars in our sample in which magnetic fields had previously been confirmed. Monte Carlo simulations of the longitudinal field measurements of the undetected stars allow us to place an upper limit of about 300 G on the general presence of aligned dipole magnetic fields, and of about 500 G on perpendicular dipole fields. Taking into account the results of our survey and other published results, we find that the observed bulk incidence of magnetic HAeBe stars in our sample is 8-12 per cent, in good agreement with that of magnetic main-sequence stars of similar masses. We also find that the rms longitudinal field intensity of magnetically detected HAe
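The quoted upper limit comes from Monte Carlo modelling of the non-detections. The sketch below illustrates one simple way such a limit can be derived, assuming every undetected star hosts a centred dipole aligned with its rotation axis, drawing random viewing angles, and asking how often the whole sample would still escape a 3-sigma detection. It is a simplified stand-in, not the exact procedure of Wade et al., and the per-star error bars are hypothetical placeholders.

```python
# Monte Carlo estimate of how often a sample of stars with aligned dipole
# fields of polar strength B_polar would escape detection, given per-star
# longitudinal-field uncertainties. All sigma values are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

def longitudinal_field(b_polar, cos_alpha, u=0.5):
    """Disc-averaged longitudinal field of a centred dipole of polar strength
    b_polar viewed at angle alpha to the magnetic axis (standard centred-dipole
    relation with linear limb-darkening coefficient u)."""
    return b_polar * cos_alpha * (15.0 + u) / (20.0 * (3.0 - u))

def prob_no_detection(b_polar, sigmas, n_trials=20000, snr=3.0):
    """Probability that none of the stars yields a detection above snr*sigma."""
    n_stars = len(sigmas)
    # random orientation of the rotation axis: cos(i) uniform in [0, 1];
    # for an aligned dipole the viewing angle alpha equals the inclination i
    cos_i = rng.uniform(0.0, 1.0, size=(n_trials, n_stars))
    bl_true = longitudinal_field(b_polar, cos_i)
    bl_obs = bl_true + rng.normal(0.0, sigmas, size=(n_trials, n_stars))
    detected = np.abs(bl_obs) > snr * np.asarray(sigmas)
    return np.mean(~detected.any(axis=1))

if __name__ == "__main__":
    sigmas = np.array([20.0, 25.0, 30.0, 40.0, 50.0, 60.0])  # gauss, placeholders
    for b_polar in (100.0, 200.0, 300.0, 500.0, 800.0):
        print(f"B_polar = {b_polar:5.0f} G -> P(no detections) = "
              f"{prob_no_detection(b_polar, sigmas):.3f}")
    # the upper limit is roughly the smallest B_polar for which this
    # probability drops below a chosen threshold (e.g. 5 per cent)
```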
A spatial study of the mid-IR emission features in four Herbig Ae/Be stars
Boersma, C.; Peeters, E.; Martin-Hernandez, N. L.; van der Wolk, G.; Verhoeff, A. P.; Tielens, A. G. G. M.; Waters, L. B. F. M.; Pel, J. W.
Context. Infrared (IR) spectroscopy and imaging provide a prime tool to study the characteristics of polycyclic aromatic hydrocarbon (PAH) molecules and the mineralogy in regions of star formation. Herbig Ae/Be stars are known to have varying amounts of natal cloud material present in their
The characteristics of the IR emission features in the spectra of Herbig Ae stars : evidence for chemical evolution
Boersma, C.; Bouwman, J.; Lahuis, F.; van Kerckhoven, C.; Tielens, A. G. G. M.; Waters, L. B. F. M.; Henning, T.
Context. Infrared (IR) spectra provide a prime tool to study the characteristics of polycyclic aromatic hydrocarbon (PAH) molecules in regions of star formation. Herbig Ae/Be stars are a class of young pre-main sequence stellar objects of intermediate mass. They are known to have varying amounts
A STUDY OF RO-VIBRATIONAL OH EMISSION FROM HERBIG Ae/Be STARS
Brittain, Sean D.; Reynolds, Nickalas [Department of Physics and Astronomy, 118 Kinard Laboratory, Clemson University, Clemson, SC 29634-0978 (United States); Najita, Joan R. [National Optical Astronomy Observatory, 950 N. Cherry Ave., Tucson, AZ 85719 (United States); Carr, John S. [Naval Research Laboratory, Code 7211, Washington, DC 20375 (United States); Ádámkovics, Máté, E-mail: [email protected] [Department of Astronomy, University of California, Berkeley, CA 94720-3411 (United States)
We present a study of ro-vibrational OH and CO emission from 21 disks around Herbig Ae/Be stars. We find that the OH and CO luminosities are proportional over a wide range of stellar ultraviolet luminosities. The OH and CO line profiles are also similar, indicating that they arise from roughly the same radial region of the disk. The CO and OH emission are both correlated with the far-ultraviolet luminosity of the stars, while the polycyclic aromatic hydrocarbon (PAH) luminosity is correlated with the longer wavelength ultraviolet luminosity of the stars. Although disk flaring affects the PAH luminosity, it is not a factor in the luminosity of the OH and CO emission. These properties are consistent with models of UV-irradiated disk atmospheres. We also find that the transition disks in our sample, which have large optically thin inner regions, have lower OH and CO luminosities than non-transition disk sources with similar ultraviolet luminosities. This result, while tentative given the small sample size, is consistent with the interpretation that transition disks lack a gaseous disk close to the star.
EVOLUTION OF THE DUST/GAS ENVIRONMENT AROUND HERBIG Ae/Be STARS
Liu Tie; Zhang Huawei; Wu Yuefang; Qin Shengli; Miller, Martin
Using the KOSMA 3 m telescope, 54 Herbig Ae/Be (HAe/Be) stars were surveyed in CO and 13CO emission lines. The properties of the stars and their circumstellar environments were studied by fitting spectral energy distributions (SEDs). The mean line width of the 13CO (2-1) lines of this sample is 1.87 km s^-1. The average column density of H2 is found to be 4.9 × 10^21 cm^-2 for stars younger than 10^6 yr, while this drops to 2.5 × 10^21 cm^-2 for those older than 10^6 yr. No significant difference is found among the SEDs of HAe and HBe stars of the same age. Infrared excess decreases with age, envelope masses and envelope accretion rates decrease with age after 10^5 yr, the average disk mass of the sample is 3.3 × 10^-2 M⊙, the disk accretion rate decreases more slowly than the envelope accretion rate, and a strong correlation between the CO line intensity and the envelope mass is found.
Near Infrared High Resolution Spectroscopy and Spectro-astrometry of Gas in Disks around Herbig Ae/Be Stars
Brittain, Sean D.; Najita, Joan R.; Carr, John S.
In this review, we describe how high-resolution near-infrared spectroscopy and spectro-astrometry have been used to study the disks around Herbig Ae/Be stars. We show how these tools can be used to identify signposts of planet formation and elucidate the mechanism by which Herbig Ae/Be stars accrete. We also highlight some of the artifacts that can complicate the interpretation of spectro-astrometric measurements and discuss best practices for mitigating these effects. We conclude with a brie...
Kitaguchi, Takao; An, Hongjun; Beloborodov, Andrei M.; Gotthelf, Eric V.; Hayashi, Takayuki; Kaspi, Victoria M.; Rana, Vikram R.; Boggs, Steven E.; Christensen, Finn E.; Craig, William W.;
AE Aquarii is a cataclysmic variable with the fastest known rotating magnetized white dwarf (P_spin = 33.08 s). Compared to many intermediate polars, AE Aquarii shows a soft X-ray spectrum with a very low luminosity (L_X ≈ 10^31 erg s^-1). We have analyzed overlapping observations of this system with the NuSTAR and the Swift X-ray observatories in 2012 September. We find the 0.5-30 keV spectra to be well fitted by either an optically thin thermal plasma model with three temperatures of 0.75 (+0.18/-0.45), 2.29 (+0.96/-0.82), and 9.33 (+6.07/-2.18) keV, or an optically thin thermal plasma model with two temperatures of 1.00 (+0.34/-0.23) and 4.64 (+1.58/-0.84) keV plus a power-law component with photon index of 2.50 (+0.17/-0.23). The pulse profile in the 3-20 keV band is broad and approximately sinusoidal, with a pulsed fraction of 16.6% ± 2.3%. We do not find any evidence for a previously reported sharp feature in the pulse profile.
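The pulsed fraction quoted here is derived from the folded 3-20 keV pulse profile. The sketch below shows one common definition, (max - min)/(max + min) of the binned profile; definitions vary between papers, so this is illustrative only and the profile used is synthetic, not the NuSTAR data.

```python
# Pulsed fraction of a folded pulse profile using one common definition.
# The profile below is synthetic, with a ~17% sinusoidal modulation.
import numpy as np

def pulsed_fraction(profile):
    """(max - min) / (max + min) of a binned, folded pulse profile."""
    profile = np.asarray(profile, dtype=float)
    return (profile.max() - profile.min()) / (profile.max() + profile.min())

phase = np.linspace(0.0, 1.0, 32, endpoint=False)
profile = 1.0 + 0.17 * np.sin(2.0 * np.pi * phase)   # synthetic count-rate profile

print(f"pulsed fraction = {pulsed_fraction(profile):.1%}")
```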
Observations of zodiacal light of the isolated Herbig BF Ori Ae star
Grinin, V.P.; Kiselev, N.N.; Minikulov, N.Kh.; AN Tadzhikskoj SSR, Dushanbe
The isolated Herbig Ae star BF Ori belongs to the subclass of young irregular variables with non-periodic Algol-type brightness variations. In the course of polarization and photometric patrol observations carried out in 1987-88 in the Crimea and Sanglock, high linear polarization has been observed in deep minima. The analysis of the observations shows that the most probable source of polarization is light scattered from the circumstellar dust disk-like envelope, the analogue of the solar zodiacal light. It is concluded that the bimodal distribution of position angles of linear polarization in L 1641 reflects a complex structure of the magnetic field in this giant cloud.
Spectral Variability of the Herbig Ae/Be Star HD 37806
Pogodin, M. A.; Pavlovskiy, S. E.; Kozlova, O. V.; Beskrovnaya, N. G.; Alekseev, I. Yu.; Valyavin, G. G.
Results are reported from a spectroscopic study of the Herbig Ae/Be star HD 37806 from 2009 through 2017 using high resolution spectrographs at the Crimean Astrophysical Observatory and the OAN SPM Observatory in Mexico. 72 spectra of this object near the Hα, Hβ, HeI 5876 and D NaI lines are analyzed. The following results were obtained: 1. The type of spectral profile of the Hα line can change from P Cyg III to double emission and vice versa over a time scale on the order of a month. 2. Narrow absorption components are observed in the profiles of the Hα and D NaI lines with radial velocities that vary over a characteristic time on the order of a day. 3. On some days, the profiles of the Hβ, HeI 5876, and D NaI lines show signs of accretion of matter to the star with a characteristic lifetime of a few days. A possible interpretation of these phenomena was considered. The transformation of the Hα profile may be related to a change in the outer latitudinal width of the boundary of the wind zone. The narrow variable absorption lines may be caused by the rotation of local azimuthal inhomogeneities in the wind zone owing to the interaction of the disk with the star's magnetosphere in a propeller regime. Several current theoretical papers that predict the formation of similar inhomogeneous wind structures were examined. It is suggested that the episodes with signs of accretion in the spectral line profiles cannot be a consequence of the modulation of these profiles by the star's rotation but are more likely caused by sudden, brief changes in the accretion rate. These spectral observations of HD 37806 should be continued in a search for cyclical variability in the spectral parameters in order to identify direct signs of magnetospheric accretion and detect possible binary behavior in this object.
AN IONIZED OUTFLOW FROM AB AUR, A HERBIG AE STAR WITH A TRANSITIONAL DISK
Rodríguez, Luis F.; Zapata, Luis A.; Ortiz-León, Gisela N.; Loinard, Laurent [Centro de Radioastronomía y Astrofísica, UNAM, Apdo. Postal 3-72 (Xangari), 58089 Morelia, Michoacán (Mexico); Dzib, Sergio A. [Max Planck Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn (Germany); Macías, Enrique; Anglada, Guillem, E-mail: [email protected] [Instituto de Astrofísica de Andalucía (CSIC), Apartado 3004, E-18080 Granada (Spain)
AB Aur is a Herbig Ae star with a transitional disk. Transitional disks present substantial dust clearing in their inner regions, most probably because of the formation of one or more planets, although other explanations are still viable. In transitional objects, accretion is found to be about an order of magnitude smaller than in classical full disks. Since accretion is believed to be correlated with outflow activity, centimeter free-free jets are expected to be present in association with these systems, at weaker levels than in classical protoplanetary (full) systems. We present new observations of the centimeter radio emission associated with the inner regions of AB Aur and conclude that the morphology, orientation, spectral index, and lack of temporal variability of the centimeter source imply the presence of a collimated, ionized outflow. The radio luminosity of this radio jet is, however, about 20 times smaller than that expected for a classical system of similar bolometric luminosity. We conclude that centimeter continuum emission is present in association with stars with transitional disks, but at levels that are becoming detectable only with the upgraded radio arrays. On the other hand, assuming that the jet velocity is 300 km s^-1, we find that the ratio of mass loss rate to accretion rate in AB Aur is ∼0.1, similar to that found for less evolved systems.
Using He I λ10830 to Diagnose Mass Flows Around Herbig Ae/Be Stars
Cauley, Paul W.; Johns-Krull, Christopher M.
The pre-main sequence Herbig Ae/Be stars (HAEBES) are the intermediate mass cousins of the low mass T Tauri stars (TTSs). However, it is not clear that the same accretion and mass outflow mechanisms operate identically in both mass regimes. Classical TTSs (CTTSs) accrete material from their disks along stellar magnetic field lines in a scenario called magnetospheric accretion. Magnetospheric accretion requires a strong stellar dipole field in order to truncate the inner gas disk. These fields are either absent or very weak on a large majority of HAEBES, challenging the view that magnetospheric accretion is the dominant accretion mechanism. If magnetospheric accretion does not operate similarly around HAEBES as it does around CTTSs, then strong magnetocentrifugal outflows, which are directly linked to accretion and are ubiquitous around CTTSs, may be driven less efficiently from HAEBE systems. Here we present high resolution spectroscopic observations of the He I λ10830 line in a sample of 48 HAEBES. He I λ10830 is an excellent tracer of both mass infall and outflow which is directly manifested as red and blue-shifted absorption in the profile morphologies. These features, among others, are common in our sample. The occurrence of both red and blue-shifted absorption profiles is less frequent, however, than is found in CTTSs. Statistical contingency tests confirm this difference at a significant level. In addition, we find strong evidence for smaller disk truncation radii in the objects displaying red-shifted absorption profiles. This is expected for HAEBES experiencing magnetospheric accretion based on their large rotation rates and weak magnetic field strengths. Finally, the low incidence of blue-shifted absorption in our sample compared to CTTSs and the complete lack of simultaneous red and blue-shifted absorption features suggests that magnetospheric accretion in HAEBES is less efficient at driving strong outflows. The stellar wind-like outflows that are
Undergraduate Observations of Separation and Position Angle of Double Stars ARY 6 AD and ARY 6 AE at Manzanita Observatory
Hoffert, Michael J.; Weise, Eric; Clow, Jenna; Hirzel, Jacquelyn; Leeder, Brett; Molyneux, Scott; Scutti, Nicholas; Spartalis, Sarah; Tokuhara, Corey
Six beginning astronomy students, part of an undergraduate stellar astronomy course, one advanced undergraduate student assistant, and a professor measured the position angles and separations of Washington Double Stars (WDS) 05460+2119 (also known as ARY 6 AD and ARY 6 AE). The measurements were made at the Manzanita Observatory (116° 20' 42" W, 32° 44' 5" N) of the Tierra Astronomical Institute on 10 Blackwood Rd. in Boulevard, California (www.youtube.com/watch?v=BHVdcMGBGDU), at an elevation of 4,500 ft. A Celestron 11" HD Edge telescope was used to measure the position angles and separations of ARY 6 AD and ARY 6 AE. The averages of our measurements are as follows: separation AD: trial 1 124.1 arcseconds and trial 2 124.5 arcseconds; separation AE: trial 1 73.3 arcseconds and trial 2 73.8 arcseconds; position angle AD: trial 1 159.9 degrees and trial 2 161.3 degrees; position angle AE: trial 1 232.6 degrees and trial 2 233.7 degrees.
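The trial averages quoted above combine straightforwardly, with the caveat that position angles are directional quantities and are best averaged on the circle. The sketch below uses the trial values from the abstract; the circular mean is overkill for angles this close together, but it avoids wrap-around bias near 0°/360°.

```python
# Combining per-trial double-star measurements: separations average
# arithmetically, position angles are averaged as directions.
import math

def mean_position_angle(angles_deg):
    """Circular mean of position angles in degrees."""
    x = sum(math.cos(math.radians(a)) for a in angles_deg)
    y = sum(math.sin(math.radians(a)) for a in angles_deg)
    return math.degrees(math.atan2(y, x)) % 360.0

sep_AD = [124.1, 124.5]   # arcseconds, trials 1 and 2
pa_AD = [159.9, 161.3]    # degrees, trials 1 and 2
sep_AE = [73.3, 73.8]
pa_AE = [232.6, 233.7]

print(f"ARY 6 AD: separation = {sum(sep_AD)/len(sep_AD):.1f} arcsec, "
      f"PA = {mean_position_angle(pa_AD):.1f} deg")
print(f"ARY 6 AE: separation = {sum(sep_AE)/len(sep_AE):.1f} arcsec, "
      f"PA = {mean_position_angle(pa_AE):.1f} deg")
```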
The variable Herbig Ae star HR 5999: VIII. Spectroscopic observations 1975 - 1985 and correlations with simulataneous photometry
Tjin A Djie, H.R.E.; The, P.S.; Andersen, J.; Nordstroem, B.; Finkenzeller, U.; Jankovics, I.
Visual spectroscopy of the irregularly variable Herbig Ae star HR 5999 over the past 15 years is summarised. The general features of the spectrum indicate that HR 5999 is an A5-7 III-IVe star with an extended circumstellar atmosphere. Typical lines have a rotationally broadened photospheric component and one or two blue-shifted "shell" components. The average radial velocity of the photospheric components, together with the common proper motions of the stars, strongly suggests that HR 5999 and the peculiar B6 star HR 6000 form a physical pair with a common age. In order to study the relation between variations in the spectrum and in the brightness of the star, three sequences of simultaneous spectroscopic and photometric observations have been obtained during the past decade. From these and other (isolated) simultaneous observations we concluded that: (a) the photospheric radial velocity component is variable, possibly with a period of about 14 days, which could point to the presence of a close companion; (b) occasionally, sudden variations may occur (within one night) either in the photospheric line components, or in the shell (absorption or emission) components, or both; (c) a decreasing brightness is correlated with increasing Hα-emission flux and a decreasing wind velocity in the shell region. An interpretation of these correlations in terms of magnetic activity is proposed.
Determinations of the 12C/13C Ratio for the Secondary Stars of AE Aquarii, SS Cygni, and RU Pegasi
Harrison, Thomas E.; Marra, Rachel E.
We present new moderate-resolution near-infrared spectroscopy of three CVs obtained using GNIRS on Gemini-North. These spectra covered three 13CO bandheads found in the K-band, allowing us to derive the isotopic abundance ratios for carbon. We find small 12C/13C ratios for all three donor stars. In addition, these three objects show carbon deficits, with AE Aqr being the most extreme ([C/Fe] = -1.4). This result confirms the conjecture that the donor stars in some long-period CVs have undergone considerable nuclear evolution prior to becoming semi-contact binaries. In addition to the results for carbon, we find that the abundance of sodium is enhanced in these three objects, and the secondary stars in both RU Peg and SS Cyg suffer magnesium deficits. Explaining such anomalies appears to require higher mass progenitors than commonly assumed for the donor stars of CVs. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina), and Ministério da Ciência, Tecnologia e Inovação (Brazil).
HDE 229189 - A variable Ae star in the field of NGC 6910
Halbedel, E.M.
The star HDE 229189 (BD +40 4145; in the field, though probably not a member, of the open cluster NGC 6910) has been found to exhibit large photometric changes in V magnitude over relatively short time scales. The total observed range was 0.416 V magnitude. An outburst in 1982 showed an even greater V range (ΔV = 1.66) and concomitant color changes. A coudé spectrum of the star taken a week before a minor outburst showed emission at Hα but no other unusual lines. The star is likely an A3 (V)e star, an unusual object in itself (since stars as late as A3 seldom show emission at Hα), or else possibly a member of a binary system undergoing mass transfer between the members. 19 refs
Polarimetry of the T Tau and Ae/Be Herbig stars
Bergner, Yu K; Miroshnichenko, A S; Yudin, R V; Yutanov, N Yu; Dzhakuseva, K G; Mukanov, D B
The results of polarimetric observations of stars suspected to be at the Pre-Main-Sequence stage are given. Simple phenomenological models of circumstellar shells are proposed for interpretation of the observed polarization variations.
Photometric investigation of the Herbig Ae/Be star MWC 297. I. Quasisimultaneous UBVRIJHK observations
Bergner, Yu.K.; Kozlov, V.P.; Krivtsov, A.A.; Miroshnichenko, A.S.; Yudin, R.V.; Yutanov, N.Yu.; Dzhakusheva, K.G.; Kuratov, K.S.; Mukanov, D.B.
In order to make a statistical investigation of the photometric variability of the young star MWC 297, a number of quasisimultaneous observations in the photometric bands UBVRIJHK have been made. The coefficients of the correlation between the variations of the brightness in the different photometric bands have been determined by the proposed method. An anticorrelation between the variations in the bands U and K has been found. A possible mechanism of the irregular variability of the star is proposed.
Results of a Long-Term Spectral and Photometric Monitoring of the Herbig Ae Star HD 179218
Kozlova, O. V.; Alekseev, I. Yu.
We present the results of long-term spectroscopic and photometric monitoring of the Herbig Ae star HD 179218. The high-resolution spectra (R = 20000) were obtained with the 2.6-m telescope at the Crimean Astrophysical Observatory (CrAO) in the region of the Hα emission line. The photometric observations were obtained with the 1.25-m telescope AZT-11 (CrAO) equipped with the UBVRI electrophotometer-polarimeter. Our results show that the Hα emission parameters demonstrate long-term variability on a time scale of about 4000 days. The same variability is observed in the stellar brightness in the V band. The unusual behavior of the V-I color index as a function of the stellar brightness in the V band suggests that there is an additional source of IR radiation around the star. We believe that the obtained results can be explained by the presence of a low-mass component in the vicinity of HD 179218.
HIGH-RESOLUTION 25 μM IMAGING OF THE DISKS AROUND HERBIG AE/BE STARS
Honda, M. [Department of Mathematics and Physics, Kanagawa University, 2946 Tsuchiya, Hiratsuka, Kanagawa 259-1293 (Japan); Maaskant, K. [Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA Leiden (Netherlands); Okamoto, Y. K. [Institute of Astrophysics and Planetary Sciences, Faculty of Science, Ibaraki University, 2-1-1 Bunkyo, Mito, Ibaraki 310-8512 (Japan); Kataza, H. [Department of Infrared Astrophysics, Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Sagamihara, Kanagawa 229-8510 (Japan); Yamashita, T. [National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588 (Japan); Miyata, T.; Sako, S.; Kamizuka, T. [Institute of Astronomy, School of Science, University of Tokyo, 2-21-1 Osawa, Mitaka, Tokyo 181-0015 (Japan); Fujiyoshi, T.; Fujiwara, H. [Subaru Telescope, National Astronomical Observatory of Japan, 650 North A'ohoku Place, Hilo, Hawaii 96720 (United States); Sakon, I.; Onaka, T. [Department of Astronomy, School of Science, University of Tokyo, Bunkyo-ku, Tokyo 113-0033 (Japan); Mulders, G. D. [Lunar and Planetary Laboratory, The University of Arizona, Tucson, AZ 85721 (United States); Lopez-Rodriguez, E.; Packham, C. [Department of Physics and Astronomy, University of Texas at San Antonio, One UTSA Circle, San Antonio, TX 78249 (United States)
We imaged circumstellar disks around 22 Herbig Ae/Be stars at 25 μm using Subaru/COMICS and Gemini/T-ReCS. Our sample consists of an equal number of objects from each of the two categories defined by Meeus et al.; 11 group I (flaring disk) and II (flat disk) sources. We find that group I sources tend to show more extended emission than group II sources. Previous studies have shown that the continuous disk is difficult to resolve with 8 m class telescopes in the Q band due to the strong emission from the unresolved innermost region of the disk. This indicates that the resolved Q-band sources require a hole or gap in the disk material distribution to suppress the contribution from the innermost region of the disk. As many group I sources are resolved at 25 μm, we suggest that many, but not all, group I Herbig Ae/Be disks have a hole or gap and are (pre-)transitional disks. On the other hand, the unresolved nature of many group II sources at 25 μm supports the idea that group II disks have a continuous flat disk geometry. It has been inferred that group I disks may evolve into group II through the settling of dust grains into the mid-plane of the protoplanetary disk. However, considering the growing evidence for the presence of a hole or gap in the disk of group I sources, such an evolutionary scenario is unlikely. The difference between groups I and II may reflect different evolutionary pathways of protoplanetary disks.
EFFECT OF PHOTODESORPTION ON THE SNOW LINES AT THE SURFACE OF OPTICALLY THICK CIRCUMSTELLAR DISKS AROUND HERBIG Ae/Be STARS
Oka, Akinori; Nakamoto, Taishi; Inoue, Akio K.; Honda, Mitsuhiko
We investigate the effect of photodesorption on the snow line position at the surface of a protoplanetary disk around a Herbig Ae/Be star, motivated by the detection of water ice particles at the surface of the disk around HD 142527 by Honda et al. For this aim, we obtain the density and temperature structure in the disk with a 1+1D radiative transfer and determine the distribution of water ice particles in the disk by the balance between condensation, sublimation, and photodesorption. We find that photodesorption induced by far-ultraviolet radiation from the central star depresses the ice-condensation front toward the mid-plane and pushes the surface snow line significantly outward when the stellar effective temperature exceeds a certain critical value. This critical effective temperature depends on the stellar luminosity and mass, the water abundance in the disk, and the yield of photodesorption. We present an approximate analytic formula for the critical temperature. We separate Herbig Ae/Be stars into two groups on the HR diagram according to the critical temperature: one is the disks where photodesorption is effective and from which we may not find ice particles at the surface, and the other is the disks where photodesorption is not effective. We estimate the snow line position at the surface of the disk around HD 142527 to be 100-300 AU, which is consistent with the water ice detection at >140 AU in the disk. All the results depend on the dust grain size in a complex way, and this point requires more work in the future.
First Images from the PIONIER/VLTI optical interferometry imaging survey of Herbig Ae/Be stars
Kluska, Jacques; Malbet, Fabien; Berger, Jean-Philippe; Benisty, Myriam; Lazareff, Bernard; Le Bouquin, Jean-Baptiste; Baron, Fabien; Dominik, Carsten; Isella, Andrea; Juhasz, Attila; Kraus, Stefan; Lachaume, Régis; Ménard, François; Millan-Gabet, Rafael; Monnier, John; Pinte, Christophe; Thi, Wing-Fai; Thiébaut, Eric; Zins, Gérard
The morphology of the close environment of Herbig stars is being revealed step by step and appears to be quite complex. Many physical phenomena could interplay: the dust sublimation causing a puffed-up inner rim, a dusty halo, a dusty wind or an inner gaseous component. To investigate these regions more deeply, getting images at the first Astronomical Unit scale is crucial. This has become possible with near-infrared instruments on the VLTI. We are carrying out the first Large Program survey of HAeBe stars with statistics on the geometry of these objects at the first astronomical unit scale and the first images of the very close environment of some of them. We have developed a new numerical method specific to young stellar objects which removes the stellar component, reconstructing an image of the environment only. To do so we are using the differences in the spectral behaviour between the star and its environment. The images reveal the environment, which is not polluted by the star, and allow us to derive the best fit for the flux ratio and the spectral slope between the two components (stellar and environmental). We present the results of the survey with some statistics and the first images of Herbig stars made by PIONIER on the VLTI.
AE 941.
AE 941 [Arthrovas, Neoretna, Psovascar] is shark cartilage extract that inhibits angiogenesis. AE 941 acts by blocking the two main pathways that contribute to the process of angiogenesis, matrix metalloproteases and the vascular endothelial growth factor signalling pathway. When initial development of AE 941 was being conducted, AEterna assigned the various indications different trademarks. Neovastat was used for oncology, Psovascar was used for dermatology, Neoretna was used for ophthalmology and Arthrovas was used for rheumatology. However, it is unclear if these trademarks will be used in the future and AEterna appears to only be using the Neovastat trademark in its current publications regardless of the indication. AEterna Laboratories signed commercialisation agreements with Grupo Ferrer Internacional SA of Spain and Medac GmbH of Germany in February 2001. Under the terms of the agreement, AEterna has granted exclusive commercialisation and distribution rights to AE 941 in oncology to Grupo Ferrer Internacional for the Southern European countries of France, Belgium, Spain, Greece, Portugal and Italy. It also has rights in Central and South America. Medac GmbH will have marketing rights in Germany, the UK, Scandinavia, Switzerland, Austria, Ireland, the Netherlands and Eastern Europe. In October 2002, AEterna Laboratories announced that it had signed an agreement with Australian healthcare products and services company Mayne Group for marketing AE 941 (as Neovastat) in Australia, New Zealand, Canada and Mexico. In March 2003, AEterna Laboratories announced it has signed an agreement with Korean based LG Life Sciences Ltd for marketing AE 941 (as Neovastat) in South Korea. The agreement provides AEterna with upfront and milestone payments, as well as a return on manufacturing and sales of AE 941. AEterna Laboratories had granted Alcon Laboratories an exclusive worldwide licence for AE 941 for ophthalmic products. However, this licence has been terminated. In
Polarized Disk Emission from Herbig Ae/Be Stars Observed Using Gemini Planet Imager: HD 144432, HD 150193, HD 163296, and HD 169142
Monnier, John D.; Aarnio, Alicia; Adams, Fred C.; Calvet, Nuria; Hartmann, Lee [Astronomy Department, University of Michigan, Ann Arbor, MI 48109 (United States); Harries, Tim J.; Hinkley, Sasha; Kraus, Stefan [University of Exeter, Exeter (United Kingdom); Andrews, Sean; Wilner, David [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 91023 (United States); Espaillat, Catherine [Boston University, Boston, MA (United States); McClure, Melissa [European Southern Observatory, Garching (Germany); Oppenheimer, Rebecca [American Museum of Natural History, New York (United States); Perrin, Marshall [Space Telescope Science Institute, Baltimore, MD (United States)
In order to look for signs of ongoing planet formation in young disks, we carried out the first J-band polarized emission imaging of the Herbig Ae/Be stars HD 150193, HD 163296, and HD 169142 using the Gemini Planet Imager, along with new H-band observations of HD 144432. We confirm the complex "double ring" structure for the nearly face-on system HD 169142 first seen in H-band, finding the outer ring to be substantially redder than the inner one in polarized intensity. Using radiative transfer modeling, we developed a physical model that explains the full spectral energy distribution and J- and H-band surface brightness profiles, suggesting that the differential color of the two rings could come from reddened starlight traversing the inner wall and may not require differences in grain properties. In addition, we clearly detect an elongated, off-center ring in HD 163296 (MWC 275), locating the scattering surface to be 18 au above the midplane at a radial distance of 77 au, co-spatial with a ring seen at 1.3 mm by ALMA linked to the CO snow line. Lastly, we report a weak tentative detection of scattered light for HD 150193 (MWC 863) and a non-detection for HD 144432; the stellar companion known for each of these targets has likely disrupted the material in the outer disk of the primary star. For HD 163296 and HD 169142, the prominent outer rings we detect could be evidence for giant planet formation in the outer disk or a manifestation of large-scale dust growth processes possibly related to snow-line chemistry.
VLBA DETERMINATION OF THE DISTANCE TO NEARBY STAR-FORMING REGIONS. IV. A PRELIMINARY DISTANCE TO THE PROTO-HERBIG AeBe STAR EC 95 IN THE SERPENS CORE
Dzib, Sergio; Loinard, Laurent; Rodriguez, Luis F.; Mioduszewski, Amy J.; Boden, Andrew F.; Torres, Rosa M.
Using the Very Long Baseline Array, we observed the young stellar object EC 95 in the Serpens cloud core at eight epochs from 2007 December to 2009 December. Two sources are detected in our field and are shown to form a tight binary system. The primary (EC 95a) is a 4-5 M_sun proto-Herbig AeBe object (arguably the youngest such object known), whereas the secondary (EC 95b) is most likely a low-mass T Tauri star. Interestingly, both sources are non-thermal emitters. While T Tauri stars are expected to power a corona because they are convective while they go down the Hayashi track, intermediate-mass stars approach the main sequence on radiative tracks. Thus, they are not expected to have strong superficial magnetic fields, and should not be magnetically active. We review several mechanisms that could produce the non-thermal emission of EC 95a and argue that the observed properties of EC 95a might be most readily interpreted if it possessed a corona powered by a rotation-driven convective layer. Using our observations, we show that the trigonometric parallax of EC 95 is π = 2.41 ± 0.02 mas, corresponding to a distance of 414.9 +4.4 -4.3 pc. We argue that this implies a distance to the Serpens core of 415 ± 5 pc and a mean distance to the Serpens cloud of 415 ± 25 pc. This value is significantly larger than previous estimates (d ∼ 260 pc) based on measurements of the extinction suffered by stars in the direction of Serpens. A possible explanation for this discrepancy is that these previous observations picked out foreground dust clouds associated with the Aquila Rift system rather than Serpens itself.
INFRARED SPECTROSCOPY OF SYMBIOTIC STARS. VIII. ORBITS FOR THREE S-TYPE SYSTEMS: AE ARAE, Y CORONAE AUSTRALIS, AND SS 73-147
Fekel, Francis C.; Hinkle, Kenneth H.; Joyce, Richard R.; Wood, Peter R.
With new infrared radial velocities we have computed orbits of the M giants in three southern S-type symbiotic systems. AE Ara and SS 73-147 have circular orbits with periods of 803 and 820 days, respectively. The eccentric orbit of Y CrA has a period that is about twice as long, 1619 days. Except for CH Cyg it is currently the S-type symbiotic system with the longest period for which a spectroscopic orbit has been determined. The Paschen δ emission line velocities of AE Ara are nearly in antiphase with the M giant absorption feature velocities and result in a mass ratio of 2.7. Emission lines in the 1.005 μm region for the other two symbiotic systems are not good proxies for the hot components in those systems. There is no evidence that these three symbiotics are eclipsing. With spectral classes of M5.5 or M6, the three giants presumably also have velocity variations that result from pulsations, but we have been unable to identify specific pulsation periods in the absorption line velocity residuals.
Discovery of a point-like source and a third spiral arm in the transition disk around the Herbig Ae star MWC 758
Reggiani, M.; Christiaens, V.; Absil, O.; Mawet, D.; Huby, E.; Choquet, E.; Gomez Gonzalez, C. A.; Ruane, G.; Femenia, B.; Serabyn, E.; Matthews, K.; Barraza, M.; Carlomagno, B.; Defrère, D.; Delacroix, C.; Habraken, S.; Jolivet, A.; Karlsson, M.; Orban de Xivry, G.; Piron, P.; Surdej, J.; Vargas Catalan, E.; Wertz, O.
Context. Transition disks offer the extraordinary opportunity to look for newly born planets and to investigate the early stages of planet formation. Aim. In this context we observed the Herbig A5 star MWC 758 with the L'-band vector vortex coronagraph installed in the near-infrared camera and spectrograph NIRC2 at the Keck II telescope, with the aim of unveiling the nature of the spiral structure by constraining the presence of planetary companions in the system. Methods: Our high-contrast imaging observations show a bright (ΔL' = 7.0 ± 0.3 mag) point-like emission south of MWC 758 at a deprojected separation of 20 au (r = 0.''111 ± 0.''004) from the central star. We also recover the two spiral arms (southeast and northwest), already imaged by previous studies in polarized light, and discover a third arm to the southwest of the star. No additional companions were detected in the system down to 5 Jupiter masses beyond 0.''6 from the star. Results: We propose that the bright L'-band emission could be caused by the presence of an embedded and accreting protoplanet, although the possibility of it being an asymmetric disk feature cannot be excluded. The spiral structure is probably not related to the protoplanet candidate, unless on an inclined and eccentric orbit, and it could be due to one (or more) yet undetected planetary companions at the edge of or outside the spiral pattern. Future observations and additional simulations will be needed to shed light on the true nature of the point-like source and its link with the spiral arms. The reduced images (FITS files) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A74
AES Modular Power Systems
National Aeronautics and Space Administration — The AES Modular Power Systems (AMPS) project will demonstrate and infuse modular power electronics, batteries, fuel cells, and autonomous control for exploration...
A study of dust properties in the inner sub-au region of the Herbig Ae star HD 169142 with VLTI/PIONIER
Chen, L.; Kóspál, Á.; Ábrahám, P.; Kreplin, A.; Matter, A.; Weigelt, G.
Context. An essential step to understanding protoplanetary evolution is the study of disks that contain gaps or inner holes. The pre-transitional disk around the Herbig star HD 169142 exhibits multi-gap disk structure, differentiated gas and dust distribution, planet candidates, and near-infrared fading in the past decades, which make it a valuable target for a case study of disk evolution. Aims: Using near-infrared interferometric observations with VLTI/PIONIER, we aim to study the dust properties in the inner sub-au region of the disk in the years 2011-2013, when the object is already in its near-infrared faint state. Methods: We first performed simple geometric modeling to characterize the size and shape of the NIR-emitting region. We then performed Monte-Carlo radiative transfer simulations on grids of models and compared the model predictions with the interferometric and photometric observations. Results: We find that the observations are consistent with optically thin gray dust lying at Rin 0.07 au, passively heated to T 1500 K. Models with sub-micron optically thin dust are excluded because such dust will be heated to much higher temperatures at similar distance. The observations can also be reproduced with a model consisting of optically thick dust at Rin 0.06 au, but this model is plausible only if refractory dust species enduring 2400 K exist in the inner disk. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programs 190.C-963 and 087.C-0709.
A More Compact AES
Canright, David; Osvik, Dag Arne
We explore ways to reduce the number of bit operations required to implement AES. One way involves optimizing the composite field approach for entire rounds of AES. Another way is integrating the Galois multiplications of MixColumns with the linear transformations of the S-box. Combined with careful optimizations, these reduce the number of bit operations to encrypt one block by 9.0%, compared to earlier work that used the composite field only in the S-box. For decryption, the improvement is 13.5%. This work may be useful both as a starting point for a bit-sliced software implementation, where reducing operations increases speed, and also for hardware with limited resources.
Popular Herbig AE star AB Aur
Shevchenko, V.S.
The variability of the AB Aur emission-line Hα, Hβ and Hγ profiles, equivalent widths (EWλ) and relative intensities has been observed with a photoelectric scanner. During the 20 d observation period, EWλ(Hα) ranged from 23.20 to 35.35 Å. The mean EWλ(Hα) is 27.25 Å, with a daily average deviation of 3.60 ± 0.07 Å. The minimum timescale of variability is 1 h. The chromospheric lines (the near-infrared Ca II triplet and the Ca II K line) and the emission lines Hβ-H13, P12-P20 and O I 8446.5 Å, as well as the variability of other lines, have been studied on photographic and image-tube spectra. The intensity of these lines and EWλ changed by a factor of 2-4 over intervals from 1 h to several years. The nature of the emission-line variability of AB Aur makes it possible to assume that the ''deep chromosphere'' is not a centre-symmetrical or axisymmetrical formation but a conglomerate of gas condensations of different densities and velocities.
Secure Multiparty AES
Damgård, Ivan; Keller, Marcel
We propose several variants of a secure multiparty computation protocol for AES encryption. The best variant requires 2200 + 400/255 expected elementary operations in expected 70 + 20/255 rounds to encrypt one 128-bit block with a 128-bit key. We implemented the variants using VIFF, a software framework for implementing secure multiparty computation (MPC). Tests with three players (passive security against at most one corrupted player) in a local network showed that one block can be encrypted in 2 seconds. We also argue that this result could be improved by an optimized implementation.
References: AePW publications
National Aeronautics and Space Administration — This page is the repository for the publications resulting from the AePW. This includes the special sessions at conferences: AIAA ASM 2012, Grapevine TX; AIAA SDM...
AE Recorder Characteristics and Development.
Partridge, Michael E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Curtis, Shane Keawe [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); McGrogan, David Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
The Anomalous Environment Recorder (AE Recorder) provides a robust data recording capability for multiple high-shock applications including earth penetrators. The AE Recorder, packaged as a 2.4" diameter cylinder 3" tall, acquires 12 accelerometer, 2 auxiliary, and 6 discrete signal channels at 250k samples/second. Recording depth is 213 seconds plus 75 ms of pre-trigger data. The mechanical and electrical designs and the firmware are described, as well as support electronics designed for the first use of the recorder.
IUE observations of W UMa systems AE Phoenicis and TY Mensae
Rucinski, S.M.
SWP spectra of AE Phe and TY Men are presented and discussed. The spectrum of AE Phe is typical in showing strong emission lines which originate in the chromosphere and transition region. On the basis of the strength of the He II emission line we predict that AE Phe should be a moderately strong X-ray source. The spectrum of TY Men is too weakly exposed for a full discussion. It is pointed out that this star is one of the most important for understanding the activity in W UMa systems. 18 refs., 3 figs., 1 tab. (author)
A Novel Recommendation To AES Limitation
Falguni Patel
Among all available conventional encryption algorithms, the AES (Advanced Encryption Standard) is the most secure and most widely used. The AES algorithm is used by a variety of applications, such as archive and compression tools, file encryption, encrypting file systems, disk partition encryption, and networking protocols (e.g., the Signal Protocol), among others. This paper highlights the brute-force attack and cryptanalysis attacks on the AES algorithm. This paper also discusses a novel recommendation for a combination model of the AES algorithm and a Random-X cipher.
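The scale of the brute-force attack mentioned above is easy to make concrete. A minimal sketch, where the guess rate of 10^12 keys per second is an arbitrary assumption and not a figure from the paper:

```python
# Rough time to exhaust the AES keyspace; the guess rate is an assumed figure.
def years_to_exhaust(key_bits, guesses_per_second=1e12):
    seconds = 2 ** key_bits / guesses_per_second   # full keyspace search
    return seconds / (3600 * 24 * 365.25)

for bits in (128, 192, 256):
    print(bits, f"{years_to_exhaust(bits):.3e} years")
```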
Attacks and countermeasures on AES and ECC
Tange, Henrik; Andersen, Birger
AES (Advanced Encryption Standard) is widely used in LTE and Wi-Fi communication systems. AES has recently been exposed to new attacks which have questioned the overall security of AES. The newest attack is a so-called biclique attack, which uses the fact that the content of the state array...
Application of ICP-AES at the RIKILT
Ruig, de W.G.
Report on the application of ICP-AES at the RIKILT. In ICP-AES, two modes of light-emission detection are applied, namely simultaneous and sequential. The advantages and disadvantages of ICP-AES compared with AAS are set out.
Optical spectrophotometry of oscillations and flickering in AE Aquarii
Welsh, William F.; Horne, Keith; Oke, J. B.
We observed rapid variations in the nova-like cataclysmic variable AE Aquarii for 1.7 hr with 4.3 s time resolution using the 30-channel (3227-10494 A) spectrophotometer on the Hale 5 m telescope. The 16.5 and 33.0 s oscillations show a featureless blue spectrum that can be represented by a blackbody with temperature and area much smaller than the accretion disk. Models consisting of the sum of a K star spectrum and a hydrogen slab in LTE at T = 6000-10,000 K can fit the spectrum of AE Aquarii reasonably well. The spectrum of a flare indicates optically thin gas with T = 8000-12,000 K. The energy released by the flare is large compared to typical stellar flares.
Developing A/E Capabilities
Gonzalez, A.; Gurbindo, J.
During the last few years, the methods used by EMPRESARIOS AGRUPADOS and INITEC to perform Architect-Engineering work in Spain for nuclear projects have undergone a process of significant change in project management and engineering approaches. Specific practical examples of management techniques and design practices which represent a good record of results will be discussed. They are identified as areas of special interest in developing A/E capabilities for nuclear projects. Command of these areas should produce major payoffs in local participation and contribute to achieving real nuclear engineering capabilities in the country. (author)
Infrared observations of AE Aquarii
Tanzi, E. G.; Chincarini, G.; Tarenghi, M.
Broadband infrared observations of the cataclysmic variable AE Aquarii are reported. The observations were obtained in the J, H, K and L filters with the InSb photometer attached to the 1-m telescope of the European Southern Observatory. The infrared energy distribution observed from 0.35 to 3.5 microns for phase 0.5 suggests a spectral type of K5 V for the secondary and a distance to the system of approximately 70 pc if an absolute magnitude of 7.3 is assumed. Monitoring of the flux at 2.2 microns reveals a variability with an amplitude of approximately 0.3 magnitude over one third of the orbital period, the nature of which is under investigation.
Biclique cryptanalysis of the full AES
Bogdanov, Andrey; Khovratovich, Dmitry; Rechberger, Christian
Since Rijndael was chosen as the Advanced Encryption Standard (AES), improving upon 7-round attacks on the 128-bit key variant (out of 10 rounds) or upon 8-round attacks on the 192/256-bit key variants (out of 12/14 rounds) has been one of the most difficult challenges in the cryptanalysis of block ciphers for more than a decade. In this paper, we present the novel technique of block cipher cryptanalysis with bicliques, which leads to the following results: the first key recovery method for the full AES-128 with computational complexity 2^126.1; the first key recovery method for the full AES-192 with computational complexity 2^189.7; the first key recovery method for the full AES-256 with computational complexity 2^254.4; and key recovery methods with lower complexity for the reduced-round versions of AES not considered before, including cryptanalysis of 8-round AES-128 with complexity 2^124.9. Preimage search...
Chromosome isolation by flow sorting in Aegilops umbellulata and Ae. comosa and their allotetraploid hybrids Ae. biuncialis and Ae. geniculata.
István Molnár
This study evaluates the potential of flow cytometry for chromosome sorting in two wild diploid wheats, Aegilops umbellulata and Ae. comosa, and their natural allotetraploid hybrids Ae. biuncialis and Ae. geniculata. Flow karyotypes obtained after the analysis of DAPI-stained chromosomes were characterized and the content of chromosome peaks was determined. Peaks of chromosome 1U could be discriminated in flow karyotypes of Ae. umbellulata and Ae. biuncialis and the chromosome could be sorted with purities exceeding 95%. The remaining chromosomes formed composite peaks and could be sorted in groups of two to four. Twenty-four wheat SSR markers were tested for their position on chromosomes of Ae. umbellulata and Ae. comosa using PCR on DNA amplified from flow-sorted chromosomes and genomic DNA of wheat-Ae. geniculata addition lines, respectively. Six SSR markers were located on particular Aegilops chromosomes using sorted chromosomes, thus confirming the usefulness of this approach for physical mapping. The SSR markers are suitable for marker-assisted selection of wheat-Aegilops introgression lines. The results obtained in this work provide new opportunities for dissecting genomes of wild relatives of wheat with the aim to assist in alien gene transfer and discovery of novel genes for wheat improvement.
Dynamic AES – Extending the Lifetime?
It has been proven that AES is vulnerable to side-channel attacks, related sub-key attacks and biclique attacks. This paper introduces a new dynamic version of AES where the main flow depends on the TNAF (τ-adic Non-Adjacent Form) value. This new approach can prevent side-channel attacks, related sub-key attacks and biclique attacks.
AES in deficit on Georgian trade
According to the journal Vedomosti, the Russian state energy company RAO JES Rossii paid the American company AES only 23 million USD for the acquisition of its Georgian assets, which is less than a tenth of the sum that AES had originally paid for them. The total transaction value, including debts, thus reached 80 million USD. However, internal documents of the American AES confirm that the Americans paid 260 million USD for the Georgian assets and additionally assumed over 60 million USD of obligations. The Russians bought the Georgian AES assets through the Finnish subsidiary Nordic Oy. They thereby obtained a 75 per cent share in the Telasi company, which operates the distribution network in Tbilisi, two blocks of a power plant in Tbilisi, and a half share in the AES-Transenergy company, which exports electric energy from Georgia to Turkey. RAO JES also obtained management rights over the Hramesi company, which owns two water plants. Through these assets, the Russian company controls one fifth of the production and 35 per cent of the sales of electric energy in Georgia.
Irvine, J.M.
The subject is covered in chapters entitled: introduction (resume of stellar evolution, gross characteristics of neutron stars); pulsars (pulsar characteristics, pulsars as neutron stars); neutron star temperatures (neutron star cooling, superfluidity and superconductivity in neutron stars); the exterior of neutron stars (the magnetosphere, the neutron star 'atmosphere', pulses); neutron star structure; neutron star equations of state. (U.K.)
AE/VCE Unconfirmed Vernal Pools
Vermont Center for Geographic Information — This dataset is derived from a project by the Vermont Center for Ecostudies (VCE) and Arrowwood Environmental (AE) to map vernal pools throughout the state of Vermont....
AE/VCE Confirmed Vernal Pools
Clinical epidemiology of human AE in Europe.
Vuitton, D A; Demonmerot, F; Knapp, J; Richou, C; Grenouillet, F; Chauchet, A; Vuitton, L; Bresson-Hadni, S; Millon, L
This review gives a critical update of the situation regarding alveolar echinococcosis (AE) in Europe in humans, based on existing publications and on findings of national and European surveillance systems. All sources point to an increase in human cases of AE in the "historic endemic areas" of Europe, namely Germany, Switzerland, Austria and France, and to the emergence of human cases in countries where the disease had never been recognised until the end of the 20th century, especially in central-eastern and Baltic countries. Both increase and emergence could be due only to methodological biases; this point is discussed in the review. One explanation may be given by changes in the animal reservoir of the parasite, Echinococcus multilocularis (increase in the global population of foxes in Europe and its urbanisation, as well as a possible increased involvement of pet animals as definitive infectious hosts). The review also focuses on two more original approaches: (1) how changes in therapeutic attitudes toward malignant and chronic inflammatory diseases may affect the epidemiology of AE in the future in Europe, since a recent survey of such cases in France showed the emergence of AE in patients with immune suppression since the beginning of the 21st century; (2) how setting up a network of referral centres in Europe based on common studies on the care management of patients might contribute to a better knowledge of AE epidemiology in the future. Copyright © 2015. Published by Elsevier B.V.
15 CFR 758.2 - Automated Export System (AES).
15 CFR 758.2 (Commerce and Foreign Trade; Export Clearance Requirements): Automated Export System (AES). The Census Bureau's Foreign Trade Statistics ... electronically using the Automated Export System (AES). In order to use AES, you must apply directly to the...
AES ALGORITHM IMPLEMENTATION IN PROGRAMMING LANGUAGES
Luminiţa DEFTA
Information encryption is the use of an algorithm to convert a readable message into an encrypted one. It is used to protect data against unauthorized access. Protected data can be stored on a media device or can be transmitted through the network. In this paper we describe a concrete implementation of the AES algorithm in the Java programming language (available from the Java Development Kit 6 libraries) and in C (using the OpenSSL library). AES (Advanced Encryption Standard) is a symmetric-key encryption algorithm formally adopted by the U.S. government and was selected after a long process of standardization.
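For orientation, a usage pattern analogous to the Java/OpenSSL implementations described above can be sketched in Python with the third-party cryptography package; this is only an illustrative round trip under assumed key and IV handling, not the paper's code.

```python
# Illustrative AES-128-CBC round trip with the Python 'cryptography' package
# (the paper's implementations are in Java and C/OpenSSL; this is only analogous).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

key, iv = os.urandom(16), os.urandom(16)
message = b"attack at dawn"

padder = padding.PKCS7(128).padder()                 # pad to the 16-byte block size
plaintext = padder.update(message) + padder.finalize()

enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = enc.update(plaintext) + enc.finalize()

dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
unpadder = padding.PKCS7(128).unpadder()
recovered = unpadder.update(dec.update(ciphertext) + dec.finalize()) + unpadder.finalize()
assert recovered == message
```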
Study, analysis, assess and compare the nuclear engineering systems of nuclear power plant with different reactor types VVER-1000, namely AES-91, AES-92 and AES-2006
Le Van Hong; Tran Chi Thanh; Hoang Minh Giang; Le Dai Dien; Nguyen Nhi Dien; Nguyen Minh Tuan
On November 25, 2009, in Hanoi, the National Assembly approved the resolution on the investment policy for the nuclear power project in Ninh Thuan province, which includes two sites, each with two units of around 1000 MWe. For the nuclear power project at Ninh Thuan 1, the Vietnam Government signed the Joint-Governmental Agreement with the Russian Government for building the nuclear power plant with a VVER-type reactor. At the present time, the Russian consultant has proposed four reactor technologies that can be used for the Ninh Thuan 1 project, namely AES-91, AES-92, AES-2006/V491 and AES-2006/V392M. This report presents the main reactor engineering systems of nuclear power plants with VVER-1000/1200. The results from the analysis, comparison and assessment of the designs of AES-91, AES-92 and AES-2006 are also presented. The obtained results show that the AES-2006 type is an appropriate selection for Vietnam. (author)
Symbiotic stars
Boyarchuk, A.A.
There are some arguments that the symbiotic stars are binaries, where one component is a red giant and the other is a small hot star which is exciting a nebula. The symbiotic stars belong to the old disc population. Probably, symbiotic stars represent the same evolutionary stage for double stars as planetary nebulae do for single stars. (Auth.)
AES Water Architecture Study Interim Results
Sarguisingh, Miriam J.
The mission of the Advanced Exploration System (AES) Water Recovery Project (WRP) is to develop advanced water recovery systems in order to enable NASA human exploration missions beyond low earth orbit (LEO). The primary objective of the AES WRP is to develop water recovery technologies critical to near term missions beyond LEO. The secondary objective is to continue to advance mid-readiness level technologies to support future NASA missions. An effort is being undertaken to establish the architecture for the AES Water Recovery System (WRS) that meets both near and long term objectives. The resultant architecture will be used to guide future technical planning, establish a baseline development roadmap for technology infusion, and establish baseline assumptions for integrated ground and on-orbit environmental control and life support systems (ECLSS) definition. This study is being performed in three phases. Phase I of this study established the scope of the study through definition of the mission requirements and constraints, as well as identifying all possible WRS configurations that meet the mission requirements. Phase II of this study focused on the near term space exploration objectives by establishing an ISS-derived reference schematic for long-duration (>180 day) in-space habitation. Phase III will focus on the long term space exploration objectives, trading the viable WRS configurations identified in Phase I to identify the ideal exploration WRS. The results of Phases I and II are discussed in this paper.
AE Test of Calcareous Sands with Particle Rushing
Tan Fengyi
The particles of calcareous sand were forced to crush, and the energy from the crushing was released in the form of sound waves. Therefore the AE technique was used to detect the AE signals of the calcareous sand as it crushed. By studying the AE characteristics, the mechanics of calcareous sand were studied. The study showed that: (1) there was AE activity under the low confining pressure condition at the beginning of the test; (2) there was more and more AE activity as the test continued to its end; (3) the AE activity of the calcareous sand occurred throughout the whole test; (4) particle crushing and mutual friction played different roles in the AE activity of the calcareous sand. Finally, an AE model based on the particle crushing of calcareous sand was discussed.
22 CFR 120.30 - The Automated Export System (AES).
22 CFR 120.30 (Foreign Relations; Definitions): The Automated Export System (AES). The Automated Export System (AES) is the Department of ... system for collection of export data for the Department of State. In accordance with this subchapter U.S...
IUE observations of new A star candidate proto-planetary systems
Grady, Carol A.
As a result of the detection of accreting gas in the A5e PMS Herbig Ae star, HR 5999, most of the observations for this IUE program were devoted to Herbig Ae stars rather than to main sequence A stars. Mid-UV emission at optical minimum light was detected for UX Ori (A1e), BF Ori (A5e), and CQ Tau (F2e). The presence of accreting gas in HD 45677 and HD 50138 prompted reclassification of these stars as Herbig Be stars rather than as protoplanetary nebulae. Detailed results are discussed.
Effect of ECAP processing on corrosion resistance of AE21 and AE42 magnesium alloys
Minárik, P.; Král, R.; Janeček, M.
Corrosion properties of AE21 and AE42 magnesium alloys were investigated in the extruded state and after subsequent 8 passes of Equal Channel Angular Pressing (ECAP) via route Bc, by Electrochemical Impedance Spectroscopy (EIS) in 0.1 M NaCl solution. The resulting microstructure was observed by the Transmission Electron Microscope (TEM) and the Scanning Electron Microscope (SEM). The corrosion layer created after 7 days of immersion was observed by SEM in order to explain the different evolution of the corrosion resistance after ECAP processing in the two alloys. It was found that Al-rich Al11RE3 dispersed particles (present in both alloys) strongly influence the corrosion process and enhance the corrosion resistance. The ultra-fine-grained structure was found to reduce the corrosion resistance in AE21. On the other hand, the microstructure of AE42 after ECAP, and particularly the better distribution of the alloying elements in the matrix, enhances the corrosion resistance when compared to the extruded material.
Minárik, P., E-mail: [email protected] [Charles University, Department of Physics of Materials, Prague (Czech Republic); Král, R.; Janeček, M. [Charles University, Department of Physics of Materials, Prague (Czech Republic)
Atomic-AES: A compact implementation of the AES encryption/decryption core
Banik, Subhadeep; Bogdanov, Andrey; Regazzoni, Francesco
The implementation of the AES encryption core by Moradi et al. at Eurocrypt 2011 is one of the smallest in terms of gate area. The circuit takes around 2400 gates and operates on an 8-bit datapath. However, this is an encryption-only core and is unable to cater to block cipher modes like CBC and ELm...
15 CFR Appendix D to Part 30 - AES Filing Citation, Exemption and Exclusion Legends
15 CFR, Commerce and Foreign Trade, Appendix D to Part 30: AES Filing Citation, Exemption and Exclusion Legends. I. USML Proof of Filing Citation, AES ITN Example: AES X20060101987654. II. AES Proof of Filing Citation, subpart A § 30.7, AES ITN...
Development of AE monitoring system for journal bearing
Yoon, Dong Jin; Kwon, Oh Yang; Chung, Min Hwa
For the purpose of monitoring bearing conditions in rotating machinery, a system for journal bearing diagnosis by AE was developed. The acoustic emission technique is used to detect abnormal conditions in the bearing system. Various data such as AE events, the rms voltage level of the AE signals, and AE parameters were acquired during experiments with a simulated journal bearing system. Based on the above results and practical application in a power plant, algorithms and judgement criteria for the diagnosis system were established. The bearing diagnosis system is composed of four parts as follows: a sensing part (AE sensor and preamplifier), a signal processing part (rms-to-dc conversion to measure the AE rms voltage), an interface part (connecting the rms signal to a PC using an A/D converter), and a graphic display and software part (displaying the bearing condition and managing the diagnosis program).
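The rms voltage level used by the diagnosis system above can be sketched in software; in this minimal example the window length and the synthetic 1 MHz test signal are assumptions, not parameters from the described system.

```python
# Windowed RMS level of a digitized AE signal; the window size is an assumed parameter.
import numpy as np

def rms_levels(signal, window=1024):
    n = len(signal) // window
    frames = np.asarray(signal[:n * window], dtype=float).reshape(n, window)
    return np.sqrt((frames ** 2).mean(axis=1))       # one RMS value per window

fs = 1_000_000                                        # assumed 1 MHz sampling rate
t = np.arange(fs) / fs
burst = np.sin(2 * np.pi * 150e3 * t) * (t > 0.5)     # synthetic AE-like burst
print(rms_levels(burst).max())                        # about 0.707 for a unit sine burst
```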
Stars and Star Myths.
Eason, Oliver
Myths and tales from around the world about constellations and facts about stars in the constellations are presented. Most of the stories are from Greek and Roman mythology; however, a few Chinese, Japanese, Polynesian, Arabian, Jewish, and American Indian tales are also included. Following an introduction, myths are presented for the following 32…
Energy efficiency analysis and implementation of AES on an FPGA
Kenney, David
The Advanced Encryption Standard (AES) was developed by Joan Daemen and Vincent Rjimen and endorsed by the National Institute of Standards and Technology in 2001. It was designed to replace the aging Data Encryption Standard (DES) and be useful for a wide range of applications with varying throughput, area, power dissipation and energy consumption requirements. Field Programmable Gate Arrays (FPGAs) are flexible and reconfigurable integrated circuits that are useful for many different applications including the implementation of AES. Though they are highly flexible, FPGAs are often less efficient than Application Specific Integrated Circuits (ASICs); they tend to operate slower, take up more space and dissipate more power. There have been many FPGA AES implementations that focus on obtaining high throughput or low area usage, but very little research done in the area of low power or energy efficient FPGA based AES; in fact, it is rare for estimates on power dissipation to be made at all. This thesis presents a methodology to evaluate the energy efficiency of FPGA based AES designs and proposes a novel FPGA AES implementation which is highly flexible and energy efficient. The proposed methodology is implemented as part of a novel scripting tool, the AES Energy Analyzer, which is able to fully characterize the power dissipation and energy efficiency of FPGA based AES designs. Additionally, this thesis introduces a new FPGA power reduction technique called Opportunistic Combinational Operand Gating (OCOG) which is used in the proposed energy efficient implementation. The AES Energy Analyzer was able to estimate the power dissipation and energy efficiency of the proposed AES design during its most commonly performed operations. It was found that the proposed implementation consumes less energy per operation than any previous FPGA based AES implementations that included power estimations. Finally, the use of Opportunistic Combinational Operand Gating on an AES cipher
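A back-of-the-envelope version of the energy-per-operation metric that such an analysis tool reports can be sketched as follows; the power and throughput figures are arbitrary placeholders, not results from the thesis.

```python
# Energy per AES block from measured power and throughput (illustrative numbers only).
power_w = 0.150            # assumed average dynamic power of the AES core, in watts
throughput_bps = 500e6     # assumed sustained throughput in bits per second
blocks_per_s = throughput_bps / 128
print(f"{power_w / blocks_per_s * 1e9:.1f} nJ per 128-bit block")
```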
Improving the throughput of the AES algorithm with multicore processors
Barnes, A.; Fernando, R.; Mettananda, K.; Ragel, R. G.
AES, the Advanced Encryption Standard, can be considered the most widely used modern symmetric key encryption standard. To encrypt/decrypt a file using the AES algorithm, the file must undergo a set of complex computational steps. Therefore a software implementation of the AES algorithm can be slow and consume a large amount of time to complete. The immense increase of both stored and transferred data in recent years has made this problem even more daunting when the need to encrypt/decrypt such d...
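The multicore speed-up idea can be illustrated by encrypting independent chunks of a buffer in parallel worker processes; the chunk size, worker layout and per-chunk counter scheme below are illustrative assumptions, not the design evaluated in the paper.

```python
# Parallel AES-CTR over independent chunks; the chunk size and counter layout
# are illustrative assumptions, not the scheme benchmarked by the authors.
import os
from concurrent.futures import ProcessPoolExecutor
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

CHUNK = 1 << 16                                       # 64 KiB per task

def encrypt_chunk(args):
    index, key, chunk = args
    # Start each chunk at a disjoint counter range so no counter block repeats.
    counter = (index * (CHUNK // 16)).to_bytes(16, "big")
    enc = Cipher(algorithms.AES(key), modes.CTR(counter)).encryptor()
    return enc.update(chunk) + enc.finalize()

if __name__ == "__main__":
    key = os.urandom(16)
    data = os.urandom(8 * CHUNK)
    tasks = [(i, key, data[i * CHUNK:(i + 1) * CHUNK]) for i in range(8)]
    with ProcessPoolExecutor() as pool:
        ciphertext = b"".join(pool.map(encrypt_chunk, tasks))
    print(len(ciphertext), "bytes encrypted in parallel")
```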
Multiple Lookup Table-Based AES Encryption Algorithm Implementation
Gong, Jin; Liu, Wenyi; Zhang, Huixin
A new AES (Advanced Encryption Standard) encryption algorithm implementation was proposed in this paper. It is based on five lookup tables, which are generated from the S-box (the substitution table in AES). The obvious advantages are reducing the code size, improving the implementation efficiency, and helping new learners to understand the AES encryption algorithm and GF(2^8) multiplication, which are necessary to correctly implement AES [1]. This method can be applied on processors with a word length of 32 bits or above, on FPGAs, and on other platforms, and correspondingly it can be implemented in VHDL, Verilog, VB and other languages.
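The GF(2^8) multiplication the abstract refers to is the arithmetic behind MixColumns and the lookup tables; the textbook shift-and-reduce routine over the AES polynomial x^8 + x^4 + x^3 + x + 1 is sketched below (this is not the authors' table-generation code).

```python
# Multiplication in GF(2^8) with the AES reduction polynomial 0x11B; this is the
# textbook routine, not the authors' lookup-table generator.
def gf_mul(a: int, b: int) -> int:
    product = 0
    for _ in range(8):
        if b & 1:
            product ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B                                  # reduce modulo x^8 + x^4 + x^3 + x + 1
        b >>= 1
    return product

# FIPS-197 worked example: {57} * {02} = {ae} and {57} * {13} = {fe}.
assert gf_mul(0x57, 0x02) == 0xAE and gf_mul(0x57, 0x13) == 0xFE
```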
Identifying gaps in flaring Herbig Ae/Be disks using spatially resolved mid-infrared imaging. Are all group I disks transitional?
Maaskant, K.M.; Honda, M.; Waters, L.; Tielens, A.G.G.M.; Dominik, C.; Min, M.; Verhoeff, A.; Meeus, G.; Ancker, van den M.
Context. The evolution of young massive protoplanetary disks toward planetary systems is expected to correspond to structural changes in observational appearance, which includes the formation of gaps and the depletion of dust and gas. Aims: A special group of disks around Herbig Ae/Be stars do not
AE8/AP8 Implementations in AE9/AP9, IRBEM, and SPENVIS
period applies to orbit generation only; AE8/AP8 utilizes geomagnetic field models from other epochs as specified in the table below.) SHIELDOSE2 model... finite and semi-infinite slab data tables for Bremsstrahlung have been reversed [Heynderickx, private communication, May 2013]. This correction is... Cain, J. C., S. J. Hendricks, R. A. Langel, and W. V. Hudson (1967), A proposed model for the international geomagnetic reference field, 1965, J
AES, Automated Construction Cost Estimation System
Holder, D.A.
A - Description of program or function: AES (Automated Estimating System) enters and updates the detailed cost, schedule, contingency, and escalation information contained in typical construction or other project cost estimates. It combines this information to calculate both un-escalated and escalated costs and cash flow values for the project. These costs can be reported at varying levels of detail. AES differs from previous versions in at least the following ways: the schedule is entered at the WBS-Participant, Activity level (multiple activities can be assigned to each WBS-Participant combination); the spending curve is defined at the schedule activity level, and a weighting factor is defined which determines the percentage of cost for the WBS-Participant applied to the schedule activity; scheduling is by days instead of Fiscal Year/Quarter; sales tax is applied at the line item level (a sales tax code is selected to indicate Material, Large Single Item, or Professional Services); a 'data filter' has been added to allow the user to define the data the report is to be generated for. B - Method of solution: Average Escalation Rate: The average escalation for a Bill of Material is calculated in three steps. 1. A table of quarterly escalation factors is calculated based on the base fiscal year and quarter of the project entered in the estimate record and the annual escalation rates entered in the Standard Value File. 2. The percentage distribution of costs by quarter for the Bill of Material is calculated based on the schedule entered and the curve type. 3. The percent in each fiscal year and quarter in the distribution is multiplied by the escalation factor for the fiscal year and quarter. The sum of these results is the average escalation rate for that Bill of Material. Schedule by curve: The allocation of costs to specific time periods is dependent on three inputs: starting schedule date, ending schedule date, and the percentage of costs allocated to each quarter. Contingency Analysis: The
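The three-step average-escalation calculation described above can be sketched as follows; all function names and the example inputs are assumptions introduced here for illustration, not part of the AES program itself.

```python
# A minimal sketch (all names assumed) of the three-step average-escalation
# calculation for a single Bill of Material.
def quarterly_factors(annual_rates, n_quarters):
    """Step 1: cumulative escalation factor for each quarter after the base quarter."""
    factors, f = [], 1.0
    for q in range(n_quarters):
        rate = annual_rates[min(q // 4, len(annual_rates) - 1)]
        f *= (1.0 + rate) ** 0.25                      # apply a quarter of the annual rate
        factors.append(f)
    return factors

def average_escalation(cost_share_by_quarter, factors):
    """Steps 2-3: weight each quarter's factor by that quarter's share of cost, then sum."""
    return sum(share * f for share, f in zip(cost_share_by_quarter, factors))

# Example: a 60/40 cost split over two quarters at 4% annual escalation.
print(average_escalation([0.6, 0.4], quarterly_factors([0.04], 2)))
```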
Unbelievable security : Matching AES using public key systems
Lenstra, A.K.; Boyd, C.
The Advanced Encryption Standard (AES) provides three levels of security: 128, 192, and 256 bits. Given a desired level of security for the AES, this paper discusses matching public key sizes for RSA and the ElGamal family of protocols. For the latter both traditional multiplicative groups of finite
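The flavour of such a matching exercise can be sketched with the standard heuristic GNFS running time L_n[1/3, (64/9)^(1/3)] with the o(1) term dropped; this crude approximation is an assumption made here and does not reproduce the calibrated figures of the paper.

```python
# Rough symmetric-equivalent work factor of an RSA modulus from the heuristic GNFS
# running time, ignoring the o(1) term; a crude sketch, not Lenstra's calibrated model.
import math

def gnfs_bits(modulus_bits: int) -> float:
    ln_n = modulus_bits * math.log(2)
    work = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return work / math.log(2)          # convert the exponent from nats to bits

for k in (1024, 2048, 3072, 7680, 15360):
    print(f"RSA-{k}: ~{gnfs_bits(k):.0f}-bit work factor")
```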
Test and Verification of AES Used for Image Encryption
Zhang, Yong
In this paper, an image encryption program based on AES in cipher block chaining mode was designed in the C language. The encryption/decryption speed and security performance of the AES-based image cryptosystem were tested and used to compare the proposed cryptosystem with some existing image cryptosystems based on chaos. Simulation results show that AES can be applied to image encryption, which refutes the widely accepted point of view that AES is not suitable for image encryption. This paper also suggests taking the speed of AES-based image encryption as the speed benchmark for image encryption algorithms. Those image encryption algorithms whose speeds are lower than the benchmark should be discarded in practical communications.
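A speed test of the kind described can be sketched as below; random bytes stand in for raw pixel data, and the frame size and key length are assumptions rather than the paper's test configuration.

```python
# Timing sketch for AES-CBC over a raw "image" buffer (random bytes stand in for
# pixel data); the buffer size and key size are assumptions, not the paper's setup.
import os, time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(16), os.urandom(16)
pixels = os.urandom(512 * 512 * 3)                    # e.g. one 512x512 RGB frame

start = time.perf_counter()
enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = enc.update(pixels) + enc.finalize()
elapsed = time.perf_counter() - start
print(f"{len(pixels) / 2**20 / elapsed:.1f} MiB/s")
```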
The global compendium of Aedes aegypti and Ae. albopictus occurrence
Kraemer, Moritz U. G.; Sinka, Marianne E.; Duda, Kirsten A.; Mylne, Adrian; Shearer, Freya M.; Brady, Oliver J.; Messina, Jane P.; Barker, Christopher M.; Moore, Chester G.; Carvalho, Roberta G.; Coelho, Giovanini E.; van Bortel, Wim; Hendrickx, Guy; Schaffner, Francis; Wint, G. R. William; Elyazar, Iqbal R. F.; Teng, Hwa-Jen; Hay, Simon I.
Aedes aegypti and Ae. albopictus are the main vectors transmitting dengue and chikungunya viruses. Despite being pathogens of global public health importance, knowledge of their vectors' global distribution remains patchy and sparse. A global geographic database of known occurrences of Ae. aegypti and Ae. albopictus between 1960 and 2014 was compiled. Herein we present the database, which comprises occurrence data linked to point or polygon locations, derived from peer-reviewed literature and unpublished studies including national entomological surveys and expert networks. We describe all data collection processes, as well as geo-positioning methods, database management and quality-control procedures. This is the first comprehensive global database of Ae. aegypti and Ae. albopictus occurrence, consisting of 19,930 and 22,137 geo-positioned occurrence records respectively. Both datasets can be used for a variety of mapping and spatial analyses of the vectors and, by inference, the diseases they transmit.
AE/flaw characterization for nuclear pressure vessels
Hutton, P.H.; Kurtz, R.J.; Pappas, R.A.
This chapter discusses the use of acoustic emission (AE) detected during continuous monitoring to identify and evaluate growing flaws in pressure vessels. Off-reactor testing and on-reactor testing are considered. Relationships for identifying acoustic emission (AE) from crack growth and using the AE data to estimate flaw severity have been developed experimentally by laboratory testing. The purpose of the off-reactor vessel test is to evaluate AE monitoring/interpretation methodology on a heavy section steel vessel under simulated reactor operating conditions. The purpose of on-reactor testing is to evaluate the capability of a monitor system to function in the reactor environment, calibrate the ability to detect AE signals, and to demonstrate that meaningful criteria can be established to prevent false alarms. An expanded data base is needed from application testing and methodology standardization
Wave Star
Kramer, Morten; Brorsen, Michael; Frigaard, Peter
This report describes numerical calculations of different float geometries for the wave energy converter Wave Star.
Reconfigurable Secure Video Codec Based on DWT and AES Processor
Rached Tourki; M. Machhout; B. Bouallegue; M. Atri; M. Zeghid; D. Dia
In this paper, we propose a secure video codec based on the discrete wavelet transformation (DWT) and the Advanced Encryption Standard (AES) processor. The use of video coding with the DWT or of encryption using AES is well known; however, linking these two designs to achieve secure video coding is the novel contribution. The contributions of our work are as follows. First, a new method for image and video compression is proposed. This codec is a synthesis of JPEG and JPEG2000, implemented using Huffman coding for the JPEG part and the DWT for the JPEG2000 part. Furthermore, an improved motion estimation algorithm is proposed. Second, the encryption/decryption is performed by the AES processor. AES is used to encrypt groups of LL bands. The prominent feature of this method is the encryption of LL bands by AES-128 (128-bit keys), AES-192 (192-bit keys), or AES-256 (256-bit keys). Third, we focus on a method that implements partial encryption of LL bands. Our approach provides considerable levels of security (key size, partial encryption, encryption mode) and has very limited adverse impact on the compression efficiency. The proposed codec can provide up to 9 cipher schemes within a reasonable software cost. Latency, correlation, PSNR and compression rate results are analyzed and shown.
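The "encrypt only the LL subband" idea can be sketched in software with PyWavelets and the cryptography package; the wavelet, quantization step and key size below are assumptions for illustration and this is not the authors' hardware codec.

```python
# Minimal software sketch of partial encryption of the LL subband: one-level 2-D DWT,
# coarse quantization of LL, then AES-CBC over those bytes; detail subbands stay clear.
import os
import numpy as np
import pywt
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

image = np.random.randint(0, 256, (256, 256)).astype(float)
LL, (LH, HL, HH) = pywt.dwt2(image, "haar")            # LL carries most of the energy

q = 4                                                   # assumed quantization step
ll_bytes = np.round(LL / q).astype(np.int16).tobytes()
ll_bytes += bytes((-len(ll_bytes)) % 16)                # pad to the AES block size

key, iv = os.urandom(16), os.urandom(16)
enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
protected_ll = enc.update(ll_bytes) + enc.finalize()
print(len(protected_ll), "encrypted LL bytes; LH/HL/HH left unencrypted")
```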
Characterization of Aes nuclear foci in colorectal cancer cells
Itatani, Yoshiro; Sonoshita, Masahiro; Kakizaki, Fumihiko; Okawa, Katsuya; Stifani, Stefano; Itoh, Hideaki; Sakai, Yoshiharu; Taketo, M. Mark
Amino-terminal enhancer of split (Aes) is a member of the Groucho/Transducin-like enhancer (TLE) family. Aes is a recently found metastasis suppressor of colorectal cancer (CRC) that inhibits Notch signalling, and forms nuclear foci together with TLE1. Although some Notch-associated proteins are known to form subnuclear bodies, little is known regarding the dynamics or functions of these structures. Here, we show that Aes nuclear foci in CRC observed under an electron microscope have a rather amorphous structure, lacking a surrounding membrane. Investigation of their behaviour during the cell cycle by time-lapse cinematography showed that Aes nuclear foci dissolve during mitosis and reassemble after completion of cytokinesis. We have also found that heat shock cognate 70 (HSC70) is an essential component of Aes foci. Pharmacological inhibition of the HSC70 ATPase activity with VER155008 reduces Aes focus formation. These results provide insight into the understanding of Aes-mediated inhibition of Notch signalling. PMID:26229111
From Star Wars to 'turf wars'.
Just as we are witnessing the re-emergence of Star Wars, it seems the 'turf wars' that have dogged A&E care are back. Since its inception as a specialty, A&E nurses have been accused of being 'Jacks (and Jills, to be politically correct) of all trades and masters of none'. The inference being that all we do is 'mind' patients until they receive definitive care. Clearly this is not the case. As A&E nurses have demonstrated over the years, our skills are in the recognition and management of acute illness or injury, regardless of the patient's age, physical or psychological condition. Rather than being 'masters of none' we are masters of immediate care.
Power efficient and high performance VLSI architecture for AES algorithm
K. Kalaiselvi
The Advanced Encryption Standard (AES) algorithm has been widely deployed in cryptographic applications. This work proposes a low-power and high-throughput implementation of the AES algorithm using a key expansion approach. We minimize the power consumption and critical path delay using the proposed high-performance architecture. It supports both encryption and decryption using 256-bit keys with a throughput of 0.06 Gbps. The VHDL language is utilized for simulating the design and an FPGA chip has been used for the hardware implementations. Experimental results reveal that the proposed AES architectures offer superior performance to the existing VLSI architectures in terms of power, throughput and critical path delay.
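The key expansion referred to above is the standard AES key schedule (FIPS-197); a compact software sketch for the 128-bit case is given below as orientation, with the S-box derived on the fly rather than stored as a table. It is a reference-style illustration, not the VLSI architecture proposed in the paper.

```python
# Reference-style AES-128 key expansion: 16-byte key -> 176 bytes of round keys.
def gf_mul(a, b):
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B
        b >>= 1
    return p

def sbox(x):
    inv = 0
    for c in range(256):                               # brute-force inverse in GF(2^8)
        if x and gf_mul(x, c) == 1:
            inv = c
    rot = lambda v, n: ((v << n) | (v >> (8 - n))) & 0xFF
    return inv ^ rot(inv, 1) ^ rot(inv, 2) ^ rot(inv, 3) ^ rot(inv, 4) ^ 0x63

def expand_key_128(key):
    words = [list(key[4 * i:4 * i + 4]) for i in range(4)]
    rcon = 1
    for i in range(4, 44):
        temp = list(words[i - 1])
        if i % 4 == 0:
            temp = temp[1:] + temp[:1]                 # RotWord
            temp = [sbox(b) for b in temp]             # SubWord
            temp[0] ^= rcon
            rcon = gf_mul(rcon, 2)                     # next round constant
        words.append([t ^ w for t, w in zip(temp, words[i - 4])])
    return b"".join(bytes(w) for w in words)

# FIPS-197 Appendix A.1 test key: the first expanded word w4 should be a0 fa fe 17.
rk = expand_key_128(bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c"))
assert rk[16:20] == bytes.fromhex("a0fafe17")
```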
Crack classification in concrete beams using AE parameters
Bahari, N. A. A. S.; Shahidan, S.; Abdullah, S. R.; Ali, N.; Zuki, S. S. Mohd; Ibrahim, M. H. W.; Rahim, M. A.
The acoustic emission (AE) technique is an effective tool for the evaluation of crack growth. The aim of this study is to evaluate crack classification in reinforced concrete beams using statistical analysis. AE has been applied for the early monitoring of reinforced concrete structures using AE parameters such as average frequency, rise time, amplitude, counts and duration. This experimental study focuses on the utilisation of this method in evaluating reinforced concrete beams. Beam specimens measuring 150 mm × 250 mm × 1200 mm were tested using a three-point flexural load test on a Universal Testing Machine (UTM) together with an AE monitoring system. The results indicated that the RA value can be used to determine the relationship between tensile cracking and shear movement in reinforced concrete beams.
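The RA value and average frequency mentioned above are simple ratios of the recorded AE hit parameters, and crack type is commonly assigned by comparing them (high average frequency with low RA suggesting tensile cracking, the opposite suggesting shear). A minimal sketch follows; the decision-boundary slope k is an assumption, not a threshold reported in the paper.

```python
# RA value and average frequency from one AE hit, plus a simple crack-type label.
def ae_features(rise_time_us, amplitude_v, counts, duration_us):
    """RA value (ms/V) and average frequency (kHz) for a single AE hit."""
    ra = (rise_time_us / 1000.0) / amplitude_v          # rise time per unit amplitude
    af = counts / (duration_us / 1000.0)                # threshold crossings per ms
    return ra, af

def classify(ra, af, k=1.0):
    """Assumed boundary: tensile if the point lies above the line AF = k * RA."""
    return "tensile" if af > k * ra else "shear"

ra, af = ae_features(rise_time_us=20, amplitude_v=0.05, counts=30, duration_us=400)
print(ra, af, classify(ra, af))
```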
Significant progress has been shown toward resolving major problems in continuous AE monitoring to detect cracking in reactor pressure boundaries. Application is considered an attainable goal. Major needs are an expanded data base from application testing and methodology standardization
Implementation of the AES Cryptographic Algorithm on the ATmega32 Microcontroller
The purpose of this study is to develop a microcomputer or microcontroller system which has a feature to encrypt data with the AES algorithm. The input comes from keyboard keystrokes and the result of the encryption process is stored in the microcontroller's EEPROM. The system is developed using the C programming language with the WinAVR interface. The research methods used to develop the system are library data collection, such as the hardware datasheet and the specification of the AES algorithm, an...
Radio stars
Hjellming, R.M.
Any discussion of the radio emission from stars should begin by emphasizing certain unique problems. First of all, one must clarify a semantic confusion introduced into radio astronomy in the late 1950's when most new radio sources were described as radio stars. All of these early 'radio stars' were eventually identified with other galactic and extra-galactic objects. The study of true radio stars, where the radio emission is produced in the atmosphere of a star, began only in the 1960's. Most of the work on the subject has, in fact, been carried out in only the last few years. Because the real information about radio stars is quite new, it is not surprising that major aspects of the subject are not at all understood. For this reason this paper is organized mainly around three questions: what is the available observational information; what physical processes seem to be involved; and what working hypotheses look potentially fruitful. (Auth.)
Maurette, M.; Hammer, C.
The passage of a shooting star, or even a star shower, can sometimes easily be seen on a moonless dark night. They represent the partial volatilization in the Earth's atmosphere of meteorites or micrometeorites reduced to cosmic dust. Everywhere on Earth, these star dusts are sought and gathered. This research, carried out one year ago on the Greenland ice cap, is the subject of this article; orbital gathering projects are also presented [fr]
Ionization of Polycyclic Aromatic Hydrocarbon Molecules around the Herbig Ae/be ENVIRONMENT*
Sakon, Itsuki; Onaka, Takashi; Okamoto, Yoshiko K.; Kataza, Hirokazu; Kaneda, Hidehiro; Honda, Mitsuhiko
We present the results of mid-infrared N-band spectroscopy of the Herbig Ae/Be system MWC1080 using the Cooled Mid-Infrared Camera and Spectrometer (COMICS) on board the 8 m Subaru Telescope. The MWC1080 has a geometry such that the diffuse nebulous structures surround the central Herbig B0 type star. We focus on the properties of polycyclic aromatic hydrocarbons (PAHs) and PAH-like species, which are thought to be the carriers of the unidentified infrared (UIR) bands in such environments. A series of UIR bands at 8.6, 11.0, 11.2, and 12.7 μm is detected throughout the system and we find a clear increase in the UIR 11.0 μm/11.2 μm ratio in the vicinity of the central star. Since the UIR 11.0 μm feature is attributed to a solo-CH out-of-plane wagging mode of cationic PAHs while the UIR 11.2 μm feature to a solo-CH out-of-plane bending mode of neutral PAHs, the large 11.0 μm/11.2 μm ratio directly indicates a promotion of the ionization of PAHs near the central star.
Changes in histopathology and cytokeratin AE1/AE3 expression in skin graft with different time on Indonesian local cats.
Erwin; Etriwati; Gunanti; Handharyani, Ekowati; Noviana, Deni
A good skin graft histopathology is characterized by the formation of hair follicles, sweat glands, sebaceous glands, blood vessels, lightly dense connective tissue, and epidermis and dermis layers. This research aimed to observe the histopathological features and cytokeratin AE1/AE3 expression in cat skin after skin grafting at different time intervals. Nine male Indonesian local cats aged 1-2 years old weighing 3-4 kg were separated into three groups. The first surgery created a defect wound of 2 cm × 2 cm in size in all groups. The wounds were left alone for several days, with the interval differing between groups: Group I (for 2 days), Group II (for 4 days), and Group III (for 6 days). In the second surgery, skin was harvested from the thoracic area of each cat and applied onto the recipient wound bed. On day 24 post skin graft, histopathology and cytokeratin AE1/AE3 immunohistochemistry examinations were performed. The epidermis layer of the Group I donor skin had not formed completely, whereas the epidermis of the donor skin in Groups II and III had formed completely. In all groups, hair follicles, sweat glands, sebaceous glands, and neovascularization were found. The connective tissue in Group I was much denser than in the other groups. Cytokeratin AE1/AE3 expression was found in the epithelial cells of the donor skin's epidermis and dermis layers, with very brown intensity for Group III, brown intensity for Group II, and lightly brown intensity for Group I. The histopathological structure and cytokeratin AE1/AE3 expression after skin grafting are better in Groups II and III compared to Group I.
Star Polymers.
Ren, Jing M; McKenzie, Thomas G; Fu, Qiang; Wong, Edgar H H; Xu, Jiangtao; An, Zesheng; Shanmugam, Sivaprakash; Davis, Thomas P; Boyer, Cyrille; Qiao, Greg G
Recent advances in controlled/living polymerization techniques and highly efficient coupling chemistries have enabled the facile synthesis of complex polymer architectures with controlled dimensions and functionality. As an example, star polymers consist of many linear polymers fused at a central point with a large number of chain end functionalities. Owing to this exclusive structure, star polymers exhibit some remarkable characteristics and properties unattainable by simple linear polymers. Hence, they constitute a unique class of technologically important nanomaterials that have been utilized or are currently under audition for many applications in life sciences and nanotechnologies. This article first provides a comprehensive summary of synthetic strategies towards star polymers, then reviews the latest developments in the synthesis and characterization methods of star macromolecules, and lastly outlines emerging applications and current commercial use of star-shaped polymers. The aim of this work is to promote star polymer research, generate new avenues of scientific investigation, and provide contemporary perspectives on chemical innovation that may expedite the commercialization of new star nanomaterials. We envision in the not-too-distant future star polymers will play an increasingly important role in materials science and nanotechnology in both academic and industrial settings.
This report describes numerical calculations of the hydrodynamic interaction between 5 floats in the wave energy converter Wave Star.
Star Imager
Madsen, Peter Buch; Jørgensen, John Leif; Thuesen, Gøsta
The version of the star imager developed for Astrid II is described. All functions and features are described as well as the operations and the software protocol.
The Peculiar Binary System AE Aquarii from its Characteristic Multi-wavelength Emission
Oruru B.
The multi-wavelength properties of the novalike variable system AE Aquarii are discussed in terms of the interaction between the accretion inflow from a late-type main-sequence star and the magnetosphere of a fast-rotating white dwarf. This results in an efficient magnetospheric propeller process and particle acceleration. The spin-down of the white dwarf at a period rate of 5.64 × 10^-14 s s^-1 results in a huge spin-down luminosity of L_s-d ≃ 6 × 10^33 erg s^-1. Hence, the observed non-thermal hard X-ray emission and VHE and TeV gamma-ray emission may suggest that AE Aquarii can be placed in the category of spin-powered pulsars. Besides, the observed hard X-ray luminosity of L_X,hard ≤ 5 × 10^30 erg s^-1 constitutes about 0.1% of the total spin-down luminosity of the white dwarf. This paper discusses some recent theoretical studies and data analysis of the system.
Hjellming, R.M.; Gibson, D.M.
Studies of stellar radio emission became an important field of research in the 1970's and have now expanded to become a major area of radio astronomy with the advent of new instruments such as the Very Large Array in New Mexico and transcontinental telescope arrays. This volume contains papers from the workshop on stellar continuum radio astronomy held in Boulder, Colorado, and is the first book on the rapidly expanding field of radio emission from stars and stellar systems. Subjects covered include the observational and theoretical aspects of stellar winds from both hot and cool stars, radio flares from active double star systems and red dwarf stars, bipolar flows from star-forming regions, and the radio emission from X-ray binaries. (orig.)
The Origin of Runaway Stars
Hoogerwerf, R.; de Bruijne, J. H. J.; de Zeeuw, P. T.
Milliarcsecond astrometry provided by Hipparcos and by radio observations makes it possible to retrace the orbits of some of the nearest runaway stars and pulsars to determine their site of origin. The orbits of the runaways AE Aurigae and μ Columbae and of the eccentric binary ι Orionis intersected each other ~2.5 Myr ago in the nascent Trapezium cluster, confirming that these runaways were formed in a binary-binary encounter. The path of the runaway star ζ Ophiuchi intersected that of the nearby pulsar PSR J1932+1059, ~1 Myr ago, in the young stellar group Upper Scorpius. We propose that this neutron star is the remnant of a supernova that occurred in a binary system that also contained ζ Oph and deduce that the pulsar received a kick velocity of ~350 km s-1 in the explosion. These two cases provide the first specific kinematic evidence that both mechanisms proposed for the production of runaway stars, the dynamical ejection scenario and the binary-supernova scenario, operate in nature.
Kafatos, M.; Michalitsianos, A.G.
Among the several hundred million binary systems estimated to lie within 3000 light years of the solar system, a tiny fraction, no more than a few hundred, belong to a curious subclass whose radiation has a wavelength distribution so peculiar that it long defied explanation. Such systems radiate strongly in the visible region of the spectrum, but some of them do so even more strongly at both shorter and longer wavelengths: in the ultraviolet region and in the infrared and radio regions. This odd distribution of radiation is best explained by the pairing of a cool red giant star and an intensely hot small star that is virtually in contact with its larger companion. Such objects have become known as symbiotic stars. On photographic plates only the giant star can be discerned, but evidence for the existence of the hot companion has been supplied by satellite-borne instruments capable of detecting ultraviolet radiation. The spectra of symbiotic stars indicate that the cool red giant is surrounded by a very hot ionized gas. Symbiotic stars also flare up in outbursts, indicating the ejection of material in the form of a shell or a ring. Symbiotic stars may therefore represent a transitory phase in the evolution of certain types of binary systems in which there is substantial transfer of matter from the larger partner to the smaller.
Accomplishments: AE characterization program for remote flaw evaluation
Hutton, P.H.; Schwenk, E.B.; Kurtz, R.J.
The purpose of the program is to develop an experimental/analytical evaluation of the feasibility of detecting and analyzing flaw growth in reactor pressure boundaries by means of continuously monitoring acoustic emission (AE). The investigation is devoted exclusively to ASTM Type A533, Grade B, Class 1 material. The basic approach to interpretive model development is through laboratory testing of 1 to 1-1/2 inch (25.4 to 38 mm) thick fracture mechanics specimens in both fatigue and fracture at both room temperature and 550 °F (288 °C). Seven parameters are measured for each AE signal and related to fracture mechanics functions. AE data from fracture testing of 6 inch (152 mm) wall pressure vessels are also incorporated in the analysis.
An Improved Recovery Algorithm for Decayed AES Key Schedule Images
Tsow, Alex
A practical algorithm that recovers AES key schedules from decayed memory images is presented. Halderman et al. [1] established this recovery capability, dubbed the cold-boot attack, as a serious vulnerability for several widespread software-based encryption packages. Our algorithm recovers AES-128 key schedules tens of millions of times faster than the original proof-of-concept release. In practice, it enables reliable recovery of key schedules at 70% decay, well over twice the decay capacity of previous methods. The algorithm is generalized to AES-256 and is empirically shown to recover 256-bit key schedules that have suffered 65% decay. When solutions are unique, the algorithm efficiently validates this property and outputs the solution for memory images decayed up to 60%.
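The recovery capability described above rests on the fact that every word of an AES-128 key schedule is derived deterministically from earlier words, so even a heavily decayed schedule still over-determines the original key. The following Python sketch is only an illustration of that redundancy under a simple one-directional (1 to 0) bit-decay assumption; it is not the authors' recovery algorithm, and the candidate key and decay mask in the demo are placeholders.

```python
# Minimal AES-128 key-schedule sketch illustrating the redundancy that
# cold-boot recovery algorithms exploit.  Assumes bits only decay 1 -> 0.

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def build_sbox():
    """Construct the AES S-box from the GF(2^8) inverse and affine map."""
    inv = {0: 0}
    for x in range(1, 256):
        for y in range(1, 256):
            if gf_mul(x, y) == 1:
                inv[x] = y
                break
    def affine(b):
        return (b ^ ((b << 1) | (b >> 7)) ^ ((b << 2) | (b >> 6)) ^
                ((b << 3) | (b >> 5)) ^ ((b << 4) | (b >> 4)) ^ 0x63) & 0xFF
    return [affine(inv[x]) for x in range(256)]

SBOX = build_sbox()
RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def expand_key(key16):
    """Return the 176 bytes (44 words) of the AES-128 key schedule."""
    w = [list(key16[4 * i:4 * i + 4]) for i in range(4)]
    for i in range(4, 44):
        t = list(w[i - 1])
        if i % 4 == 0:
            t = t[1:] + t[:1]                    # RotWord
            t = [SBOX[b] for b in t]             # SubWord
            t[0] ^= RCON[i // 4 - 1]             # round constant
        w.append([a ^ b for a, b in zip(w[i - 4], t)])
    return [b for word in w for b in word]

def consistent(candidate_key, decayed_image):
    """True if the decayed schedule could have decayed (1 -> 0) from the candidate's schedule."""
    full = expand_key(candidate_key)
    return all(d & f == d for d, f in zip(decayed_image, full))

if __name__ == "__main__":
    key = bytes(range(16))                       # hypothetical candidate key
    schedule = expand_key(key)
    decayed = [b & 0b11011101 for b in schedule] # toy 1 -> 0 decay pattern
    print(consistent(key, decayed))              # True: candidate explains the image
```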
A high performance hardware implementation image encryption with AES algorithm
Farmani, Ali; Jafari, Mohamad; Miremadi, Seyed Sohrab
This paper describes the implementation of a high-speed, high-throughput encryption algorithm for encrypting images. We select AES (Advanced Encryption Standard), a highly secure symmetric-key encryption algorithm, and increase its speed and throughput using a four-stage pipeline, a control unit based on logic gates, an optimal design of the multiplier blocks in the MixColumns phase, and simultaneous generation of keys and rounds. This procedure makes AES suitable for fast image encryption. A 128-bit AES has been implemented on an Altera FPGA, with the following results: a throughput of 6 Gbps at 471 MHz. The encryption time for a 32×32 test image is 1.15 ms.
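For readers who want a software reference point for the same operation, the short sketch below encrypts raw image bytes with AES in counter mode using the Python cryptography package. It only illustrates the bulk operation being accelerated, not the four-stage FPGA pipeline itself; the 32×32 byte image and the key are placeholders.

```python
# Software sketch only: encrypting raw image bytes with AES-CTR.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)                     # AES-128 key
nonce = os.urandom(16)                   # initial counter block

image = os.urandom(32 * 32)              # stand-in for a 32x32 8-bit image
enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
cipher_image = enc.update(image) + enc.finalize()

dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
assert dec.update(cipher_image) + dec.finalize() == image
```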
Shaft Crack Identification Based on Vibration and AE Signals
Wenxiu Lu
The shaft crack is one of the most serious malfunctions that often occur in rotating machinery. However, it is difficult to locate the crack and determine its depth. In this paper, the acoustic emission (AE) signal and the vibration response are used to diagnose the crack. The wavelet transform is applied to the AE signal to decompose it into a series of time-domain signals, each of which covers a specific octave frequency band. Then an improved union method based on thresholding and cross-correlation is applied to detect the location of the shaft crack. The finite element method is used to build a model of the cracked rotor, and the crack depth is identified by comparing the vibration responses of experiment and simulation. The experimental results show that the AE signal is effective and convenient for locating the shaft crack, and that the vibration signal is suitable for determining the depth of the shaft crack.
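As a rough illustration of the signal-processing chain just described (not the authors' exact method), the sketch below decomposes a simulated AE burst into octave frequency bands with a discrete wavelet transform and estimates the arrival-time difference between two sensors by cross-correlation. The sampling rate, the 'db4' wavelet, and the synthetic burst are placeholder assumptions.

```python
# Hedged sketch: octave-band DWT decomposition of an AE burst plus
# cross-correlation time-of-arrival estimation between two sensors.
import numpy as np
import pywt

fs = 1_000_000                      # assumed 1 MHz sampling rate
t = np.arange(0, 0.002, 1 / fs)

def burst(delay_s):
    """Synthetic AE burst: decaying 150 kHz tone starting at `delay_s`."""
    env = np.where(t >= delay_s, np.exp(-(t - delay_s) * 8000.0), 0.0)
    return env * np.sin(2 * np.pi * 150e3 * t) + 0.02 * np.random.randn(t.size)

s1 = burst(200e-6)                  # sensor 1
s2 = burst(260e-6)                  # sensor 2, 60 us later

# Octave-band decomposition: each DWT level halves the frequency band.
coeffs = pywt.wavedec(s1, 'db4', level=5)
details = coeffs[1:]                # [cD5, cD4, cD3, cD2, cD1]
for c, lvl in zip(details, range(5, 0, -1)):
    f_lo, f_hi = fs / 2 ** (lvl + 1), fs / 2 ** lvl
    print(f"detail level {lvl}: ~{f_lo:.0f}-{f_hi:.0f} Hz, energy {np.sum(c**2):.3f}")

# Arrival-time difference from the peak of the cross-correlation.
xc = np.correlate(s2, s1, mode='full')
lag = np.argmax(xc) - (s1.size - 1)
print(f"estimated delay: {lag / fs * 1e6:.1f} us")   # ~60 us expected
```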
Choosing the governing solutions for FA of AES-2006
Vasilchenko, I.; Dragunov, Y.; Ryzhov, S.; Kobelev, S.; Vyalitsyn, V.; Troyanov, V.
According to the program approved by the Government of Russia, work on the AES-2006 design, intended as a base design for beginning the realization of plans for the development of nuclear power engineering in Russia in the near future, is now under way. The most crucial components of the reactor plant are certainly the core and its basic component, the FA. The FA design concentrates the factors that mainly define the safety, profitability, manufacturability, and operability of the fuel and of the NPP as a whole. The TVS-2M design used at the Balakovo NPP is taken as the nearest prototype for AES-2006. On the basis of qualitative and quantitative evaluation, the report shows that the TVS-2 design and its modification TVS-2M are in the best compliance with the requirements of the project of the new RP AES-2006. This compliance is confirmed by the operational experience of the basic variant of the design.
Security of the AES with a Secret S-Box
Tiessen, Tyge; Knudsen, Lars Ramkilde; Kölbl, Stefan
How does the security of the AES change when the S-box is replaced by a secret S-box, about which the adversary has no knowledge? Would it be safe to reduce the number of encryption rounds? In this paper, we demonstrate attacks based on integral cryptanalysis which allow to recover both the secret...... key and the secret S-box for respectively four, five, and six rounds of the AES. Despite the significantly larger amount of secret information which an adversary needs to recover, the attacks are very efficient with time/data complexities of 2^17/2^16, 2^38/2^40 and 2^90/2^64, respectively. Another...
Star formation
Woodward, P.R.
Theoretical models of star formation are discussed beginning with the earliest stages and ending in the formation of rotating, self-gravitating disks or rings. First a model of the implosion of very diffuse gas clouds is presented which relies upon a shock at the edge of a galactic spiral arm to drive the implosion. Second, models are presented for the formation of a second generation of massive stars in such a cloud once a first generation has formed. These models rely on the ionizing radiation from massive stars or on the supernova shocks produced when these stars explode. Finally, calculations of the gravitational collapse of rotating clouds are discussed with special focus on the question of whether rotating disks or rings are the result of such a collapse. 65 references
Kramer, Morten; Frigaard, Peter
This report describes model tests carried out at Aalborg University, Institut for Byggeri og Anlæg, with the wave energy converter Wave Star.
Kramer, Morten; Andersen, Thomas Lykke
This report describes model tests carried out at Aalborg University, Institut for Vand, Jord og Miljøteknik, with the wave energy converter Wave Star.
Summary of detection, location, and characterization capabilities of AE for continuous monitoring of cracks in reactors
Hutton, P.H.; Kurtz, R.J.; Friesel, M.A.; Pappas, R.A.; Skorpik, J.R.; Dawson, J.F.
The objective of the program is to develop acoustic emission (AE) methods for continuous monitoring of reactor pressure boundaries to detect and evaluate crack growth. The approach involves three phases: develop relationships to identify crack growth AE signals and to use identified crack growth AE data to estimate flaw severity; evaluate and refine AE/flaw relationships through fatigue testing a heavy section vessel under simulated reactor conditions; and demonstrate continuous AE monitoring on a nuclear power reactor system
STARS no star on Kauai
Jones, M.
The island of Kauai, home to the Pacific Missile Range Facility, is preparing for the first of a series of Star Wars rocket launches expected to begin early this year. The Strategic Defense Initiative plans 40 launches of the Strategic Target System (STARS) over a 10-year period. The focus of the tests appears to be weapons and sensors designed to combat multiple-warhead ICBMs, which will be banned under the START II Treaty that was signed in January. The focus of this article is the dubious value of testing STARS at a time when the threat it is meant to address is not anticipated.
Flare stars
Nicastro, A.J.
The least massive, but possibly most numerous, stars in a galaxy are the dwarf M stars. It has been observed that some of these dwarfs are characterized by short increases in brightness. These stars are called flare stars. Flare stars release a large amount of energy in a short time, so the process producing the eruption must be energetic. The increase in light intensity can be explained by a small area rising to a much higher temperature. Solar flares are examined to help understand the phenomenon of stellar flares. Dwarfs that flare are observed to have strong magnetic fields, whereas dwarfs without strong magnetic fields do not seem to flare. It is believed that these regions of strong magnetic field are associated with star spots. Theories on the energy source that powers the flares are given. Astrophysicists theorize that the driving force of a stellar flare is the detachment and collapse of a loop of magnetic flux. The mass loss due to stellar flares is discussed. It is believed that stellar flares are a significant contributor to the mass of the interstellar medium in the Milky Way.
Rapid oscillations in cataclysmic variables. III. An oblique rotator in AE aquarii
Patternson, J.
A rapid, strictly periodic oscillation has been discovered in the light curve of the novalike variable AE Aquarii. The fundamental period is 33.076737 s, with comparable power at the first harmonic. The amplitude averages 0.2-0.3% but can exceed 1% in flares. Pulse timings around the binary orbit prove that the periodicity arises in the white dwarf and lead to an accurate measurement of the projected orbital velocity. The velocity curve and other constraints lead to a mass determination for the component stars: 0.74 ± 0.06 M⊙ for the late-type star and 0.94 ± 0.10 M⊙ for the white dwarf. Estimates are also given for the system dimensions, luminosity, distance, and mass transfer rate. Quasi-periodic oscillations are also detected in flares, with periods near the coherent periods of 16.5 and 33 s. Their characteristics suggest an origin in gaseous blobs produced by instabilities near the inner edge of the accretion disk. A model is presented in which the strict periodicity arises from the rotation of an accreting, magnetized white dwarf with a surface field of 10^6-10^7 gauss. Future spectroscopic, polarimetric, and X-ray observations should provide critical tests for predictions of the model.
Optimization of DIII-D discharges to avoid AE destabilization
Varela, Jacobo; Spong, Donald; Garcia, Luis; Huang, Juan; Murakami, Masanori
The aim of the study is to analyze the stability of Alfven Eigenmodes (AE) perturbed by energetic particles (EP) during DIII-D operation. We identify the optimal NBI operational regimes that avoid or minimize the negative effects of AE on the device performance. We use the reduced MHD equations to describe the linear evolution of the poloidal flux and the toroidal component of the vorticity in a full 3D system, coupled with equations for the density and parallel velocity moments of the energetic particles, including the effect of the acoustic modes. We add the Landau damping and resonant destabilization effects using a closure relation. We perform parametric studies of the MHD and AE stability, taking into account the experimental profiles of the thermal plasma and EP, and using a range of values of the energetic particle β, density, and velocity, as well as the effect of the toroidal couplings. We reproduce the AE activity observed in high poloidal β discharges at the pedestal and in reverse-shear discharges. This material is based upon work supported by the U.S. Department of Energy, Office of Science, under Contract DE-AC05-00OR22725 with UT-Battelle, LLC. Research sponsored in part by the Ministerio de Economia y Competitividad of Spain.
Faster and timing-attack resistant AES-GCM
Käsper, E.; Schwabe, P.; Clavier, C.; Gaj, K.
We present a bitsliced implementation of AES encryption in counter mode for 64-bit Intel processors. Running at 7.59 cycles/byte on a Core 2, it is up to 25% faster than previous implementations, while simultaneously offering protection against timing attacks. In particular, it is the only
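For readers unfamiliar with the mode, the sketch below shows AES in counter mode with authentication (GCM) through the Python cryptography package. It merely illustrates the interface of the mode benchmarked above; it is not the bitsliced, timing-attack-resistant implementation itself, and the key, nonce, and payload are placeholders.

```python
# Hedged usage sketch of AES-GCM (AES-CTR plus GHASH authentication),
# using the Python `cryptography` package rather than the bitsliced C code.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # 96-bit nonce, never reuse per key
plaintext = b"counter-mode payload"
aad = b"header bound to the ciphertext"     # authenticated, not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, aad)
assert aesgcm.decrypt(nonce, ciphertext, aad) == plaintext
```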
AES/STEM grain boundary analysis of stabilized zirconia ceramics
Winnubst, Aloysius J.A.; Kroot, P.J.M.; Burggraaf, A.J.
Semiquantitative Auger Electron Spectroscopy (AES) on pure monophasic (ZrO2)0.83(YO1.5)0.17 was used to determine the chemical composition of the grain boundaries. Grain boundary enrichment with Y was observed with an enrichment factor of about 1.5. The difference in activation energy of the ionic
Spectroscopic classification of AT2018aes as a supernova impostor
Andrews, Jennifer; Smith, Nathan; Van Dyk, Schuyler D.
A visual-wavelength optical spectrum of AT2018aes obtained on UT 2018 Mar 13 (JD 2458190.84) with the Magellan Clay telescope (+ LDSS3 spectrograph, VPH-all grism) reveals a narrow H-alpha emission line with a velocity of 525 km/s, with wings extending to roughly +/-1000 km/s.
Enhanced ATM Security using Biometric Authentication and Wavelet Based AES
Sreedharan Ajish
The traditional ATM terminal customer recognition systems rely only on bank cards and passwords, and such identity verification methods are not perfect and their functions are too limited. Biometrics-based authentication offers several advantages over other authentication methods, and there has been a significant surge in the use of biometrics for user authentication in recent years. This paper presents a highly secured ATM banking system using biometric authentication and a wavelet-based Advanced Encryption Standard (AES) algorithm. Two levels of security are provided in this proposed design. Firstly, we consider the security level at the client side by providing a biometric authentication scheme along with a 4-digit password. Biometric authentication is achieved by considering the fingerprint image of the client. Secondly, we ensure a secured communication link between the client machine and the bank server using an optimized, energy-efficient, wavelet-based AES processor. The fingerprint image is the data for the encryption process, and the 4-digit password is the symmetric key for the encryption process. The performance of the ATM machine depends on ultra-high-speed encryption, very low power consumption, and algorithmic integrity. To obtain low-power, ultra-high-speed encryption at the ATM machine, an optimized wavelet-based AES algorithm is proposed. In this system, biometric and cryptography techniques are used together for personal identity authentication to improve the security level. The designs of the wavelet-based AES processor and the energy-efficient AES processor are simulated in Quartus-II software. Simulation results ensure proper functionality. A comparison with other research works proves its superiority.
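A loose software analogue of the client-side step described above (the hardware wavelet-based AES processor itself is out of scope here): derive an AES key from the 4-digit password with a key-derivation function and encrypt the fingerprint image bytes. The salt handling, iteration count, and the PBKDF2/AES-GCM pairing are illustrative assumptions rather than the paper's construction, and a 4-digit PIN alone would of course be far too weak as key material in practice.

```python
# Illustrative sketch only: fingerprint image bytes encrypted under a key
# derived from the 4-digit password.  KDF parameters are assumptions.
import os
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(pin: str, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=16,
                     salt=salt, iterations=600_000)
    return kdf.derive(pin.encode())

salt = os.urandom(16)                       # stored alongside the record
key = derive_key("4921", salt)              # hypothetical 4-digit password

fingerprint = os.urandom(64 * 64)           # stand-in for raw image bytes
nonce = os.urandom(12)
token = AESGCM(key).encrypt(nonce, fingerprint, None)

assert AESGCM(derive_key("4921", salt)).decrypt(nonce, token, None) == fingerprint
```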
C2D spitzer-IRS spectra of disks around T tauri stars : II. PAH emission features
Geers, V. C.; Augereau, J. -C; Pontoppidan, K. M.; Dullemond, C. P.; Visser, R.; Kessler-Silacci, J. E.; Evans, N. J.; van Dishoeck, E. F.; Blake, G. A.; Boogert, A. C. A.; Lahuis, F.; Merin, B.
Aims. We search for Polycyclic Aromatic Hydrocarbon (PAH) features towards young low-mass (T Tauri) stars and compare them with surveys of intermediate mass (Herbig Ae/Be) stars. The presence and strength of the PAH features are interpreted with disk radiative transfer models exploring the PAH
Kafatos, M.; Michalitsianos, A. G.
The physical characteristics of symbiotic star systems are discussed, based on a review of recent observational data. A model of a symbiotic star system is presented which illustrates how a cool red-giant star is embedded in a nebula whose atoms are ionized by the energetic radiation from its hot compact companion. UV outbursts from symbiotic systems are explained by two principal models: an accretion-disk-outburst model, which describes how material expelled from the tenuous envelope of the red giant forms an inwardly spiralling disk around the hot companion, and a thermonuclear-outburst model, in which the companion is specifically a white dwarf which superheats the material expelled from the red giant to the point where thermonuclear reactions occur and radiation is emitted. It is suspected that the evolutionary course of binary systems is predetermined by the initial mass and angular momentum of the gas cloud within which binary stars are born. Since red giants and Mira variables are thought to be stars with a mass of one or two solar masses, it is believed that the original cloud from which a symbiotic system is formed can consist of no more than a few solar masses of gas.
Maselli, Andrea; Pnigouras, Pantelis; Nielsen, Niklas Grønlund
Theoretical models of self-interacting dark matter represent a promising answer to a series of open problems within the so-called collisionless cold dark matter paradigm. In case of asymmetric dark matter, self-interactions might facilitate gravitational collapse and potentially lead to the formation of compact objects predominantly made of dark matter. Considering both fermionic and bosonic (scalar φ4) equations of state, we construct the equilibrium structure of rotating dark stars, focusing on their bulk properties and comparing them with baryonic neutron stars. We also show that these dark objects admit the I-Love-Q universal relations, which link their moments of inertia, tidal deformabilities, and quadrupole moments. Finally, we prove that stars built with a dark matter equation of state are not compact enough to mimic black holes in general relativity, thus making them distinguishable...
Infrared Observations of FS CMa Stars
Sitko, Michael L.; Russell, R. W.; Lynch, D. K.; Grady, C. A.; Hammel, H. B.; Beerman, L. C.; Day, A. N.; Huelsman, D.; Rudy, R. J.; Brafford, S. M.; Halbedel, E. M.
A subset of non-supergiant B[e] stars has recently been recognized as forming a fairly unique class of objects with very strong emission lines, infrared excesses, and locations not associated with star formation. The exact evolutionary state of these stars, named for the prototype FS CMa, is uncertain, and they have often been classified as isolated Herbig AeBe stars. We present infrared observations of two of these stars, HD 45677 (FS CMa), HD 50138 (MWC 158), and the candidate FS CMa star HD 190073 (V1295 Aql) that span over a decade in time. All three exhibit an emission band at 10 microns due to amorphous silicates, confirming that much (if not all) of the infrared excess is due to dust. HD 50138 is found to exhibit 20% variability between 3-13 microns that resembles that found in pre-main sequence systems (HD 163296 and HD 31648). HD 45677, despite large changes at visual wavelengths, has remained relatively stable in the infrared. To date, no significant changes have been observed in HD 190073. This work is supported in part by NASA Origins of Solar Systems grant NAG5-9475, NASA Astrophysics Data Program contract NNH05CD30C, and the Independent Research and Development program at The Aerospace Corporation.
Hybrid stars
Hybrid stars. Ashok Goyal. Department of Physics and Astrophysics, University of Delhi, Delhi 110 007, India. Abstract. Recently there have been important developments in the determination of neutron ... number and the electric charge. ... available to the system to rearrange the concentration of charges for a given fraction of.
Pulsating stars
Catelan, Márcio
The most recent and comprehensive book on pulsating stars which ties the observations to our present understanding of stellar pulsation and evolution theory. Written by experienced researchers and authors in the field, this book includes the latest observational results and is valuable reading for astronomers, graduate students, nuclear physicists and high energy physicists.
Feast, M.W.; Wenzel, W.; Fernie, J.D.; Percy, J.R.; Smak, J.; Gascoigne, S.C.B.; Grindley, J.E.; Lovell, B.; Sawyer Hogg, H.B.; Baker, N.; Fitch, W.S.; Rosino, L.; Gursky, H.
A critical review of variable stars is presented. A fairly complete summary of major developments and discoveries during the period 1973-1975 is given. The broad developments and new trends are outlined. Essential problems for future research are identified. (B.R.H. )
ICP/AES radioactive sample analyses at Pacific Northwest Laboratory
Matsuzaki, C.L.; Hara, F.T.
Inductively coupled argon plasma atomic emission spectroscopy (ICP/AES) analyses of radioactive materials at Pacific Northwest Laboratory (PNL) began about three years ago upon completion of the installation of a modified Applied Research Laboratory (ARL) 3560. Funding for the purchase and installation of the ICP/AES was provided by the Nuclear Waste Materials Characterization Center (MCC) established at PNL by the Department of Energy in 1979. MCC's objective is to ensure that qualified materials data are available on waste materials. This paper is divided into the following topics: (1) Instrument selection considerations; (2) initial installation of the simultaneous system with the source stand enclosed in a 1/2'' lead-shielded glove box; (3) retrofit installation of the sequential spectrometer; and (4) a brief discussion on several types of samples analyzed. 1 ref., 7 figs., 1 tab
AE monitoring simplified using digital memory storage and source isolation
Hutton, P.H.; Skorpik, J.R.
The general trend in acoustic emission (AE) monitoring systems has been one of increasing complexity. This is particularly true in systems for continuous monitoring which are usually multichannel (perhaps 20 to 40) and incorporate a dedicated minicomputer. A unique concept which reverses this trend for selected applications has been developed at Battelle-Northwest, Richland, WA. This concept uses solid state digital memories to store acquired data in a permanent form which is easily retrieved. It also uses a fundamental method to accept AE data only from a selected area. The digital memory system is designed for short term or long term (months) monitoring. It has been successfully applied in laboratory testing such as fatigue crack growth studies, as well as field monitoring on bridges and piping to detect crack growth. The features of simplicity, versatility, and low cost contribute to expanded practical application of acoustic emission technology
Accelerating Solution Proposal of AES Using a Graphic Processor
STRATULAT, M.
The main goal of this work is to analyze the possibility of using a graphics processing unit in non-graphical calculations. Graphics processing units are nowadays used not only for game engines and movie encoding/decoding, but also for a vast range of applications, such as cryptography. We used the graphics processing unit as a cryptographic coprocessor in order to accelerate the AES algorithm. Our implementation of AES runs on a GPU using the CUDA architecture. The performance obtained shows that the CUDA implementation can offer a throughput of 11.95 Gbps. The tests are conducted in two directions: running the tests on small data sizes that are located in memory, and on large data stored in files on hard drives.
Star Products and Applications
Iida, Mari; Yoshioka, Akira
Star products parametrized by complex matrices are defined. Especially commutative associative star products are treated, and star exponentials with respect to these star products are considered. Jacobi's theta functions are given as infinite sums of star exponentials. As application, several concrete identities are obtained by properties of the star exponentials.
Kramer, Morten; Frigaard, Peter; Brorsen, Michael
This report describes preliminary main conclusions from model tests carried out at Aalborg University, Institut for Vand, Jord og Miljøteknik, with the wave energy converter Wave Star in the period 13/9 2004 to 12/11 2004.
AE Characteristics affecting the Notch Effect of the Cold Steel SKD11
Han, Eung Kyo; Kim, Ki Choong; Kwon, Dong Ho; Kim, Jae Yeor [Hanyang University, Seoul (Korea, Republic of)
Acoustic emission (AE) is not only expected to serve as a practical non-destructive evaluation technique but is also noted as a powerful new means of evaluating materials. AE accompanies plastic deformation and crack propagation, and the patterns of AE occurrence vary with the material. AE arising from crack propagation depends on the shapes and properties of materials; in this sense, AE is characteristic of the material. The present work is an attempt to evaluate the characteristics of carbon steel (SM55C) and die steel (SKD11) by means of the dynamic response of the AE method.
Multilevel Analysis of Continuous AE from Helicopter Gearbox
Chlada, Milan; Převorovský, Zdeněk; Heřmánek, Jan; Krofta, Josef
Roč. 19, č. 12 (2014) ISSN 1435-4934. [European Conference on Non-Destructive Testing (ECNDT 2014) /11./. Praha, 06.10.2014-10.10.2014] R&D Projects: GA MPO FR-TI3/755 Institutional support: RVO:61388998 Keywords: structural health monitoring (SHM) * signal processing * acoustic emission (AE) * diagnostics of helicopter gearbox * wavelet analysis * continuous acoustic emission Subject RIV: JU - Aeronautics, Aerodynamics, Aircrafts http://www.ndt.net/events/ECNDT2014/app/content/Paper/630_Chlada_Rev1.pdf
A New Structural-Differential Property of 5-Round AES
Grassi, Lorenzo; Rechberger, Christian; Ronjom, Sondre
AES is probably the most widely studied and used block cipher. Also versions with a reduced number of rounds are used as a building block in many cryptographic schemes, e.g. several candidates of the SHA-3 and CAESAR competition are based on it. So far, non-random properties which are independent ...... a random permutation with only 2^32 chosen texts that has a computational cost of 2^35.6 look-ups into memory of size 2^36 bytes which has a success probability greater than 99%....
"Storms of crustal stress" and AE earthquake precursors
G. P. Gregori
Acoustic emission (AE) displays violent paroxysms preceding strong earthquakes, observed within some large area (several hundred kilometres wide) around the epicentre. We call them "storms of crustal stress" or, briefly, "crustal storms". A few case histories are discussed, all dealing with the Italian peninsula, and with the different behaviour shown by the AE records in the Cephalonia island (Greece), which is characterized by a different tectonic setting.
AE is an effective tool for diagnosing the state of some wide slab of the Earth's crust, and for monitoring its evolution, by means of AE at different frequencies. The same effect ought to be detected with a time delay when referring to progressively lower frequencies. This provides an effective check for validating the physical interpretation.
Unlike a seismic event, which involves a much more limited focal volume and therefore affects a restricted area on the Earth's surface, a "crustal storm" typically involves some large slab of lithosphere and crust. In general, it cannot easily be attributed to any specific seismic event. An earthquake responds to strictly local rheological features of the crust, which are eventually activated, and become crucial, on the occasion of a "crustal storm". A "crustal storm" typically lasts a few years, eventually involving several destructive earthquakes that hit at different times, at different sites, within that given lithospheric slab.
Concerning the case histories that are discussed here, the lithospheric slab is identified with the Italian peninsula. During 1996–1997 a "crustal storm" was on, maybe lasting until 2002 (we lack information for the period 1998–2001). Then, a quiet period occurred from 2002 until 26 May 2008, when a new "crustal storm" started, and by the end of 2009 it is still on. During the 1996–1997 "storm" two strong earthquakes occurred (Potenza and
ICP-AES determination of trace elements in carbon steel
Sengupta, Arijit; Rajeswari, B.; Kadam, R.M.; Babu, Y.; Godbole, S.V.
Carbon steel, a combination of the elements iron and carbon, can be classified into four types (mild, medium, high, and very high) depending on the carbon content, which varies from 0.05% to 2.1%. Carbon steel of different types finds application in medical devices, razor blades, cutlery, and springs. In the nuclear industry, it is used in feeder pipes in the reactor. A strict quality control measure is required to monitor the trace elements, which have deleterious effects on the mechanical properties of the carbon steel. Thus, it becomes imperative to check the purity of carbon steel as a quality control measure before it is used in feeder pipes in the reactor. Several methods have been reported in the literature for trace elemental determination in high-purity iron, including neutron activation analysis, atomic absorption spectrometry, and atomic emission spectrometry. Inductively coupled plasma atomic emission spectrometry (ICP-AES) is widely recognized as a sensitive technique for the determination of trace elements in various matrices, its major advantages being good accuracy and precision, high sensitivity, multi-element capability, large linear dynamic range, and relative freedom from matrix effects. The present study mainly deals with the direct determination of trace elements in carbon steel using ICP-AES. An axially viewing ICP spectrometer having a polychromator with 35 fixed analytical channels and a limited sequential facility to select any analytical line within 2.2 nm of a polychromator line was used in these studies. Iron, which forms one of the main constituents of carbon steel, has a multi-electron configuration with a line-rich emission spectrum and, therefore, tends to interfere in the determination of trace impurities in the carbon steel matrix. Spectral interference in ICP-AES can be seriously detrimental to the accuracy and reliability of trace element determinations, particularly when they are performed in the presence of high
Developing A/E capabilities; areas of special interest
During the last few years, the methods used by Empresarios Agrupados and INITEC to perform Architect-Engineering work in Spain for nuclear projects has undergone a process of significant change in project management and engineering approaches. Specific practical examples of management techniques and design practices which represent a good record of results will be discussed. They are identified as areas of special interest in developing A/E capabilities for nuclear projects. Command of these areas should produce major payoffs in local participation and contribute to achieving real nuclear engineering capabilities in the country
Power Consumption and Calculation Requirement Analysis of AES for WSN IoT.
Hung, Chung-Wen; Hsu, Wen-Ting
Because of the ubiquity of Internet of Things (IoT) devices, the power consumption and security of IoT systems have become very important issues. Advanced Encryption Standard (AES) is a block cipher algorithm commonly used in IoT devices. In this paper, the power consumption and cryptographic calculation requirement for different payload lengths and AES encryption types are analyzed. These types include software-based AES-CB, hardware-based AES-ECB (Electronic Codebook Mode), and hardware-based AES-CCM (Counter with CBC-MAC Mode). The calculation requirement and power consumption for these AES encryption types are measured on the Texas Instruments LAUNCHXL-CC1310 platform. The experimental results show that the hardware-based AES performs better than the software-based AES in terms of power consumption and calculation cycle requirements. In addition, in terms of AES mode selection, the AES-CCM-MIC64 mode may be a better choice if the IoT device must consider security, encryption calculation requirements, and low power consumption at the same time. However, if the IoT device is pursuing lower power and the payload length is generally less than 16 bytes, then AES-ECB could be considered.
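As an aside for readers who want to reproduce the mode trade-off in software, the sketch below encrypts a 16-byte payload with AES-ECB and with AES-CCM using a 64-bit MIC via the Python cryptography package. It only mirrors the cipher-mode choice discussed above, not the CC1310 hardware measurements, and the key, nonce, and payload are placeholders.

```python
# Hedged sketch of the two cipher modes compared above: raw AES-ECB on a
# single 16-byte block versus AES-CCM with a 64-bit MIC (AES-CCM-MIC64).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = os.urandom(16)
payload = b"16-byte payload!"               # exactly one AES block

# AES-ECB: no integrity protection, ciphertext length == payload length.
enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ecb_ct = enc.update(payload) + enc.finalize()

# AES-CCM with an 8-byte (64-bit) tag: adds authentication at the cost of
# a nonce and 8 extra ciphertext bytes.
ccm = AESCCM(key, tag_length=8)
nonce = os.urandom(13)
ccm_ct = ccm.encrypt(nonce, payload, None)

print(len(ecb_ct), len(ccm_ct))             # 16 vs 24 bytes
```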
Genome-Wide Identification and Expression Analysis of the UGlcAE Gene Family in Tomato
Xing Ding; Jinhua Li; Yu Pan; Yue Zhang; Lei Ni; Yaling Wang; Xingguo Zhang
The UGlcAE has the capability of interconverting UDP-d-galacturonic acid and UDP-d-glucuronic acid, and UDP-d-galacturonic acid is an activated precursor for the synthesis of pectins in plants. In this study, we identified nine UGlcAE protein-encoding genes in tomato. The nine UGlcAE genes were distributed on eight chromosomes in tomato, and the corresponding proteins contained one or two trans-membrane domains. The phylogenetic analysis showed that SlUGlcAE genes could be divided into seven groups, designated UGlcAE1 to UGlcAE6, of which UGlcAE2 was split into two groups. Expression profile analysis revealed that the SlUGlcAE genes display diverse expression patterns in various tomato tissues. Selective pressure analysis indicated that all of the amino acid sites of SlUGlcAE proteins are undergoing purifying selection. Fifteen stress-, hormone-, and development-related elements were identified in the upstream regions (0.5 kb) of these SlUGlcAE genes. Furthermore, we investigated the expression patterns of SlUGlcAE genes in response to three hormones (indole-3-acetic acid (IAA), gibberellin (GA), and salicylic acid (SA)). We measured firmness, pectin content, and expression levels of UGlcAE family genes during the development of tomato fruit. Here, we systematically summarize the general characteristics of the SlUGlcAE genes in tomato, which could provide a basis for further functional studies of tomato UGlcAE genes.
Flow sorting of C-genome chromosomes from wild relatives of wheat Aegilops markgrafii, Ae. triuncialis and Ae. cylindrica, and their molecular organization
Molnár, I.; Vrána, Jan; Farkas, A.; Kubaláková, Marie; Cseh, A.; Molnár-Láng, M.; Doležel, Jaroslav
Roč. 116, č. 2 (2015), s. 189-200 ISSN 0305-7364 R&D Projects: GA MŠk(CZ) LO1204 Institutional support: RVO:61389030 Keywords: Aegilops markgrafii * Ae. triuncialis * Ae. cylindrica Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 3.982, year: 2015
Decryption-decompression of AES protected ZIP files on GPUs
Duong, Tan Nhat; Pham, Phong Hong; Nguyen, Duc Huu; Nguyen, Thuy Thanh; Le, Hung Duc
AES is a strong encryption system, so decryption-decompression of AES-encrypted ZIP files requires very large computing power and techniques for reducing the password space. This makes implementations of such techniques on common computing systems impractical. In [1], we reduced the original very large password search space to a much smaller one that surely contains the correct password. Based on the reduced set of passwords, in this paper we parallelize decryption, decompression, and plaintext recognition for encrypted ZIP files by using CUDA computing technology on GeForce GTX295 graphics cards from NVIDIA, in order to find the correct password. The experimental results show that the speed of decrypting, decompressing, recognizing plaintext, and finding the original password increases by a factor of about 45 to 180 (depending on the number of GPUs) compared with sequential execution on an Intel Core 2 Quad Q8400 2.66 GHz. These results demonstrate the potential applicability of GPUs in this cryptanalysis field.
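The sketch below shows, in a deliberately format-agnostic way, the per-candidate work that such a search parallelizes: derive a key from each password in the reduced set, attempt decryption, and apply a simple plaintext-recognition heuristic. It does not reproduce the real ZIP AES container format or the GPU kernels; the KDF parameters, the use of AES-GCM, and the toy data are placeholders.

```python
# Hedged, sequential sketch of a password-candidate search against an
# AES-protected blob.  Container layout and KDF parameters are placeholders.
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def key_from(password: bytes, salt: bytes) -> bytes:
    return PBKDF2HMAC(hashes.SHA1(), 16, salt, 1000).derive(password)

def looks_like_text(data: bytes) -> bool:
    printable = sum(32 <= b < 127 or b in (9, 10, 13) for b in data)
    return printable / max(len(data), 1) > 0.95

def search(candidates, salt, nonce, blob):
    for pw in candidates:                   # a GPU version runs these in parallel
        try:
            plain = AESGCM(key_from(pw, salt)).decrypt(nonce, blob, None)
        except InvalidTag:                  # authentication failure: wrong key
            continue
        if looks_like_text(plain):
            return pw
    return None

# Toy demonstration with a known answer.
salt, nonce = os.urandom(8), os.urandom(12)
blob = AESGCM(key_from(b"s3cret", salt)).encrypt(nonce, b"readable plain text", None)
print(search([b"guess1", b"s3cret", b"guess2"], salt, nonce, blob))
```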
The spectrum and variability of radio emission from AE Aquarii
Abada-Simon, Meil; Lecacheux, Alain; Bastian, Tim S.; Bookbinder, Jay A.; Dulk, George A.
The first detections of the magnetic cataclysmic variable AE Aquarii at millimeter wavelengths are reported. AE Aqr was detected at wavelengths of 3.4 and 1.25 mm. These data are used to show that the time-averaged spectrum is generally well fitted by a power law S(ν) ∝ ν^α, where α ≈ 0.35-0.60, and that the power law extends to millimeter wavelengths, i.e., the spectral turnover is at a frequency higher than 240 GHz. It is suggested that the spectrum is consistent with that expected from a superposition of flare-like events in which the frequency distribution of the initial flux density is a power law f(S0) ∝ S0^(-ε), with index ε ≈ 1.8. Within the context of this model, the high turnover frequency of the radio spectrum implies magnetic field strengths in excess of 250 G in the source.
Spontaneous wheat-Aegilops biuncialis, Ae. geniculata and Ae. triuncialis amphiploid production, a potential way of gene transference
Loureiro, I.; Escorial, C.; García-Baudin, J.M.; Chueca, C.
Some F1 hybrid plants between three species of the Aegilops genus and different hexaploid wheat Triticum aestivum cultivars show certain self-fertility, with averages of F1 hybrids bearing F2 seeds of 8.17%, 5.12% and 48.14% for Aegilops biuncialis, Aegilops geniculata and Aegilops triuncialis respectively. In the Ae. triuncialis-wheat combination with the "Astral" wheat cultivar, the fertility was higher than that found in the other combinations. All the F2 seeds studied were spontaneous amphip...
Life of a star
Henbest, Nigel.
The paper concerns the theory of stellar evolution. A description is given of: how a star is born, main sequence stars, red giants, white dwarfs, supernovae, neutron stars, and black holes. A brief explanation is given of how the death of a star as a supernova can trigger the birth of a new generation of stars. The classification of stars and the fate of our Sun are also described. (U.K.)
N-body simulations of stars escaping from the Orion nebula
Gualandris, A.; Portegies Zwart, S.F.; Eggleton, P.P.
We study the dynamical interaction in which the two single runaway stars, AE Aurigæ and mu Columbæ, and the binary iota Orionis acquired their unusually high space velocity. The two single runaways move in almost opposite directions with a velocity greater than 100 km s-1 away from the Trapezium
Generic demonstration plant study (A/E package)
Molzen, D.F.
Molzen--Corbin and Associates, Albuquerque, New Mexico, under contract to Sandia Laboratories, has prepared preliminary drawings, descriptive material and a scale model of the demonstration plant. This information will be made available to A/E firms to assist them in the preparation of proposals for complete construction plans and specifications. The four categories for which preliminary work has been prepared consist of structural work, mechanical work, electrical work, and cost estimates. In addition, preliminary specifications, including a written description of the facility consisting of mechanical electrical systems and operations, a description of the safety features, the basic design criteria, three-dimensional sketches, and a scale model of the design have been prepared. The preliminary drawings indicate the required minimum wall thicknesses, overall dimensions and the necessary layout of the removable concrete blocks and slabs required for radiation protection and control
Performance analysis of AES-Blowfish hybrid algorithm for security of patient medical record data
Mahmud H, Amir; Angga W, Bayu; Tommy; Marwan E, Andi; Siregar, Rosyidah
File security is one method to protect data confidentiality, integrity, and information security. Cryptography is one of the techniques used to secure and guarantee data confidentiality by converting plaintext (the original message) into ciphertext (the hidden message) through two important processes: encryption and decryption. Some researchers have proposed hybrid methods to improve data security. In this research we propose a hybrid AES-Blowfish (BF) method to secure patients' medical record data in the form of PDF files sourced from a database. Private and public keys are generated using two approaches, RSA and ECC. We analyze the impact of these two approaches on the AES-Blowfish hybrid method in terms of time and throughput. Based on the testing results, the BF method is faster than AES and the AES-BF hybrid; however, the AES-BF hybrid achieves better throughput than AES, and BF is higher still.
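One possible reading of such a hybrid scheme is sketched below: the PDF bytes are encrypted with Blowfish under a random session key, and that session key is in turn wrapped with AES under a master key. This is only an illustration under stated assumptions, not the paper's exact construction; the RSA/ECC public-key layer it describes is omitted, PyCryptodome is used for convenience, and all names and keys are placeholders.

```python
# Hedged sketch of an AES-Blowfish hybrid (not the paper's exact scheme):
# Blowfish encrypts the PDF bytes, AES wraps the Blowfish session key.
from Crypto.Cipher import AES, Blowfish
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

def hybrid_encrypt(pdf_bytes: bytes, master_key: bytes):
    session_key = get_random_bytes(16)
    bf_iv = get_random_bytes(8)                       # Blowfish block size is 8
    body = Blowfish.new(session_key, Blowfish.MODE_CBC, bf_iv) \
                   .encrypt(pad(pdf_bytes, Blowfish.block_size))
    aes_iv = get_random_bytes(16)
    wrapped = AES.new(master_key, AES.MODE_CBC, aes_iv) \
                 .encrypt(pad(session_key, AES.block_size))
    return bf_iv, body, aes_iv, wrapped

def hybrid_decrypt(master_key: bytes, bf_iv, body, aes_iv, wrapped):
    session_key = unpad(AES.new(master_key, AES.MODE_CBC, aes_iv).decrypt(wrapped),
                        AES.block_size)
    return unpad(Blowfish.new(session_key, Blowfish.MODE_CBC, bf_iv).decrypt(body),
                 Blowfish.block_size)

master = get_random_bytes(16)                          # placeholder master key
record = b"%PDF-1.4 ... patient medical record ..."    # stand-in for the PDF
assert hybrid_decrypt(master, *hybrid_encrypt(record, master)) == record
```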
Competence of Aedes aegypti, Ae. albopictus, and Culex quinquefasciatus Mosquitoes as Zika Virus Vectors, China.
Liu, Zhuanzhuan; Zhou, Tengfei; Lai, Zetian; Zhang, Zhenhong; Jia, Zhirong; Zhou, Guofa; Williams, Tricia; Xu, Jiabao; Gu, Jinbao; Zhou, Xiaohong; Lin, Lifeng; Yan, Guiyun; Chen, Xiao-Guang
In China, the prevention and control of Zika virus disease has been a public health threat since the first imported case was reported in February 2016. To determine the vector competence of potential vector mosquito species, we experimentally infected Aedes aegypti, Ae. albopictus, and Culex quinquefasciatus mosquitoes and determined infection rates, dissemination rates, and transmission rates. We found the highest vector competence for the imported Zika virus in Ae. aegypti mosquitoes, some susceptibility of Ae. albopictus mosquitoes, but no transmission ability for Cx. quinquefasciatus mosquitoes. Considering that, in China, Ae. albopictus mosquitoes are widely distributed but Ae. aegypti mosquito distribution is limited, Ae. albopictus mosquitoes are a potential primary vector for Zika virus and should be targeted in vector control strategies.
O stars and Wolf-Rayet stars
Conti, Peter S.; Underhill, Anne B.; Jordan, Stuart (Editor); Thomas, Richard (Editor)
Basic information is given about O and Wolf-Rayet stars indicating how these stars are defined and what their chief observable properties are. Part 2 of the volume discusses four related themes pertaining to the hottest and most luminous stars. Presented are: an observational overview of the spectroscopic classification and extrinsic properties of O and Wolf-Rayet stars; the intrinsic parameters of luminosity, effective temperature, mass, and composition of the stars, and a discussion of their variability; stellar wind properties; and the related issues concerning the effects of stellar radiation and wind on the immediate interstellar environment.
Continuous AE monitoring of nuclear plants to detect flaws - status and future
Hutton, P.H.
This paper gives a brief commentary on the evolution of acoustic emission (AE) technology for continuous monitoring of nuclear reactors and the current status. The technical work described to support the status description has the objective of developing and validating the use of AE to detect, locate, and evaluate growing flaws in reactor pressure boundaries. The future of AE for continuous monitoring is discussed in terms of envisioned applications and further accomplishments required to achieve them. 12 refs.
Global Dispersal Pattern of HIV Type 1 Subtype CRF01_AE
Poljak, Mario; Angelis, Konstantinos; Albert, Jan; Mamais, Ioannis; Magiorkinis, Gkikas; Hatzakis, Angelos; Hamouda, Osamah; Stuck, Daniel; Vercauteren, Jurgen; Wensing, Annemarie; Alexiev, Ivailo
Background. Human immunodeficiency virus type 1 (HIV-1) subtype CRF01_AE originated in Africa and then passed to Thailand, where it established a major epidemic. Despite the global presence of CRF01_AE, little is known about its subsequent dispersal pattern. Methods. We assembled a global data set of 2736 CRF01_AE sequences by pooling sequences from public databases and patient-cohort studies. We estimated viral dispersal patterns, using statistical phylogeographic analysis run over bootstrap...
Adult Aedes albopictus and Ae. scapularis behavior (Diptera: Culicidae) in Southeastern Brazil
Oswaldo Paulo Forattini
OBJECTIVE: To observe and compare the behavior of the species Aedes albopictus and Ae. scapularis in the locality of Pedrinhas, on the southern coast of the State of São Paulo, Brazil. METHODS: Observations were made from October 1996 to January 2000. Systematic collections of adult forms were carried out using human bait, environmental aspirations, and a Shannon-type trap. Domiciliation was estimated by the Nuorteva index and the synanthropy ratio. RESULTS: A total of 87 daytime collections were made, yielding 872 adult females. Williams' means, multiplied by 100, were 118 and 21 for Ae. albopictus in the periods from 7:00 to 18:00 and from 18:00 to 20:00, respectively. For Ae. scapularis, they were 100 and 106 in the same periods; the latter species showed a peak of evening crepuscular activity. Aspiration of shelters yielded a total of 1,124 specimens, of which 226 were Ae. albopictus and 898 Ae. scapularis. The period from January to May was the most productive for both mosquitoes. Regarding the Shannon trap, collections carried out in the forest revealed the absence of Ae. albopictus. Concerning domiciliation, the latter showed the highest index values, whereas Ae. scapularis showed ubiquitous behavior. CONCLUSIONS: The results confirm other observations and allow hypotheses to be raised. Regarding Ae. scapularis, it is suggested that a phenomenon of female diapause may exist in the summer-autumn period, which would cease in winter-spring, when activity would be resumed. As for Ae. albopictus, the data suggest a population in the process of adapting to the new environment.
Evaluation of fracturing process of soft rocks at great depth by AE measurement and DEM simulation
Aoki, Kenji; Mito, Yoshitada; Kurokawa, Susumu; Matsui, Hiroya; Niunoya, Sumio; Minami, Masayuki
The authors developed a stress-based evaluation system for the EDZ using AE monitoring and Distinct Element Method (DEM) simulation. In order to apply this system to a soft rock site, the authors try to grasp the relationship between AE parameters, stress change, and the rock fracturing process by performing high-stiffness triaxial compression tests, including AE measurements, on soft rock samples, and by simulating them with DEM using a bonded-particle model. As a result, it is found that the change in predominant AE frequency is effective for evaluating the fracturing process in sedimentary soft rocks, and the relationship between stress change and the fracturing process is also clarified. (author)
Structure of the Aeropyrum pernix L7Ae multifunctional protein and insight into its extreme thermostability
Bhuiya, Mohammad Wadud; Suryadi, Jimmy; Zhou, Zholi; Brown, Bernard Andrew II
The crystal structure of A. pernix L7Ae is reported, providing insight into the extreme thermostability of this protein. Archaeal ribosomal protein L7Ae is a multifunctional RNA-binding protein that directs post-transcriptional modification of archaeal RNAs. The L7Ae protein from Aeropyrum pernix (Ap L7Ae), a member of the Crenarchaea, was found to have an extremely high melting temperature (>383 K). The crystal structure of Ap L7Ae has been determined to a resolution of 1.56 Å. The structure of Ap L7Ae was compared with the structures of two homologs: hyperthermophilic Methanocaldococcus jannaschii L7Ae and the mesophilic counterpart mammalian 15.5 kD protein. The primary stabilizing feature in the Ap L7Ae protein appears to be the large number of ion pairs and extensive ion-pair network that connects secondary-structural elements. To our knowledge, Ap L7Ae is among the most thermostable single-domain monomeric proteins presently observed
Egyptian "Star Clocks"
Symons, Sarah
Diagonal, transit, and Ramesside star clocks are tables of astronomical information occasionally found in ancient Egyptian temples, tombs, and papyri. The tables represent the motions of selected stars (decans and hour stars) throughout the Egyptian civil year. Analysis of star clocks leads to greater understanding of ancient Egyptian constellations, ritual astronomical activities, observational practices, and pharaonic chronology.
MAGNETIC FIELDS OF STARS
Bychkov, V. D.; Bychkova, L. V.; Madej, J.
Direct measurements of magnetic fields are now available for about 1212 main-sequence and giant stars, of which 610 are chemically peculiar (CP) stars (Bychkov et al., 2008). We consider what picture of stellar magnetic fields has emerged on the basis of the available observational data.
Loureiro, I.; Escorial, C.; Garcia-Baudin, J. M.; Chueca, M. C.
Some F1 hybrid plants between three species of the Aegilops genus and different hexaploid wheat Triticum aestivum cultivars show certain self-fertility, with averages of F1 hybrids bearing F2 seeds of 8.17%, 5.12% and 48.14% for Aegilops biuncialis, Aegilops geniculata and Aegilops triuncialis respectively. In the Ae. triuncialis-wheat combination with the Astral wheat cultivar, the fertility was higher than that found in the other combinations. All the F2 seeds studied were spontaneous amphiploids (2n=10x=70). The present study evidences the possibility of spontaneous formation of amphiploids between these three Aegilops species and hexaploid wheat and discusses their relevance for gene transference. Future risk assessment of transgenic wheat cultivars needs to evaluate the importance of amphiploids as a bridge for transgene introgression and for gene escape to the wild. (Author)
Compact stars
Estevez-Delgado, Gabino; Estevez-Delgado, Joaquin
An analysis and construction are presented for a stellar model characterized by two parameters (w, n) associated with the compactness ratio and the anisotropy, respectively. The reliability range for the parameter w ≤ 1.97981225149 corresponds to a compactness ratio u ≤ 0.2644959374; the density and pressures are positive, regular, monotonically decreasing functions, the radial and tangential speeds of sound are lower than the speed of light, and the model is plausibly stable. The behavior of the speeds of sound is determined by the anisotropy parameter n, which admits a subinterval where the speeds are monotonically increasing functions and another where they are monotonically decreasing, both cases describing a compact object that is also potentially stable. For the largest observational mass M = 2.05 M⊙ and radius R = 12.957 km of the star PSR J0348+0432, the model indicates that the maximum central density ρc = 1.283820319 × 10^18 kg/m^3 corresponds to the maximum value of the anisotropy parameter, and that the radial and tangential speeds of sound are monotonically decreasing functions.
Neutron Stars and NuSTAR
Bhalerao, Varun
My thesis centers around the study of neutron stars, especially those in massive binary systems. To this end, it has two distinct components: the observational study of neutron stars in massive binaries with a goal of measuring neutron star masses and participation in NuSTAR, the first imaging hard X-ray mission, one that is extremely well suited to the study of massive binaries and compact objects in our Galaxy. The Nuclear Spectroscopic Telescope Array (NuSTAR) is a NASA Small Explorer mission that will carry the first focusing high energy X-ray telescope to orbit. NuSTAR has an order-of-magnitude better angular resolution and has two orders of magnitude higher sensitivity than any currently orbiting hard X-ray telescope. I worked to develop, calibrate, and test CdZnTe detectors for NuSTAR. I describe the CdZnTe detectors in comprehensive detail here - from readout procedures to data analysis. Detailed calibration of detectors is necessary for analyzing astrophysical source data obtained by the NuSTAR. I discuss the design and implementation of an automated setup for calibrating flight detectors, followed by calibration procedures and results. Neutron stars are an excellent probe of fundamental physics. The maximum mass of a neutron star can put stringent constraints on the equation of state of matter at extreme pressures and densities. From an astrophysical perspective, there are several open questions in our understanding of neutron stars. What are the birth masses of neutron stars? How do they change in binary evolution? Are there multiple mechanisms for the formation of neutron stars? Measuring masses of neutron stars helps answer these questions. Neutron stars in high-mass X-ray binaries have masses close to their birth mass, providing an opportunity to disentangle the role of "nature" and "nurture" in the observed mass distributions. In 2006, masses had been measured for only six such objects, but this small sample showed the greatest diversity in masses
CHANDRA HIGH-ENERGY TRANSMISSION GRATING SPECTRUM OF AE AQUARII
Mauche, Christopher W.
The nova-like cataclysmic binary AE Aqr, which is currently understood to be a former supersoft X-ray binary and current magnetic propeller, was observed for over two binary orbits (78 ks) in 2005 August with the High-Energy Transmission Grating (HETG) on board the Chandra X-ray Observatory. The long, uninterrupted Chandra observation provides a wealth of details concerning the X-ray emission of AE Aqr, many of which are new and unique to the HETG. First, the X-ray spectrum is that of an optically thin multi-temperature thermal plasma; the X-ray emission lines are broad, with widths that increase with the line energy from σ ∼ 1 eV (510 km s^-1) for O VIII to σ ∼ 5.5 eV (820 km s^-1) for Si XIV; the X-ray spectrum is reasonably well fit by a plasma model with a Gaussian emission measure distribution that peaks at log T(K) = 7.16, has a width σ = 0.48, an Fe abundance equal to 0.44 times solar, and other metal (primarily Ne, Mg, and Si) abundances equal to 0.76 times solar; and for a distance d = 100 pc, the total emission measure EM = 8.0 x 10^53 cm^-3 and the 0.5-10 keV luminosity L_X = 1.1 x 10^31 erg s^-1. Second, based on the f/(i + r) flux ratios of the forbidden (f), intercombination (i), and recombination (r) lines of the Heα triplets of N VI, O VII, and Ne IX measured by Itoh et al. in the XMM-Newton Reflection Grating Spectrometer spectrum and those of O VII, Ne IX, Mg XI, and Si XIII in the Chandra HETG spectrum, either the electron density of the plasma increases with temperature by over three orders of magnitude, from n_e ∼ 6 x 10^10 cm^-3 for N VI [log T(K) ∼ 6] to n_e ∼ 1 x 10^14 cm^-3 for Si XIII [log T(K) ∼ 7], and/or the plasma is significantly affected by photoexcitation. Third, the radial velocity of the X-ray emission lines varies on the white dwarf spin phase, with two oscillations per spin cycle and an amplitude K ∼ 160 km s^-1. These results appear to be inconsistent with the recent models of Itoh et al., Ikhsanov, and
Theoretical characterization of quaternary iridium based hydrides NaAeIrH6 (Ae = Ca, Ba and Sr)
Bouras, S. [Laboratory of Studies Surfaces and Interfaces of Solids Materials, Department of Physics, Faculty of Science, University of Setif 1, 19000 (Algeria); Ghebouli, B., E-mail: [email protected] [Laboratory of Studies Surfaces and Interfaces of Solids Materials, Department of Physics, Faculty of Science, University of Setif 1, 19000 (Algeria); Benkerri, M. [Laboratory of Studies Surfaces and Interfaces of Solids Materials, Department of Physics, Faculty of Science, University of Setif 1, 19000 (Algeria); Ghebouli, M.A., E-mail: [email protected] [Microelectronic Laboratory (LMSE), University of Bachir Ibrahimi, Bordj-Bou-Arreridj 34000 (Algeria); Research Unit on Emerging Materials (RUEM), University of Setif 1, 19000 (Algeria); Choutri, H. [Microelectronic Laboratory (LMSE), University of Bachir Ibrahimi, Bordj-Bou-Arreridj 34000 (Algeria); Louail, L.; Chihi, T.; Fatmi, M. [Research Unit on Emerging Materials (RUEM), University of Setif 1, 19000 (Algeria); Bouhemadou, A. [Laboratory for Developing New Materials and Their Characterization, Department of Physics, Faculty of Science, University of Setif 1, 19000 (Algeria); Department of Physics and Astronomy, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451 (Saudi Arabia); Khenata, R.; Khachai, H. [Laboratoire de Physique Quantique et de Modélisation Mathématique, Université de Mascara, 29000 (Algeria)
The quaternary iridium based hydrides NaAeIrH6 (Ae = Ca, Ba and Sr) are promising candidates as hydrogen storage materials. We have studied the structural, elastic, electronic, optical and thermodynamic properties of NaAeIrH6 (Ae = Ca, Ba and Sr) within the generalized gradient approximation, the local density approximation (LDA) and mBJ in the frame of density functional perturbation theory. These alloys have a large indirect Γ–X band gap. The thermodynamic functions were computed using the phonon density of states. The origin of the possible transitions from valence band to conduction band was illustrated. By using the complex dielectric function, the optical properties such as absorption, reflectivity, loss function, refractive index and optical conductivity have been obtained. - Graphical abstract: Real and imaginary parts of the dielectric function, the absorption spectrum α(ω), reflectivity R(ω) and energy-loss spectrum L(ω). - Highlights: • NaAeIrH6 (Ae = Ca, Ba and Sr) alloys have been investigated. • The elastic moduli and energy gaps are predicted. • The optical and thermal properties were studied.
Giant CP stars
Loden, L.O.; Sundman, A.
This study is part of an investigation of the possibility of using chemically peculiar (CP) stars to map local galactic structure. Correct luminosities of these stars are therefore crucial. CP stars are generally regarded as main-sequence or near-main-sequence objects. However, some CP stars have been classified as giants. A selection of stars classified in the literature as CP giants is compared to normal stars in the same effective temperature interval and to ordinary 'non-giant' CP stars. There is no clear confirmation of a higher luminosity for 'CP giants' than for CP stars in general. In addition, CP characteristics seem to be individual properties not repeated in a component star or other cluster members. (author). 50 refs., 5 tabs., 3 figs
Preliminary study of acoustic emission (AE) noise signal identification for crude oil storage tank
Nurul Ain Ahmad Latif; Shukri Mohd
This preliminary work was carried out to simulate the acoustic emission (AE) signal contributed by pitting corrosion, and the environmental noise signals encountered during crude oil storage tank monitoring. The purpose of this study is to show that acoustic emission (AE) can be used to detect the formation of pitting corrosion in a crude oil storage tank and to differentiate it from other sources of noise. In this study, pitting corrosion was simulated by inducing a low-voltage, low-amperage current onto the crude oil storage tank material (ASTM 516 G 70). Water drops, air blowing, and surface rubbing were applied to the specimen surface to simulate the noise signals produced by rainfall, wind, and other sources during AE monitoring of crude oil storage tanks. An AE sensor was attached to the other surface of the specimen to acquire all of these AE signals, which were then sent to an AE DiSP 24 data acquisition system for signal conditioning. AEwin software was used to analyse the signals. It is found that simulated pitting corrosion could be detected by the AE system and differentiated from other noise sources by amplitude analysis, which shows that 20-30 dB is the amplitude range for the air blow test, 50-60 dB for the surface rubbing test, and over 60 dB for the water drop test. (Author)
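The amplitude-based source discrimination described above can be illustrated with a short sketch. Only the dB bands come from the abstract; the function name and the example values are hypothetical.

```python
def classify_ae_hit(amplitude_db: float) -> str:
    """Assign an AE hit to a likely noise source using the amplitude bands
    reported above (20-30 dB air blow, 50-60 dB surface rubbing, >60 dB water drop)."""
    if amplitude_db > 60:
        return "water drop"
    if 50 <= amplitude_db <= 60:
        return "surface rubbing"
    if 20 <= amplitude_db <= 30:
        return "air blow"
    return "unclassified (candidate corrosion-related or other source)"

# Hypothetical hit amplitudes (dB) recorded during monitoring
for amp in [22.5, 55.0, 71.3, 43.0]:
    print(amp, "->", classify_ae_hit(amp))
```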
AE characteristic for monitoring of fatigue crack in steel bridge members
Yoon, Dong-Jin; Jung, Juong-Chae; Park, Philip; Lee, Seung-Seok
The acoustic emission technique was employed for monitoring crack activity in both steel bridge members and laboratory specimens. A laboratory experiment was carried out to identify the AE characteristics of fatigue cracks in compact tension specimens. The relationship between the stress intensity factor and AE signal activity, as well as conventional AE parameter analysis, is discussed. A field test was also conducted on a railway bridge containing several fatigue cracks, whose activity was investigated in service together with strain measurements. In the laboratory tests, three parameters, the length of crack growth, the AE energy, and the cumulative AE events, showed almost the same increasing trend as the number of fatigue cycles increased. Comparisons of peak amplitude and AE energy with the stress intensity factor verified that higher stress intensity factors generated AE signals with higher peak amplitudes and a larger number of AE counts. In the field test, real crack propagation signals were captured and crack activity was verified in two cases.
Low-area hardware implementations of CLOC, SILC and AES-OTR
Banik, Subhadeep; Bogdanov, Andrey; Minematsu, Kazuhiko
The most compact implementation of the AES-128 algorithm was the 8-bit serial circuit proposed in the work of Moradi et al. (Eurocrypt 2011). The circuit has an 8-bit datapath and occupies an area equivalent to around 2400 GE. Since many authenticated encryption modes use the AES-128 algorithm...
AES based secure low energy adaptive clustering hierarchy for WSNs
Kishore, K. R.; Sarma, N. V. S. N.
Wireless sensor networks (WSNs) provide a low-cost solution in diversified application areas. The wireless sensor nodes are inexpensive tiny devices with limited storage, computational capability and power. They are being deployed on a large scale in both military and civilian applications. Security of the data is one of the key concerns where large numbers of nodes are deployed. Here, an energy-efficient secure routing protocol, secure-LEACH (Low Energy Adaptive Clustering Hierarchy) for WSNs based on the Advanced Encryption Standard (AES), is proposed. This cryptosystem is session-based, and a new session key is assigned for each new session. The network (WSN) is divided into a number of groups or clusters, and a cluster head (CH) is selected among the member nodes of each cluster. The measured data from the nodes are aggregated by the respective CHs, and each CH relays this data to another CH towards the gateway node in the WSN, which in turn sends the same to the base station (BS). In order to maintain confidentiality of the data while being transmitted, it is necessary to encrypt the data before sending at every hop, from a node to the CH and from the CH to another CH or to the gateway node.
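The abstract summarizes the protocol but does not give its details; the sketch below only illustrates the general idea of hop-by-hop AES encryption under per-session keys (node to cluster head to gateway), using AES-GCM from the Python cryptography package. The key names, message format, and mode choice are assumptions, not the authors' specification.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_hop(session_key: bytes, payload: bytes) -> bytes:
    """Encrypt a reading for one hop with AES-GCM under the current session key."""
    nonce = os.urandom(12)
    return nonce + AESGCM(session_key).encrypt(nonce, payload, None)

def decrypt_hop(session_key: bytes, message: bytes) -> bytes:
    """Recover the payload at the receiving hop."""
    nonce, ciphertext = message[:12], message[12:]
    return AESGCM(session_key).decrypt(nonce, ciphertext, None)

# Node -> cluster head -> gateway, re-encrypted at every hop (hypothetical session keys)
key_node_ch, key_ch_gw = os.urandom(16), os.urandom(16)   # 128-bit per-session keys
reading = b"temperature=23.4C"
at_cluster_head = decrypt_hop(key_node_ch, encrypt_hop(key_node_ch, reading))
at_gateway = decrypt_hop(key_ch_gw, encrypt_hop(key_ch_gw, at_cluster_head))
assert at_gateway == reading
```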
Test and acceptance from the AE point of view
Sommer, W.C.
The power industry has undergone the transition from utilizing construction engineers for startup activities to utilizing test engineers who are responsible for the preparation or execution of a formal test program. This has come about because testing has been given sufficient importance for those participating within the industry to establish it as a specialty phase apart from engineering/design and construction. Presently, testing is being conducted by the organizations that have either engineered and/or constructed the unit, which may be a position of conflict with regard to unbiased testing. The tester (third party) concept promotes the repetition, independence, and expertise of a test organization responsible to the utility for certifying that each system has been tested to the design criteria established by others. A move to this concept should result in better generating stations with higher availability, because each station would have been completely tested by an organization with this sole contractual responsibility. The AE, NSSS, and utility would all benefit, with possibly no additional costs incurred.
AE Monitoring of Diamond Turned Rapidly Solidified Aluminium 443
Onwuka, G; Abou-El-Hossein, K; Mkoko, Z
The fast replacement of conventional aluminium with rapidly solidified aluminium alloys has become a noticeable trend in the current manufacturing industries involved in the production of optics and optical molding inserts. This is a result of the improved performance and durability of rapidly solidified aluminium alloys when compared to conventional aluminium. The melt spinning process is vital for manufacturing rapidly solidified aluminium alloys like RSA 905, RSA 6061 and RSA 443, which are common in industry today. RSA 443 is a newly developed alloy with few research findings and huge research potential. There is no available literature focused on monitoring the machining of RSA 443 alloys. In this research, the acoustic emission sensing technique was applied to monitor the single point diamond turning of RSA 443 on an ultrahigh precision lathe machine. The machining process was carried out after careful selection of feed, speed and depths of cut. The monitoring process was achieved with a high-sampling-rate data acquisition system using different tools, while concurrent measurements of the surface roughness and tool wear were initiated after covering a total feed distance of 13 km. An increasing trend of raw AE spikes and peak-to-peak signal was observed with an increase in the surface roughness and tool wear values. Hence, the acoustic emission sensing technique proves to be an effective monitoring method for the machining of RSA 443 alloy. (paper)
A Study on AE Signal Analysis of Composite Materials Using Matrix Piezo Electric Sensor
Yu, Yeun Ho; Choi, Jin Ho; Kweon, Jin Hwe
As fiber reinforced composite materials are widely used in aircraft, space structures and robot arms, the study of non-destructive testing methods has become an important research area for improving their reliability and safety. AE (acoustic emission) can evaluate defects by detecting the strain energy emitted when elastic waves are generated by crack initiation and growth, plastic deformation, fiber breakage, matrix cleavage, or delamination. In this paper, AE signals generated under uniaxial tension were measured and analyzed using an 8x8 matrix piezoelectric sensor. An electronic circuit to control the transmitting distance of AE signals was designed and constructed. An optical data storage system was also designed to store the AE signals of 64 channels using LED (light emitting diode) elements. From the tests, it was shown that the source location and propagation path of AE signals in composite materials could be detected effectively by the 8x8 matrix piezoelectric sensor
Rates of star formation
Larson, R.B.
It is illustrated that a theoretical understanding of the formation and evolution of galaxies depends on an understanding of star formation, and especially of the factors influencing the rate of star formation. Some of the theoretical problems of star formation in galaxies, some approaches that have been considered in models of galaxy evolution, and some possible observational tests that may help to clarify which processes or models are most relevant are reviewed. The material is presented under the following headings: power-law models for star formation, star formation processes (conditions required, ways of achieving these conditions), observational indications and tests, and measures of star formation rates in galaxies. 49 references
Energy production in stars
Bethe, Hans.
Energy in stars is released partly by gravitation, partly by nuclear reactions. For ordinary stars like our sun, nuclear reactions predominate. However, at the end of the life of a star very large amounts of energy are released by gravitational collapse; this can amount to as much as 10 times the total energy released by nuclear reactions. The rotational energy of pulsars is a small remnant of the energy of gravitation. The end stage of small stars is generally a white dwarf, of heavy stars a neutron star or possibly a black hole
Regular Generalized Star Star closed sets in Bitopological Spaces
K. Kannan; D. Narasimhan; K. Chandrasekhara Rao; R. Ravikumar
The aim of this paper is to introduce the concepts of τ1τ2-regular generalized star star closed sets and τ1τ2-regular generalized star star open sets and study their basic properties in bitopological spaces.
Quark core stars, quark stars and strange stars
Grassi, F.
A recent one flavor quark matter equation of state is generalized to several flavors. It is shown that quarks undergo a first order phase transition. In addition, this equation of state depends on just one parameter in the two flavor case, two parameters in the three flavor case, and these parameters are constrained by phenomenology. This equation of state is then applied to the hadron-quark transition in neutron stars and the determination of quark star stability, the investigation of strange matter stability and possible strange star existence. 43 refs., 6 figs
ENERGY STAR Certified Displays
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 7.0 ENERGY STAR Program Requirements for Displays that are effective as of July 1, 2016....
ENERGY STAR Certified Boilers
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 3.0 ENERGY STAR Program Requirements for Boilers that are effective as of October 1,...
ENERGY STAR Certified Televisions
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 7.0 ENERGY STAR Program Requirements for Televisions that are effective as of October 30,...
ENERGY STAR Certified Dehumidifiers
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 4.0 ENERGY STAR Program Requirements for Dehumidifiers that are effective as of October...
Observations of central stars
Lutz, J.H.
Difficulties occurring in the observation of central stars of planetary nebulae are reviewed with emphasis on spectral classifications and population types, and temperature determination. Binary and peculiar central stars are discussed. (U.M.G.)
ENERGY STAR Certified Telephones
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 3.0 ENERGY STAR Program Requirements for Telephony (cordless telephones and VoIP...
Sahade, J
Aspects of the problems of the Wolf-Rayet stars related to their chemical composition, their evolutionary status, and their apparent dichotomy in two spectral sequences are discussed. Dogmas concerning WR stars are critically discussed, including the belief that WR stars lack hydrogen, that they are helium stars evolved from massive close binaries, and the existence of a second WR stage in which the star is a short-period single-lined binary. The relationship of WR stars with planetary nebulae is addressed, as is the membership of these stars in clusters and associations. The division of WR stars into WN and WC sequences is considered, questioning the reasonability of accounting for WR line formation in terms of abundance differences.
Star formation: Cosmic feast
Scaringi, Simone
Low-mass stars form through a process known as disk accretion, eating up material that orbits in a disk around them. It turns out that the same mechanism also describes the formation of more massive stars.
A catalog of pre-main-sequence emission-line stars with IRAS source associations
Weintraub, D.A.
To aid in finding pre-main-sequence (PMS) emission-line stars that might have dusty circumstellar environments, 361 PMS stars that are associated with 304 separate IRAS sources were identified. These stars include 200 classical T Tauri stars, 25 weak-lined (naked) T Tauri stars, 56 Herbig Ae/Be stars, six FU Orionis stars, and two SU Aurigae stars. All six of the FU Orionis stars surveyed by IRAS were detected. Of the PMS-IRAS Point Source Catalog (PSC) associations, 90 are new and are not noted in the PSC. The other 271 entries include 104 that are correctly identified in the PSC but have not yet appeared in the literature, 56 more that can be found in both the PSC and in the published and unpublished literature, and 111 that are in the literature but not in the PSC. Spectral slope diagrams constructed from the 12-, 25-, and 60-micron flux densities reveal unique distributions for the different PMS subclasses; these diagrams may help identify the best candidate PMS stars for observations of circumstellar dust. 30 refs
Autonomous Star Tracker Algorithms
Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren
Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performance.
Phylodynamic analysis of the dissemination of HIV-1 CRF01_AE in Vietnam.
Liao, Huanan; Tee, Kok Keng; Hase, Saiki; Uenishi, Rie; Li, Xiao-Jie; Kusagawa, Shigeru; Thang, Pham Hong; Hien, Nguyen Tran; Pybus, Oliver G; Takebe, Yutaka
To estimate the epidemic history of HIV-1 CRF01_AE in Vietnam and adjacent Guangxi, China, we determined near full-length nucleotide sequences of CRF01_AE from a total of 33 specimens collected in 1997-1998 from different geographic regions and risk populations in Vietnam. Phylogenetic and Bayesian molecular clock analyses were performed to estimate the date of origin of CRF01_AE lineages. Our study reconstructs the timescale of CRF01_AE expansion in Vietnam and neighboring regions and suggests that the series of CRF01_AE epidemics in Vietnam arose by the sequential introduction of founder strains into new locations and risk groups. CRF01_AE appears to have been present among heterosexuals in South-Vietnam for more than a decade prior to its epidemic spread in the early 1990s. In the late 1980s, the virus spread to IDUs in Southern Vietnam and subsequently in the mid-1990s to IDUs further north. Our results indicate the northward dissemination of CRF01_AE during this time.
Covering tree with stars
Baumbach, Jan; Guo, Jian-Ying; Ibragimov, Rashid
We study the tree edit distance problem with edge deletions and edge insertions as edit operations. We reformulate a special case of this problem as Covering Tree with Stars (CTS): given a tree T and a set of stars, can we connect the stars in by adding edges between them such that the resulting ...
America's Star Libraries
Lyons, Ray; Lance, Keith Curry
"Library Journal"'s new national rating of public libraries, the "LJ" Index of Public Library Service, identifies 256 "star" libraries. It rates 7,115 public libraries. The top libraries in each group get five, four, or three Michelin guide-like stars. All included libraries, stars or not, can use their scores to learn from their peers and improve…
Molecular epidemiological study of HIV-1 CRF01_AE transmission in Hong Kong.
Chen, J H K; Wong, K H; Li, P; Chan, K C; Lee, M P; Lam, H Y; Cheng, V C C; Yuen, K Y; Yam, W C
The objective of this study was to investigate the transmission history of the HIV-1 CRF01_AE epidemics in Hong Kong between 1994 and 2007. A total of 465 HIV-1 CRF01_AE pol sequences were derived from an in-house or a commercial HIV-1 genotyping system. Phylogenies of CRF01_AE sequences were analyzed by the Bayesian coalescent method. CRF01_AE patient population included 363 males (78.1%) and 102 females (21.9%), whereas 65% (314 of 465) were local Chinese. Major transmission routes were heterosexual contact (63%), followed by intravenous drug use (IDU) (19%) and men having sex with men (MSM) (17%). From phylogenetic analysis, local CRF01_AE strains were from multiple origins with 3 separate transmission clusters identified. Cluster 1 consisted mainly of Chinese male IDUs and heterosexuals. Clusters 2 and 3 included mainly local Chinese MSM and non-Chinese Asian IDUs, respectively. Chinese reference isolates available from China (Fujian, Guangxi, or Liaoning) were clonally related to our transmission clusters, demonstrating the epidemiological linkage of CRF01_AE infections between Hong Kong and China. The 3 individual local transmission clusters were estimated to have initiated since late 1980s and late 1990s, causing subsequent epidemics in the early 2000s. This is the first comprehensive molecular epidemiological study of HIV-1 CRF01_AE in Hong Kong. It revealed that MSM contact is becoming a major route of local CRF01_AE transmission in Hong Kong. Epidemiological linkage of CRF01_AE between Hong Kong and China observed in this study indicates the importance of regular molecular epidemiological surveillance for the HIV-1 epidemic in our region.
Applying transpose matrix on advanced encryption standard (AES) for database content
Manurung, E. B. P.; Sitompul, O. S.; Suherman
Advanced Encryption Standard (AES) is a specification for the encryption of electronic data established by the U.S. National Institute of Standards and Technology (NIST); it has been adopted by the U.S. government and is now used worldwide. This paper reports the impact of integrating a transpose matrix into AES. The transpose matrix is applied to AES as a first stage of ciphertext modification for text-based database security, so that confidentiality improves. The matrix is also able to increase the avalanche effect of the cryptographic algorithm by 4% on average.
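The abstract does not reproduce the exact construction; as a rough sketch of the idea, the snippet below encrypts one 16-byte block with AES (via the Python cryptography package) and then applies a 4x4 byte transpose to the ciphertext as a post-encryption modification. The key, plaintext, and placement of the transpose stage are illustrative assumptions.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def transpose_block(block16: bytes) -> bytes:
    """Write a 16-byte block into a 4x4 matrix row by row and read it back column by column."""
    m = [list(block16[r * 4:(r + 1) * 4]) for r in range(4)]
    return bytes(m[r][c] for c in range(4) for r in range(4))

key = os.urandom(16)
encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ciphertext = encryptor.update(b"0123456789abcdef") + encryptor.finalize()

modified = transpose_block(ciphertext)           # first-stage ciphertext modification
assert transpose_block(modified) == ciphertext   # the transpose is its own inverse
```

Because the transpose is involutive, applying it once more restores the original ciphertext before AES decryption.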
Multielemental analysis of Korean geological reference samples by INAA, ICP-AES and ICP-MS
Naoki Shirai; Hiroki Takahashi; Yuta Yokozuka; Mitsuru Ebihara; Meiramkhan Toktaganov; Shun Sekimoto
Six Korean geological reference samples (KB-1, KGB-1, KT-1, KD-1, KG-1 and KG-2) prepared by Korea Institutes of Geoscience and Mineral Resources were analyzed by using INAA, ICP-AES and ICP-MS. Some elements could be determined by both INAA and non-INAA methods (ICP-AES and ICP-MS), and these data are consistent with each other. This study confirms that a combination of ICP-AES and ICP-MS is comparable to INAA in determining a wide range of major, minor and trace elements in geological materials. (author)
White Dwarf Stars
Kepler, S. O.; Romero, Alejandra Daniela; Pelisoli, Ingrid; Ourique, Gustavo
White dwarf stars are the final stage of most stars, born single or in multiple systems. We discuss the identification, magnetic fields, and mass distribution for white dwarfs detected from spectra obtained by the Sloan Digital Sky Survey up to Data Release 13 in 2016, which led to an increase in the number of spectroscopically identified white dwarf stars from 5000 to 39000. This number includes only white dwarf stars with log g >= 6.5, i.e., excluding the Extremely Low Mass white dw...
Rotating Stars in Relativity
Stergioulas Nikolaos
Rotating relativistic stars have been studied extensively in recent years, both theoretically and observationally, because of the information they might yield about the equation of state of matter at extremely high densities and because they are considered to be promising sources of gravitational waves. The latest theoretical understanding of rotating stars in relativity is reviewed in this updated article. The sections on the equilibrium properties and on the nonaxisymmetric instabilities in f-modes and r-modes have been updated, and several new sections have been added on analytic solutions for the exterior spacetime, rotating stars in LMXBs, rotating strange stars, and rotating stars in numerical relativity.
Nuclear physics of stars
Iliadis, Christian
Most elements are synthesized, or "cooked", by thermonuclear reactions in stars. The newly formed elements are released into the interstellar medium during a star's lifetime, and are subsequently incorporated into a new generation of stars, into the planets that form around the stars, and into the life forms that originate on the planets. Moreover, the energy we depend on for life originates from nuclear reactions that occur at the center of the Sun. Synthesis of the elements and nuclear energy production in stars are the topics of nuclear astrophysics, which is the subject of this book
Detection of multiple AE signal by triaxial hodogram analysis; Sanjiku hodogram ho ni yoru taju acoustic emission no kenshutsu
Nagano, K; Yamashita, T [Muroran Institute of Technology, Hokkaido (Japan)
In order to evaluate the dynamic behavior of underground cracks, analysis and detection of multiple acoustic emission (AE) events were attempted. A multiple AE event is a phenomenon in which several AE signals, generated by underground cracks within an extremely short time interval, are superimposed and observed as one AE event. The multiple AE signal considered here consists of two AE signals, where the second P-wave arrives before the first S-wave. When the first P-wave arrives, linear three-dimensional particle motion is observed, but the motion is randomized by scattering and sensor characteristics. When the second P-wave arrives, linear particle motion is observed again, but it is superimposed on the existing signal and becomes multiple AE, which degrades the S/N ratio. The detection procedure declares a multiple AE event when three conditions are met, i.e., a condition on the time interval between maxima in a scalogram analysis, a condition on the P-wave vibration direction, and a condition on the linearity of the particle motion. Seventy AE signals observed in the Kakkonda geothermal field were analyzed, and AE signals satisfying the multiple AE conditions were detected. However, further development of an analysis method with higher time resolution is required. 4 refs., 4 figs.
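The three detection conditions are only named in the abstract. One common way to quantify the "linear particle movement" condition is a rectilinearity measure computed from the eigenvalues of the three-component covariance matrix of the hodogram; the sketch below is a generic implementation of that idea, not the authors' algorithm, and all names and values are assumptions.

```python
import numpy as np

def rectilinearity(x: np.ndarray, y: np.ndarray, z: np.ndarray) -> float:
    """Polarization linearity of three-component particle motion:
    1 - (lam2 + lam3) / (2 * lam1) from the covariance eigenvalues,
    close to 1 for a nearly linear (P-wave-like) hodogram."""
    cov = np.cov(np.vstack([x, y, z]))
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]   # lam1 >= lam2 >= lam3
    return 1.0 - (lam[1] + lam[2]) / (2.0 * lam[0])

# Synthetic check: motion along a fixed direction plus a little noise
t = np.linspace(0.0, 1.0, 200)
s = np.sin(2 * np.pi * 10 * t)
noise = 0.05 * np.random.randn(3, t.size)
print(rectilinearity(0.8 * s + noise[0], 0.5 * s + noise[1], 0.3 * s + noise[2]))
```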
Evolution of variable stars
Becker, S.A.
Throughout the domain of the H-R diagram lie groupings of stars whose luminosity varies with time. These variable stars can be classified based on their observed properties into distinct types such as β Cephei stars, δ Cephei stars, and Miras, as well as many other categories. The underlying mechanism for the variability is generally felt to be due to four different causes: geometric effects, rotation, eruptive processes, and pulsation. In this review the focus will be on pulsation variables and how the theory of stellar evolution can be used to explain how the various regions of variability on the H-R diagram are populated. To this end a generalized discussion of the evolutionary behavior of a massive star, an intermediate mass star, and a low mass star will be presented. 19 refs., 1 fig., 1 tab
Shot peening influence on corrosion resistance of AE21 magnesium alloy.
"Evaluation of the electrochemical characteristics of the AE21 magnesium alloy is presented in the article. : The surfaces of tested alloys were treated by grinding and grinding followed by sodium bicarbonate shotpeening. : The specimens were evaluat...
Fast Oblivious AES A Dedicated application of the MiniMac protocol
Zakarias, Rasmus Winther; Damgård, Ivan Bjerre
We present an actively secure multi-party computation of the Advanced Encryption Standard (AES). To the best of our knowledge it is the fastest of its kind to date. We start from an efficient actively secure evaluation of general binary circuits that was implemented by the authors of [DLT14]. They presented an optimized implementation of the so-called MiniMac protocol [DZ13] that runs in the pre-processing model, and applied this to a binary AES circuit. In this paper we describe how to dedicate the pre-processing to the structure of AES, which significantly improves the throughput and latency of previous actively secure implementations. We get a latency of about 6 ms and an amortised time of about 0.4 ms per AES block, which seems completely adequate for practical applications such as verification of one-time passwords.
XPS, AES and laser raman spectroscopy: A fingerprint for a materials surface characterisation
Zaidi Embong
This review briefly describes some of the techniques available for analysing surfaces and illustrates their usefulness with a few examples such as a metal and alloy. In particular, Auger electron spectroscopy (AES), X-ray photoelectron spectroscopy (XPS) and laser Raman spectroscopy are all described as advanced surface analytical techniques. In analysing a surface, AES and XPS would normally be considered first, with AES being applied where high spatial resolution is required and XPS where chemical state information is needed. Laser Raman spectroscopy is useful for determining molecular bonding. A combination of XPS, AES and Laser Raman spectroscopy can give quantitative analysis from the top few atomic layers with a lateral spatial resolution of < 10 nm. (author)
Chemical separation and ICP-AES determination of rare earths in Al2O3 matrix
Argekar, A.A.; Kulkarni, M.J.; Page, A.G.; Manchanda, V.K.
A chemical separation-ICP-AES method has been developed for determination of rare earths in alumina matrix. The quantitative separation of rare earths has also been confirmed using radiotracers. (author)
"Smart COPVs� - Continued Successful Development of JSC IR&D Acoustic Emissions (AE) SHM
National Aeronautics and Space Administration — Developed and applied promising quantitative pass/fail criteria to COPVs using acoustic emission (AE) and developed automated data analysis software. This lays the...
Dosimetry of 64Cu-DOTA-AE105, a PET tracer for uPAR imaging
Persson, Morten; El Ali, Henrik H.; Binderup, Tina
64Cu-DOTA-AE105 is a novel positron emission tomography (PET) tracer specific to the human urokinase-type plasminogen activator receptor (uPAR). In preparation for using this tracer in humans, as a new promising method to distinguish between indolent and aggressive cancers, we have performed PET studies in mice to evaluate the in vivo biodistribution and estimate human dosimetry of 64Cu-DOTA-AE105. Methods: Five mice received iv tail injection of 64Cu-DOTA-AE105 and were PET/CT scanned 1, 4.5 and 22 h post injection. Volumes of interest (VOI) were manually drawn on the following organs: heart, lung... Favorable dosimetry estimates together with previously reported uPAR PET data fully support human testing of 64Cu-DOTA-AE105.
Auroral Electrojet (AE, AL, AO, AU) - A Global Measure of Auroral Zone Magnetic Activity
National Oceanic and Atmospheric Administration, Department of Commerce — The AE index is derived from geomagnetic variations in the horizontal component observed at selected (10-13) observatories along the auroral zone in the northern...
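For reference, the standard construction of these indices takes the superposed horizontal-component disturbances of the contributing auroral-zone observatories and defines AU and AL as their upper and lower envelopes, with AE = AU - AL and AO = (AU + AL)/2. The following is a minimal sketch under those definitions; baseline removal and station selection are assumed already done, and the input array is hypothetical.

```python
import numpy as np

def auroral_indices(delta_h: np.ndarray):
    """Derive AU, AL, AE and AO from horizontal-component disturbances
    delta_h of shape (n_stations, n_samples), in nT."""
    au = delta_h.max(axis=0)            # upper envelope across stations
    al = delta_h.min(axis=0)            # lower envelope across stations
    return au, al, au - al, 0.5 * (au + al)

# Toy example: 12 stations, one day of 1-minute values (random placeholder data)
dh = 100.0 * np.random.randn(12, 1440)
au, al, ae, ao = auroral_indices(dh)
```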
Application of AE to evaluate deterioration of port and harbor structures
Matsuyama, Kimitoshi; Ishibashi, Akichika; Fujiwara, Tetsuro; Kanemoto, Yasuhiro; Hamada, Shigenori; Ohtsu, Masayasu
The degree of concrete deterioration is normally evaluated by uniaxial compression tests of core samples taken from existing concrete structures. The uniaxial compression test can give details on concrete strength, elastic modulus and so forth. By using the AE (acoustic emission) technique with uniaxial compression tests, we can get more useful information on interior failures of the core sample. The observation of AE activity was conducted during uniaxial compression tests of concrete specimens and samples. The specimens were cast with 5, 10, 15 and 20% air content. The samples were taken from 2 harbor sites in Japan. The generating behavior of AE was studied quantitatively on the basis of rate process theory. The distribution of pores in the concrete could be measured by porosimeter. The relation between AE activity in the uniaxial compression test of core samples and concrete properties is clarified.
Everyone is working together to ease the pressures in A&E.
Kimber, Mark
Nurse consultant Janet Youd, chair of the RCN Emergency Care Association, says nurses struggling to cope with unprecedented pressures in A&E departments should be awarded an extra half day's annual leave (Online News January 8).
Star-Branched Polymers (Star Polymers)
Hirao, Akira; Hayashi, Mayumi; Ito, Shotaro; Goseki, Raita; Higashihara, Tomoya; Hadjichristidis, Nikolaos
The synthesis of well-defined regular and asymmetric mixed arm (hereinafter miktoarm) star-branched polymers by the living anionic polymerization is reviewed in this chapter. In particular, much attention is being devoted to the synthetic
Clinical and molecular genetic features of Hb H and AE Bart's diseases in central Thai children.
Traivaree, Chanchai; Boonyawat, Boonchai; Monsereenusorn, Chalinee; Rujkijyanont, Piya; Photia, Apichat
α-Thalassemia, one of the major thalassemia types in Thailand, is caused by either deletion or non-deletional mutation of one or both α-globin genes. Inactivation of three α-globin genes causes hemoglobin H (Hb H) disease, and the combination of Hb H disease with heterozygous hemoglobin E (Hb E) results in AE Bart's disease. This study aimed to characterize the clinical and hematological manifestations of 76 pediatric patients with Hb H and AE Bart's diseases treated at Phramongkutklao Hospital, a tertiary care center for thalassemia patients in central Thailand. Seventy-six unrelated pediatric patients, 58 patients with Hb H disease and 18 patients with AE Bart's disease, were enrolled in this study. Their clinical presentations, transfusion requirements, laboratory findings, and mutation analyses were retrospectively reviewed and analyzed. The clinical course of patients with non-deletional mutations was more severe than that of patients with deletional mutations. Eighty-six percent of patients with non-deletional AE Bart's disease required more blood transfusions, compared to 12.5% of patients with deletional AE Bart's disease. Patients with non-deletional AE Bart's disease also had a history of urgent blood transfusion, with an average of 6±0.9 times compared to 1±0.3 times in patients with deletional Hb H disease. The difference was statistically significant. This study revealed the differences in clinical spectrum between patients with Hb H disease and those with AE Bart's disease in central Thailand. The differentiation of α-thalassemia types is essential for appropriate management of patients. Molecular diagnosis is useful for diagnostic confirmation and genotype-phenotype correlation.
Complementary Metal-Oxide-Silicon (CMOS)-Memristor Hybrid Nanoelectronics for Advanced Encryption Standard (AES) Encryption
were built in-house at the SUNY Polytechnic Institute's Center for Semiconductor Research (CSR); however, the initial devices for materials screening... A code that models the sweep-mode behavior of the bipolar ReRAM device that is initially in HRS. ...Standard (AES). AES is one of the most important encryption systems and is widely used in military and commercial systems. Based on an iterative
Validation and empirical correction of MODIS AOT and AE over ocean
N. A. J. Schutgens
We present a validation study of Collection 5 MODIS level 2 Aqua and Terra AOT (aerosol optical thickness) and AE (Ångström exponent) over ocean by comparison to coastal and island AERONET (AErosol RObotic NETwork) sites for the years 2003–2009. We show that MODIS (MODerate-resolution Imaging Spectroradiometer) AOT exhibits significant biases due to wind speed and cloudiness of the observed scene, while MODIS AE, although overall unbiased, exhibits less spatial contrast on global scales than the AERONET observations. The same behaviour can be seen when MODIS AOT is compared against Maritime Aerosol Network (MAN) data, suggesting that the spatial coverage of our datasets does not preclude global conclusions. Thus, we develop empirical correction formulae for MODIS AOT and AE that significantly improve agreement of MODIS and AERONET observations. We show these correction formulae to be robust. Finally, we study random errors in the corrected MODIS AOT and AE and show that they mainly depend on AOT itself, although small contributions are present due to wind speed and cloud fraction in AOT random errors and due to AE and cloud fraction in AE random errors. Our analysis yields significantly higher random AOT errors than the official MODIS error estimate (0.03 + 0.05 τ), while random AE errors are smaller than might be expected. This new dataset of bias-corrected MODIS AOT and AE over ocean is intended for aerosol model validation and assimilation studies, but also has consequences as a stand-alone observational product. For instance, the corrected dataset suggests that much less fine mode aerosol is transported across the Pacific and Atlantic oceans.
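The empirical correction formulae themselves are not reproduced in the abstract; the sketch below only illustrates the two quantities being validated, namely the Ångström exponent computed from AOT at two wavelengths and the official over-ocean error envelope (0.03 + 0.05 τ) quoted above. The wavelengths and AOT values in the example are made up.

```python
import math

def angstrom_exponent(tau_1: float, lam_1: float, tau_2: float, lam_2: float) -> float:
    """Angstrom exponent from aerosol optical thickness at two wavelengths
    (any consistent wavelength units)."""
    return -math.log(tau_1 / tau_2) / math.log(lam_1 / lam_2)

def modis_aot_expected_error(tau: float) -> float:
    """Official over-ocean AOT uncertainty quoted in the abstract: 0.03 + 0.05*tau."""
    return 0.03 + 0.05 * tau

# Example: AOT of 0.20 at 550 nm and 0.12 at 870 nm
ae = angstrom_exponent(0.20, 550.0, 0.12, 870.0)
print(f"AE = {ae:.2f}, expected AOT error = +/-{modis_aot_expected_error(0.20):.3f}")
```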
PIXE analysis of human stones: comparison with data from ICP-AES
Pougnet, M.A.B.; Peisach, M.; Pineda, C.A.; Rodgers, A.L.
25 human stone samples previously analyzed by inductively coupled plasma atomic emission spectroscopy (ICP-AES) and the IAEA Animal Bone Standard Reference Material were used to evaluate trace element analysis by PIXE. Bombardment with 4 MeV protons was used for the determination of Mn, Fe, Cu, Pb, Br, Rb, Sr and Ca. PIXE and ICP-AES data gave correlation factors better than 0.97 for the elements Ca, Fe, Zn, Sr and Pb. (author) 9 refs.; 5 figs
PETROLOGIC CONSTRAINTS ON AMORPHOUS AND CRYSTALLINE MAGNESIUM SILICATES: DUST FORMATION AND EVOLUTION IN SELECTED HERBIG Ae/Be SYSTEMS
Rietmeijer, Frans J. M. [Department of Earth and Planetary Sciences, MSC 03 2040, 1-University of New Mexico, Albuquerque, NM 87131-001 (United States); Nuth, Joseph A., E-mail: [email protected] [Astrochemistry Laboratory, Solar System Exploration Division, Code 691, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States)
The Infrared Space Observatory, Spitzer Space Telescope, and Herschel Space Observatory surveys provided a wealth of data on the Mg-silicate minerals (forsterite, enstatite), silica, and "amorphous silicates with olivine and pyroxene stoichiometry" around Herbig Ae/Be stars. These incredible findings do not resonate with the mainstream Earth Sciences because of (1) disconnecting "astronomical nomenclature" and the long existing mineralogical and petrologic terminology of minerals and amorphous materials, and (2) the fact that Earth scientists (formerly geologists) are bound by the "Principle of Actualism" that was put forward by James Hutton (1726-1797). This principle takes a process-oriented approach to understanding mineral and rock formation and evolution. This paper will (1) review and summarize the results of laboratory-based vapor phase condensation and thermal annealing experiments, (2) present the pathways of magnesiosilica condensates to Mg-silicate mineral (forsterite, enstatite) formation and processing, and (3) present mineralogical and petrologic implications of the properties and compositions of the infrared-observed crystalline and amorphous dust for the state of circumstellar disk evolution. That is, the IR-observation of smectite layer silicates in HD142527 suggests the break-up of asteroid-like parent bodies that had experienced aqueous alteration. We discuss the persistence of amorphous dust around some young stars and an ultrafast amorphous to crystalline dust transition in HD 163296 that leads to forsterite grains with numerous silica inclusions. These dust evolution processes to form forsterite, enstatite ± tridymite could occur due to amorphous magnesiosilica dust precursors with a serpentine- or smectite-dehydroxylate composition.
Obsidian provenance studies in archaeology: A comparison between PIXE, ICP-AES and ICP-MS
Bellot-Gurlet, Ludovic; Poupeau, Gerard; Salomon, Joseph; Calligaro, Thomas; Moignard, Brice; Dran, Jean-Claude; Barrat, Jean-Alix; Pichon, Laurent
Elemental composition fingerprinting by the PIXE technique is very attractive for obsidian provenance studies as it may proceed in a non-destructive mode, even if a more complete elemental characterization can be obtained by ICP-MS and/or ICP-AES. Only a few studies have compared results obtained by both methods for solid rock samples. In this work, elemental compositions were determined by ICP-MS/-AES for international geochemical standards and by ICP-MS/-AES and PIXE for inter-laboratory reference obsidians. In addition, 49 obsidian source samples and artefacts were analysed by both ICP-MS/-AES and PIXE. The instrumental work and measurement quality control performed for obsidian chemical characterization underline that PIXE and ICP-MS/-AES provide reproducible, accurate and comparable measurements. In some volcanic districts the limited number of elements measured by PIXE is sufficient for the discrimination of the potential raw sources of obsidians. Therefore, PIXE can be an advantageous substitute for ICP-MS/-AES techniques for provenance studies
Evaluation of the use of envelope analysis and DWT on AE signals generated from degrading shafts
Gu, Dongsik; Kim, Jaegu; Kelimu, Tulugan; Huh, Sun-Chul; Choi, Byeong-Keun
Vibration analysis is widely used in machinery diagnosis. Wavelet transforms and envelope analysis, which have been implemented in many applications in the condition monitoring of machinery, are applied in the development of a condition monitoring system for the early detection of faults generated in several key components of machinery. Early fault detection is a very important factor in condition monitoring and a basic component for the application of condition-based maintenance (CBM) and predictive maintenance (PM). In addition, acoustic emission (AE) sensors are highly sensitive to high-frequency, low-energy signals. Therefore, the AE technique has recently been applied in studies on the early detection of failure. In this paper, AE signals caused by crack growth on a rotating shaft were captured through an AE sensor. The AE signatures were pre-processed using the proposed signal processing method, after which power spectra were generated from the FFT results. In the power spectrum, peaks at the fault frequencies are present. According to the results, crack growth in rotating machinery can be detected using an AE sensor and the signal processing method.
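The paper's exact pre-processing chain is not given in the abstract; the sketch below shows a generic envelope-analysis pipeline of the kind described (band-pass filter, Hilbert envelope, power spectrum of the envelope), in which fault-related peaks would appear. The band limits, sampling rate, and synthetic signal are assumptions, and the wavelet (DWT) stage is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_spectrum(ae_signal: np.ndarray, fs: float, band=(50e3, 200e3)):
    """Band-pass the AE signal, take its Hilbert envelope, and return the
    one-sided power spectrum of the (mean-removed) envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ae_signal)
    envelope = np.abs(hilbert(filtered))
    envelope -= envelope.mean()
    power = np.abs(np.fft.rfft(envelope)) ** 2
    freqs = np.fft.rfftfreq(envelope.size, d=1.0 / fs)
    return freqs, power

# Toy usage: 1 MHz sampling, 100 kHz AE carrier gated at a 120 Hz repetition rate
fs = 1e6
t = np.arange(0, 0.1, 1 / fs)
bursts = (np.sin(2 * np.pi * 120 * t) > 0.95).astype(float)
signal = np.sin(2 * np.pi * 100e3 * t) * bursts + 0.01 * np.random.randn(t.size)
freqs, power = envelope_spectrum(signal, fs)   # expect peaks near 120 Hz and its harmonics
```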
Hirao, Akira
The synthesis of well-defined regular and asymmetric mixed arm (hereinafter miktoarm) star-branched polymers by living anionic polymerization is reviewed in this chapter. In particular, much attention is devoted to the synthetic development of miktoarm star polymers since 2000. At the present time, almost all types of multiarmed and multicomponent miktoarm star polymers have become feasible using the recently developed iterative strategy. For example, the following well-defined stars have been successfully synthesized: 3-arm ABC, 4-arm ABCD, 5-arm ABCDE, 6-arm ABCDEF, 7-arm ABCDEFG, 6-arm ABC, 9-arm ABC, 12-arm ABC, 13-arm ABCD, 9-arm AB, 17-arm AB, 33-arm AB, 7-arm ABC, 15-arm ABCD, and 31-arm ABCDE miktoarm star polymers, most of which are quite new and were difficult to synthesize by the end of the 1990s. Several new specialty functional star polymers composed of vinyl polymer segments and rigid rodlike poly(acetylene) arms, helical polypeptide, or helical poly(hexyl isocyanate) arms are introduced.
Massive stars in galaxies
Humphreys, R.M.
The relationship between the morphologic type of a galaxy and the evolution of its massive stars is explored, reviewing observational results for nearby galaxies. The data are presented in diagrams, and it is found that the massive-star populations of most Sc spiral galaxies and irregular galaxies are similar, while those of Sb spirals such as M 31 and M 81 may be affected by morphology (via differences in the initial mass function or star-formation rate). Consideration is also given to the stability-related upper luminosity limit in the H-R diagram of hypergiant stars (attributed to radiation pressure in hot stars and turbulence in cool stars) and the goals of future observation campaigns. 88 references
Synthesis and structural characterization of the ternary Zintl phases AE3Al2Pn4 and AE3Ga2Pn4 (AE=Ca, Sr, Ba, Eu; Pn=P, As)
He, Hua; Tyson, Chauntae; Saito, Maia; Bobev, Svilen
Ten new ternary phosphides and arsenides with empirical formulae AE3Al2Pn4 and AE3Ga2Pn4 (AE=Ca, Sr, Ba, Eu; Pn=P, As) have been synthesized using molten Ga, Al, and Pb fluxes. They have been structurally characterized by single-crystal and powder X-ray diffraction to form with two different structures: Ca3Al2P4, Sr3Al2As4, Eu3Al2P4, Eu3Al2As4, Ca3Ga2P4, Sr3Ga2P4, Sr3Ga2As4, and Eu3Ga2As4 crystallize with the Ca3Al2As4 structure type (space group C2/c, Z=4); Ba3Al2P4 and Ba3Al2As4 adopt the Na3Fe2S4 structure type (space group Pnma, Z=4). The polyanions in both structures are made up of TrPn4 tetrahedra, which share common corners and edges to form 2∞[TrPn2]3− layers in the phases with the Ca3Al2As4 structure, and 1∞[TrPn2]3− chains in Ba3Al2P4 and Ba3Al2As4 with the Na3Fe2S4 structure type. The valence electron count for all of these compounds follows the Zintl–Klemm rules. Electronic band structure calculations confirm them to be semiconductors. - Graphical abstract: AE3Al2Pn4 and AE3Ga2Pn4 (AE=Ca, Sr, Ba, Eu; Pn=P, As) crystallize in two different structures: Ca3Al2P4, Sr3Al2As4, Eu3Al2P4, Eu3Al2As4, Ca3Ga2P4, Sr3Ga2P4, Sr3Ga2As4, and Eu3Ga2As4 are isotypic with the previously reported Ca3Al2As4 (space group C2/c, No. 15), while Ba3Al2P4 and Ba3Al2As4 adopt a different structure known for Na3Fe2S4 (space group Pnma, No. 62). The polyanions in both structures are made up of TrPn4 tetrahedra which, by sharing common corners and edges, form 2∞[TrPn2]3− layers in the former and 1∞[TrPn2]3− chains in Ba3Al2P4 and Ba3Al2As4. Highlights: ► AE3Ga2Pn4 (AE=Ca, Sr, Ba, Eu; Pn=P, As) are new ternary pnictides. ► Ba3Al2P4 and Ba3Al2As4 adopt the Na3Fe2S4 structure type. ► The Sr- and Ca-compounds crystallize with the Ca3
Evolution of massive stars
Loore, C. de
The evolution of stars with masses larger than 15 solar masses is reviewed. These stars have large convective cores and lose a substantial fraction of their matter by stellar wind. The treatment of convection and the parameterisation of the stellar wind mass loss are analysed within the context of existing disagreements between theory and observation. The evolution of massive close binaries and the origin of Wolf-Rayet stars and X-ray binaries are also sketched. (author)
Fast pulsars, strange stars
Glendenning, N.K.
The initial motivation for this work was the reported discovery in January 1989 of a 1/2 millisecond pulsar in the remnant of the spectacular supernova, 1987A. The status of this discovery has come into grave doubt in light of data taken by the same group in February 1990. At this time we must consider that the millisecond signal does not belong to the pulsar. The existence of a neutron star in the remnant of the supernova is suspected because of recent observations of the light curve of the remnant, and of course because of the neutrino burst that announced the supernova; however, its frequency is unknown. I can make a strong case that a pulsar rotation period of about 1 ms divides those that can be understood quite comfortably as neutron stars from those that cannot. What we will soon learn is whether there is an invisible boundary below which pulsar periods do not fall, in which case all are presumably neutron stars, or whether there exist sub-millisecond pulsars, which almost certainly cannot be neutron stars. Their most plausible structure is that of a self-bound star, a strange-quark-matter star. The existence of such stars would imply that the ground state of the strong interaction is not, as we usually assume, hadronic matter, but rather strange quark matter. Let us look respectively at stars that are bound only by gravity, and hypothetical stars that are self-bound, for which gravity is, so to speak, icing on the cake
Baumbach, Jan; Guo, Jiong; Ibragimov, Rashid
We study the tree edit distance problem with edge deletions and edge insertions as edit operations. We reformulate a special case of this problem as Covering Tree with Stars (CTS): given a tree T and a set of stars, can we connect the stars in by adding edges between them such that the resulting tree is isomorphic to T? We prove that in the general setting, CTS is NP-complete, which implies that the tree edit distance considered here is also NP-hard, even when both input trees have diameters bounded by 10. We also show that, when the number of distinct stars is bounded by a constant k, CTS...
Introduction to neutron stars
Lattimer, James M. [Dept. of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794-3800 (United States)
Neutron stars contain the densest form of matter in the present universe. General relativity and causality set important constraints to their compactness. In addition, analytic GR solutions are useful in understanding the relationships that exist among the maximum mass, radii, moments of inertia, and tidal Love numbers of neutron stars, all of which are accessible to observation. Some of these relations are independent of the underlying dense matter equation of state, while others are very sensitive to the equation of state. Recent observations of neutron stars from pulsar timing, quiescent X-ray emission from binaries, and Type I X-ray bursts can set important constraints on the structure of neutron stars and the underlying equation of state. In addition, measurements of thermal radiation from neutron stars have uncovered the possible existence of neutron and proton superfluidity/superconductivity in the core of a neutron star, as well as offering powerful evidence that typical neutron stars have significant crusts. These observations impose constraints on the existence of strange quark matter stars, and limit the possibility that abundant deconfined quark matter or hyperons exist in the cores of neutron stars.
Strangeon and Strangeon Star
Xiaoyu, Lai; Renxin, Xu
The nature of pulsar-like compact stars is essentially a central question of the fundamental strong interaction (described by quantum chromodynamics) at low energy scales, the solution of which still remains a challenge despite the tremendous efforts that have been made. This kind of compact object could actually be a strange quark star if strange quark matter in bulk constitutes the true ground state of strong-interaction matter rather than 56Fe (the so-called Witten's conjecture). From an astrophysical point of view, however, it is proposed that strange cluster matter could be absolutely stable and thus those compact stars could in fact be strange cluster stars. This proposal can be regarded as a generalized Witten's conjecture: strange matter in bulk could be absolutely stable, in which quarks are either free (for strange quark matter) or localized (for strange cluster matter). A strange cluster with three-light-flavor symmetry is renamed a strangeon, coined by combining "strange" and "nucleon" for the sake of simplicity. A strangeon star can then be thought of as a 3-flavored gigantic nucleus, and strangeons are its constituents, as an analogy of nucleons being the constituents of a normal (micro) nucleus. The observational consequences of strangeon stars show that different manifestations of pulsar-like compact stars could be understood in the regime of strangeon stars, and we are expecting more evidence for strangeon stars from advanced facilities (e.g., FAST, SKA, and eXTP).
Interacting binary stars
Sahade, Jorge; Ter Haar, D
Interacting Binary Stars deals with the development, ideas, and problems in the study of interacting binary stars. The book consolidates the information that is scattered over many publications and papers and gives an account of important discoveries with relevant historical background. Chapters are devoted to the presentation and discussion of the different facets of the field, such as historical account of the development in the field of study of binary stars; the Roche equipotential surfaces; methods and techniques in space astronomy; and enumeration of binary star systems that are studied
Polarization of Be stars
Johns, M.W.
Linear polarization of starlight may be produced by electron scattering in the extended atmospheres of early-type stars. Techniques are investigated for the measurement and interpretation of this polarization. Polarimetric observations were made of twelve visual double star systems in which at least one member was a B-type star, as a means of separating the intrinsic stellar polarization from the polarization produced in the interstellar medium. Four of the double stars contained a Be star. Evidence for intrinsic polarization was found in five systems, including two of the Be systems, one double star with a short-period eclipsing binary, and two systems containing only normal early-type stars for which emission lines have not been previously reported. The interpretation of these observations in terms of individual stellar polarizations and their wavelength dependence is discussed. The theoretical basis for the intrinsic polarization of early-type stars is explored with a model for the disk-like extended atmospheres of Be stars. Details of a polarimeter for the measurement of the linear polarization of astronomical point sources are also presented, with narrow-band (Δλ = 100 Å) measurements of the polarization of γ Cas from λ4000 to λ5800 Å
ENERGY STAR Unit Reports
Department of Housing and Urban Development — These quarterly Federal Fiscal Year performance reports track the ENERGY STAR qualified HOME units that Participating Jurisdictions record in HUD's Integrated...
Environmental application of XRF, ICP-AES and INAA on biological matrix
Zararsiz, A.; Dogangun, A.; Tuncel, S.
Full text: It is very important to determine trace quantities of metals in different matrices with high accuracy, since the metals are used as markers for different sources in air pollution studies. In this study, the analytical capabilities of the XRF, ICP-AES and INAA techniques were tested on a biological matrix, namely lichens, which are widely used as bio-monitoring organisms for mapping pollutants in the atmosphere. Lichen samples were collected in the Aegean Region of Turkey, where pollution is an important issue. 9 elements were determined by XRF, 14 elements by ICP-AES and 13 elements by INAA. Quality assurance was achieved using lichen SRM (IAEA-336) and Orchard Leaves SRM (NIST-1571). The produced data were subjected to statistical tests, such as the t-test and Q-test, in order to determine the accuracy and precision of each technique. A recommendation list of the proper analytical technique for the determination of each specific element was obtained, considering the analytical capabilities of ICP-AES, XRF and INAA. As a result, we can recommend ICP-AES as the first choice for Cd, Cu and Mg, INAA for In, K and Rb, and XRF for Br, if the concentrations are not close to the detection limit of XRF. For V, Cr, Al, Na and Fe, ICP-AES and INAA both work well; for Pb, ICP-AES and XRF both work well, if the concentrations are not close to the detection limit of XRF; for Mn and Ca, INAA, XRF and ICP-AES all give similar results for this type of biological matrix
Stars and Flowers, Flowers and Stars
Minti, Hari
The author, a graduate of Bucharest University (1964) now living and working in Israel, devotes his book to variable stars and flowers, two domains of his interest. The analogies include double stars, eclipsing double stars, eclipses, and the Big Bang. The book contains 34 chapters, each of which concerns various relations between astronomy and other sciences and pseudosciences such as psychology, religion, geology, computers and astrology (to which the author is not an adherent). A special part of the book is dedicated to archeoastronomy and ethnoastronomy, as well as to the history of astronomy. Among the main points of interest of these parts are ancient sanctuaries in Sarmizegetusa (Dacia), Stonehenge (UK) and others. The last chapter of the book is dedicated to flowers. The book is richly illustrated. It is designed for a wide circle of readers.
Correlation between Earthquakes and AE Monitoring of Historical Buildings in Seismic Areas
Giuseppe Lacidogna
Full Text Available In this contribution a new method for evaluating seismic risk in regional areas based on the acoustic emission (AE) technique is proposed. Most earthquakes have precursors, i.e., phenomena of changes in the Earth's physical-chemical properties that take place prior to an earthquake. Acoustic emissions in materials and earthquakes in the Earth's crust, despite the fact that they take place on very different scales, are very similar phenomena; both are caused by a release of elastic energy from a source located in a medium. For the AE monitoring, two important constructions of Italian cultural heritage are considered: the chapel of the "Sacred Mountain of Varallo" and the "Asinelli Tower" of Bologna. They were monitored during earthquake sequences in their respective areas. By using the Grassberger-Procaccia algorithm, a statistical method of analysis was developed that detects AEs as earthquake precursors or aftershocks. Under certain conditions it was observed that AEs precede earthquakes. These considerations reinforce the idea that AE monitoring can be considered an effective tool for earthquake risk evaluation.
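The record above names the Grassberger-Procaccia algorithm without details. As a hedged illustration of the underlying idea (not the authors' actual analysis), the sketch below estimates the correlation integral C(r) for a set of event coordinates; the slope of log C(r) versus log r approximates the correlation dimension. All names, data, and parameters are illustrative assumptions.

```python
import numpy as np

def correlation_integral(points, radii):
    """Grassberger-Procaccia correlation integral C(r) for a point set.

    points : (N, d) array of event coordinates (e.g., AE hypocentres or times).
    radii  : 1-D array of distance thresholds r.
    Returns C(r) = fraction of distinct point pairs closer than r.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # pairwise Euclidean distances, upper triangle only (distinct pairs)
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    pair_d = dist[np.triu_indices(n, k=1)]
    return np.array([(pair_d < r).mean() for r in radii])

def correlation_dimension(points, radii):
    """Estimate the correlation dimension as the slope of log C(r) vs log r."""
    c = correlation_integral(points, radii)
    mask = c > 0
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(c[mask]), 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    events = rng.random((500, 2))          # synthetic 2-D event locations
    radii = np.logspace(-2, -0.5, 15)
    print("estimated correlation dimension:",
          round(correlation_dimension(events, radii), 2))
```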
The Growing-up of a Star
Using ESO's Very Large Telescope Interferometer, astronomers have probed the inner parts of the disc of material surrounding a young stellar object, witnessing how it gains its mass before becoming an adult. ESO PR Photo 03a/08: the disc around MWC 147 (artist's impression). The astronomers had a close look at the object known as MWC 147, lying about 2,600 light-years away towards the constellation of Monoceros ('the Unicorn'). MWC 147 belongs to the family of Herbig Ae/Be objects. These have a few times the mass of our Sun and are still forming, increasing in mass by swallowing material present in a surrounding disc. MWC 147 is less than half a million years old. If one associated the middle-aged, 4.6-billion-year-old Sun with a person in his early forties, MWC 147 would be a 1-day-old baby [1]. The morphology of the inner environment of these young stars is, however, a matter of debate, and knowledge of it is important to better understand how stars and their cortège of planets form. The astronomers Stefan Kraus, Thomas Preibisch, and Keiichi Ohnaka have used the four 8.2-m Unit Telescopes of ESO's Very Large Telescope to this purpose, combining the light from two or three telescopes with the MIDI and AMBER instruments. "With our VLTI/MIDI and VLTI/AMBER observations of MWC 147, we combine, for the first time, near- and mid-infrared interferometric observations of a Herbig Ae/Be star, providing a measurement of the disc size over a wide wavelength range [2]," said Stefan Kraus, lead author of the paper reporting the results. "Different wavelength regimes trace different temperatures, allowing us to probe the disc's geometry on the smaller scale, but also to constrain how the temperature changes with the distance from the star." The near-infrared observations probe hot material with temperatures of up to a few thousand degrees in the innermost disc regions, while the mid-infrared observations trace cooler dust further out in the disc. The...
Science Through ARts (STAR)
Kolecki, Joseph; Petersen, Ruth; Williams, Lawrence
Science Through ARts (STAR) is an educational initiative designed to teach students through a multidisciplinary approach to learning. This presentation describes the STAR pilot project, which will use Mars exploration as the topic to be integrated. Schools from the United Kingdom, Japan, the United States, and possibly eastern Europe are expected to participate in the pilot project.
European Stars and Stripes
Hendricks, Nancy
The European Stars and Stripes (ES&S) organization publishes a daily newspaper, The Stars and Stripes, for DoD personnel stationed in Germany, Italy, the United Kingdom, and other DoD activities in the U.S. European Command...
Nebraska STARS: Achieving Results
Roschewski, Pat; Isernhagen, Jody; Dappen, Leon
In 2000, the state of Nebraska passed legislation requiring the assessment of student performance on content standards, but its requirements were very different from those of any other state. Nebraska created what has come to be known as STARS (School-based Teacher-led Assessment and Reporting System). Under STARS, each of Nebraska's nearly 500…
Convective overshooting in stars
Andrássy, R.
Numerous observations provide evidence that the standard picture, in which convective mixing is limited to the unstable layers of a star, is incomplete. The mixing layers in real stars are significantly more extended than what the standard models predict. Some of the observations require changing
By Draconis Stars
Bopp, Bernard W.
An optical spectroscopic survey of dK-M stars has resulted in the discovery of several new H-alpha emission objects. Available optical data suggest these stars have a level of chromospheric activity midway between active BY Dra stars and quiet dM's. These "marginal" BY Dra stars are single objects that have rotation velocities slightly higher than that of quiet field stars but below that of active flare/BY Dra objects. The marginal BY Dra stars provide us with a class of objects rotating very near a "trigger velocity" (believed to be 5 km/s) which appears to divide active flare/BY Dra stars from quiet dM's. UV data on Mg II emission fluxes and strength of transition region features such as C IV will serve to fix activity levels in the marginal objects and determine chromosphere and transition-region heating rates. Simultaneous optical magnetic field measures will be used to explore the connection between fieldstrength/filling-factor and atmospheric heating. Comparison of these data with published information on active and quiet dM stars will yield information on the character of the stellar dynamo as it makes a transition from "low" to "high" activity.
Observing Double Stars
Genet, Russell M.; Fulton, B. J.; Bianco, Federica B.; Martinez, John; Baxter, John; Brewer, Mark; Carro, Joseph; Collins, Sarah; Estrada, Chris; Johnson, Jolyon; Salam, Akash; Wallen, Vera; Warren, Naomi; Smith, Thomas C.; Armstrong, James D.; McGaughey, Steve; Pye, John; Mohanan, Kakkala; Church, Rebecca
Double stars have been systematically observed since William Herschel initiated his program in 1779. In 1803 he reported that, to his surprise, many of the systems he had been observing for a quarter century were gravitationally bound binary stars. In 1830 the first binary orbital solution was obtained, leading eventually to the determination of stellar masses. Double star observations have been a prolific field, with observations and discoveries - often made by students and amateurs - routinely published in a number of specialized journals such as the Journal of Double Star Observations. All published double star observations from Herschel's to the present have been incorporated in the Washington Double Star Catalog. In addition to reviewing the history of visual double stars, we discuss four observational technologies and illustrate these with our own observational results from both California and Hawaii on telescopes ranging from small SCTs to the 2-meter Faulkes Telescope North on Haleakala. Two of these technologies are visual observations aimed primarily at published "hands-on" student science education, and CCD observations of both bright and very faint doubles. The other two are recent technologies that have launched a double star renaissance. These are lucky imaging and speckle interferometry, both of which can use electron-multiplying CCD cameras to allow short (30 ms or less) exposures that are read out at high speed with very low noise. Analysis of thousands of high speed exposures allows normal seeing limitations to be overcome so very close doubles can be accurately measured.
NEAR-INFRARED IMAGING OF THE STAR-FORMING REGIONS SH2-157 AND SH2-152
Chen Yafeng; Yang Ji; Zeng Qin; Yao Yongqiang; Sato, Shuji
Near-infrared JHK' and H2 v = 1-0 S(1) imaging observations of the star-forming regions Sh2-157 and Sh2-152 are presented. The data reveal a cluster of young stars associated with H2 line emission in each region. Additionally, many IR point sources are found in the dense core of each molecular cloud. Most of these sources exhibit infrared color excesses typical of T Tauri stars, Herbig Ae/Be stars, and protostars. Several display the characteristics of massive stars. We calculate histograms of the K'-magnitude and [H - K'] color for all sources, as well as two-color and color-magnitude diagrams. The stellar populations inside and outside the clusters are similar, suggesting that these systems are rather evolved. Shock-driven H2 emission knots are also detected, which may be related to evident subclusters in an earlier evolutionary stage.
Neutron Stars and Pulsars
Becker, Werner
Neutron stars are the most compact astronomical objects in the universe which are accessible by direct observation. Studying neutron stars means studying physics in regimes unattainable in any terrestrial laboratory. Understanding their observed complex phenomena requires a wide range of scientific disciplines, including the nuclear and condensed matter physics of very dense matter in neutron star interiors, plasma physics and quantum electrodynamics of magnetospheres, and the relativistic magneto-hydrodynamics of electron-positron pulsar winds interacting with some ambient medium. Not to mention the test bed neutron stars provide for general relativity theories, and their importance as potential sources of gravitational waves. It is this variety of disciplines which, among others, makes neutron star research so fascinating, not only for those who have been working in the field for many years but also for students and young scientists. The aim of this book is to serve as a reference work which not only reviews...
Spectrophotometry of carbon stars
Oganesyan, R.K.; Karapetyan, M.S.; Nersisyan, S.E.
The results are given of the spectrophotometric investigation of 56 carbon stars in the spectral range from 4000 to 6800 Å with a resolution of 3 Å. The observed energy distributions of these stars are determined relative to the flux at the wavelength λ0 = 5556 Å; they are presented in the form of graphs. The energy distributions have been obtained for the first time for 35 stars. Variation in the Ba II 4554 Å line has been found in the spectra of St Cam, UU Aur, and RV Mon. Large changes have taken place in the spectra of RT UMa and SS Vir. It is noted that the spectra of carbon stars have a depression, this being situated in different spectral regions for individual groups of stars.
Rotating stars in relativity.
Paschalidis, Vasileios; Stergioulas, Nikolaos
Rotating relativistic stars have been studied extensively in recent years, both theoretically and observationally, because of the information they might yield about the equation of state of matter at extremely high densities and because they are considered to be promising sources of gravitational waves. The latest theoretical understanding of rotating stars in relativity is reviewed in this updated article. The sections on equilibrium properties and on nonaxisymmetric oscillations and instabilities in f-modes and r-modes have been updated. Several new sections have been added on equilibria in modified theories of gravity, approximate universal relationships, the one-arm spiral instability, on analytic solutions for the exterior spacetime, rotating stars in LMXBs, rotating strange stars, and on rotating stars in numerical relativity including both hydrodynamic and magnetohydrodynamic studies of these objects.
On the evolution of stars
Kippenhahn, R.
A popular survey is given of the present knowledge on the evolution and ageing of stars. Main-sequence stars, white dwarf stars, and red giant stars are classified in the Hertzsprung-Russell (HR) diagram by measurable quantities: surface temperature and luminosity. From the HR diagram, the mass and age of a star can be inferred. Star-forming processes in interstellar clouds as well as stellar burning processes are illustrated. The changes occurring in a star due to the depletion of its nuclear energy reserve are described. In this frame the phenomena of planetary nebulae, supernovae, pulsars, neutron stars as well as black holes are explained
Monte Carlo simulation of star/linear and star/star blends with chemically identical monomers
Theodorakis, P E [Department of Materials Science and Engineering, University of Ioannina, 45110 Ioannina (Greece); Avgeropoulos, A [Department of Materials Science and Engineering, University of Ioannina, 45110 Ioannina (Greece); Freire, J J [Departamento de Ciencias y Tecnicas FisicoquImicas, Universidad Nacional de Educacion a Distancia, Facultad de Ciencias, Senda del Rey 9, 28040 Madrid (Spain); Kosmas, M [Department of Chemistry, University of Ioannina, 45110 Ioannina (Greece); Vlahos, C [Department of Chemistry, University of Ioannina, 45110 Ioannina (Greece)
The effects of chain size and architectural asymmetry on the miscibility of blends with chemically identical monomers, differing only in their molecular weight and architecture, are studied via Monte Carlo simulation by using the bond fluctuation model. Namely, we consider blends composed of linear/linear, star/linear and star/star chains. We found that linear/linear blends are more miscible than the corresponding star/star mixtures. In star/linear blends, the increase in the volume fraction of the star chains increases the miscibility. For both star/linear and star/star blends, the miscibility decreases with the increase in star functionality. When we increase the molecular weight of linear chains of star/linear mixtures the miscibility decreases. Our findings are compared with recent analytical and experimental results.
Integrated interpretation of AE clusters and fracture system in Hijiori HDR artificial reservoir; Hijiori koon gantai jinko choryuso no AE cluster to kiretsu system ni kansuru togoteki kaishaku
Tezuka, K [Japan Petroleum Exploration Corp., Tokyo (Japan); Niitsuma, H [Tohoku University, Sendai (Japan)
With regard to the fracture system in the Hijiori hot dry rock artificial reservoir, an attempt was made at an interpretation that integrates different data. The major factors that characterize the development and performance of an artificial reservoir are a fracture system in the rocks, which acts as circulating water paths, a heat exchange face, and a reservoir space. The system relates not only to the crack density distribution, but also to cracks activated by water pressure fracturing, cracks generating acoustic emission (AE), and cracks working as major flow paths, all of which are characterized by their respective behaviors and roles. Characteristics are shown for the AE cluster distribution, crack distribution, production zone and estimated stress fields. The mutual relationship among these elements was discussed based on Coulomb's theory. The most important paths are characterized by the distribution of slippery cracks. The directions and appearance frequencies of the slippery cracks strongly affect the directionality of the paths, which are governed by the distribution of the cracks (weak faces) and the stress field. Among the slippery cracks, those that generate AE are cracks that release large energy when a slip occurs. Evaluation of the slippery crack distribution is important. 7 refs., 8 figs.
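The Coulomb-based interpretation mentioned above can be summarized by the standard Mohr-Coulomb slip condition. The form below is a generic textbook statement added for orientation; the symbols are illustrative assumptions and are not taken from the record.

```latex
% Mohr-Coulomb condition for slip on a pre-existing crack (textbook form):
% slip is favoured when the resolved shear stress exceeds the frictional
% resistance on the crack plane, reduced by the pore-fluid pressure p.
\[
  \tau \;\ge\; c \;+\; \mu\,(\sigma_n - p)
\]
% where \tau is the shear stress on the crack plane, \sigma_n the normal
% stress, p the pore pressure (raised during hydraulic stimulation),
% \mu the friction coefficient, and c the cohesion. Raising p lowers the
% effective normal stress and brings favourably oriented cracks to slip.
```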
Analyzing Permutations for AES-like Ciphers: Understanding ShiftRows
Beierle, Christof; Jovanovic, Philipp; Lauridsen, Martin Mehl
Designing block ciphers and hash functions in a manner that resembles the AES in many aspects has been very popular since Rijndael was adopted as the Advanced Encryption Standard. However, in sharp contrast to the MixColumns operation, the security implications of the way the state is permuted by the operation resembling ShiftRows have never been studied in depth. Here, we provide the first structured study of the influence of ShiftRows-like operations, or more generally, word-wise permutations, in AES-like ciphers with respect to diffusion properties and resistance towards differential- and linear ... normal form. Using a mixed-integer linear programming approach, we obtain optimal parameters for a wide range of AES-like ciphers, and show improvements on parameters for Rijndael-192, Rijndael-256, PRIMATEs-80 and Prøst-128. As a separate result, we show for specific cases of the state geometry...
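As a quick illustration of the kind of word-wise permutation discussed above, the sketch below applies the standard AES ShiftRows step to a 4x4 byte state: row i is rotated left by i positions. This is the generic textbook operation, not the paper's generalized analysis, and the toy state values are assumptions for the example.

```python
def shift_rows(state):
    """Apply an AES-style ShiftRows permutation to a 4x4 state.

    state: list of 4 rows, each a list of 4 byte values.
    Row i is rotated left by i positions, so every output column contains
    bytes from four different input columns (this is what aids diffusion).
    """
    return [row[i:] + row[:i] for i, row in enumerate(state)]

if __name__ == "__main__":
    # A toy state whose entries encode (row, column) for easy inspection.
    state = [[10 * r + c for c in range(4)] for r in range(4)]
    for row in shift_rows(state):
        print(row)
    # Expected output:
    # [0, 1, 2, 3]
    # [11, 12, 13, 10]
    # [22, 23, 20, 21]
    # [33, 30, 31, 32]
```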
COMBINATION OF THE AES, RC4 AND ELGAMAL ALGORITHMS IN A HYBRID SCHEME FOR DATA SECURITY
Adi Widarma
Full Text Available Sending or exchanging data is a routine activity in information technology. The transmitted data often contain important, even highly confidential, information whose security must be protected. Data security can be maintained by using cryptographic techniques. The AES and RC4 algorithms are symmetric algorithms. The weakness of symmetric algorithms is that encryption and decryption use the same key. To address this, the Elgamal algorithm is used. Elgamal is an asymmetric algorithm and is used here to protect the keys of the AES and RC4 algorithms. The security of both messages and keys is strengthened with a hybrid scheme: combining several algorithms, both symmetric and asymmetric, increases security and makes the system more powerful (Jain & Agrawal 2014). In this study, a hybrid method is applied that combines several cryptographic algorithms, using the Advanced Encryption Standard (AES) and RC4 for data confidentiality and the Elgamal algorithm for key encryption and decryption.
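To make the hybrid idea concrete, here is a minimal sketch of the structure described above: a symmetric cipher protects the payload while ElGamal protects the session key. It is an illustrative assumption of how such a scheme could be wired together, not the authors' implementation; it uses the third-party `cryptography` package for AES-CTR and a textbook ElGamal over the Mersenne prime 2^521 - 1 with toy parameters, which is not secure for real use.

```python
import os
import secrets
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# --- textbook ElGamal over a fixed prime (toy parameters, NOT secure) ------
P = 2**521 - 1        # a Mersenne prime, large enough to hold a 128-bit key
G = 3                 # fixed base for the demonstration

def elgamal_keygen():
    x = secrets.randbelow(P - 2) + 1           # private key
    return x, pow(G, x, P)                     # (private, public)

def elgamal_encrypt(m_int, y_pub):
    k = secrets.randbelow(P - 2) + 1
    return pow(G, k, P), (m_int * pow(y_pub, k, P)) % P

def elgamal_decrypt(c1, c2, x_priv):
    s_inv = pow(c1, P - 1 - x_priv, P)         # (c1^x)^(-1) via Fermat
    return (c2 * s_inv) % P

# --- AES-CTR for the bulk data ----------------------------------------------
def aes_ctr(key, nonce, data):
    cipher = Cipher(algorithms.AES(key), modes.CTR(nonce),
                    backend=default_backend())
    enc = cipher.encryptor()
    return enc.update(data) + enc.finalize()

# --- hybrid flow: AES protects the message, ElGamal protects the AES key ---
if __name__ == "__main__":
    priv, pub = elgamal_keygen()

    session_key = os.urandom(16)               # 128-bit AES session key
    nonce = os.urandom(16)
    ciphertext = aes_ctr(session_key, nonce, b"confidential payload")

    # wrap the session key with ElGamal
    c1, c2 = elgamal_encrypt(int.from_bytes(session_key, "big"), pub)

    # receiver side: unwrap the key, then decrypt (CTR mode is symmetric)
    recovered = elgamal_decrypt(c1, c2, priv).to_bytes(16, "big")
    print(aes_ctr(recovered, nonce, ciphertext))
```

RC4 is omitted from the sketch because it is deprecated; in the scheme described by the record it plays the same payload-confidentiality role as AES.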
Evolution of microstructure and hardness of AE42 alloy after heat treatments
Huang, Y.D.; Dieringa, H.; Hort, N.
The AE42 magnesium alloy was developed for high pressure die casting (HPDC) from low-aluminum magnesium alloys. In this alloy the rare earth (RE) elements were shown to increase creep resistance by forming AlxREy intermetallics along the grain boundaries. The present work investigates the microstructure of squeeze-cast AE42 magnesium alloy and evaluates its hardness before and after heat treatments. It is shown that the microstructure of the squeeze-cast AE42 alloy is stable at the high temperature of 450 °C. The subsequent solution and ageing treatments have a limited effect on the hardness. The weak age-hardening is attributed to the precipitation of a small amount of Mg17Al12. The change in hardness is discussed based on the microstructural observations. Some suggestions are given concerning future design of alloy compositions in order to improve high temperature creep properties even further.
The effect of CFRP on retrofitting of damaged HSRC beams using AE technique
Soffian Noor, M. S.; Noorsuhada, M. N.
This paper presents the effect of carbon fibre reinforced polymer (CFRP) on retrofitted high strength reinforced concrete (HSRC) beams using the acoustic emission (AE) technique. Two RC beam configurations were prepared. The first was the control beam, an undamaged HSRC beam. The second was a damaged HSRC beam retrofitted with CFRP on the soffit. The main objective of this study is to assess the crack modes of HSRC beams using AE signal strength. The relationships between signal strength, load and time were analysed and discussed. The crack pattern obtained from visual observation was also investigated. The HSRC beam retrofitted with CFRP produced higher signal strength than the control beam. This demonstrates the usefulness of AE signal strength for interpreting and predicting the failure modes that might occur in the beam specimens.
Some aspects of ICP-AES analysis of high purity rare earths
Murty, P.S.; Biswas, S.S.
Inductively coupled plasma atomic emission spectrometry (ICP-AES) is a technique capable of giving high sensitivity in trace elemental analysis. While the technique possesses high sensitivity, it lacks high selectivity. Selectivity is important where substances emitting complex spectra are to be analysed for trace elements. Rare earths emit highly complex spectra in a plasma source, and the determination of adjacent rare earths in a high purity rare earth matrix with high sensitivity is not possible due to the inadequate selectivity of ICP-AES. One approach that has yielded reasonably good spectral selectivity in high purity rare earth analysis by ICP-AES is a combination of wavelength modulation techniques and a high resolution echelle grating. However, it was found that by using a high resolution monochromator, sensitivities either comparable to or better than those reported by the wavelength modulation technique could be obtained. (author). 2 refs., 2 figs., 2 tabs
Estimation of metal ions during the dissolution of corrosion product oxides by ICP-AES
Sathyaseelan, V.S.; Mittal, Vinit Kumar; Velmurugan, S.
Inductively coupled plasma atomic emission spectroscopy (ICP-AES) is one of several techniques widely used for both qualitative and quantitative estimation of various elements. Determination of metals in wine, arsenic in food, trace elements in soil, nutrient levels in agricultural soils, trace elements bound to proteins, and motor oil analysis are examples of its applications. ICP-AES utilizes plasma as the atomization and excitation source. The plasma temperature in the analytical zone ranges from 5000-8000 K. This high temperature assures that most of the compounds in the samples are completely atomized. Advantages of ICP-AES are high sensitivity and linear dynamic range, multi-element capability, low chemical interference and a stable and reproducible signal. The typical limits of detection for elements such as Mg, Sr, Ca, Li and Cu are well below 1 μg L^-1
Star Cluster Structure from Hierarchical Star Formation
Grudic, Michael; Hopkins, Philip; Murray, Norman; Lamberts, Astrid; Guszejnov, David; Schmitz, Denise; Boylan-Kolchin, Michael
Young massive star clusters (YMCs) spanning 10^4-10^8 M⊙ in mass generally have similar radial surface density profiles, with an outer power-law index typically between -2 and -3. This similarity suggests that they are shaped by scale-free physics at formation. Recent multi-physics MHD simulations of YMC formation have also produced populations of YMCs with this type of surface density profile, allowing us to narrow down the physics necessary to form a YMC with properties as observed. We show that the shallow density profiles of YMCs are a natural result of phase-space mixing that occurs as they assemble from the clumpy, hierarchically-clustered configuration imprinted by the star formation process. We develop physical intuition for this process via analytic arguments and collisionless N-body experiments, elucidating the connection between star formation physics and star cluster structure. This has implications for the early-time structure and evolution of proto-globular clusters, and prospects for simulating their formation in the FIRE cosmological zoom-in simulations.
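For readers unfamiliar with the profile language used above, the statement about outer power-law indices can be written schematically as below. This is a generic parameterization added for orientation, not an equation quoted from the paper.

```latex
% Outer part of a young massive cluster's radial surface density profile,
% written as a power law in projected radius R:
\[
  \Sigma(R) \;\propto\; R^{-\gamma},
  \qquad 2 \lesssim \gamma \lesssim 3 \quad \text{(outer regions)}
\]
% i.e. the projected stellar surface density falls off with an outer
% power-law index between -2 and -3, as quoted in the abstract.
```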
Mineral distribution in rice: Measurement by Microwave Plasma Atomic Emission Spectroscopy (MP-AES)
Ramos, Nerissa C.; Ramos, R.G.A.; Quirit, L.L.; Arcilla, C.A.
Microwave Plasma Atomic Emission Spectroscopy (MP-AES) is a new technology with performance and sensitivity comparable to Inductively Coupled Plasma Optical Emission Spectroscopy (ICP-OES). Both instruments use plasma as the energy source that produces atomic and ionic emission lines. However, MP-AES uses nitrogen as the plasma gas instead of argon, which is an additional expense for ICP-OES; thus, MP-AES is more economical. This study quantified six essential minerals (Se, Zn, Fe, Cu, Mn and K) in rice using MP-AES. Hot plate digestion was used for sample extraction, and the detection limit of each instrument was compared with respect to the requirements for routine analysis in rice. Black, red and non-pigmented rice samples were polished at various intervals to determine the concentration loss of minerals. The polishing times correspond to the structure of the rice grain: outer bran layer (0 to 15 s), inner bran layer (15 to 30 s), outer endosperm layer (30 to 45 s), and middle endosperm layer (45 to 60 s). Results of the MP-AES analysis showed that black rice had all essential minerals (except K) in high concentration in the outer bran layer. The red and non-pigmented rice samples, on the other hand, contained high levels of Se, Zn, Fe, and Mn in the whole bran portion. After 25 seconds, the mineral concentrations remained constant. The concentration of Cu, however, was consistent across all polishing intervals; hence Cu might be located in the inner endosperm layer. Results also showed that K was uniformly distributed in all samples, with a 5% loss consistently observed for every polishing interval; therefore, the concentration of K was also affected by polishing time. Thus, the new MP-AES technology, with performance comparable to ICP-OES, is a promising tool for routine analysis in rice. (author)
Making star teams out of star players.
Mankins, Michael; Bird, Alan; Root, James
Top talent is an invaluable asset: In highly specialized or creative work, for instance, "A" players are likely to be six times as productive as "B" players. So when your company has a crucial strategic project, why not multiply all that firepower and have a team of your best performers tackle it? Yet many companies hesitate to do this, believing that all-star teams don't work: Big egos will get in the way. The stars won't be able to work with one another. They'll drive the team leader crazy. Mankins, Bird, and Root of Bain & Company believe it's time to set aside that thinking. They have seen all-star teams do extraordinary work. But there is a right way and a wrong way to organize them. Before you can even begin to assemble such a team, you need to have the right talent management practices, so you hire and develop the best people and know what they're capable of. You have to give the team appropriate incentives and leaders and support staffers who are stars in their own right. And projects that are ill-defined or small scale are not for all-star teams. Use them only for critical missions, and make sure their objectives are clear. Even with the right setup, things can still go wrong. The wise executive will take steps to manage egos, prune non-team-players, and prevent average coworkers from feeling completely undervalued. She will also invest a lot of time in choosing the right team leader and will ask members for lots of feedback to monitor how that leader is doing.
Complementary AES and AEM of grain boundary regions in irradiated γ'-strengthened alloys
Farrell, K.; Kishimoto, N.; Clausing, R.E.; Heatherly, L.; Lehman, G.L.
Two microchemical analysis techniques are used to measure solute segregation at grain boundaries in two γ'-strengthened, fcc Fe-Ni-Cr alloys that display radiation-induced intergranular fracture. Scanning Auger electron spectroscopy (AES) of grain boundary fracture surfaces and analytical electron microscopy (AEM) of intact grain boundaries using energy-dispersive x-ray spectroscopy show good agreement on the nature and extent of segregation. The elements Ni, Si, Ti, and Mo are found to accumulate in G, Laves and γ' phases on the grain boundaries. Segregation of P is detected by AES. The complementary features of the two analytical techniques are discussed briefly
Uses of AES and RGA to study neutron-irradiation-enhanced segregation to internal surfaces
Gessel, G.R.; White, C.L.
The high flux of point defects to sinks during neutron irradiation can result in segregation of impurity or alloy additions to metals. Such segregants can be preexisting or produced by neutron-induced transmutations. This segregation is known to strongly influence swelling and mechanical properties. Over a period of years, facilities have been developed at ORNL incorporating AES and RGA to examine irradiated materials. Capabilities of this system include in situ tensile fracture at elevated temperatures under ultrahigh vacuum (10^-10 torr) and helium release monitoring. AES and normal incidence inert ion sputtering are exploited to examine segregation at the fracture surface and chemical gradients near the surface
Hello Darkness My Old Friend: The Fading of the Nearby TDE ASASSN-14ae
Brown, Jonathan S.; Shappee, Benjamin J.; Holoien, T. W. -S; Stanek, K. Z.; Kochanek, C. S.; Prieto, J. L.
We present late-time optical spectroscopy taken with the Large Binocular Telescope's Multi-Object Double Spectrograph, an improved ASAS-SN pre-discovery non-detection, and late-time Swift observations of the nearby (d = 193 Mpc, z = 0.0436) tidal disruption event (TDE) ASASSN-14ae. Our observations span from ~20 days before to ~750 days after discovery. The proximity of ASASSN-14ae allows us to study the optical evolution of the flare and the transition to a host-dominated state wit...
Studies on the male sterility-fertility restoration system of AE. Kotschyi 19
Cheng Junyuan; Sun Guoqing; Liu Luxiang; Zhao Linshu; Lu Xiuxia
Sterile plants were obtained from the distant hybridization between Ae. Kotschyi 19 as the female parent and Chinese Spring and T. yunnanense King as the male parents. Common wheat lines were used to testcross and backcross with the F1 sterile plants successively. The male sterile line K-19, with Ae. Kotschyi cytoplasm and a common wheat nucleus, was bred. Over 10 K-19 MS lines were obtained. They are stable, with no monoploids. Seven restorers were obtained, with restoring ability from 88.2% to 96.9% according to the domestic method and from 116.4% to 150.4% according to the international method
Stability of boson stars
Gleiser, M.
Boson stars are gravitationally bound, spherically symmetric equilibrium configurations of cold, free, or interacting complex scalar fields φ. As these equilibrium configurations naturally present local anisotropy, it is sensible to expect departures from the well-known stability criteria for fluid stars. With this in mind, I investigate the dynamical instability of boson stars against charge-conserving, small radial perturbations. Following the method developed by Chandrasekhar, a variational base for determining the eigenfrequencies of the perturbations is found. This approach allows one to find numerically an upper bound for the central density where dynamical instability occurs. As applications of the formalism, I study the stability of equilibrium configurations obtained both for the free and for the self-interacting [with V(φ) = (λ/4)|φ|^4] massive scalar field φ. Instabilities are found to occur not for the critical central density as in fluid stars but for central densities considerably higher. The departure from the results for fluid stars is sensitive to the coupling λ; the higher the value of λ, the more the stability properties of boson stars approach those of a fluid star. These results are linked to the fractional anisotropy at the radius of the configuration
From clouds to stars
Elmegreen, B.G.
At the present time, the theory of star formation must be limited to what we know about the lowest density gas, or about the pre-main sequence stars themselves. We would like to understand two basic processes: 1) how star-forming clouds are created from the ambient interstellar gas in the first place, and 2) how small parts of these clouds condense to form individual stars. We are interested also in knowing what pre-main sequence stars are like, and how they can interact with their environment. These topics are reviewed in what follows. In this series of lectures, what we know about the formation of stars is tentatively described. The lectures begin with a description of the interstellar medium, and then they proceed along the same direction that a young star would follow during its creation, namely from clouds through the collapse phase and onto the proto-stellar phase. The evolution of viscous disks and two models for the formation of the solar system are described in the last lectures. The longest lectures, and the topics that are covered in most detail, are not necessarily the ones for which we have the most information. Physically intuitive explanations for the various processes are emphasized, rather than mathematical explanations. In some cases, the mathematical aspects are developed as well, but only when the equations can be used to give important numerical values for comparison with the observations
Thermonuclear reactions in stars is a major topic in the field of nuclear astrophysics, and deals with the topics of how precisely stars generate their energy through nuclear reactions, and how these nuclear reactions create the elements the stars, planets and - ultimately - we humans consist of. The present book treats these topics in detail. It also presents the nuclear reaction and structure theory, thermonuclear reaction rate formalism and stellar nucleosynthesis. The topics are discussed in a coherent way, enabling the reader to grasp their interconnections intuitively. The book serves bo
Entropy Production of Stars
Leonid M. Martyushev
Full Text Available The entropy production (inside the volume bounded by a photosphere) of main-sequence stars, subgiants, giants, and supergiants is calculated based on B–V photometry data. A non-linear inverse relationship of thermodynamic fluxes and forces, as well as an almost constant specific (per volume) entropy production of main-sequence stars (for 95% of stars, this quantity lies within 0.5 to 2.2 of the corresponding solar magnitude), is found. The obtained results are discussed from the perspective of known extreme principles related to entropy production.
HOT GAS LINES IN T TAURI STARS
Ardila, David R.; Herczeg, Gregory J.; Gregory, Scott G.; Hillenbrand, Lynne A.; Ingleby, Laura; Bergin, Edwin; Bethell, Thomas; Calvet, Nuria; France, Kevin; Brown, Alexander; Edwards, Suzan; Johns-Krull, Christopher; Linsky, Jeffrey L.; Yang, Hao; Valenti, Jeff A.; Abgrall, Hervé; Alexander, Richard D.; Brown, Joanna M.; Espaillat, Catherine; Hussain, Gaitee
For Classical T Tauri Stars (CTTSs), the resonance doublets of N V, Si IV, and C IV, as well as the He II 1640 Å line, trace hot gas flows and act as diagnostics of the accretion process. In this paper we assemble a large high-resolution, high-sensitivity data set of these lines in CTTSs and Weak T Tauri Stars (WTTSs). The sample comprises 35 stars: 1 Herbig Ae star, 28 CTTSs, and 6 WTTSs. We find that the C IV, Si IV, and N V lines in CTTSs all have similar shapes. We decompose the C IV and He II lines into broad and narrow Gaussian components (BC and NC). The most common (50%) C IV line morphology in CTTSs is that of a low-velocity NC together with a redshifted BC. For CTTSs, a strong BC is the result of the accretion process. The contribution fraction of the NC to the C IV line flux in CTTSs increases with accretion rate, from ∼20% up to ∼80%. The velocity centroids of the BCs and NCs are such that V_BC ≳ 4 V_NC, consistent with the predictions of the accretion shock model, in at most 12 out of 22 CTTSs. We do not find evidence of the post-shock becoming buried in the stellar photosphere due to the pressure of the accretion flow. The He II CTTSs lines are generally symmetric and narrow, with FWHM and redshifts comparable to those of WTTSs. They are less redshifted than the CTTSs C IV lines, by ∼10 km s^-1. The amount of flux in the BC of the He II line is small compared to that of the C IV line, and we show that this is consistent with models of the pre-shock column emission. Overall, the observations are consistent with the presence of multiple accretion columns with different densities or with accretion models that predict a slow-moving, low-density region in the periphery of the accretion column. For HN Tau A and RW Aur A, most of the C IV line is blueshifted, suggesting that the C IV emission is produced by shocks within outflow jets. In our sample, the Herbig Ae star DX Cha is the only object for which we find a P-Cygni profile in the C IV
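The broad/narrow component decomposition described above is, in essence, a two-Gaussian fit to an emission-line profile. The sketch below shows one way such a decomposition could be set up with scipy; it is illustrative only, and the velocity grid, line parameters, and initial guesses are assumptions rather than the paper's actual fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amp, cen, sigma):
    """Single Gaussian component in velocity space."""
    return amp * np.exp(-0.5 * ((v - cen) / sigma) ** 2)

def two_component(v, a_nc, c_nc, s_nc, a_bc, c_bc, s_bc):
    """Narrow component (NC) plus broad component (BC)."""
    return gaussian(v, a_nc, c_nc, s_nc) + gaussian(v, a_bc, c_bc, s_bc)

if __name__ == "__main__":
    # synthetic C IV-like profile: low-velocity NC + redshifted BC + noise
    rng = np.random.default_rng(1)
    v = np.linspace(-400.0, 400.0, 400)                  # km/s
    truth = (1.0, 10.0, 30.0, 0.4, 120.0, 150.0)
    flux = two_component(v, *truth) + rng.normal(0, 0.02, v.size)

    # initial guesses: one narrow line near zero, one broad redshifted line
    p0 = (0.8, 0.0, 40.0, 0.3, 100.0, 200.0)
    popt, _ = curve_fit(two_component, v, flux, p0=p0)

    a_nc, c_nc, s_nc, a_bc, c_bc, s_bc = popt
    flux_nc = a_nc * abs(s_nc) * np.sqrt(2 * np.pi)      # integrated NC flux
    flux_bc = a_bc * abs(s_bc) * np.sqrt(2 * np.pi)      # integrated BC flux
    print("NC centroid %.1f km/s, BC centroid %.1f km/s" % (c_nc, c_bc))
    print("NC contribution to total flux: %.0f%%"
          % (100 * flux_nc / (flux_nc + flux_bc)))
```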
Carbon Stars T. Lloyd Evans
... that the features used in estimating luminosities of ordinary giant stars are just those whose abundance ... This difference between the spectral energy distributions (SEDs) of CH stars and the J stars, which belong to ... that the first group was binaries, as for the CH stars of the solar vicinity, while those of the second group ...
AgSTAR
AgSTAR promotes biogas recovery projects, which generate renewable energy and other beneficial products from the anaerobic digestion of livestock manure and organic wastes while decreasing greenhouse gas emissions from the agriculture sector.
Orbiting radiation stars
Foster, Dean P; Langford, John; Perez-Giz, Gabe
We study a spherically symmetric solution to the Einstein equations in which the source, which we call an orbiting radiation star (OR-star), is a compact object consisting of freely falling null particles. The solution avoids quantum scale regimes and hence neither relies upon nor ignores the interaction of quantum mechanics and gravitation. The OR-star spacetime exhibits a deep gravitational well yet remains singularity free. In fact, it is geometrically flat in the vicinity of the origin, with the flat region being of any desirable scale. The solution is observationally distinct from a black hole because a photon from infinity aimed at an OR-star escapes to infinity with a time delay. (paper)
Cataclysmic Variable Stars
Hellier, Coel
Cataclysmic variable stars are the most variable stars in the night sky, fluctuating in brightness continually on timescales from seconds to hours to weeks to years. The changes can be recorded using amateur telescopes, yet are also the subject of intensive study by professional astronomers. That study has led to an understanding of cataclysmic variables as binary stars, orbiting so closely that material transfers from one star to the other. The resulting process of accretion is one of the most important in astrophysics. This book presents the first account of cataclysmic variables at an introductory level. Assuming no previous knowledge of the field, it explains the basic principles underlying the variability, while providing an extensive compilation of cataclysmic variable light curves. Aimed at amateur astronomers, undergraduates, and researchers, the main text is accessible to those with no mathematical background, while supplementary boxes present technical details and equations.
SX Phoenicis stars
Nemec, J.; Mateo, M.
The purpose of this paper is to review the basic observational information concerning SX Phe stars, including recent findings such as the discovery of about 40 low-luminosity variable stars in the Carina dwarf galaxy and the identification of at least one SX Phe star in the metal-rich globular cluster M71. Direct evidence supporting the hypothesis that at least some BSs are binary systems comes from the discovery of two contact binaries and a semidetached binary among the 50 BSs in the globular cluster NGC 5466. Since these systems will coalesce on a time scale of 500 Myr, it stands to reason that many (if not most) BSs are coalesced binaries. The merger hypothesis also explains the relatively large masses (1.0-1.2 solar masses) that have been derived for SX Phe stars and halo BSs, and may also account for the nonvariable BSs in the 'SX Phe instability strip'. 132 refs
Sounds of a Star
Acoustic Oscillations in Solar-Twin "Alpha Cen A" Observed from La Silla by Swiss Team. Summary: Sound waves running through a star can help astronomers reveal its inner properties. This particular branch of modern astrophysics is known as "asteroseismology". In the case of our Sun, the brightest star in the sky, such waves have been observed for some time, and have greatly improved our knowledge about what is going on inside. However, because they are much fainter, it has turned out to be very difficult to detect similar waves in other stars. Nevertheless, tiny oscillations in a solar-twin star have now been unambiguously detected by Swiss astronomers François Bouchy and Fabien Carrier from the Geneva Observatory, using the CORALIE spectrometer on the Swiss 1.2-m Leonard Euler telescope at the ESO La Silla Observatory. This telescope is mostly used for discovering exoplanets (see ESO PR 07/01). The star Alpha Centauri A is the nearest star visible to the naked eye, at a distance of a little more than 4 light-years. The new measurements show that it pulsates with a 7-minute cycle, very similar to what is observed in the Sun. Asteroseismology for Sun-like stars is likely to become an important probe of stellar theory in the near future. The state-of-the-art HARPS spectrograph, to be mounted on the ESO 3.6-m telescope at La Silla, will be able to search for oscillations in stars that are 100 times fainter than those for which such demanding observations are possible with CORALIE. PR Photo 23a/01: Oscillations in a solar-like star (schematic picture). PR Photo 23b/01: Acoustic spectrum of Alpha Centauri A, as observed with CORALIE. Asteroseismology: listening to the stars. Caption: PR Photo 23a/01 is a graphical representation of resonating acoustic waves in the interior of a solar-like star. Red and blue...
Formation of titanium nitride layers on titanium metal: Results of XPS and AES investigations
Moers, H.; Pfennig, G.; Klewe-Nebenius, H.; Penzhorn, R.D.; Sirch, M.; Willin, E.
The reaction of titanium metal with gaseous nitrogen and ammonia at temperatures of 890 °C leads to the formation of nitridic overlayers on the metallic substrate. The thicknesses of the overlayers increase with increasing reaction time. Under comparable conditions ammonia reacts much slower than nitrogen. XPS and AES depth profile analyses show continuous changes of the in-depth compositions of the overlayers. This can be interpreted in terms of a very irregular thickness of the overlayers, an assumption which is substantiated by local AES analyses and by the observation of a pronounced crystalline structure of the substrate after annealing pretreatment, which can give rise to locally different reaction rates. The depth profile is also influenced by the broad ranges of stability of the titanium nitride phases formed during the reaction. The quantitative analysis of the titanium/nitrogen overlayers by AES is difficult because of the overlap of titanium and nitrogen Auger peaks. In quantitative XPS analysis problems arise due to difficulties in defining Ti 2p peak areas. This work presents practical procedures for the quantitative evaluation by XPS and AES of nitridic overlayers with sufficient accuracy. (orig.) [de]
On using peak amplitude and rise time for AE source characterization
The major objective of signal analysis is to study the characteristics of the sources of emissions. ... When AE is used as a non-destructive evaluation tool, this information is extracted using a ... Hence, the frequency response H(f) of the transducer ...
Design and implementation of an ASIP-based cryptography processor for AES, IDEA, and MD5
Karim Shahbazi
Full Text Available In this paper, a new 32-bit ASIP-based crypto processor for AES, IDEA, and MD5 is designed. The instruction set consists of both general-purpose and specific instructions for the above cryptographic algorithms. The proposed architecture has nine function units and two data buses. It also has two types of 32-bit instruction formats for executing Memory Reference (M.R.), Register Reference (R.R.), and Input/Output Reference (I/O R.) instructions. The maximum achieved frequency is 166.916 MHz. The encoded output results of the encryption process of a 128-bit input block are obtained after 122, 146 and 170 clock cycles for AES-128, AES-192, and AES-256, respectively. Moreover, it takes 95 clock cycles to encrypt or decrypt a 64-bit input block using IDEA. Finally, the MD5 hash algorithm requires 469 clock cycles to generate the coded outputs for a block of 512 bits. The performance of the proposed processor is compared to some previous and state-of-the-art implementations in terms of speed, latency, throughput, and flexibility.
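The cycle counts and clock frequency quoted above imply per-algorithm throughputs. The short script below reproduces that arithmetic (throughput = block size x clock frequency / cycles per block); it is an illustrative back-of-the-envelope check, not a figure taken from the paper.

```python
# Rough throughput estimate from the figures quoted in the abstract:
# throughput (bit/s) = block_bits * clock_Hz / cycles_per_block
CLOCK_HZ = 166.916e6

workloads = {
    "AES-128": (128, 122),
    "AES-192": (128, 146),
    "AES-256": (128, 170),
    "IDEA":    (64, 95),
    "MD5":     (512, 469),
}

for name, (block_bits, cycles) in workloads.items():
    mbps = block_bits * CLOCK_HZ / cycles / 1e6
    print(f"{name:8s} ~ {mbps:6.1f} Mbit/s")
# e.g. AES-128 works out to roughly 175 Mbit/s under these assumptions.
```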
Key features of MIR.1200 (AES-2006) design and current stage of Leningrad NPP-2 construction
Ivkov, Igor
MIR.1200/AES-2006 is an abbreviated name of the evolving NPP design developed on the basis of the VVER-1000 Russian design, with a gross operating life of 480 reactor-years. This design is being implemented in four Units of Leningrad NPP-2 (LNPP-2). The AES-91/99 design was used as a reference during development of the AES-2006 design for LNPP-2; that design was implemented in two Units of Tianwan NPP (China). The main technical features of the MIR.1200/AES-2006 design include a double containment, four trains of active safety systems (4x100%, 4x50%), and special engineering measures for BDBA management (core catcher, H2 PARs, PHRS) based mainly on passive principles. The containment is described in detail, the main features in comparison with the reference NPP are outlined, the design layout principles are highlighted, and the safety system structure and parameters are described. Attention is paid to the BDBA management system, the hydrogen removal system, the core catcher, and the PHRS-SG and C-PHRS. (P.A.)
Repression of HNF1α-mediated transcription by amino-terminal enhancer of split (AES)
Han, Eun Hee [Section of Structural Biology, Hormel Institute, University of Minnesota, Austin, MN 55912 (United States); Gorman, Amanda A. [Department of Molecular and Cellular Biochemistry, University of Kentucky, Lexington, KY 40536 (United States); Singh, Puja [Section of Structural Biology, Hormel Institute, University of Minnesota, Austin, MN 55912 (United States); Chi, Young-In, E-mail: [email protected] [Section of Structural Biology, Hormel Institute, University of Minnesota, Austin, MN 55912 (United States)
HNF1α (Hepatocyte Nuclear Factor 1α) is one of the master regulators in pancreatic beta-cell development and function, and the mutations in Hnf1α are the most common monogenic causes of diabetes mellitus. As a member of the POU transcription factor family, HNF1α exerts its gene regulatory function through various molecular interactions; however, there is a paucity of knowledge in their functional complex formation. In this study, we identified the Groucho protein AES (Amino-terminal Enhancer of Split) as a HNF1α-specific physical binding partner and functional repressor of HNF1α-mediated transcription, which has a direct link to glucose-stimulated insulin secretion in beta-cells that is impaired in the HNF1α mutation-driven diabetes. - Highlights: • We identified AES as a transcriptional repressor for HNF1α in pancreatic beta-cell. • AES's repressive activity was HNF1α-specific and was not observed with HNF1β. • AES interacts with the transactivation domain of HNF1α. • Small molecules can be designed or discovered to disrupt this interaction and improve insulin secretion and glucose homeostasis.
Full Text Available Abstract: Sending or exchanging data is a routine occurrence in information technology. The data sent often contain important, even highly confidential, information whose security must be protected. Data security can be maintained by using cryptographic techniques. The AES and RC4 algorithms are symmetric algorithms. The weakness of symmetric algorithms is that encryption and decryption use the same key. To overcome this, the Elgamal algorithm is used. Elgamal is an asymmetric algorithm, and here it is used to secure the keys of the AES and RC4 algorithms. The security of both messages and keys is strengthened with a hybrid algorithm; combining several algorithms, both symmetric and asymmetric, adds security and makes the scheme more secure and powerful (Jain & Agrawal 2014). This research applies a hybrid method that combines several cryptographic algorithms, using the Advanced Encryption Standard (AES) and RC4 for data confidentiality and the Elgamal algorithm for key encryption and decryption. Keywords: cryptography, AES algorithm, RC4, Elgamal, hybrid
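To make the hybrid scheme sketched in the abstract above concrete, the following minimal Python example (assuming the pycryptodome package) encrypts the bulk data with a freshly generated AES session key and hands that key to an asymmetric wrapping step. The elgamal_wrap_key function is a hypothetical placeholder for whatever ElGamal implementation is actually used; this is an illustrative sketch, not the authors' implementation.

```python
# Hybrid encryption sketch: AES for the payload, an asymmetric step for the key.
# Assumes pycryptodome (pip install pycryptodome); elgamal_wrap_key is hypothetical.
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

def elgamal_wrap_key(session_key: bytes, public_key) -> bytes:
    # Hypothetical placeholder: a real implementation would ElGamal-encrypt
    # the session key under public_key. Returned unchanged here only for the sketch.
    return session_key

def hybrid_encrypt(plaintext: bytes, public_key):
    session_key = get_random_bytes(16)           # 128-bit AES session key
    cipher = AES.new(session_key, AES.MODE_EAX)  # EAX gives confidentiality + integrity
    ciphertext, tag = cipher.encrypt_and_digest(plaintext)
    wrapped_key = elgamal_wrap_key(session_key, public_key)
    return wrapped_key, cipher.nonce, tag, ciphertext
```

Decryption would reverse the steps: unwrap the session key with the ElGamal private key, then verify and decrypt the AES-EAX ciphertext.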
The (related-key) impossible boomerang attack and its application to the AES block cipher
Lu, J.
The Advanced Encryption Standard (AES) is a 128-bit block cipher with a user key of 128, 192 or 256 bits, released by NIST in 2001 as the next-generation data encryption standard for use in the USA. It was adopted as an ISO international standard in 2005. Impossible differential cryptanalysis and
Investigation of Participation in Adult Education in Turkey: AES Data Analysis
Dincer, N. Nergiz; Tekin-Koru, Ayca; Askar, Petek
The aim of this study is to identify the determinants of participation in adult education in Turkey. The analysis is conducted using the Adult Education Survey (AES), conducted by TurkStat. The results indicate that economic growth in the sector of employment significantly and positively affects the odds for adult education participation. The data…
Wireless AE Event and Environmental Monitoring for Wind Turbine Blades at Low Sampling Rates
Bouzid, Omar M.; Tian, Gui Y.; Cumanan, K.; Neasham, J.
Integration of acoustic wireless technology in structural health monitoring (SHM) applications introduces new challenges due to the requirements of high sampling rates, additional communication bandwidth, memory space, and power resources. In order to circumvent these challenges, this chapter proposes a novel solution that builds a wireless SHM technique in conjunction with acoustic emission (AE), with field deployment on the structure of a wind turbine. This solution requires a sampling rate lower than the Nyquist rate. In addition, features extracted from the aliased AE signals, instead of reconstructions of the original signals on board the wireless nodes, are exploited to monitor AE events, such as wind, rain, strong hail, and bird strike in different environmental conditions, in conjunction with artificial AE sources. A time-domain feature extraction algorithm, in addition to the principal component analysis (PCA) method, is used to extract and classify the relevant information, which in turn is used to classify or recognise a testing condition represented by the response signals. This proposed novel technique yields a significant data reduction during the monitoring process of wind turbine blades.
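A minimal sketch of the kind of pipeline described above, assuming NumPy and scikit-learn: a few generic time-domain features are computed from each (possibly aliased) AE record and then reduced with PCA. The feature set and record lengths are illustrative assumptions, not the authors' exact algorithm.

```python
# Illustrative time-domain feature extraction + PCA (assumes NumPy and scikit-learn).
import numpy as np
from sklearn.decomposition import PCA

def time_features(signal: np.ndarray) -> np.ndarray:
    """A few generic time-domain descriptors of one AE record."""
    rms = np.sqrt(np.mean(signal ** 2))
    peak = np.max(np.abs(signal))
    crest = peak / rms if rms > 0 else 0.0
    energy = np.sum(signal ** 2)
    return np.array([rms, peak, crest, energy])

records = [np.random.randn(256) for _ in range(50)]   # stand-ins for aliased AE records
X = np.vstack([time_features(r) for r in records])
scores = PCA(n_components=2).fit_transform(X)         # 2-D representation for classification
```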
Implementing AES via an Actively/Covertly Secure Dishonest-Majority MPC Protocol
Damgård, Ivan Bjerre; Keller, Marcel; Keller, Enrique
, but produces significant performance enhancements; the second enables us to perform bit-wise operations in characteristic two fields. As a benchmark application we present the evaluation of the AES cipher, a now standard benchmarking example for multi-party computation. We need to examine two different...
Vertical Transmission of Zika Virus by Aedes aegypti and Ae. albopictus Mosquitoes.
Ciota, Alexander T; Bialosuknia, Sean M; Ehrbar, Dylan J; Kramer, Laura D
To determine the potential role of vertical transmission in Zika virus expansion, we evaluated larval pools of perorally infected Aedes aegypti and Ae. albopictus adult female mosquitoes; ≈1/84 larvae tested were Zika virus-positive; and rates varied among mosquito populations. Thus, vertical transmission may play a role in Zika virus spread and maintenance.
Rapid synthesis of the A-E fragment of ciguatoxin CTX3C.
Clark, J Stephen; Conroy, Joanne; Blake, Alexander J
The A-E fragment of the marine natural product CTX3C has been prepared in an efficient manner by using a strategy in which two-directional and iterative ring-closing metathesis (RCM) reactions were employed for ring construction.
Analysis by SIMS and AES of H:TiO2 electrodes
Pena, J.L.; Farias, M.H.; Sanchez Sinencio, F.
TiO2 electrodes produced by heating in an H2 atmosphere have been analysed. SIMS (Secondary Ion Mass Spectroscopy) and AES (Auger Electron Spectroscopy) techniques were used in order to identify the atomic composition of the electrode surfaces. (A.R.H.) [pt
Infection of adult Aedes aegypti and Ae. albopictus mosquitoes with the entomopathogenic fungus Metarhizium anisopliae
Scholte, E.J.; Takken, W.; Knols, B.G.J.
This study describes a laboratory investigation on the use of the insect-pathogenic fungus Metarhizium anisopliae against adult Aedes aegypti and Ae. albopictus mosquitoes. At a dosage of 1.6 × 10^10 conidia/m^2, applied on material that served as a mosquito resting site, an average of 87.1 ± 2.65% of
The anion exchanger Ae2 is required for enamel maturation in mouse teeth
Lyaruu, D.M.; Bronckers, A.L.J.J.; Mulder, L.; Mardones, P.; Medina, J.F.; Kellokumpu, S.; Oude Elferink, R.P.J.; Everts, V.
One of the mechanisms by which epithelial cells regulate intracellular pH is exchanging bicarbonate for Cl-. We tested the hypothesis that in ameloblasts the anion exchanger-2 (Ae2) is involved in pH regulation during maturation stage amelogenesis. Quantitative X-ray microprobe mineral content
Gow, C.E.
Observations of over one hundred carbon stars have been made with the Indiana rapid spectral scanner in the red and, when possible, in the visual and blue regions of the spectrum. Five distinct subtypes of carbon stars (Barium, CH, R, N, and hydrogen deficient) are represented in the list of observed stars, although the emphasis was placed on the N stars when the observations were made. The rapid scanner was operated in the continuous sweep mode with the exit slit set at twenty angstroms; however, seeing fluctuations and guiding errors smear the spectrum to an effective resolution of approximately thirty angstroms. Nightly observations of Hayes standard stars yielded corrections for atmospheric extinction and instrumental response. The reduction scheme rests on two assumptions: that thin clouds are gray absorbers, and that the wavelength dependence of the sky transparency does not change during the course of the night. Several stars have been observed in the blue region of the spectrum with the Indiana SIT vidicon spectrometer at two angstroms resolution. It is possible to derive a color temperature for the yellow-red spectral region by fitting a black-body curve through two chosen continuum points. Photometric indices were calculated relative to the blackbody curve to measure the C2 Swan band strength, the shape of the CN red (6,1) band to provide a measure of the 12C/13C isotope ratio, and, in the hot carbon stars (Barium, CH, and R stars), the strength of an unidentified feature centered at 400 angstroms. An extensive abundance grid of model atmospheres was calculated using a modified version of the computer code ATLAS
On the Absence of Non-thermal X-Ray Emission around Runaway O Stars
Toalá, J. A. [Institute of Astronomy and Astrophysics, Academia Sinica (ASIAA), Taipei 10617, Taiwan (China); Oskinova, L. M. [Institute for Physics and Astronomy, University of Potsdam, D-14476 Potsdam (Germany); Ignace, R. [Department of Physics and Astronomy, East Tennessee State University, Johnson City, TN 37614 (United States)
Theoretical models predict that the compressed interstellar medium around runaway O stars can produce high-energy non-thermal diffuse emission, in particular, non-thermal X-ray and γ-ray emission. So far, detection of non-thermal X-ray emission was claimed for only one runaway star, AE Aur. We present a search for non-thermal diffuse X-ray emission from bow shocks using archived XMM-Newton observations for a clean sample of six well-determined runaway O stars. We find that none of these objects present diffuse X-ray emission associated with their bow shocks, similarly to previous X-ray studies toward ζ Oph and BD+43°3654. We carefully investigated multi-wavelength observations of AE Aur and could not confirm previous findings of non-thermal X-rays. We conclude that so far there is no clear evidence of non-thermal extended emission in bow shocks around runaway O stars.
Young Stars with SALT
Riedel, Adric R. [Department of Astronomy, California Institute of Technology, Pasadena, CA 91125 (United States); Alam, Munazza K.; Rice, Emily L.; Cruz, Kelle L. [Department of Astrophysics, The American Museum of Natural History, New York, NY 10024 (United States); Henry, Todd J., E-mail: [email protected] [RECONS Institute, Chambersburg, PA (United States)
We present a spectroscopic and kinematic analysis of 79 nearby M dwarfs in 77 systems. All of these dwarfs are low-proper-motion southern hemisphere objects and were identified in a nearby star survey with a demonstrated sensitivity to young stars. Using low-resolution optical spectroscopy from the Red Side Spectrograph on the Southern African Large Telescope, we have determined radial velocities, H-alpha, lithium 6708 Å, and potassium 7699 Å equivalent widths linked to age and activity, and spectral types for all of our targets. Combined with astrometric information from literature sources, we identify 44 young stars. Eighteen are previously known members of moving groups within 100 pc of the Sun. Twelve are new members, including one member of the TW Hydra moving group, one member of the 32 Orionis moving group, 9 members of Tucana-Horologium, one member of Argus, and two new members of AB Doradus. We also find 14 young star systems that are not members of any known groups. The remaining 33 star systems do not appear to be young. This appears to be evidence of a new population of nearby young stars not related to the known nearby young moving groups.
STAR facility tritium accountancy
Pawelko, R. J.; Sharpe, J. P.; Denny, B. J.
The Safety and Tritium Applied Research (STAR) facility has been established to provide a laboratory infrastructure for the fusion community to study tritium science associated with the development of safe fusion energy and other technologies. STAR is a radiological facility with an administrative total tritium inventory limit of 1.5 g (14,429 Ci) [1]. Research studies with moderate tritium quantities and various radionuclides are performed in STAR. Successful operation of the STAR facility requires the ability to receive, inventory, store, dispense tritium to experiments, and to dispose of tritiated waste while accurately monitoring the tritium inventory in the facility. This paper describes tritium accountancy in the STAR facility. A primary accountancy instrument is the tritium Storage and Assay System (SAS): a system designed to receive, assay, store, and dispense tritium to experiments. Presented are the methods used to calibrate and operate the SAS. Accountancy processes utilizing the Tritium Cleanup System (TCS), and the Stack Tritium Monitoring System (STMS) are also discussed. Also presented are the equations used to quantify the amount of tritium being received into the facility, transferred to experiments, and removed from the facility. Finally, the STAR tritium accountability database is discussed. (authors)
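The accountancy equations themselves are not reproduced in the abstract; the short sketch below only illustrates the generic conservation balance such equations typically express (receipts minus transfers, stack releases, and waste shipments), with hypothetical numbers, and is not the STAR facility's actual formulation.

```python
# Generic inventory balance sketch (illustrative only; not the STAR facility's equations).
def facility_inventory(received_ci, transferred_ci, stack_released_ci, waste_ci):
    """Tritium remaining on site, in curies, from a simple conservation balance."""
    return received_ci - transferred_ci - stack_released_ci - waste_ci

# Hypothetical activity values in curies
print(facility_inventory(received_ci=1200.0, transferred_ci=300.0,
                         stack_released_ci=2.5, waste_ci=40.0))
```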
Asteroseismology of white dwarf stars
Córsico, A. H.
Most of the low- and intermediate-mass stars that populate the Universe will end their lives as white dwarf stars. These ancient stellar remnants carry within them an encoded record of the evolutionary history of their progenitor stars, providing a wealth of information about the evolution of stars, star formation, and the age of a variety of stellar populations, such as our Galaxy and open and globular clusters. While some information like surface chemical composition, temperature and gravity ...
Corrosion testing on crude oil tankers and other product carriers by means of acoustic emission (AE)
Lackner, Gerald [TUV Austria, Deutschstrasse 10, 1230 Wien (Austria); Tscheliesnig, Peter [TUV Austria, Deutschstrasse 10, 1230 Wien (Austria)
In recent decades a number of maritime disasters involving crude oil tankers have occurred (e.g. Exxon-Valdez, Erika, Prestige). Every accident led to extreme pollution, with terrible consequences not only for the environment but also for the lives of the inhabitants of the affected coasts. Although most of these accidents were caused by human errors, the material degradation of the ship hull due to corrosion played an important role. Acoustic emission (AE) is already used to detect and discriminate the stage of corrosion of structures located on land. A consortium consisting of experienced partners from the fields of ship building and classification as well as from AE testing and equipment manufacturing started to investigate the feasibility of this testing technique for its application on oil tankers. The aim of the research project funded by the European Commission is to develop an on-line corrosion monitoring technique based on a permanent installation of AE sensors as well as a spot testing technique during stops in harbors or at anchorages using mobile equipment. Since the project started, many lab tests as well as background measurements have been done on different types of tankers up to a size of 35,000 deadweight tons (DWT). The gathered data were evaluated with a frequency-domain-based pattern recognition system, and it was possible to distinguish the AE signals related to corrosion from those signals emitted by the structure due to the harsh environment at sea (background noise). Together with the oncoming developments of the AE equipment and the improvement of the data base, this project will lead to an important breakthrough for the safe shipping of hazardous products like crude oil. (authors)
Forensic discrimination of aluminum foil by SR-XRF and ICP-AES
Kasamatsu, Masaaki; Suzuki, Yasuhiro; Suzuki, Shinichi; Miyamoto, Naoki; Watanabe, Seiya; Shimoda, Osamu; Takatsu, Masahisa; Nakanishi, Toshio
The application of synchrotron radiation X-ray fluorescence spectrometry (SR-XRF) was investigated for the forensic discrimination of aluminum foil by comparison of the elemental components. Small fragments (1 x 1 mm) were taken from 4 kinds of aluminum foils produced by different manufacturers and used for measurements of the XRF spectrum at BL37XU of SPring-8. A comparison of the XRF spectra was effective for the discrimination of aluminum foils from different sources, because significant differences were observed in the X-ray peak intensities of Fe, Cu, Zn, Ga, Zr and Sn. These elements, except for Zr and Sn, in the aluminum foils and NIST SRM1258 (Aluminium Alloy 6011) were also determined by inductively coupled plasma atomic emission spectrometry (ICP-AES). The observed values of Fe, Cu, Zn and Ga in the NIST standard samples by ICP-AES showed satisfactorily good agreement with the certified or information values, with relative standard deviations from 1.1% for Zn to 6.7% for Ga. The observed values for the aluminum foils by ICP-AES were compared with those by SR-XRF. Correlation coefficients from 0.997 for Cu/Fe to 0.999 for Zn/Fe and Ga/Fe were obtained between the ratios of the elemental concentrations by ICP-AES and the normalized X-ray intensities by SR-XRF. This result demonstrates that a comparison of the normalized X-ray intensity is nearly as effective for the discrimination of aluminum foils as quantitative analysis by ICP-AES. Comparisons of the analytical results by SR-XRF allow the discrimination of all aluminum foils using only a 1 mm^2 fragment with no destruction of the samples. (author)
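The comparison reported above can be illustrated with a few lines of Python (assuming NumPy): the Pearson correlation coefficient between element/Fe concentration ratios from ICP-AES and the corresponding normalized X-ray intensities from SR-XRF. The numerical values are placeholders, not data from the paper.

```python
# Pearson correlation between ICP-AES concentration ratios and SR-XRF normalized
# intensities (placeholder values; assumes NumPy).
import numpy as np

icp_ratio_zn_fe = np.array([0.12, 0.35, 0.08, 0.22])   # Zn/Fe by ICP-AES (hypothetical)
xrf_norm_zn_fe = np.array([0.11, 0.37, 0.07, 0.21])    # normalized Zn intensity by SR-XRF

r = np.corrcoef(icp_ratio_zn_fe, xrf_norm_zn_fe)[0, 1]
print(f"correlation coefficient: {r:.3f}")
```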
A robust star identification algorithm with star shortlisting
Mehta, Deval Samirbhai; Chen, Shoushun; Low, Kay Soon
A star tracker provides the most accurate attitude solution in terms of arc seconds compared to the other existing attitude sensors. When no prior attitude information is available, it operates in "Lost-In-Space (LIS)" mode. Star pattern recognition, also known as star identification algorithm, forms the most crucial part of a star tracker in the LIS mode. Recognition reliability and speed are the two most important parameters of a star pattern recognition technique. In this paper, a novel star identification algorithm with star ID shortlisting is proposed. Firstly, the star IDs are shortlisted based on worst-case patch mismatch, and later stars are identified in the image by an initial match confirmed with a running sequential angular match technique. The proposed idea is tested on 16,200 simulated star images having magnitude uncertainty, noise stars, positional deviation, and varying size of the field of view. The proposed idea is also benchmarked with the state-of-the-art star pattern recognition techniques. Finally, the real-time performance of the proposed technique is tested on the 3104 real star images captured by a star tracker SST-20S currently mounted on a satellite. The proposed technique can achieve an identification accuracy of 98% and takes only 8.2 ms for identification on real images. Simulation and real-time results depict that the proposed technique is highly robust and achieves a high speed of identification suitable for actual space applications.
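The quantity at the heart of any angular-match step is the inter-star angular separation computed from unit direction vectors. The sketch below (assuming NumPy) shows only that geometric building block, with made-up coordinates; it is not the shortlisting or sequential matching scheme proposed in the paper.

```python
# Angular separation between observed star directions, the basic quantity a
# sequential angular match compares against catalog pairs (assumes NumPy).
import numpy as np

def unit_vector(ra_rad: float, dec_rad: float) -> np.ndarray:
    return np.array([np.cos(dec_rad) * np.cos(ra_rad),
                     np.cos(dec_rad) * np.sin(ra_rad),
                     np.sin(dec_rad)])

def angular_separation(u: np.ndarray, v: np.ndarray) -> float:
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))   # radians

s1 = unit_vector(np.radians(10.0), np.radians(20.0))     # hypothetical star 1
s2 = unit_vector(np.radians(10.5), np.radians(20.2))     # hypothetical star 2
print(np.degrees(angular_separation(s1, s2)))
```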
75 FR 70742 - AES Laurel Mountain, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER11-2036-000] AES Laurel Mountain, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for Blanket... proceeding of AES Laurel Mountain, LLC's application for market-based rate authority, with an accompanying...
77 FR 6471 - Bacillus thuringiensis Cry2Ae Protein in Cotton; Exemption from the Requirement of a Tolerance
... rather provides a guide for readers regarding entities likely to be affected by this action. Other types... degradation by acid and proteases (Ref. 4). The Cry2Ae protein was rapidly digested (within 30 seconds) in SGF... shown to be rapidly digested in vitro. As previously stated, when Cry2Ae protein is used as a PIP in...
77 FR 71189 - AES Beaver Valley, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER13-442-000] AES Beaver Valley, LLC; Supplemental Notice That Initial Market- Based Rate Filing Includes Request for Blanket Section 204 Authorization This is a supplemental notice in the above-referenced proceeding, of AES Beaver...
Production of Mucosally Transmissible SHIV Challenge Stocks from HIV-1 Circulating Recombinant Form 01_AE env Sequences.
Lawrence J Tartaglia
Full Text Available Simian-human immunodeficiency virus (SHIV challenge stocks are critical for preclinical testing of vaccines, antibodies, and other interventions aimed to prevent HIV-1. A major unmet need for the field has been the lack of a SHIV challenge stock expressing circulating recombinant form 01_AE (CRF01_AE env sequences. We therefore sought to develop mucosally transmissible SHIV challenge stocks containing HIV-1 CRF01_AE env derived from acutely HIV-1 infected individuals from Thailand. SHIV-AE6, SHIV-AE6RM, and SHIV-AE16 contained env sequences that were >99% identical to the original HIV-1 isolate and did not require in vivo passaging. These viruses exhibited CCR5 tropism and displayed a tier 2 neutralization phenotype. These challenge stocks efficiently infected rhesus monkeys by the intrarectal route, replicated to high levels during acute infection, and established chronic viremia in a subset of animals. SHIV-AE16 was titrated for use in single, high dose as well as repetitive, low dose intrarectal challenge studies. These SHIV challenge stocks should facilitate the preclinical evaluation of vaccines, monoclonal antibodies, and other interventions targeted at preventing HIV-1 CRF01_AE infection.
Hamann, Wolf-Rainer; Sander, Andreas; Todt, Helge
Nearly 150 years ago, the French astronomers Charles Wolf and Georges Rayet described stars with very conspicuous spectra that are dominated by bright and broad emission lines. Meanwhile termed Wolf-Rayet Stars after their discoverers, those objects turned out to represent important stages in the life of massive stars. As the first conference in a long time that was specifically dedicated to Wolf-Rayet stars, an international workshop was held in Potsdam, Germany, from 1-5 June 2015. About 100 participants, comprising most of the leading experts in the field as well as many young scientists, gathered for one week of extensive scientific exchange and discussions. Considerable progress has been reported throughout, e.g. on finding such stars, modeling and analyzing their spectra, understanding their evolutionary context, and studying their circumstellar nebulae. While some major questions regarding Wolf-Rayet stars still remain open 150 years after their discovery, it is clear today that these objects are not just interesting stars as such, but also keystones in the evolution of galaxies. These proceedings summarize the talks and posters presented at the Potsdam Wolf-Rayet workshop. Moreover, they also include the questions, comments, and discussions emerging after each talk, thereby giving a rare overview not only about the research, but also about the current debates and unknowns in the field. The Scientific Organizing Committee (SOC) included Alceste Bonanos (Athens), Paul Crowther (Sheffield), John Eldridge (Auckland), Wolf-Rainer Hamann (Potsdam, Chair), John Hillier (Pittsburgh), Claus Leitherer (Baltimore), Philip Massey (Flagstaff), George Meynet (Geneva), Tony Moffat (Montreal), Nicole St-Louis (Montreal), and Dany Vanbeveren (Brussels).
Models of symbiotic stars
Friedjung, Michael
One of the most important features of symbiotic stars is the coexistence of a cool spectral component that is apparently very similar to the spectrum of a cool giant, with at least one hot continuum, and emission lines from very different stages of ionization. The cool component dominates the infrared spectrum of S-type symbiotics; it tends to be veiled in this wavelength range by what appears to be excess emission in D-type symbiotics, this excess usually being attributed to circumstellar dust. The hot continuum (or continua) dominates the ultraviolet. X-rays have sometimes also been observed. Another important feature of symbiotic stars that needs to be explained is the variability. Different forms occur, some variability being periodic. This type of variability can, in a few cases, strongly suggest the presence of eclipses of a binary system. One of the most characteristic forms of variability is that characterizing the active phases. This basic form of variation is traditionally associated in the optical with the veiling of the cool spectrum and the disappearance of high-ionization emission lines, the latter progressively appearing (in classical cases, reappearing) later. Such spectral changes recall those of novae, but spectroscopic signatures of the high-ejection velocities observed for novae are not usually detected in symbiotic stars. However, the light curves of the 'symbiotic nova' subclass recall those of novae. We may also mention in this connection that radio observations (or, in a few cases, optical observations) of nebulae indicate ejection from symbiotic stars, with deviations from spherical symmetry. We shall give a historical overview of the proposed models for symbiotic stars and make a critical analysis in the light of the observations of symbiotic stars. We describe the empirical approach to models and use the observational data to diagnose the physical conditions in the symbiotics stars. Finally, we compare the results of this empirical
Realization and optimization of AES algorithm on the TMS320DM6446 based on DaVinci technology
Jia, Wen-bin; Xiao, Fu-hai
The application of the AES algorithm in digital cinema systems prevents video data from being illegally stolen or maliciously tampered with, and solves its security problems. At the same time, in order to meet the requirements of real-time, scene, and transparent encryption of high-speed audio and video data streams in the information security field, through an in-depth analysis of the AES algorithm principle, and based on the TMS320DM6446 hardware platform with the DaVinci software framework, this paper proposes specific methods for realizing the AES algorithm in a digital video system together with optimization solutions. The test results show that digital movies encrypted by AES128 cannot be played normally, which ensures the security of the digital movies. Through a comparison of the performance of the AES128 algorithm before and after optimization, the correctness and validity of the improved algorithm are verified.
Acoustic Emission Monitoring of Incipient Failure in Journal Bearings (III) - Development of an AE Diagnosis System for Journal Bearings -
Chung, Min Hwa; Cho, Yong Sang; Yoon, Dong Jin; Kwon, Oh Yang
For the condition monitoring of journal bearings in rotating machinery, a system for their diagnosis by acoustic emission (AE) was developed. AE has been used to detect abnormal conditions in the bearing system. It was found from the field application study, as well as the laboratory experiment using a simulated journal bearing system, that AE RMS voltage was the most efficient parameter for the purpose of the current study. Based on the above results, algorithms and judgement criteria for the diagnosis system were established. The system is composed of four parts as follows: the sensing part including the AE sensor and preamplifier, the signal processing part for RMS-to-DC conversion to measure the AE RMS voltage, the interface part for transferring RMS voltage data to a PC using an A/D converter, and the software part including the graphic display of bearing conditions and the diagnosis program
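Since the abstract identifies AE RMS voltage as the diagnostic parameter, a minimal sketch of the corresponding software criterion is shown below (assuming NumPy): compute the RMS of a digitized AE window and compare it with an alarm threshold. The threshold and window length are illustrative; real criteria come from the calibration described in the paper.

```python
# AE RMS voltage over a window compared against an alarm threshold (illustrative).
import numpy as np

def ae_rms(window: np.ndarray) -> float:
    return float(np.sqrt(np.mean(window ** 2)))

ALARM_RMS_V = 0.5                       # hypothetical threshold in volts
window = 0.1 * np.random.randn(4096)    # stand-in for digitized AE voltage samples
status = "abnormal" if ae_rms(window) > ALARM_RMS_V else "normal"
print(status, ae_rms(window))
```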
Detection of AE signals from a HTS tape during quenching in a solid cryogen-cooling system
Kim, K.J.; Song, J.B.; Kim, J.H.; Lee, J.H.; Kim, H.M.; Kim, W.S.; Na, J.B.; Ko, T.K.; Lee, H.G.
The acoustic emission (AE) technique is suitable for detecting the presence of thermal and mechanical stress in superconductors, which have adverse effects on the stability of their application systems. However, the detection of AE signals from a HTS tape in a bath of liquid cryogen (such as liquid nitrogen, LN2) has not been reported because of its low signal to noise ratio due to the noise from the boiling liquid cryogen. In order to obtain the AE signals from the HTS tapes during quenching, this study carried out repetitive quench tests for YBCO coated conductor (CC) tapes in a cooling system using solid nitrogen (SN2). This paper examined the performance of the AE sensor in terms of the amplitudes of the AE signals in the SN2 cooling system.
Continuous A.E. monitoring of nuclear systems: some feasibility studies now in-progress in France
Roget, J.; Germain, J.L.
Continuous A.E. monitoring of nuclear systems can give unique information about abnormal behaviour (leak appearance...) or crack initiation and propagation. Some feasibility studies have been undertaken in France in this field, and this paper presents the results obtained in two cases. The first study showed that A.E. surveillance of pressurized safety valves indicates the appearance or the presence of a leak; functioning noise is not a problem in this case. Secondly, a large study has been undertaken to test the resistance of the pipe inner sleeve to thermal fatigue. A.E. monitoring showed that it is possible to separate A.E. due to crack extension from other signals by a location method, in spite of the high noise level. A.E. seems applicable for continuous monitoring, so complementary tests are in progress to confirm and improve these results
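The location method mentioned above is not detailed in the abstract; a common textbook scheme for a linear (one-dimensional) sensor pair places the source from the arrival-time difference, as in the sketch below. It is given only as an assumed illustration of how location-based discrimination can work, not as the method used in these studies.

```python
# One-dimensional AE source location from the arrival-time difference between two
# sensors (a common textbook scheme; not necessarily the method used here).
def linear_location(delta_t_s: float, wave_speed_m_s: float, sensor_spacing_m: float) -> float:
    """Distance of the source from sensor 1, given delta_t = t1 - t2."""
    return 0.5 * (sensor_spacing_m + wave_speed_m_s * delta_t_s)

# Example: 1 m spacing, 3000 m/s wave speed, signal reaches sensor 2 first by 0.1 ms
print(linear_location(delta_t_s=1e-4, wave_speed_m_s=3000.0, sensor_spacing_m=1.0))  # ~0.65 m
```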
The OB run-away stars from Sco-Cen and Orion reviewed
Blaauw, A.
The author studies the past paths of the run-away star Zeta Oph from the OB association Sco-Cen, and of the run-away stars AE Aur, Mu Col and 53 Ari from the OB association Ori OB1, in connection with the question of the origin of these high velocities. Should the binary hypothesis (supernova explosion of one of the components) be adhered to, or might dynamical evolution in young, dense clusters offer a clue to this phenomenon? It is shown that the latter hypothesis is very unlikely to apply to Zeta Oph. For the run-away stars from Orion, conclusive evidence may well be obtained in the course of the next decade from improved accuracy of the proper motions
Structural, elastic, electronic, optical and thermoelectric properties of the Zintl-phase Ae3AlAs3 (Ae = Sr, Ba)
Benahmed, A.; Bouhemadou, A.; Alqarni, B.; Guechi, N.; Al-Douri, Y.; Khenata, R.; Bin-Omran, S.
First-principles calculations were performed to investigate the structural, elastic, electronic, optical and thermoelectric properties of the Zintl-phase Ae3AlAs3 (Ae = Sr, Ba) using two complementary approaches based on density functional theory. The pseudopotential plane-wave method was used to explore the structural and elastic properties, whereas the full-potential linearised augmented plane wave approach was used to study the structural, electronic, optical and thermoelectric properties. The calculated structural parameters are in good agreement with the corresponding measured ones. The single-crystal and polycrystalline elastic constants and related properties were examined in detail. The electronic properties, including energy band dispersions, density of states and charge-carrier effective masses, were computed using the Tran-Blaha modified Becke-Johnson functional for the exchange-correlation potential. It is found that both studied compounds are direct band gap semiconductors. The frequency dependence of the linear optical functions was predicted for a wide photon energy range up to 15 eV. The charge carrier concentration and temperature dependences of the basic parameters of the thermoelectric properties were explored using the semi-classical Boltzmann transport model. Our calculations unveil that the studied compounds are characterised by a high thermopower for both carrier types, with p-type conduction being the more favourable.
Circulation of Stars
Boitani, P.
Since the dawn of man, contemplation of the stars has been a primary impulse in human beings, who proliferated their knowledge of the stars all over the world. Aristotle sees this as the product of primeval and perennial "wonder" which gives rise to what we call science, philosophy, and poetry. Astronomy, astrology, and star art (painting, architecture, literature, and music) go hand in hand through millennia in all cultures of the planet (and all use catasterisms to explain certain phenomena). Some of these developments are independent of each other, i.e., they take place in one culture independently of others. Some, on the other hand, are the product of the "circulation of stars." There are two ways of looking at this. One seeks out forms, the other concentrates on the passing of specific lore from one area to another through time. The former relies on archetypes (for instance, with catasterism), the latter constitutes a historical process. In this paper I present some of the surprising ways in which the circulation of stars has occurred—from East to West, from East to the Far East, and from West to East, at times simultaneously.
Potential topical natural repellent against Ae. aegypti, Culex sp. and Anopheles sp. mosquitoes
Dewi Nur Hodijah
Full Text Available Abstract. Background: Betel leaf essential oil is known to have repellent (protective) properties. A lotion was prepared, based on a standard pharmaceutical formulation, to which patchouli leaf essential oil was added. A lotion preparation was chosen so that it would adhere longer to the skin surface. The purpose of this study was to compare the protection provided by the lotion with added patchouli oil and the lotion without patchouli oil against the protection provided by DEET. Methods: This study is an experimental laboratory-based study. All test mosquitoes came from the insectarium of the health research laboratory of Loka Litbang P2B2 Ciamis. The concentration of betel leaf essential oil in the lotion was 4%; the concentration of patchouli oil as a fixative was 0.4%. The formula used was the basic formula given in a pharmaceutical preparation reference. The repellency test was carried out using the method recommended by the Pesticide Commission. Results: A stable lotion formulation was produced that still met the standards for such preparations. Based on the results, DEET and the modified lotion provided an average protection above 90% for 6 hours against Ae. aegypti and Culex sp. mosquitoes. Conclusion: The addition of patchouli oil to the betel leaf lotion increases the protection against landing by Ae. aegypti and Culex sp. mosquitoes. (Health Science Indones 2014;1:44-8) Keywords: natural repellent, essential oil, betel leaf, patchouli leaf, Ae. aegypti, Culex sp.
Dosimetry of 64Cu-DOTA-AE105, a PET tracer for uPAR imaging.
Persson, Morten; El Ali, Henrik H; Binderup, Tina; Pfeifer, Andreas; Madsen, Jacob; Rasmussen, Palle; Kjaer, Andreas
(64)Cu-DOTA-AE105 is a novel positron emission tomography (PET) tracer specific to the human urokinase-type plasminogen activator receptor (uPAR). In preparation for using this tracer in humans, as a new promising method to distinguish between indolent and aggressive cancers, we have performed PET studies in mice to evaluate the in vivo biodistribution and estimate human dosimetry of (64)Cu-DOTA-AE105. Five mice received iv tail injection of (64)Cu-DOTA-AE105 and were PET/CT scanned 1, 4.5 and 22 h post injection. Volumes of interest (VOI) were manually drawn on the following organs: heart, lung, liver, kidney, spleen, intestine, muscle, bone and bladder. The activity concentrations in the mentioned organs [%ID/g] were used for the dosimetry calculation. The %ID/g of each organ at 1, 4.5 and 22 h was scaled to a human value based on the differences between organ and body weights. The scaled values were then exported to OLINDA software for computation of the human absorbed doses. The residence times as well as effective dose equivalent for male and female could be obtained for each organ. To validate this approach of human projection using mouse data, five mice received iv tail injection of another (64)Cu-DOTA peptide-based tracer, (64)Cu-DOTA-TATE, and underwent the same procedure as just described. The human dosimetry estimates were then compared with the observed human dosimetry estimates recently found in a first-in-man study using (64)Cu-DOTA-TATE. Human estimates of (64)Cu-DOTA-AE105 revealed the heart wall to receive the highest dose (0.0918 mSv/MBq) followed by the liver (0.0815 mSv/MBq). All other organs/tissues were estimated to receive doses in the range of 0.02-0.04 mSv/MBq. The mean effective whole-body dose of (64)Cu-DOTA-AE105 was estimated to be 0.0317 mSv/MBq. Relatively good correlation between human predicted and observed dosimetry estimates for (64)Cu-DOTA-TATE was found. Importantly, the effective whole body dose was predicted with very high precision
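One commonly used way to scale animal organ uptake to a human value by relative organ and body weights is shown below as a hedged sketch; the exact scaling used in the study may differ, and all numbers are hypothetical.

```python
# One common mouse-to-human scaling of organ uptake by relative organ mass
# (illustrative; the exact procedure in the study may differ).
def human_percent_id_per_organ(animal_pct_id_per_g: float,
                               animal_body_weight_kg: float,
                               human_organ_weight_g: float,
                               human_body_weight_kg: float) -> float:
    return (animal_pct_id_per_g * animal_body_weight_kg *
            human_organ_weight_g / human_body_weight_kg)

# Hypothetical numbers: 5 %ID/g in mouse liver, 0.025 kg mouse, 1800 g human liver, 70 kg human
print(human_percent_id_per_organ(5.0, 0.025, 1800.0, 70.0))  # ~3.2 %ID per organ
```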
Four new Delta Scuti stars
Schutt, R. L.
Four new Delta Scuti stars are reported. Power spectra (converted into amplitude spectra) and light curves are used to determine periodicities. A complete frequency analysis is not performed due to the lack of a sufficient time base in the data. These new variables help verify the many predictions that Delta Scuti stars probably exist in prolific numbers as small amplitude variables. Two of these stars, HR 4344 and HD 107513, are possibly Am stars. If so, they are among the minority of variable stars which are also Am stars.
STAR FORMATION ASSOCIATED WITH THE SUPERNOVA REMNANT IC443
Xu Jinlong; Wang Junjie; Miller, Martin
We have performed submillimeter and millimeter observations in CO lines toward supernova remnant (SNR) IC443. The CO molecular shell coincides well with the partial shell of the SNR detected in radio continuum observations. Broad emission lines and three 1720 MHz OH masers were detected in the CO molecular shell. The present observations have provided further evidence in support of the interaction between the SNR and the adjoining molecular clouds (MCs). The total mass of the MCs is 9.26 x 10^3 M_sun. The integrated CO line intensity ratio R = I_CO(3-2)/I_CO(2-1) for the whole MC is between 0.79 and 3.40. The average value is 1.58, which is much higher than previous measurements of individual Galactic MCs. Higher line ratios imply that shocks have driven into the MCs. We conclude that a high I_CO(3-2)/I_CO(2-1) ratio is a good signature of an SNR-MC interacting system. Based on the IRAS Point Source Catalog and the Two Micron All Sky Survey near-infrared database, 12 protostellar object and 1666 young stellar object (YSO) candidates (including 154 classical T Tauri stars and 419 Herbig Ae/Be stars) are selected. In the interacting regions, the significant enhancement of the number of protostellar objects and YSOs indicates the presence of some recently formed stars. After comparing the characteristic timescales of star formation with the age of IC443, we conclude that the protostellar objects and YSO candidates were not triggered by IC443. For the age of the stellar winds shell, we have performed our calculation on the basis of a stellar wind shell expansion model. The results and analysis suggest that the formation of these stars may have been triggered by the stellar winds of the IC443 progenitor.
Neutron star/red giant encounters in globular clusters
Bailyn, C.D.
The author presents a simple expression for the amount by which x_crit is diminished as a star evolves, where x_crit = R_crit/R*, R_crit is the maximum distance of closest approach between two stars for which the tidal energy is sufficient to bind the system, and R* is the radius of the star on which tides are being raised. It is also concluded that tidal capture of giants by neutron stars resulting in binary systems is unlikely in globular clusters. However, collisions between neutron stars and red giants, or an alternative process involving tidal capture of a main-sequence star into an initially detached binary system, may result either in rapidly rotating neutron stars or in white dwarf/neutron star binaries. (author)
Heavy Metal Stars
La Silla Telescope Detects Lots of Lead in Three Distant Binaries. Summary: Very high abundances of the heavy element Lead have been discovered in three distant stars in the Milky Way Galaxy. This finding strongly supports the long-held view that roughly half of the stable elements heavier than Iron are produced in common stars during a phase towards the end of their life when they burn their Helium - the other half results from supernova explosions. All the Lead contained in each of the three stars weighs about as much as our Moon. The observations show that these "Lead stars" - all members of binary stellar systems - have been more enriched with Lead than with any other chemical element heavier than Iron. This new result is in excellent agreement with predictions by current stellar models about the build-up of heavy elements in stellar interiors. The new observations are reported by a team of Belgian and French astronomers [1] who used the Coude Echelle Spectrometer on the ESO 3.6-m telescope at the La Silla Observatory (Chile). PR Photo 26a/01: A photo of HD 196944, one of the "Lead stars". PR Photo 26b/01: A CES spectrum of HD 196944. The build-up of heavy elements: Astronomers and physicists denote the build-up of heavier elements from lighter ones as "nucleosynthesis". Only the very lightest elements (Hydrogen, Helium and Lithium [2]) were created at the time of the Big Bang and were therefore present in the early universe. All the other heavier elements we now see around us were produced at a later time by nucleosynthesis inside stars. In those "element factories", nuclei of the lighter elements are smashed together whereby they become the nuclei of heavier ones - this process is known as nuclear fusion. In our Sun and similar stars, Hydrogen is being fused into Helium. At some stage, Helium is fused into Carbon, then Oxygen, etc. The fusion process requires positively charged nuclei to move very close to each other before they can unite. But with increasing
Atomic diffusion in stars
Michaud, Georges; Richer, Jacques
This book gives an overview of atomic diffusion, a fundamental physical process, as applied to all types of stars, from the main sequence to neutron stars. The superficial abundances of stars as well as their evolution can be significantly affected. The authors show where atomic diffusion plays an essential role and how it can be implemented in modelling. In Part I, the authors describe the tools that are required to include atomic diffusion in models of stellar interiors and atmospheres. An important role is played by the gradient of partial radiative pressure, or radiative acceleration, which is usually neglected in stellar evolution. In Part II, the authors systematically review the contribution of atomic diffusion to each evolutionary step. The dominant effects of atomic diffusion are accompanied by more subtle effects on a large number of structural properties throughout evolution. One of the goals of this book is to provide the means for the astrophysicist or graduate student to evaluate the importanc...
Dynamical Boson Stars
Steven L. Liebling
Full Text Available The idea of stable, localized bundles of energy has strong appeal as a model for particles. In the 1950s, John Wheeler envisioned such bundles as smooth configurations of electromagnetic energy that he called geons, but none were found. Instead, particle-like solutions were found in the late 1960s with the addition of a scalar field, and these were given the name boson stars. Since then, boson stars find use in a wide variety of models as sources of dark matter, as black hole mimickers, in simple models of binary systems, and as a tool in finding black holes in higher dimensions with only a single Killing vector. We discuss important varieties of boson stars, their dynamic properties, and some of their uses, concentrating on recent efforts.
GRACE star camera noise
Harvey, Nate
Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
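The auto-covariance diagnostic mentioned above can be illustrated on a scalar attitude residual series, for example a rotation-angle residual derived from the inter-camera quaternion. The sketch below (assuming NumPy) uses a toy twice-per-rev signal and is not the GRACE Level-1B processing itself.

```python
# Auto-covariance of a scalar attitude residual series (illustrative diagnostic,
# not the GRACE Level-1B processing). Assumes NumPy.
import numpy as np

def autocovariance(x: np.ndarray, max_lag: int) -> np.ndarray:
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

t = np.arange(0, 86400, 5.0)                    # one day of samples at 5 s spacing
resid = 3.0 * np.sin(4 * np.pi * t / 5580.0)    # toy twice-per-rev signal in arcsec
acov = autocovariance(resid, max_lag=2000)      # periodic errors appear as oscillations
```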
Molecules in stars
Tsuji, T.
Recently, research related to molecules in stars has rapidly expanded because of progress in related fields. For this reason, it is almost impossible to cover all the topics related to molecules in stars. Thus, here the authors focus their attention on molecules in the atmospheres of cool stars and do not cover in any detail topics related to circumstellar molecules originating from expanding envelopes located far from the stellar surface. However, the authors do discuss molecules in quasi-static circumstellar envelopes (a recently discovered new component of circumstellar envelopes) located near the stellar surface, since molecular lines originating from such envelopes show little velocity shift relative to photospheric lines, and hence they directly affect the interpretation and analysis of stellar spectra
CARBON NEUTRON STAR ATMOSPHERES
Suleimanov, V. F.; Klochkov, D.; Werner, K.; Pavlov, G. G.
The accuracy of measuring the basic parameters of neutron stars is limited in particular by uncertainties in the chemical composition of their atmospheres. For example, the atmospheres of thermally emitting neutron stars in supernova remnants might have exotic chemical compositions, and for one of them, the neutron star in Cas A, a pure carbon atmosphere has recently been suggested by Ho and Heinke. To test this composition for other similar sources, a publicly available detailed grid of the carbon model atmosphere spectra is needed. We have computed this grid using the standard local thermodynamic equilibrium approximation and assuming that the magnetic field does not exceed 10^8 G. The opacities and pressure ionization effects are calculated using the Opacity Project approach. We describe the properties of our models and investigate the impact of the adopted assumptions and approximations on the emergent spectra
Instability and star evolution
Mirzoyan, L.V.
The observational data discussed here testify that the phenomena of dynamical instability of stars and stellar systems are definite manifestations of their evolution. The study of these phenomena has shown that instability is a regular phase of stellar evolution, and it has led to the recognition of the most important regularities of the star formation process concerning its nature. This became possible due to the discovery in 1947 of stellar associations in our Galaxy. The results of the study of the dynamical instability of stellar associations contradict the predictions of the classical hypothesis of stellar condensation. These data supplied a basis for a new hypothesis on the formation of stars and nebulae by the decay of superdense protostars [ru
The twinkling of stars
Jakeman, E.; Parry, G.; Pike, E.R.; Pusey, P.N.
This article collects together some of the main ideas and experimental results on the twinkling of stars. Statistical methods are used to characterise the features of the scintillation and to investigate the ways in which these depend on the zenith angle of the star, the bandwidth of the light and various other parameters. Some new results are included which demonstrate the advantages of using photon counting methods in experiments on stellar scintillation. Since the twinkling of stars is a consequence of turbulence in the Earth's atmosphere, measurements can be used to deduce some features of the structure of the turbulence. Some of the experiments designed to do this are discussed and the results reported. (author)
Weighing the Smallest Stars
VLT Finds Young, Very Low Mass Objects Are Twice As Heavy As Predicted. Summary: Thanks to the powerful new high-contrast camera installed at the Very Large Telescope, photos have been obtained of a low-mass companion very close to a star. This has allowed astronomers to measure directly the mass of a young, very low mass object for the first time. The object, more than 100 times fainter than its host star, is still 93 times as massive as Jupiter. And it appears to be almost twice as heavy as theory predicts it to be. This discovery therefore suggests that, due to errors in the models, astronomers may have overestimated the number of young "brown dwarfs" and "free floating" extrasolar planets. PR Photo 03/05: Near-infrared image of AB Doradus A and its companion (NACO SDI/VLT). A winning combination: A star can be characterised by many parameters. But one is of utmost importance: its mass. It is the mass of a star that will decide its fate. It is thus no surprise that astronomers are keen to obtain a precise measure of this parameter. This is however not an easy task, especially for the least massive ones, those at the border between stars and brown dwarf objects. Brown dwarfs, or "failed stars", are objects which are up to 75 times more massive than Jupiter, too small for major nuclear fusion processes to have ignited in their interiors. To determine the mass of a star, astronomers generally look at the motion of stars in a binary system, and then apply the same method that allows determining the mass of the Earth, knowing the distance to the Moon and the time it takes for our satellite to complete one full orbit (the so-called "Kepler's Third Law"). In the same way, they have also measured the mass of the Sun by knowing the Earth-Sun distance and the time - one year - it takes our planet to make a tour around the Sun. The problem with low-mass objects is that they are very faint and will often be hidden in the glare of the brighter star they orbit, also when viewed
General Relativity and Compact Stars
Glendenning, Norman K.
Compact stars--broadly grouped as neutron stars and white dwarfs--are the ashes of luminous stars. One or the other is the fate that awaits the cores of most stars after a lifetime of tens to thousands of millions of years. Whichever of these objects is formed at the end of the life of a particular luminous star, the compact object will live in many respects unchanged from the state in which it was formed. Neutron stars themselves can take several forms--hyperon, hybrid, or strange quark star. Likewise white dwarfs take different forms, though only in the dominant nuclear species. A black hole is probably the fate of the most massive stars, an inaccessible region of spacetime into which the entire star, ashes and all, falls at the end of the luminous phase. Neutron stars are the smallest, densest stars known. Like all stars, neutron stars rotate--some as many as a few hundred times a second. A star rotating at such a rate will experience an enormous centrifugal force that must be balanced by gravity or else it will be ripped apart. The balance of the two forces informs us of the lower limit on the stellar density. Neutron stars are 10^14 times denser than Earth. Some neutron stars are in binary orbit with a companion. Application of orbital mechanics allows an assessment of masses in some cases. The mass of a neutron star is typically 1.5 solar masses. Their radii can therefore be inferred: about ten kilometers. Into such a small object, the entire mass of our Sun and more is compressed
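The force balance alluded to above gives a quick lower bound on the mean density: requiring surface gravity to exceed the centrifugal acceleration at the equator for a rotation period P yields roughly rho > 3*pi/(G*P^2). The short computation below evaluates this bound for a star spinning 300 times per second; it is a back-of-the-envelope illustration, not a result taken from the book.

```python
# Lower bound on mean density for a star spinning with period P, from requiring
# gravity to exceed centrifugal acceleration at the equator: rho > 3*pi/(G*P**2).
import math

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
P = 1.0 / 300.0               # rotation period for 300 rotations per second
rho_min = 3.0 * math.pi / (G * P ** 2)
print(f"{rho_min:.2e} kg/m^3")   # ~1.3e16 kg/m^3, vastly denser than ordinary matter
```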
Harmonization in preclinical epilepsy research: A joint AES/ILAE translational initiative.
Galanopoulou, Aristea S; French, Jacqueline A; O'Brien, Terence; Simonato, Michele
Among the priority next steps outlined during the first translational epilepsy research workshop in London, United Kingdom (2012), jointly organized by the American Epilepsy Society (AES) and the International League Against Epilepsy (ILAE), are the harmonization of research practices used in preclinical studies and the development of infrastructure that facilitates multicenter preclinical studies. The AES/ILAE Translational Task Force of the ILAE has been pursuing initiatives that advance these goals. In this supplement, we present the first reports of the working groups of the Task Force that aim to improve practices of performing rodent video-electroencephalography (vEEG) studies in experimental controls, generate systematic reviews of preclinical research data, and develop preclinical common data elements (CDEs) for epilepsy research in animals. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
Implementation of T-box/T^-1-box based AES design on latest Xilinx FPGA
Kundi, D.E.; Aziz, A.
This work presents an efficient implementation of the AES (Advanced Encryption Standard), based on a T-box/T^-1-box design, for both encryption and decryption on an FPGA (Field Programmable Gate Array). The proposed architecture not only makes efficient use of the full capacity of the dedicated 32 Kb BRAM (Block RAM) of the latest Xilinx FPGAs (Virtex-5, Virtex-6 and 7 Series) but also saves a considerable amount of BRAM and logic resources by using multiple accesses from a single BRAM in one system clock cycle, as compared to conventional LUT (Look-Up-Table) techniques. The proposed T-box/T^-1-box based AES design for both encryption and decryption fits into just 4 BRAMs on the FPGA and results in a good TPS (Throughput per Slice) with low power consumption. (author)
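A software analogue of the table-lookup idea behind T-box designs is sketched below: each output column of an AES round is obtained from four table lookups XORed with a round-key word, which is the access pattern that maps naturally onto BRAM reads. The 256-entry tables (built from the S-box and MixColumns coefficients) are assumed to be precomputed and are not reproduced here, and the byte-selection pattern imposed by ShiftRows is glossed over, so this illustrates the principle rather than the paper's design.

```python
# Software analogue of a T-table AES round computation: one output column is four
# table lookups XORed with a round-key word. The 256-entry tables T0..T3 (derived
# from the S-box and MixColumns coefficients) are assumed precomputed elsewhere.
def round_column(T0, T1, T2, T3, s0, s1, s2, s3, round_key_word):
    """s0..s3 are the four input state bytes feeding this column (0..255 each)."""
    return T0[s0] ^ T1[s1] ^ T2[s2] ^ T3[s3] ^ round_key_word
```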
A SOPC-BASED Evaluation of AES for 2.4 GHz Wireless Network
Ken, Cai; Xiaoying, Liang
In modern systems, data security is needed more than ever before, and many cryptographic algorithms are utilized for security services. Wireless Sensor Networks (WSN) are an example of such technologies. In this paper an innovative SOPC-based approach for the evaluation of security services in WSN is proposed that addresses the issues of scalability, flexible performance, and silicon efficiency for the hardware acceleration of the encryption system. The design includes a Nios II processor together with custom designed modules for the Advanced Encryption Standard (AES), which has become the default choice for various security services in numerous applications. The objective of this mechanism is to present an efficient hardware realization of AES using a hardware description language (Verilog HDL) and to expand its usability for various applications. As compared to a traditional custom processor design, the mechanism provides a very broad range of cost/performance points.
Modified Redundancy based Technique—a New Approach to Combat Error Propagation Effect of AES
Sarkar, B.; Bhunia, C. T.; Maulik, U.
The Advanced Encryption Standard (AES) is a great research challenge. It was developed to replace the Data Encryption Standard (DES). AES suffers from a major limitation: the error propagation effect. Two methods are available to tackle this limitation: the redundancy-based technique and the bit-based parity technique. The first has the significant advantage over the second of correcting any error deterministically, but at the cost of a higher level of overhead and hence lower processing speed. In this paper, a new approach based on the redundancy-based technique is proposed that would speed up the process of reliable encryption and hence secured communication.
Development of ICP-AES based method for the characterization of high level waste
Seshagiri, T.K.; Thulsidas, S.K.; Adya, V.C.; Kumar, Mithlesh; Radhakrishnan, K.; Mary, G.; Kulkarni, P.G.; Bhalerao, Bharti; Pant, D.K.
An Inductively Coupled Plasma Atomic Emission Spectrometry (ICP-AES) method was developed for the trace metal characterization of high level waste (HLW) solutions of different origin, and the method was validated by analysis of synthetic samples of simulated high level waste (SHLW) solutions from spent fuels of varying composition. In this context, an inter-laboratory comparison exercise (ILCE) was carried out with simulated HLW of different spent fuel types, viz., research reactor (RR), pressurized heavy water reactor (PHWR) and fast breeder reactor (FBR). An overview of the ICP-AES determination of trace metallic constituents in such SHLW solutions is presented. The overall agreement between the various laboratories was good. (author)
AES Encryption Algorithm Optimization Based on 64-bit Processor Android Platform
An algorithm implemented on a mobile phone differs from one implemented on a PC: it must require little storage space and low power consumption. The standard AES S-box design uses a lookup table and has high complexity and high power consumption, so it needs to be optimized for use in mobile phones. In our optimized AES encryption algorithm, the block length is expanded to 256 bits, which increases the security of the algorithm; the lookup table is replaced by the affine transformation applied to the multiplicative inverse, which reduces storage space; and the operation is changed to 16-bit input and 64-bit output by merging the SubWords, ShiftRows, MixColumns and AddRoundKey steps, which improves the operational efficiency of the algorithm. The experimental results show that our algorithm not only greatly enhances the encryption strength but also maintains high computing efficiency.
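To make the "affine transformation based on inversion" concrete, here is a minimal sketch of my own under the standard FIPS-197 definition of the AES S-box (not taken from the paper): each S-box output is computed on the fly from the multiplicative inverse in GF(2^8) followed by a fixed affine map, trading the 256-byte table for a small amount of logic.

```python
# Sketch: computing AES SubBytes without a stored table (standard FIPS-197 maths).
def gf_mul(a, b):
    """GF(2^8) multiplication modulo x^8 + x^4 + x^3 + x + 1 (0x11B)."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def sub_byte(x):
    """Multiplicative inverse (0 -> 0) followed by the AES affine transformation."""
    inv = next((y for y in range(256) if gf_mul(x, y) == 1), 0)
    s = 0
    for i in range(8):
        bit = ((inv >> i) ^ (inv >> ((i + 4) % 8)) ^ (inv >> ((i + 5) % 8))
               ^ (inv >> ((i + 6) % 8)) ^ (inv >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
        s |= bit << i
    return s

# Known check values from the standard S-box table.
assert sub_byte(0x00) == 0x63
assert sub_byte(0x01) == 0x7C
assert sub_byte(0x53) == 0xED
```

On a constrained device the trade-off is 256 bytes of table versus a handful of XOR gates per byte; the paper's merged 16-bit-in/64-bit-out datapath goes further than this sketch, but the underlying algebra is the same.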
AEMnSb2 (AE=Sr, Ba): a new class of Dirac materials
Farhan, M Arshad; Lee, Geunsik; Shim, Ji Hoon
The Dirac fermions of the Sb square net in AEMnSb₂ (AE = Sr, Ba) are investigated by using first-principles calculation. BaMnSb₂ contains Sb square net layers with a coincident stacking of Ba atoms, exhibiting Dirac fermion behavior. On the other hand, SrMnSb₂ has a staggered stacking of Sr atoms with distorted zig-zag chains of Sb atoms. Application of hydrostatic pressure on the latter induces a structural change from a staggered to a coincident arrangement of AE ions, accompanying a transition from an insulator to a metal containing Dirac fermions. The structural investigations show that the stacking type of the cation and the orthorhombic distortion of the Sb layers are the main factors deciding the crystal symmetry of the material. We propose that Dirac fermions can be obtained by controlling the size of the cation and the volume of the AEMnSb₂ compounds. (fast track communication)
Safety, Dosimetry, and Tumor Detection Ability of 68Ga-NOTA-AE105
Skovgaard Lund, Dorthe; Persson, Morten; Brandt-Larsen, Malene
The overexpression of urokinase-type plasminogen activator receptors (uPARs) represents an established biomarker for aggressiveness in most common malignant diseases, including breast cancer (BC), prostate cancer (PC), and urinary bladder cancer (UBC), and is therefore an important target for new ... cancer therapeutic and diagnostic strategies. In this study, uPAR PET imaging using a ⁶⁸Ga-labeled version of the uPAR-targeting peptide (AE105) was investigated in a group of patients with BC, PC, and UBC. The aim of this first-in-human, phase I clinical trial was to investigate the safety ... Conclusion: This first-in-human, phase I clinical trial demonstrates the safe use and clinical potential of ⁶⁸Ga-NOTA-AE105 as a new radioligand for uPAR PET imaging in cancer patients.
Electron and ion beam degradation effects in AES analysis of silicon nitride thin films
Fransen, F.; Vanden Berghe, R.; Vlaeminck, R.; Hinoul, M.; Remmerie, J.; Maes, H.E.
Silicon nitride films are currently investigated by AES combined with ion profiling techniques for their stoichiometry and oxygen content. During this analysis, ion beam and primary electron effects were observed. The effect of argon ion bombardment is the preferential sputtering of nitrogen, forming 'covalent' silicon at the surface layer (AES peak at 91 eV). The electron beam irradiation results in a decrease of the covalent silicon peak, either by an electron beam annealing effect in the bulk of the silicon nitride film, or by an ionization enhanced surface diffusion process of the silicon (electromigration). By the electron beam annealing, nitrogen species are liberated in the bulk of the silicon nitride film and migrate towards the surface where they react with the covalent silicon. The ionization enhanced diffusion originates from local charging of the surface, induced by the electron beam. (author)
ICP-AES analysis of trace elements in serum from animals fed with irradiated food
Huang Zongzhi; Zhou Hongdi
A method of trace element analysis by ICP-AES in serum from animals fed with irradiated food is described. In order to demonstrate that irradiated food is suitable for human consumption, it is necessary to perform animal feeding experiments with such food before use by humans. Trace element analysis of animal serum could provide actual evidence for further human consumption studies. 53 serum samples of rats fed with irradiated food were obtained. After ashing and dissolution, ICP-AES analysis was used to determine 20 trace elements in the specimen solutions. The detection limit is in the range of 10⁻²-10⁻³ ppm for different elements. The recovery of elements is from 70.08% to 98.28%. The relative standard deviation is found to be 0.71% to 11.52%.
Atmospheres of central stars
Hummer, D.G.
The author presents a brief summary of atmospheric models that are of possible relevance to the central stars of planetary nebulae, and then discusses the extent to which these models accord with the observations of both nebulae and central stars. Particular attention is given to the significance of the very high Zanstra temperature implied by the nebular He II λ4686 Å line, and to the discrepancy between the Zanstra He II temperature and the considerably lower temperatures suggested by the appearance of the visual spectrum for some of these objects. (Auth.)
The Drifting Star
By studying in great detail the 'ringing' of a planet-harbouring star, a team of astronomers using ESO's 3.6-m telescope have shown that it must have drifted away from the metal-rich Hyades cluster. This discovery has implications for theories of star and planet formation, and for the dynamics of our Milky Way. [ESO PR Photo 09a/08: Iota Horologii] The yellow-orange star Iota Horologii, located 56 light-years away towards the southern Horologium ("The Clock") constellation, belongs to the so-called "Hyades stream", a large number of stars that move in the same direction. Previously, astronomers using an ESO telescope had shown that the star harbours a planet, more than 2 times as large as Jupiter and orbiting in 320 days (ESO 12/99). But until now, all studies were unable to pinpoint the exact characteristics of the star, and hence to understand its origin. A team of astronomers, led by Sylvie Vauclair from the University of Toulouse, France, therefore decided to use the technique of 'asteroseismology' to unlock the star's secrets. "In the same way as geologists monitor how seismic waves generated by earthquakes propagate through the Earth and learn about the inner structure of our planet, it is possible to study sound waves running through a star, which forms a sort of large, spherical bell," says Vauclair. The 'ringing' from this giant musical instrument provides astronomers with plenty of information about the physical conditions in the star's interior. And to 'listen to the music', the astronomers used one of the best instruments available. The observations were conducted in November 2006 during 8 consecutive nights with the state-of-the-art HARPS spectrograph mounted on the ESO 3.6-m telescope at La Silla. Up to 25 'notes' could be identified in the unique dataset, most of them corresponding to waves having a period of about 6.5 minutes. These observations allowed the astronomers to obtain a very precise portrait of Iota Horologii: its ...
The star of Bethlehem
Hughes, D.W.
It is stated that the cause and form of the star are still uncertain. The astrologically significant triple conjunction of Saturn and Jupiter in the constellation of Pisces appears to be the most likely explanation, although the two comets of March 5 BC and April 4 BC cannot be dismissed, nor can the possibility that the 'star' was simply legendary. The conjunction occurred in 7 BC and there are indications that Jesus Christ was probably born in the Autumn of that year, around October 7 BC. (U.K.)
The formation of stars
Stahler, Steven W
This book is a comprehensive treatment of star formation, one of the most active fields of modern astronomy. The reader is guided through the subject in a logically compelling manner. Starting from a general description of stars and interstellar clouds, the authors delineate the earliest phases of stellar evolution. They discuss formation activity not only in the Milky Way, but also in other galaxies, both now and in the remote past. Theory and observation are thoroughly integrated, with the aid of numerous figures and images. In summary, this volume is an invaluable resource, both as a text f
Chaplygin dark star
Bertolami, O.; Paramos, J.
We study the general properties of a spherically symmetric body described through the generalized Chaplygin equation of state. We conclude that such an object, dubbed a generalized Chaplygin dark star, should exist within the context of the generalized Chaplygin gas (GCG) model of unification of dark energy and dark matter, and derive expressions for its size and expansion velocity. Criteria for the survival of the perturbations in the GCG background that give origin to the dark star are developed, and its main features are analyzed.
An Efficient Side-Channel Protected AES Implementation with Arbitrary Protection Order
Groß, Hannes; Mangard, Stefan; Korak, Thomas
Passive physical attacks, like power analysis, pose a serious threat to the security of digital circuits. In this work, we introduce an efficient side-channel protected Advanced Encryption Standard (AES) hardware design that is completely scalable in terms of protection order. Therefore, we revisit the private circuits scheme of Ishai et al. [13], which is known to be vulnerable to glitches. We demonstrate how to achieve resistance against multivariate higher-order attacks in the presence of gl...
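A minimal sketch of the masking idea such protection orders build on (my own illustration, not the paper's scheme): a secret byte is split into protection-order + 1 random Boolean shares, and the linear layers of the cipher are then computed share-wise; the hard part the paper addresses is computing the nonlinear S-box on shares without glitch-induced leakage.

```python
import secrets
from functools import reduce

def mask(x, order):
    """Split byte x into order+1 Boolean shares whose XOR equals x."""
    shares = [secrets.randbelow(256) for _ in range(order)]
    last = x
    for s in shares:
        last ^= s
    return shares + [last]

def unmask(shares):
    return reduce(lambda a, b: a ^ b, shares, 0)

def masked_xor(xs, ys):
    """Linear operations (e.g. AddRoundKey) act independently on each share."""
    return [a ^ b for a, b in zip(xs, ys)]

shares = mask(0xA7, order=2)            # 2nd-order protection -> 3 shares
assert unmask(shares) == 0xA7
assert unmask(masked_xor(mask(0x3C, 2), mask(0x55, 2))) == (0x3C ^ 0x55)
```

Because any subset of at most `order` shares is uniformly random, observing that many intermediate values reveals nothing about the secret; the cost is that every wire and gate of the circuit is replicated per share.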
AE monitoring instrumentation for high performance superconducting dipoles and quadrupoles, Phase 2
Iwasa, Y.
In the past year and a half, attention has been focused on the development of instrumentation for on-line monitoring of high-performance superconducting dipoles and quadrupoles. This instrumentation has been completed and satisfactorily demonstrated on a prototype Fermi dipole. Conductor motion is the principal source of acoustic emission (AE) and the major cause of quenches in the dipole, except during the virgin run when other sources are also present. The motion events are mostly microslips. The middle of the magnet is most susceptible to quenches. This result agrees with the peak field location in the magnet. In the virgin state the top and bottom of the magnet appeared acoustically similar but diverged after training, possibly due to minute structural asymmetry, for example differences in clamping and welding strength; however, the results do not indicate any major structural defects. There is good correlation between quench current and AE starting current. The correlation is reasonable if mechanical disturbances are indeed responsible for quenching. Based on the cumulative AE history, the average frictional power dissipation in the whole dipole winding is estimated to be approximately 10 μW cm⁻³. We expect to implement the following in the next phase of this project: application of room-temperature techniques to detecting structural defects in the dipole; application of the system to other dipoles and quadrupoles in the same series to compare their performances; and further investigation of the relationship between AE starting current and quench current. Work has begun on the room-temperature measurements. Preliminary Stress Wave Factor measurements have been made on a model dipole casing.
Spectral interference of zirconium on 24 analyte elements using CCD based ICP-AES technique
Adya, V.C.; Sengupta, Arijit; Godbole, S.V.
In the present studies, the spectral interference of zirconium on different analytical lines of 24 critical analytes using CCD based ICP-AES technique is described. Suitable analytical lines for zirconium were identified along with their detection limits. The sensitivity and the detection limits of analytical channels for different elements in presence of Zr matrix were calculated. Subsequently analytical lines with least interference from Zr and better detection limits were selected for their determinations. (author)
ECONOMIC MODEL FOR EVALUATION OF INTEGRAL COMPETITIVENESS OF AUTOMOTIVE ENTERPRISES (AE)
A. F. Zubritsky
The paper considers problems pertaining to the evaluation of the competitiveness of automotive enterprises in the field of international consignments. An economic model for determination of the integral AE competitiveness is proposed, which permits the substitution of expert estimation of some factors by their calculation on the basis of data on enterprise activity in the international consignment market.
AE$\\mathrm{\\bar{g}}$IS Experiment Positron Accumulator: Optimization and Control
Moura, Joao Pedro
The present document describes my 8-week work project at the AE$\\mathrm{\\bar{g}}$IS experiment. The project can be divided into three main tasks: 1. Theoretical preparation; 2. Support at the experiment; and 3. Control of the ES075-2 Power Supply. A description of these tasks is presented. Special emphasis is put on the third task and further developments are proposed.
A Novel Image Cryptosystem Based on S-AES and Chaotic Map
Bai Lan
This paper proposes a novel scheme based on the simplified Advanced Encryption Standard (S-AES) for image encryption. A modified Arnold map is applied as the diffusion technique for the image, and the encryption key and dynamic S-box are generated by a PWLCM (piecewise linear chaotic map). The goal is to balance the rapidity and security of encryption. An experimental implementation has been done. This lightweight encryption scheme shows resistance against chosen-plaintext attack and is suitable for sensor networks and the IoT.
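For readers unfamiliar with the two chaotic ingredients named above, here is a minimal sketch under my own assumptions (the paper's exact parameterization is not given in the abstract): a PWLCM iteration that can be quantized into key/S-box material, and the Arnold cat map used to permute pixel coordinates.

```python
def pwlcm(x, p):
    """One iteration of the piecewise linear chaotic map; control 0 < p < 0.5, state 0 <= x <= 1."""
    if x < p:
        return x / p
    if x <= 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)            # the map is symmetric about x = 0.5

def chaotic_bytes(x0, p, n):
    """Toy quantization of the orbit into n pseudo-random bytes (key / S-box material)."""
    out, x = [], x0
    for _ in range(n):
        x = pwlcm(x, p)
        out.append(int(x * 255.999) & 0xFF)
    return out

def arnold(x, y, n):
    """Arnold cat map on an n x n image: shuffles pixel positions (permutation step)."""
    return (x + y) % n, (x + 2 * y) % n

stream = chaotic_bytes(x0=0.37, p=0.23, n=16)   # illustrative seed values only
print(stream)
print(arnold(10, 20, n=256))                    # -> (30, 50)
```

The cat map is area-preserving and invertible, so the receiver can undo the permutation; the PWLCM orbit is reproducible by anyone who knows the initial value and control parameter, which is what allows it to serve as key material.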
Study of determination of microelements in Chinese herbal medicine by AES
Wei Jiuning
An AES method has been proposed for microelement analysis in Chinese herbal medicine, and the pretreatment of samples is discussed in detail. The method is proved accurate by analyzing peach leaves at the level of the national standard substance and by comparing results obtained using different methods: the data obtained are accurate and reliable, and the method can be used for the determination of 10 kinds of microelements in Chinese herbal medicine.
NDT in inspection-repairing of boiler AE-101 of fertilizers Plant-Talara-Peru
Torres, F.
Experience gained from the use of NDT in the inspection and repair of boiler AE-101 of the Talara fertilizer plant is presented, following failures caused by pipe breakage. By applying liquid penetrant testing and visual inspection it was possible to determine the seriousness of the damage; the use of metallographic replicas, ultrasonic measurements and hardness testing gave an idea of the actual condition of the equipment, allowing safe operation.
Biomimetic Syntheses of Callistrilones A-E via an Oxidative [3 + 2] Cycloaddition.
Guo, Yonghong; Zhang, Yuhan; Xiao, Mingxing; Xie, Zhixiang
Concise total syntheses of callistrilones A-E have been achieved from 7 and commercially available α-phellandrene (8). The synthetic strategy, which was primarily inspired by the biogenetic hypothesis, was enabled by an oxidative [3 + 2] cycloaddition followed by a Michael addition and an intramolecular nucleophilic addition to construct the target molecules. Moreover, viminalin I was also synthesized, and its absolute configuration was unambiguously confirmed.
Experience of a Brazilian A/E in seismic analysis of nuclear structures components
Venancio Filho, F.; Leal, M.R.L.V.; Bevilacqua, L.
The experience of Promon Engenharia S.A., a Brazilian A/E which participated in the civil and mechanical engineering projects of the first Nuclear Power Plant in Brazil, is presented. In these projects the aspects of input for seismic analysis, seismic analysis of nuclear structures founded on piles, dynamic analysis for airplane crash, and piping analysis had to be faced for the first time in the country. The solution of these problems and some case examples are presented. (Author)
Integrated Design for Marketing and Manufacturing team: An examination of LA-ICP-AES in a mobile configuration. Final report
The Department of Energy (DOE) has identified the need for field-deployable elemental analysis devices that are safer, faster, and less expensive than the fixed laboratory procedures now used to screen hazardous waste sites. As a response to this need, the Technology Integration Program (TIP) created a mobile, field-deployable laser ablation-inductively coupled plasma-atomic emission spectrometry (LA-ICP-AES) sampling and analysis prototype. Although the elemental screening prototype has been successfully field-tested, continued marketing and technical development efforts are required to transfer LA-ICP-AES technology to the commercial sector. TIP established and supported a student research and design group called the Integrated Design for Marketing and Manufacturing (IDMM) team to advance the technology transfer of mobile, field-deployable LA-ICP-AES. The IDMM team developed a conceptual design (which is detailed in this report) for a mobile, field-deployable LA-ICP-AES sampling and analysis system, and reports the following findings: mobile, field-deployable LA-ICP-AES is commercially viable; eventual regulatory acceptance of field-deployable LA-ICP-AES, while not a simple process, is likely; and further refinement of certain processes and components of LA-ICP-AES will enhance the device's sensitivity and accuracy.
An AES chip with DPA resistance using hardware-based random order execution
Yu Bo; Li Xiangyu; Chen Cong; Sun Yihe; Wu Liji; Zhang Xiangmin
This paper presents an AES (Advanced Encryption Standard) chip that combats differential power analysis (DPA) side-channel attacks through hardware-based random order execution. Both the decryption and encryption procedures of AES are implemented on the chip. A fine-grained dataflow architecture is proposed, which dynamically exploits intrinsic byte-level independence in the algorithm. A novel circuit called an HMF (Hold-Match-Fetch) unit is proposed for random control, which randomly sets execution orders for concurrent operations. The AES chip was manufactured in SMIC 0.18 μm technology. The average energy for encrypting one group of plaintexts (with 128-bit secret keys) is 19 nJ. The core area is 0.43 mm². A sophisticated experimental setup was built to test the DPA resistance. Measurement-based experimental results show that not even one byte of the secret key could be disclosed from our chip under random mode after 64000 power traces were used in the DPA attack. Compared with the corresponding fixed order execution, the hardware-based random order execution improves DPA resistance by at least 21 times. (semiconductor integrated circuits)
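A software analogue of the random order execution idea (illustrative only; the chip realizes this in hardware with its HMF unit): because the 16 byte substitutions of an AES round are mutually independent, they can be executed in a freshly randomized order each time, so a given byte no longer occurs at a fixed point in the power trace, which is the alignment that first-order DPA relies on.

```python
import secrets

def subbytes_random_order(state, sbox):
    """Apply the S-box to all 16 state bytes in a randomly shuffled order.
    state: list of 16 ints (0..255); sbox: 256-entry substitution table."""
    order = list(range(16))
    for i in range(15, 0, -1):                 # Fisher-Yates shuffle with a secure RNG
        j = secrets.randbelow(i + 1)
        order[i], order[j] = order[j], order[i]
    out = list(state)
    for idx in order:                          # same result, randomized execution order
        out[idx] = sbox[state[idx]]
    return out

# Demo with an identity "S-box" stand-in (substitute the real AES table in practice).
identity = list(range(256))
assert subbytes_random_order(list(range(16)), identity) == list(range(16))
```

The functional result is independent of the order; only the timing of each byte's leakage is randomized, which is why such countermeasures multiply the number of traces an attacker needs rather than eliminating the leakage outright.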
Simultaneous multielement analysis of zirconium alloys by chlorination separation of matrix/ICP-AES
Kato, Kaneharu
An analytical method combining chlorination separation of the matrix with ICP-AES has been developed for reactor-grade Zr alloys (Zircaloy-2). A sample (1 g) is placed in a Pt boat and chlorinated with HCl gas at 100 ml/min in a glass reaction tube at ca. 330 °C. The matrix Zr of the sample is volatilized and separated as ZrCl₄. The analyte elements, which remain quantitatively in the chlorination residue, are dissolved in a mixture of mineral acids (3 ml of 6 M HCl + 0.5 ml of conc. HNO₃ + 0.2 ml of conc. H₂SO₄) and diluted to 20 ml with distilled water after filtration. ICP-AES was used for simultaneous multielement determination using a calibration curve method. The present method has the following advantages: a simple sample preparation procedure; applicability to any form of sample for multielement determination; and a simple ICP-AES calibration procedure. This method was successfully applied to the determination of Fe, Ni, Cu, Co, Mn and Pb in the Zr alloys of JAERI CRMs and NBS SRMs. (author)
Trigermanides AEGe₃ (AE = Ca, Sr, Ba). Chemical bonding and superconductivity
Castillo, Rodrigo; Schnelle, Walter; Baranov, Alexey I.; Burkhardt, Ulrich; Bobnar, Matej; Cardoso-Gil, Raul; Schwarz, Ulrich; Grin, Yuri [Max-Planck-Institut fuer Chemische Physik Fester Stoffe, Dresden (Germany)
The crystal structures of the trigermanides AEGe₃ (tI32) (AE = Ca, Sr, Ba; space group I4/mmm; for SrGe₃: a = 7.7873(1) Å, c = 12.0622(3) Å) comprise Ge₂ dumbbells forming layered Ge substructures which enclose embedded AE atoms. The chemical bonding analysis by application of the electron localizability approach reveals a substantial charge transfer from the AE atoms to the germanium substructure. The bonding within the dumbbells is of the covalent two-center type. A detailed analysis of SrGe₃ reveals that the interaction on the bond-opposite side of the Ge₂ groups is not lone-pair-like - as would be expected from the Zintl-like interpretation of the crystal structure with anionic Ge layers separated by alkaline-earth cations - but multi-center and strongly polar between the Ge₂ dumbbells and the adjacent metal atoms. Similar atomic interactions are present in CaGe₃ and BaGe₃. The variation of the alkaline-earth metal has a merely insignificant influence on the superconducting transition temperatures of the s,p-electron compounds AEGe₃.
A Study on the Fracture Behavior of Composite Laminated T-Joints Using AE
Kim, J. H.; Sa, J. W.; Park, B. J.; Ahn, B. W.
Quasi-static tests such as monotonic tension and loading/unloading tension were performed to investigate the bond characteristics and the failure processes for T-joint specimens made from fiber/epoxy composite material. Two types of specimens, each consisting of two components (skin and frame), were manufactured by co-curing and by secondary bonding. During the monotonic tension test, an AE instrument was used to detect AE signals at the initial and middle stages of damage propagation. The damage initiation and progression were monitored optically using a CCD (charge-coupled device) camera, and the internal crack front profile was examined using ultrasonic C-scan. The results indicate that the loads corresponding to the abrupt increase of the AE signal are within an error range of 5 percent compared to the loads shown in the load-time curve. It is also shown that crack initiation occurred in the noodle region for both the co-cured and the secondarily bonded specimens. The final failure occurred in the noodle region for the co-cured specimen, but at the skin/frame termination point for the secondarily bonded specimen. Based on the results, it was found that the two kinds of specimen show different failure modes depending on the manufacturing method.
AE analysis of delamination crack propagation in carbon fiber-reinforced polymer materials
Yoon, Sang Jae; Arakawa, Kazuo [Kyushu University, kasuga (Japan); Chen, Dingding [National University of Defense Technology, Changsha (China); Han, Seung Wook; Choi, Nak Sam [Hanyang University, Seoul (Korea, Republic of)
Delamination fracture behavior was investigated using acoustic emission (AE) analysis on carbon fiber-reinforced polymer (CFRP) samples manufactured using vacuum-assisted resin transfer molding (VARTM). The CFRP plate was fabricated using unidirectional carbon fiber fabric with a lay-up of six plies [+30/-30]₆, and a Teflon film was inserted as a starter crack. Test pieces were sectioned from the inlet and vent of the mold, and packed between two rectangular epoxy plates for loading with a universal testing machine. The AE signals were monitored during tensile loading using two sensors. The average tensile load of the inlet specimens was slightly larger than that of the vent specimens; however, the data exhibited significant scattering due to non-uniform resin distribution, and there was no statistically significant difference between the strengths of the samples sectioned from the inlet or outlet of the mold. Each of the specimens exhibited similar AE characteristics, regardless of whether they were from the inlet or vent of the mold. Four kinds of damage mechanism were observed: micro-cracking, fiber-resin matrix debonding, fiber pull-out, and fiber failure; and three stages of the crack propagation process were identified.
MODIFIED AES WITH RANDOM S BOX GENERATION TO OVERCOME THE SIDE CHANNEL ASSAULTS USING CLOUD
M. Navaneetha Krishnan
Development of any communication system with secure and complex cryptographic algorithms depends highly on the concepts of data security, which is crucial in the current technological world. The security and complexity of cryptographic algorithms need to be increased by randomization of secret keys. To overcome the issues associated with data security, and to improve it during the encryption and decryption process on the encrypting device, a novel Secure Side Channel Assault Prevention (SSCAP) approach is proposed which eliminates the outflow of side-channel information and also provides effective security for the encrypting device. An effective Enriched AES (E-AES) encryption algorithm is proposed to reduce side-channel attacks; the modified algorithm in this research shows its improvement in the Generation of Random Multiple S-Boxes (GRM S-Box), which makes it hard for attackers to break the encrypted text. Our novel SSCAP approach also improves security over the original information; it widely minimizes the leakage of side-channel information. Attackers cannot easily get a clue about the proposed S-box generation technique. Our E-AES algorithm will be implemented in a cloud environment, thereby improving cloud security. The proposed SSCAP approach is judged against existing security-based algorithms on the scale of encryption and decryption time, time taken for generating the key, and performance. The proposed work proves to outperform all other methods used in the past.
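One common way to realize a key-dependent "random" S-box, given here only as a hedged sketch under my own assumptions (the abstract does not specify the paper's GRM S-box construction): seed a deterministic generator from the shared key and Fisher-Yates-shuffle the values 0..255, so both ends regenerate the same bijection while an outsider cannot tabulate it in advance.

```python
import hashlib
import random

def keyed_sbox(key: bytes, label: bytes = b"sbox-0") -> list:
    """Derive a bijective 8-bit S-box from a secret key (sketch only; a production
    design would use a proper cryptographic PRG rather than random.Random)."""
    seed = hashlib.sha256(key + label).digest()
    rng = random.Random(seed)                 # deterministic, key-dependent
    box = list(range(256))
    rng.shuffle(box)                          # Fisher-Yates shuffle -> a permutation
    return box

sbox = keyed_sbox(b"shared secret key")       # hypothetical key for illustration
inverse = [0] * 256
for i, v in enumerate(sbox):
    inverse[v] = i                            # inverse table needed for decryption
assert all(inverse[sbox[i]] == i for i in range(256))
```

Note that a randomly chosen S-box is not automatically as strong as the carefully designed AES S-box (its differential and linear properties are whatever the shuffle happens to produce), so schemes of this kind typically lean on the secrecy of the box rather than on its algebraic optimality.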
Variations of thermospheric composition according to AE-C data and CTIP modelling
H. Rishbeth
Data from the Atmospheric Explorer C satellite, taken at middle and low latitudes in 1975-1978, are used to study latitudinal and month-by-month variations of thermospheric composition. The parameter used is the "compositional Ρ-parameter", related to the neutral atomic oxygen/molecular nitrogen concentration ratio. The mid-latitude data show strong winter maxima of the atomic/molecular ratio, which account for the "seasonal anomaly" of the ionospheric F2-layer. When the AE-C data are compared with the empirical MSIS model and the computational CTIP ionosphere-thermosphere model, broadly similar features are found, but the AE-C data give a more molecular thermosphere than do the models, especially CTIP. In particular, CTIP badly overestimates the winter/summer change of composition, more so in the south than in the north. The semiannual variations at the equator and in southern latitudes, shown by CTIP and MSIS, appear more weakly in the AE-C data. Magnetic activity produces a more molecular thermosphere at high latitudes, and at mid-latitudes in summer. Key words: Atmospheric composition and structure (thermosphere - composition and chemistry)
On the solar wind - magnetosphere - ionosphere coupling: AMPTE/CCE particle data and the AE indices
Daglis, I.A.; Wilken, B.; Sarris, E.T.; Kremser, G.
We present a statistical study of substorm particle energization in terms of the energy density of the major magnetospheric ions (H⁺, O⁺, He⁺⁺, He⁺). The correlation between the energy density during the substorm expansion phase and the auroral indices (AE, AU, AL) is examined and interpreted. The most distinct result is that the ionospheric-origin O⁺ energy density correlates remarkably well with the AE index, while the solar-wind-origin He⁺⁺ energy density does not correlate at all with AE. The mixed-origin H⁺ and He⁺ ions exhibit an intermediate behavior. Furthermore, the O⁺ energy density correlates very well with the pre-onset AU index level, while there is no correlation with the pre-onset AL index. The results are interpreted as a consequence of solar wind - magnetosphere - ionosphere coupling through the internal magnetospheric dynamo: the ionosphere responds to the increased activity of the internal dynamo (which is due to the high solar wind input) and influences substorm dynamics by feeding the near-Earth magnetotail with energetic ionospheric ions during the late growth phase and the expansion phase.
ENERGY STAR Certified Commercial Dishwashers
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 2.0 ENERGY STAR Program Requirements for Commercial Dishwashers that are effective as of...
ENERGY STAR Certified Commercial Ovens
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 2.2 ENERGY STAR Program Requirements for Commercial Ovens that are effective as of...
Star Formation in Irregular Galaxies.
Hunter, Deidre; Wolff, Sidney
Examines mechanisms of how stars are formed in irregular galaxies. Formation in giant irregular galaxies, formation in dwarf irregular galaxies, and comparisons with larger star-forming regions found in spiral galaxies are considered separately. (JN)
ENERGY STAR Certified Commercial Boilers
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 1.0 ENERGY STAR Program Requirements for Commercial Boilers that are effective as of...
Photometry of faint blue stars
Kilkenny, D.; Hill, P.W.; Brown, A.
Photometry on the uvby system is given for 61 faint blue stars. The stars are classified by means of the Stromgren indices, using criteria described in a previous paper (Kilkenny and Hill (1975)). (author)
ENERGY STAR Certified Commercial Griddles
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 1.2 ENERGY STAR Program Requirements for Commercial Griddles that are effective as of May...
ENERGY STAR Certified Smart Thermostats
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 1.0 ENERGY STAR Program Requirements for Connected Thermostats that are effective as of...
ENERGY STAR Certified Residential Dishwashers
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 6.0 ENERGY STAR Program Requirements for Residential Dishwashers that are effective as of...
ENERGY STAR Certified Roof Products
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 3.0 ENERGY STAR Program Requirements for Roof Products that are effective as of July 1,...
ENERGY STAR Certified Pool Pumps
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 1.1 ENERGY STAR Program Requirements for Pool Pumps that are effective as of February 15,...
Understand B-type stars
When observations of B stars made from space are added to observations made from the ground and the total body of observational information is confronted with theoretical expectations about B stars, it is clear that nonthermal phenomena occur in the atmospheres of B stars. The nature of these phenomena and what they imply about the physical state of a B star and how a B star evolves are examined using knowledge of the spectrum of a B star as a key to obtaining an understanding of what a B star is like. Three approaches to modeling stellar structure (atmospheres) are considered, the characteristic properties of a mantle, and B stars and evolution are discussed.
ENERGY STAR Certified Imaging Equipment
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 2.0 ENERGY STAR Program Requirements for Imaging Equipment that are effective as of...
ENERGY STAR Certified Vending Machines
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 3.1 ENERGY STAR Program Requirements for Refrigerated Beverage Vending Machines that are...
ENERGY STAR Certified Water Coolers
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 2.0 ENERGY STAR Program Requirements for Water Coolers that are effective as of February...
ENERGY STAR Certified Audio Video
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 3.0 ENERGY STAR Program Requirements for Audio Video Equipment that are effective as of...
ENERGY STAR Certified Ceiling Fans
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 3.1 ENERGY STAR Program Requirements for Ceiling Fans that are effective as of April 1,...
ENERGY STAR Certified Ventilating Fans
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 4.0 ENERGY STAR Program Requirements for Ventilating Fans that are effective as of...
ENERGY STAR Certified Commercial Fryers
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 3.0 ENERGY STAR Program Requirements for Commercial Fryers that are effective as of...
Lithium in the barium stars
Pinsonneault, M.H.; Sneden, C.
New high-resolution spectra of the lithium resonance doublet have provided lithium abundances or upper limits for 26 classical and mild barium stars. The lithium lines are always present in the classical barium stars. Lithium abundances in these stars obey a trend with stellar mass consistent with that previously derived for ordinary K giants. This supports the notion that classical barium stars are post-core-He-flash or core-He-burning stars. Lithium contents in the mild barium stars, however, are often much smaller than those of the classical barium stars; sometimes only upper limits can be determined. The cause for this difference is not easily understood, but may be related to more extensive mass loss by the mild barium stars. 45 references
Which of Kepler's Stars Flare?
Kohler, Susanna
The habitability of distant exoplanets is dependent upon many factors, one of which is the activity of their host stars. To learn about which stars are most likely to flare, a recent study examines tens of thousands of stellar flares observed by Kepler. Need for a Broader Sample. [Artist's rendering of a flaring dwarf star. NASA's Goddard Space Flight Center/S. Wiessinger] Most of our understanding of what causes a star to flare is based on observations of the only star near enough to examine in detail: the Sun. But in learning from a sample size of one, a challenge arises: we must determine which conclusions are unique to the Sun (or Sun-like stars), and which apply to other stellar types as well. Based on observations and modeling, astronomers think that stellar flares result from the reconnection of magnetic field lines in a star's outer atmosphere, the corona. The magnetic activity is thought to be driven by a dynamo caused by motions in the star's convective zone. [HR diagram of the Kepler stars, with flaring main-sequence (yellow), giant (red) and A-star (green) stars in the authors' sample indicated. Van Doorsselaere et al. 2017] To test whether these ideas hold generally, we need to understand what types of stars exhibit flares, and what stellar properties correlate with flaring activity. A team of scientists led by Tom Van Doorsselaere (KU Leuven, Belgium) has now used an enormous sample of flares observed by Kepler to explore these statistics. Intriguing Trends. Van Doorsselaere and collaborators used a new automated flare detection and characterization algorithm to search through the raw light curves from Quarter 15 of the Kepler mission, building a sample of 16,850 flares on 6,662 stars. They then used these to study the dependence of the flare occurrence rate, duration, energy, and amplitude on the stellar spectral type and rotation period. This large statistical study led the authors to several interesting conclusions, including: flare star incidence rate as a ...
ENERGY STAR Certified Residential Freezers
U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 5.0 ENERGY STAR Program Requirements for Residential Refrigerators and Freezers that are...
ENERGY STAR Certified Residential Refrigerators
A method for reduction of Acoustic Emission (AE) data with application in machine failure detection and diagnosis
Vicuña, Cristián Molina; Höweler, Christoph
The use of AE in machine failure diagnosis has increased over recent years. Most AE-based failure diagnosis strategies use digital signal processing and thus require the sampling of AE signals. High sampling rates are required for this purpose (e.g. 2 MHz or higher), leading to streams of large amounts of data. This situation is aggravated if fine resolution and/or multiple sensors are required. These facts combine to produce bulky data, typically in the range of gigabytes, for which sufficient storage space and efficient signal processing algorithms are required. This situation probably explains why, in practice, AE-based methods consist mostly of the calculation of scalar quantities such as RMS and kurtosis and the analysis of their evolution in time. While the scalar-based approach offers the advantage of maximum data reduction, it has the disadvantage that most of the information contained in the raw AE signal is lost irrecoverably. This work presents a method offering large data reduction while keeping the most important information conveyed by the raw AE signal, useful for failure detection and diagnosis. The proposed method consists of the construction of a synthetic, unevenly sampled signal which envelopes the AE bursts present in the raw AE signal in a triangular shape. The constructed signal - which we call the TriSignal - also permits the estimation of most scalar quantities typically used for failure detection. But more importantly, it contains the information on the time of occurrence of the bursts, which is key for failure diagnosis. The Lomb-Scargle normalized periodogram is used to construct the TriSignal spectrum, which reveals the frequency content of the TriSignal and provides the same information as the classic AE envelope. The paper includes application examples for a planetary gearbox and a low-speed rolling element bearing.
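A rough sketch of the idea (my own toy reconstruction with made-up signal parameters, not the authors' code): detect bursts by thresholding, keep only a few (time, amplitude) points per burst so the record shrinks by orders of magnitude, and feed those unevenly spaced points to the Lomb-Scargle periodogram, which unlike the FFT does not require uniform sampling.

```python
import numpy as np
from scipy.signal import lombscargle

def trisignal_points(t, x, threshold):
    """Crude TriSignal-style reduction: for each contiguous run of |x| above the
    threshold keep (start, 0), (peak time, peak amplitude) and (end, 0)."""
    above = np.abs(x) > threshold
    pts_t, pts_a, start = [], [], None
    for i, flag in enumerate(np.append(above, False)):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            k = start + int(np.argmax(np.abs(x[start:i])))
            pts_t += [t[start], t[k], t[i - 1]]
            pts_a += [0.0, float(np.abs(x[k])), 0.0]
            start = None
    return np.array(pts_t), np.array(pts_a)

# Toy AE record: decaying 150 kHz bursts repeating at 7 Hz, sampled at 1 MHz.
rng = np.random.default_rng(0)
fs, dur = 1e6, 2.0
t = np.arange(0.0, dur, 1.0 / fs)
x = 0.02 * rng.standard_normal(t.size)
for t0 in np.arange(0.05, dur, 1.0 / 7.0):
    m = (t > t0) & (t < t0 + 2e-3)
    x[m] += np.exp(-(t[m] - t0) / 5e-4) * np.sin(2 * np.pi * 1.5e5 * (t[m] - t0))

bt, ba = trisignal_points(t, x, threshold=0.15)
print("reduction: %d raw samples -> %d TriSignal points" % (t.size, bt.size))

# Lomb-Scargle over the unevenly sampled points (freqs are angular frequencies);
# peaks near the burst repetition rate and its harmonics would flag periodic impacts.
freqs = 2 * np.pi * np.linspace(0.5, 30.0, 2000)
pgram = lombscargle(bt, ba - ba.mean(), freqs)
```

The point of the reduction is that the millions of raw samples collapse to a few dozen points per second while the burst timing, which carries the diagnostic information, is preserved.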
Distances of Dwarf Carbon Stars
Harris, Hugh C.; Dahn, Conard C.; Subasavage, John P.; Munn, Jeffrey A.; Canzian, Blaise J.; Levine, Stephen E.; Monet, Alice B.; Pier, Jeffrey R.; Stone, Ronald C.; Tilleman, Trudy M.; Hartkopf, William I.
Parallaxes are presented for a sample of 20 nearby dwarf carbon stars. The inferred luminosities cover almost two orders of magnitude. Their absolute magnitudes and tangential velocities confirm prior expectations that some originate in the Galactic disk, although more than half of this sample are halo stars. Three stars are found to be astrometric binaries, and orbital elements are determined; their semimajor axes are 1–3 au, consistent with the size of an AGB mass-transfer donor star.
RADIAL STABILITY IN STRATIFIED STARS
Pereira, Jonas P.; Rueda, Jorge A.
We formulate within a generalized distributional approach the treatment of the stability against radial perturbations for both neutral and charged stratified stars in Newtonian and Einstein's gravity. We obtain from this approach the boundary conditions connecting any two phases within a star and underline its relevance for realistic models of compact stars with phase transitions, owing to the modification of the star's set of eigenmodes with respect to the continuous case
Low Cost Design of an Advanced Encryption Standard (AES) Processor Using a New Common-Subexpression-Elimination Algorithm
Chen, Ming-Chih; Hsiao, Shen-Fu
In this paper, we propose an area-efficient design of an Advanced Encryption Standard (AES) processor by applying a new common-subexpression-elimination (CSE) method to the sub-functions of the various transformations required in AES. The proposed method reduces the area cost of realizing the sub-functions by extracting the common factors in the bit-level XOR/AND-based sum-of-product expressions of these sub-functions using a new CSE algorithm. Cell-based implementation results show that the AES processor with our proposed CSE method has a significant area improvement compared with previous designs.
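To illustrate the kind of optimization being described (a generic greedy sketch of my own; the paper's actual CSE algorithm is not reproduced in the abstract): when several output bits are XORs of input terms, repeatedly extracting the pair of terms shared by the most outputs into a new intermediate signal reduces the total XOR count.

```python
from collections import Counter
from itertools import combinations

def greedy_cse(outputs):
    """outputs: dict mapping an output name to the set of terms it XORs together.
    Returns (definitions of new intermediate signals, rewritten outputs)."""
    exprs = {k: set(v) for k, v in outputs.items()}
    new_defs, n = {}, 0
    while True:
        pair_counts = Counter()
        for terms in exprs.values():
            for pair in combinations(sorted(terms), 2):
                pair_counts[pair] += 1
        if not pair_counts:
            break
        best, hits = pair_counts.most_common(1)[0]
        if hits < 2:                      # nothing is shared any more -> stop
            break
        name = "t%d" % n
        n += 1
        new_defs[name] = set(best)
        for terms in exprs.values():
            if set(best) <= terms:        # a ^ b ^ ... = t ^ ...  with t = a ^ b
                terms -= set(best)
                terms.add(name)
    return new_defs, exprs

# Toy example: three output bits all containing the sub-expression x0 ^ x1.
defs, reduced = greedy_cse({
    "y0": {"x0", "x1", "x2"},
    "y1": {"x0", "x1", "x3"},
    "y2": {"x0", "x1", "x2", "x3"},
})
print(defs)      # x0 ^ x1 (and any further shared pair) is computed once and reused
print(reduced)   # outputs rewritten in terms of the shared intermediate signals
```

In this toy case the naive forms need seven 2-input XORs while the factored forms need four; the same kind of bookkeeping applied to the bit-level equations of the AES transformations is what produces the reported area savings.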
Epistatic roles of E2 glycoprotein mutations in adaption of chikungunya virus to Aedes albopictus and Ae. aegypti mosquitoes.
Konstantin A Tsetsarkin
Between 2005 and 2007 Chikungunya virus (CHIKV) caused its largest outbreak/epidemic in documented history. An unusual feature of this epidemic is the involvement of Ae. albopictus as a principal vector. Previously we have demonstrated that a single mutation, E1-A226V, significantly changed the ability of the virus to infect and be transmitted by this vector when expressed in the background of the well characterized CHIKV strains LR2006 OPY1 and 37997. However, in the current study we demonstrate that introduction of the E1-A226V mutation into the background of an infectious clone derived from the Ag41855 strain (isolated in Uganda in 1982) does not significantly increase infectivity for Ae. albopictus. In order to elucidate the genetic determinants that affect CHIKV sensitivity to the E1-A226V mutation in Ae. albopictus, the genomes of the LR2006 OPY1 and Ag41855 strains were used for construction of chimeric viruses and viruses with specific combinations of point mutations at selected positions. Based upon the midgut infection rates of the derived viruses in Ae. albopictus and Ae. aegypti mosquitoes, a critical role of the mutations at positions E2-60 and E2-211 in vector infection was revealed. The E2-G60D mutation was an important determinant of CHIKV infectivity for both Ae. albopictus and Ae. aegypti, but only moderately modulated the effect of the E1-A226V mutation in Ae. albopictus. However, the effect of the E2-I211T mutation with respect to mosquito infection was much more specific, strongly modifying the effect of the E1-A226V mutation in Ae. albopictus. In contrast, CHIKV infectivity for Ae. aegypti was not influenced by the E2-I211T mutation. The occurrence of the E2-60G and E2-211I residues among CHIKV isolates was analyzed, revealing a high prevalence of E2-211I among strains belonging to the Eastern/Central/South African (ECSA) clade. This suggests that E2-211I might be important for adaptation of CHIKV to particular conditions.
Measuring the BNF of Soybean Using 15N-Labelled Urea with Different Atom Excess (A.E.) Content
A. Citraresmini
The soybean is a legume which has the ability to supply its major nitrogen needs through the biological nitrogen fixation (BNF) process. This process is made possible by nodules formed on its roots, colonized by Rhizobium sp. bacteria. An accurate estimation of the N gained by BNF is necessary to predict the increase or decrease of chemical fertilizer-N requirements to increase soybean production. Among several methods, the ¹⁵N method was used to estimate the ability of legumes to perform BNF. The study involved soybean var. Willis (W) and a completely non-BNF soybean var. CV, which is termed the standard crop. The standard crop is a non-nodulated soybean, but it has the same main physiological traits as var. Willis. The aim of this study was to determine whether ¹⁵N-labelled fertilizer with different %a.e. given to nodulated and non-nodulated soybean would affect the calculation of the N-BNF of W. The treatments applied were different rates of urea (20 kg N/ha and 100 kg N/ha) combined with different atom excess percentages (%a.e.) of ¹⁵N (2% and 10%). Thus, the treatment combinations were as follows: (1) W-ll (20 kg N, 2% a.e.); (2) CV-hl (100 kg N, 2% a.e.); (3) W-lh (20 kg N, 10% a.e.); (4) CV-hh (100 kg N, 10% a.e.); (5) CV-ll (20 kg N, 2% a.e.); (6) W-hl (100 kg N, 2% a.e.); (7) CV-lh (20 kg N, 10% a.e.); (8) W-hh (100 kg N, 10% a.e.). The result of the experiment showed that a high %a.e. with a low rate of ¹⁵N, and a low %a.e. with a high rate of N, should be used to study the %N-BNF of nodulated plants.
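For context, the quantity being compared across treatments is usually computed with the standard ¹⁵N isotope-dilution relation (a textbook formula, not quoted from the abstract), which is why the result should ideally not depend on the fertilizer's labelling level:

```latex
\[
\%\mathrm{N_{BNF}} \;=\;
\left( 1 \;-\; \frac{^{15}\mathrm{N}\ \text{atom excess of the fixing crop (W)}}
                    {^{15}\mathrm{N}\ \text{atom excess of the non-fixing reference crop (CV)}} \right) \times 100 .
\]
```

A nodulated plant dilutes the fertilizer-derived ¹⁵N with unlabelled N₂ fixed from the air, so its atom excess falls below that of the reference crop; the abstract's finding amounts to a recommendation about which rate and %a.e. combinations keep this ratio well determined.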
New stars for old
Henbest, N.
Observations of novas made through the ages, the identity of the close double stars which make up these cataclysmic variables and the physics of nova explosions, are discussed. A picture is outlined which explains novas, dwarf novas and recurrent novas and provides a basis for interpreting the latest so called x-ray novas. (U.K.)
Hadrons in compact stars
At normal nuclear matter density, neutron star matter mainly consists of neutrons, protons and electrons. The particle population is so arranged as to attain a minimum energy configuration maintaining electrical charge neutrality and chemical equilibrium. At higher baryon density, hyperon formation becomes energetically favourable.
Millet's Shooting Stars
Beech, M.
In this essay two paintings by the French artist Jean-Francois Millet are described. These paintings, Les Etoiles Filantes and Nuit Etoilée are particularly interesting since they demonstrate the rare artistic employment of the shooting-star image and metaphor.
Asteroseismology of δ Scuti Stars
We briefly outline the state-of-the-art seismology of δ Scuti stars from a theoretical point of view: why is it so difficult a task? The recent theoretical advances in the field that these difficulties have influenced are also discussed.
The STAR trigger
Bieser, F.S.; Crawford, H.J.; Engelage, J.; Eppley, G.; Greiner, L.C.; Judd, E.G.; Klein, S.R.; Meissner, F.; Minor, R.; Milosevich, Z.; Mutchler, G.; Nelson, J.M.; Schambach, J.; VanderMolen, A.S.; Ward, H.; Yepes, P.
We describe the trigger system that we designed and implemented for the STAR detector at RHIC. This is a 10 MHz pipelined system based on fast detector output that controls the event selection for the much slower tracking detectors. Results from the first run are presented and new detectors for the 2001 run are discussed
Sleeping under the stars
Zirkel, Jack
Sherlock Holmes and Dr. Watson went on a camping trip. As they lay down for the night, Holmes said, "Watson, look up at the sky and tell me what you see." Watson: "I see millions and millions of stars."
Insight into star death
Talcott, R.
Nineteen neutrinos, formed in the center of a supernova, became a theorist's dream. They came straight from the heart of supernova 1987A and landed in two big underground tanks of water. Suddenly a new chapter in observational astronomy opened as these two neutrino telescopes gave astronomers their first look ever into the core of a supernova explosion. But the theorists' dream almost turned into a nightmare. Observations of the presupernova star showed conclusively that the star was a blue supergiant, but theorists have long believed only red supergiant stars could explode as supernovae. Do astronomers understand supernovae better now than when supernova 1987A exploded in the Large Magellanic Cloud (LMC) one year ago? Yes. The observations of neutrinos spectacularly confirmed a vital aspect of supernova theory. But the observed differences between 1987A and other supernovae have illuminated and advanced our perception of how supernovae form. By working together, observers and theorists are continuing to hone their ideas about how massive stars die and how the subsequent supernovae behave
StarLogo TNG
Klopfer, Eric; Scheintaub, Hal; Huang, Wendy; Wendel, Daniel
Computational approaches to science are radically altering the nature of scientific investigation. Yet these computer programs and simulations are sparsely used in science education, and when they are used, they are typically "canned" simulations which are black boxes to students. StarLogo The Next Generation (TNG) was developed to make the programming of simulations more accessible for students and teachers. StarLogo TNG builds on the StarLogo tradition of agent-based modeling for students and teachers, with the added features of a graphical programming environment and a three-dimensional (3D) world. The graphical programming environment reduces the learning curve of programming, especially syntax. The 3D graphics make for a more immersive and engaging experience for students, including making it easy to design and program their own video games. Another change in StarLogo TNG is a fundamental restructuring of the virtual machine to make it more transparent. As a result of these changes, classroom use of TNG is expanding into new areas. The chapter concludes with a description of field tests conducted in middle and high school science classes.
THE STAR OFFLINE FRAMEWORK
FINE, V.; FISYAK, Y.; PEREVOZTCHIKOV, V.; WENAUS, T.
The Solenoidal Tracker At RHIC (STAR) is a large-acceptance collider detector, commissioned at Brookhaven National Laboratory in 1999. STAR has developed a software framework supporting simulation, reconstruction and analysis in offline production, interactive physics analysis and online monitoring environments that is well matched both to STAR's present status of transition between Fortran- and C++-based software and to STAR's evolution to a fully OO software base. This paper presents the results of two years of effort developing a modular C++ framework based on the ROOT package that encompasses both wrapped Fortran components (legacy simulation and reconstruction code) served by IDL-defined data structures, and fully OO components (all physics analysis code) served by a recently developed object model for event data. The framework supports chained components, which can themselves be composite subchains, with components ("makers") managing "data sets" they have created and are responsible for. An St-DataSet class from which data sets and makers inherit allows the construction of hierarchical organizations of components and data, and centralizes almost all system tasks such as data set navigation, I/O, database access, and inter-component communication. This paper will present an overview of this system, now deployed and well exercised in production environments with real and simulated data, and in an active physics analysis development program.
Triggered star formation
Palouš, Jan; Ehlerová, Soňa
Roč. 12 (2002), s. 35-36 ISSN 1405-2059 R&D Projects: GA AV ČR IAA3003705; GA AV ČR KSK1048102 Institutional research plan: CEZ:AV0Z1003909 Keywords: interstellar medium * star formation * HI shells Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics
Highlights from STAR
Schweda, Kai
Selected results from the STAR collaboration are presented. We focus on recent results on jet-like correlations, nuclear modification factors of identified hadrons, elliptic flow of multi-strange baryons Ξ and Ω, and resonance yields. First measurements of open charm production at RHIC are presented
Supernovae from massive AGB stars
Poelarends, A.J.T.; Izzard, R.G.; Herwig, F.; Langer, N.; Heger, A.
We present new computations of the final fate of massive AGB stars. These stars form ONeMg cores after a phase of carbon burning and are called super-AGB (SAGB) stars. Detailed stellar evolutionary models up to the thermally pulsing AGB were computed using three different stellar evolution codes. The ...
Do All O Stars Form in Star Clusters?
Weidner, C.; Gvaramadze, V. V.; Kroupa, P.; Pflamm-Altenburg, J.
The question of whether massive stars can form in isolation or only in star clusters is of great importance for the theory of (massive) star formation as well as for the stellar initial mass function of whole galaxies (IGIMF theory). While it seems an easy question, it is rather difficult to answer. Several physical processes (e.g. star loss due to stellar dynamics or gas expulsion) and observational limitations (e.g. dust obscuration of young clusters, resolution) pose severe challenges to answering this question. In this contribution we present the current arguments for and against the idea that all O stars form in clusters.
Halo Star Lithium Depletion
Pinsonneault, M. H.; Walker, T. P.; Steigman, G.; Narayanan, Vijay K.
The depletion of lithium during the pre-main-sequence and main-sequence phases of stellar evolution plays a crucial role in the comparison of the predictions of big bang nucleosynthesis with the abundances observed in halo stars. Previous work has indicated a wide range of possible depletion factors, ranging from minimal in standard (nonrotating) stellar models to as much as an order of magnitude in models that include rotational mixing. Recent progress in the study of the angular momentum evolution of low-mass stars permits the construction of theoretical models capable of reproducing the angular momentum evolution of low-mass open cluster stars. The distribution of initial angular momenta can be inferred from stellar rotation data in young open clusters. In this paper we report on the application of these models to the study of lithium depletion in main-sequence halo stars. A range of initial angular momenta produces a range of lithium depletion factors on the main sequence. Using the distribution of initial conditions inferred from young open clusters leads to a well-defined halo lithium plateau with modest scatter and a small population of outliers. The mass-dependent angular momentum loss law inferred from open cluster studies produces a nearly flat plateau, unlike previous models that exhibited a downward curvature for hotter temperatures in the ⁷Li-Teff plane. The overall depletion factor for the plateau stars is sensitive primarily to the solar initial angular momentum used in the calibration for the mixing diffusion coefficients. Uncertainties remain in the treatment of the internal angular momentum transport in the models, and the potential impact of these uncertainties on our results is discussed. The ⁶Li/⁷Li depletion ratio is also examined. We find that the dispersion in the plateau and the ⁶Li/⁷Li depletion ratio scale with the absolute ⁷Li depletion in the plateau, and we use observational data to set bounds on the ⁷Li depletion in main-sequence halo stars.
Kinematic and spatial distributions of barium stars - are the barium stars and Am stars related?
Hakkila, J.
The possibility of an evolutionary link between Am stars and barium stars is considered, and an examination of previous data suggests that barium star precursors are main-sequence stars of intermediate mass, are most likely A and/or F dwarfs, and are intermediate-mass binaries with close to intermediate orbital separations. The possible role of mass transfer in the later development of Am systems is explored. Mass transfer and loss from systems with a range of masses and orbital separations may explain such statistical peculiarities of barium stars as the large dispersion in absolute magnitude, the large range of elemental abundances from star to star, and the small number of stars with large peculiar velocities. 93 refs
Star identification methods, techniques and algorithms
Zhang, Guangjun
This book summarizes the research advances in star identification that the author's team has made over the past 10 years, systematically introducing the principles of star identification, general methods, key techniques and practicable algorithms. It also offers examples of hardware implementation and performance evaluation for the star identification algorithms. Star identification is the key step for celestial navigation and greatly improves the performance of star sensors, and as such the book includes the fundamentals of star sensors and celestial navigation, the processing of the star catalog and star images, star identification using modified triangle algorithms, star identification using star patterns and using neural networks, rapid star tracking using star matching between adjacent frames, as well as implementation hardware and performance tests for star identification. It is not only valuable as a reference book for star sensor designers and researchers working in pattern recognition and othe...
Analysis of the characteristics of AE wave using boring core of sedimentary soft rock and study on the field measurement of AE for the evaluation of EDZ (Joint research)
Aoki, Kenji; Mito, Yoshitada; Minami, Masayuki; Matsui, Hiroya; Niunoya, Sumio
Accurately understanding the size and state of the EDZ is one of the key issues in the technological development for geological disposal of High-Level Radioactive Waste. Among these evaluation technologies, AE, which is measured directly and used to evaluate the EDZ, has attracted attention inside and outside the country, and its utility has been reported for crystalline rock. However, there are few cases in which AE has been applied to EDZ evaluation in sedimentary soft rocks, because of the difficulty of measuring the small number of low-energy AE waves. In this study, the authors made a series of laboratory tests, including tri-axial tests using a stiff, servo-controlled testing machine, in order to clarify the possibility of measuring AE waves in Neogene siliceous rocks in Horonobe, Hokkaido, Japan (2005). The authors conducted high-stiffness tri-axial compression tests with AE measurements and extracted an AE parameter effective for the evaluation of the EDZ in the Neogene rocks (2005). The authors evaluated the mechanism of the EDZ, as inferred from the occurrence of cracks in the rock by the measurement system and DEM analysis, by using the effective parameter (2006). For the EDZ examination planned from the second stage onward, the authors constructed a concept of the necessary measurement and evaluation system (2007). (author)
Application of AE technique for on-line monitoring of quenching in racetrack superconducting coil at cryogenic environment
Lee, Jun Hyun; Lee, Min Rae; Shon, Myung Hwan; Kwon, Young Kil
An acoustic emission (AE) technique has been used to monitor and diagnose the quenching phenomenon in racetrack-shaped superconducting magnets in a cryogenic environment of 4.2 K. The ultimate goal is to ensure the safety and reliability of large superconducting magnet systems by being able to identify and locate the sources of quench in superconducting magnets. The characteristics of AE parameters have been analyzed by correlating them with quench number, winding tension of the superconducting coil and charge rate by transport current. It was found in this study that there was good correlation between quench current and AE parameters. The source location of quenching in the superconducting magnet was also discussed on the basis of the correlation between magnet voltage and AE energy.
Fatigue Crack Growth Behavior of and Recognition of AE Signals from Composite Patch-Repaired Aluminum Panel
Kim, Sung Jin; Kwon, Oh Yang; Jang, Yong Joon
The fatigue crack growth behavior of a cracked and patch-repaired Al2024-T3 panel has been monitored by acoustic emission (AE). The overall crack growth rate was reduced, and the crack propagation into the adjacent hole was also retarded by introducing the patch repair. AE signals due to crack growth after the patch repair and those due to debonding of the plate-patch interface were discriminated by using principal component analysis. The former showed high center frequency and low amplitude, whereas the latter showed long rise time, low frequency and high amplitude. This type of AE signal recognition method could be effective for the prediction of fatigue crack growth behavior in patch-repaired structures with the aid of AE source location.
Ecology of blue straggler stars
Carraro, Giovanni; Beccari, Giacomo
The existence of blue straggler stars, which appear younger, hotter, and more massive than their siblings, is at odds with a simple picture of stellar evolution. Such stars should have exhausted their nuclear fuel and evolved long ago to become cooling white dwarfs. They are found to exist in globular clusters, open clusters, dwarf spheroidal galaxies of the Local Group, OB associations and as field stars. This book summarises the many advances in observational and theoretical work dedicated to blue straggler stars. Carefully edited extended contributions by well-known experts in the field cover all the relevant aspects of blue straggler stars research: Observations of blue straggler stars in their various environments; Binary stars and formation channels; Dynamics of globular clusters; Interpretation of observational data and comparison with models. The book also offers an introductory chapter on stellar evolution written by the editors of the book.
What Determines Star Formation Rates?
Evans, Neal John
The relations between star formation and gas have received renewed attention. We combine studies on scales ranging from local (within 0.5 kpc) to distant galaxies to assess what factors contribute to star formation. These include studies of star forming regions in the Milky Way, the LMC, nearby galaxies with spatially resolved star formation, and integrated galaxy studies. We test whether total molecular gas or dense gas provides the best predictor of star formation rate. The star formation "efficiency," defined as star formation rate divided by mass, spreads over a large range when the mass refers to molecular gas; the standard deviation of the log of the efficiency decreases by a factor of three when the mass of relatively dense molecular gas is used rather than the mass of all the molecular gas. We suggest ways to further develop the concept of "dense gas" to incorporate other factors, such as turbulence.
Spectrophotometry of Symbiotic Stars (Abstract)
Boyd, D.
(Abstract only) Symbiotic stars are fascinating objects - complex binary systems comprising a cool red giant star and a small hot object, often a white dwarf, both embedded in a nebula formed by a wind from the giant star. UV radiation from the hot star ionizes the nebula, producing a range of emission lines. These objects have composite spectra with contributions from both stars plus the nebula and these spectra can change on many timescales. Being moderately bright, they lend themselves well to amateur spectroscopy. This paper describes the symbiotic star phenomenon, shows how spectrophotometry can be used to extract astrophysically useful information about the nature of these systems, and gives results for three symbiotic stars based on the author's observations.
Mass loss from S stars
Jura, M.
The mass-loss process in S stars is studied using 65 S stars from the listing of Wing and Yorka (1977). The role of pulsations in the mass-loss process is examined. It is found that stars with larger mass-loss rates have a greater amplitude of pulsations. The dust-to-gas ratio for the S stars is estimated as 0.002 and the average mass-loss rate is about 6 × 10^-8 solar masses/yr. Some of the properties of the S stars, such as scale height, surface density, and lifetime, are measured. It is determined that the scale height is 200 pc; the total duration of the S star phase is greater than or equal to 30,000 yr; and the stars inject 3 × 10^-6 solar masses per sq kpc per yr into the interstellar medium. 46 references
NuSTAR Results and Future Plans for Magnetar and Rotation-Powered Pulsar Observations
An, H.; Kaspi, V. M.; Archibald, R.; Bachetti, M.; Bhalerao, V.; Bellm, E. C.; Beloborodov, A. M.; Boggs, S. E.; Chakrabarty, D.; Christensen, F. E.;
The Nuclear Spectroscopic Telescope Array (NuSTAR) is the first focusing hard X-ray mission in orbit and operates in the 3-79 keV range. NuSTAR's sensitivity is roughly two orders of magnitude better than previous missions in this energy band thanks to its superb angular resolution. Since its launch in 2012 June, NuSTAR has performed excellently and observed many interesting sources including four magnetars, two rotation-powered pulsars and the cataclysmic variable AE Aquarii. NuSTAR also discovered 3.76-s pulsations from the transient source SGR J1745-29 recently found by Swift very close to the Galactic center, clearly identifying the source as a transient magnetar. For magnetar 1E 1841-045, we show that the spectrum is well fit by an absorbed blackbody plus broken power-law model with a hard power-law photon index of approximately 1.3. This is consistent with previous results by INTEGRAL and RXTE. We also find an interesting double-peaked pulse profile in the 25-35 keV band. For AE Aquarii, we show that the spectrum can be described by a multi-temperature thermal model or a thermal plus non-thermal model; a multi-temperature thermal model without a non-thermal component cannot be ruled out. Furthermore, we do not see a spiky pulse profile in the hard X-ray band, as previously reported based on Suzaku observations. For other magnetars and rotation-powered pulsars observed with NuSTAR, data analysis results will soon be available.
Radiochemical separation and ICP-AES determination of some common metallic elements in ThO2 matrix
Adya, V.C.; Hon, N.S.; Bangia, T.R.; Sastry, M.D.; Iyer, R.H.
Radioactive tracer and also ICP-AES studies have been carried out to determine Al, Cd, Ca, Cr, Co, Cu, Mn, Mo and Pd in ThO2 matrix after chemical separation. A di-2-ethyl-hexyl phosphoric acid/xylene/HNO3 extraction system was used for quantitative separation of thorium. The recovery of elements as determined by tracers and ICP-AES was found to be quantitative within experimental error. (author). 3 refs., 1 tab
Neutron Star Science with the NuSTAR
Vogel, J. K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
The Nuclear Spectroscopic Telescope Array (NuSTAR), launched in June 2012, helped scientists obtain for the first time a sensitive high-energy X-ray map of the sky with extraordinary resolution. This pioneering telescope has aided in the understanding of how stars explode and neutron stars are born. LLNL is a founding member of the NuSTAR project, with key personnel on its optics and science team. We used NuSTAR to observe and analyze the observations of different neutron star classes identified in the last decade that are still poorly understood. These studies not only help to comprehend newly discovered astrophysical phenomena and emission processes for members of the neutron star family, but also expand the utility of such observations for addressing broader questions in astrophysics and other physics disciplines. For example, neutron stars provide an excellent laboratory to study exotic and extreme phenomena, such as the equation of state of the densest matter known, the behavior of matter in extreme magnetic fields, and the effects of general relativity. At the same time, knowing their accurate populations has profound implications for understanding the life cycle of massive stars, star collapse, and overall galactic evolution.
Drug-resistant molecular mechanism of CRF01_AE HIV-1 protease due to V82F mutation
Liu, Xiaoqing; Xiu, Zhilong; Hao, Ce
Human immunodeficiency virus type 1 protease (HIV-1 PR) is one of the major targets of anti-AIDS drug discovery. The circulating recombinant form 01 A/E (CRF01_AE, abbreviated AE) subtype is one of the most common HIV-1 subtypes, which is infecting more humans and is expanding rapidly throughout the world. It is, therefore, necessary to develop inhibitors against subtype AE HIV-1 PR. In this work, we have performed computer simulation of subtype AE HIV-1 PR with the drugs lopinavir (LPV) and nelfinavir (NFV), and examined the mechanism of resistance of the V82F mutation of this protease against LPV both structurally and energetically. The V82F mutation at the active site results in a conformational change of 79's loop region and displacement of LPV from its proper binding site, and these changes lead to rotation of the side-chains of residues D25 and I50'. Consequently, the conformation of the binding cavity is deformed asymmetrically and some interactions between PR and LPV are destroyed. Additionally, by comparing the interactive mechanisms of LPV and NFV with HIV-1 PR we discovered that the presence of a dodecahydroisoquinoline ring at the P1' subsite, a [2-(2,6-dimethylphenoxy)acetyl]amino group at the P2' subsite, and an N2 atom at the P2 subsite could improve the binding affinity of the drug with AE HIV-1 PR. These findings are helpful for promising drug design.
Peering deep inside a cluster of several hundred thousand stars, NASA's Hubble Space Telescope has uncovered the oldest burned-out stars in our Milky Way Galaxy, giving astronomers a fresh reading on the age of the universe. Located in the globular cluster M4, these small, burned-out stars -- called white dwarfs -- are about 12 to 13 billion years old. By adding the one billion years it took the cluster to form after the Big Bang, astronomers found that the age of the white dwarfs agrees with previous estimates that the universe is 13 to 14 billion years old. The images, including some taken by Hubble's Wide Field and Planetary Camera 2, are available online at http://oposite.stsci.edu/pubinfo/pr/2002/10/ or http://www.jpl.nasa.gov/images/wfpc . The camera was designed and built by NASA's Jet Propulsion Laboratory, Pasadena, Calif. In the top panel, a ground-based observatory snapped a panoramic view of the entire cluster, which contains several hundred thousand stars within a volume of 10 to 30 light-years across. The Kitt Peak National Observatory's .9-meter telescope took this picture in March 1995. The box at left indicates the region observed by the Hubble telescope. The Hubble telescope studied a small region of the cluster. A section of that region is seen in the picture at bottom left. A sampling of an even smaller region is shown at bottom right. This region is only about one light-year across. In this smaller region, Hubble pinpointed a number of faint white dwarfs. The blue circles indicate the dwarfs. It took nearly eight days of exposure time over a 67-day period to find these extremely faint stars. Globular clusters are among the oldest clusters of stars in the universe. The faintest and coolest white dwarfs within globular clusters can yield a globular cluster's age. Earlier Hubble observations showed that the first stars formed less than 1 billion years after the universe's birth in the big bang. So, finding the oldest stars puts astronomers within
Alchemy of stars
Parashar, D [A.R.S.D. Coll., New Delhi (India); Bhatia, V B [Delhi Univ. (India). Dept. of Physics and Astrophysics
Developments in studies on stellar evolution during this century are reviewed. Recent considerations indicate that almost all elements between helium and zinc (a range which comprises more than 99 percent by mass of elements heavier than helium) can be synthesised in nuclear processes occurring during the late violent stages of an exploding star or supernova, and a vigorous study in the new field of explosive nucleosynthesis is in progress. The process of nucleosynthesis has been classified into 8 sets of nuclear reactions, namely, (1) hydrogen burning, (2) helium burning, (3) α-process, (4) e-process, (5) s-process, (6) r-process, (7) p-process and (8) x-process. The abundances of helium and heavier elements are explained and the formation of various elements during supernova explosions is discussed. The question regarding the appropriate astrophysical conditions for the formation of massive stars (3 to 8 times solar mass) is still unanswered.
Very low mass stars
Liebert, J.; Probst, R.G.
This paper discusses several theoretical and observational topics involved in discovering and analyzing very low mass stellar objects below about 0.3 M☉, as well as their likely extension into the substellar range. The authors hereafter refer to these two classes of objects as VLM stars and brown dwarfs, respectively; collectively, they are called VLM objects. The authors outline recent theoretical work on low-mass stellar interiors and atmospheres, the determination of the hydrogen-burning mass limit, important dynamical evidence bearing on the expected numbers of such objects, and the expectations for such objects from star-formation theory. They focus on the properties of substellar objects near the stellar mass limit. Observational techniques used to discover and analyze VLM objects are summarized
Pulsating stars harbouring planets
Moya A.
Why bother with asteroseismology while studying exoplanets? There are several answers to this question. Asteroseismology and exoplanetary sciences have much in common and the synergy between the two opens up new aspects in both fields. These fields and stellar activity, when taken together, allow maximum extraction of information from exoplanet space missions. Asteroseismology of the host star has already proved its value in a number of exoplanet systems by its unprecedented precision in determining stellar parameters. In addition, asteroseismology allows the possibility of discovering new exoplanets through time delay studies. The study of the interaction between exoplanets and their host stars opens new windows on various physical processes. In this review I will summarize past and current research in exoplanet asteroseismology and explore some guidelines for the future.
Shells around stars
Olnon, F.M.
This thesis deals with optically visible stars surrounded by gas and dust and hot enough to ionize the hydrogen atoms in their envelopes. The ionized gas emits radio continuum radiation by the thermal Bremsstrahlung mechanism. Cool giant stars that show radio line emission from molecules in their circumstellar envelopes are discussed. Under favourable conditions the so-called maser effect gives rise to very intense emission lines. Up till now seven different maser transitions have been found in the envelopes of cool giants. Four of these lines from OH, H2O and SiO are studied here. Each of them originates in a different layer so that these lines can be used to probe the envelope. The profile of a maser line gives information about the velocity structure of the region where it is formed
Structure of neutron stars
Cheong, C.K.
The structure of neutron stars consisting of cold, catalyzed superdense matter was investigated by integrating the equations of hydrostatic equilibrium based on the theory of General Relativity. The equations of state were obtained with the help of semiempirical nuclear mass formulae. A large phase transition was found between the nuclear and subnuclear density regions. The phase transition densities were calculated as 6.2 × 10^11 and 3.8 × 10^13 g/cm^3. Due to such a large phase transition, the equation of state practically consists of two parts: the nuclear and subnuclear phases, which are in contact under thermodynamical equilibrium at the corresponding pressure. Some macroscopic properties of neutron stars are discussed. (Author)
What stars become supernovae
Tinsley, B.M.
A variety of empirical lines of evidence is assembled on the masses and stellar population types of stars that trigger supernova (SN) explosions. The main theoretical motivations are to determine whether type I supernovae (SN I) can have massive precursors, and whether there is an interval of stellar mass, between the masses of precursors of pulsars and white dwarfs, that is disrupted by carbon detonation. Statistical and other uncertainties in the empirical arguments are given particular attention, and are found to be more important than generally realized. Relatively secure conclusions include the following. Statistics of stellar birthrates, SN, pulsars, and SN remnants in the Galaxy show that SN II (or all SN) could arise from stars with masses greater than M_s, where M_s is approximately 49 to 12 M☉; the precursor mass range cannot be more closely defined from present data; nor can it be said whether all SN leave pulsars and/or extended radio remnants. Several methods of estimating the masses of stars that become white dwarfs are consistent with a lower limit, M_s ≥ 5 M☉, so carbon detonation may indeed be avoided, although this conclusion is not secure. Studies of the properties of galaxies in which SN occur, and their distributions within galaxies, support the usual views that SN I have low-mass precursors (≤ 5 M☉ and typically ≤ 1 M☉) and SN II have massive precursors (≥ 5 M☉); the restriction of known SN II to Sc and Sb galaxies, to date, is shown to be consistent, statistically, with massive stars in other galaxies also dying as SN II. Possible implications of the peculiarities of some SN-producing galaxies are discussed. Suggestions are made for observational and theoretical studies that would help answer important remaining questions on the nature of SN precursors
Detector limitations, STAR
Underwood, D. G.
Every detector has limitations in terms of solid angle, particular technologies chosen, cracks due to mechanical structure, etc. If all of the presently planned parts of STAR [Solenoidal Tracker At RHIC] were in place, these factors would not seriously limit our ability to exploit the spin physics possible in RHIC. What is of greater concern at the moment is the construction schedule for components such as the Electromagnetic Calorimeters, and the limited funding for various levels of triggers.
Oscillations in neutron stars
Hoeye, Gudrun Kristine
We have studied radial and nonradial oscillations in neutron stars, both in a general relativistic and non-relativistic frame, for several different equilibrium models. Different equations of state were combined, and our results show that it is possible to distinguish between the models based on their oscillation periods. We have particularly focused on the p-, f-, and g-modes. We find oscillation periods of Π ≈ 0.1 ms for the p-modes, Π ≈ 0.1 - 0.8 ms for the f-modes and Π ≈ 10 - 400 ms for the g-modes. For high-order (l ≳ 4) f-modes we were also able to derive a formula that determines Π_{l+1} from Π_l and Π_{l-1} to an accuracy of 0.1%. Further, for the radial f-mode we find that the oscillation period goes to infinity as the maximum mass of the star is approached. Both p-, f-, and g-modes are sensitive to changes in the central baryon number density n_c, while the g-modes are also sensitive to variations in the surface temperature. The g-modes are concentrated in the surface layer, while p- and f-modes can be found in all parts of the star. The effects of general relativity were studied, and we find that these are important at high central baryon number densities, especially for the p- and f-modes. General relativistic effects can therefore not be neglected when studying oscillations in neutron stars. We have further developed an improved Cowling approximation in the non-relativistic frame, which eliminates about half of the gap in the oscillation periods that results from use of the ordinary Cowling approximation. We suggest to develop an improved Cowling approximation also in the general relativistic frame. (Author)
Star clouds of Magellan
Tucker, W.
The Magellanic Clouds are two irregular galaxies belonging to the Local Group, to which the Milky Way also belongs. By studying the Clouds, astronomers hope to gain insight into the origin and composition of the Milky Way. The overall structure and dynamics of the Clouds are clearest when studied in the radio region of the spectrum. One benefit of directly observing stellar luminosities in the Clouds has been the discovery of the period-luminosity relation. Also, the Clouds are a splendid laboratory for studying stellar evolution. It is believed that both Clouds may be in the very early stage in the development of a regular, symmetric galaxy. This raises a paradox because some of the stars in the star clusters of the Clouds are as old as the oldest stars in our galaxy. An explanation for this is given. The low velocity of the Clouds with respect to the center of the Milky Way shows they must be bound to it by gravity. Theories are given on how the Magellanic Clouds became associated with the Galaxy. According to current ideas, the Clouds' orbits will decay and they will spiral into the Galaxy
Stars of strange matter
Bethe, H.A.; Brown, G.E.; Cooperstein, J.
We investigate suggestions that quark matter with strangeness per baryon of order unity may be stable. We model this matter at nuclear matter densities as a gas of close-packed Λ-particles. From the known mass of the Λ-particle we obtain an estimate of the energy and chemical potential of strange matter at nuclear densities. These are sufficiently high to preclude any phase transition from neutron matter to strange matter in the region near nucleon matter density. Including effects from gluon exchange phenomenologically, we investigate higher densities, consistently making approximations which underestimate the density of transition. In this way we find a transition density ρ_tr ≳ 7ρ_0, where ρ_0 is nuclear matter density. This is not far from the maximum density in the center of the most massive neutron stars that can be constructed. Since we have underestimated ρ_tr and still find it to be ≈7ρ_0, we do not believe that the transition from neutron to quark matter is likely in neutron stars. Moreover, measured masses of observed neutron stars are ≅1.4 M_sun, where M_sun is the solar mass. For such masses, the central (maximum) density is well below ρ_tr. Transition to quark matter is certainly excluded for these densities. (orig.)
Stable dark energy stars
Lobo, Francisco S N
The gravastar picture is an alternative model to the concept of a black hole, where there is an effective phase transition at or near where the event horizon is expected to form, and the interior is replaced by a de Sitter condensate. In this work a generalization of the gravastar picture is explored by considering matching of an interior solution governed by the dark energy equation of state, ω ≡ p/ρ < -1/3, to an exterior Schwarzschild vacuum solution at a junction interface. The motivation for implementing this generalization arises from the fact that recent observations have confirmed an accelerated cosmic expansion, for which dark energy is a possible candidate. Several relativistic dark energy stellar configurations are analysed by imposing specific choices for the mass function. The first case considered is that of a constant energy density, and the second choice that of a monotonic decreasing energy density in the star's interior. The dynamical stability of the transition layer of these dark energy stars to linearized spherically symmetric radial perturbations about static equilibrium solutions is also explored. It is found that large stability regions exist that are sufficiently close to where the event horizon is expected to form, so that it would be difficult to distinguish the exterior geometry of the dark energy stars, analysed in this work, from an astrophysical black hole
Spheroidal Populated Star Systems
Angeletti, Lucio; Giannone, Pietro
Globular clusters and low-ellipticity early-type galaxies can be treated as systems populated by a large number of stars and whose structures can be schematized as spherically symmetric. Their studies profit from the synthesis of stellar populations. The computation of synthetic models makes use of various contributions from star evolution and stellar dynamics. In the first sections of the paper we present a short review of our results on the occurrence of galactic winds in star systems ranging from globular clusters to elliptical galaxies, and the dynamical evolution of a typical massive globular cluster. In the subsequent sections we describe our approach to the problem of the stellar populations in elliptical galaxies. The projected radial behaviours of spectro-photometric indices for a sample of eleven galaxies are compared with preliminary model results. The best agreement between observation and theory shows that our galaxies share a certain degree of heterogeneity. The gas energy dissipation varies from moderate to large, the metal yield ranges from solar to significantly oversolar, the dispersion of velocities is isotropic in most of the cases and anisotropic in the remaining instances.
What are the stars?
Srinivasan, Ganesan
The outstanding question in astronomy at the turn of the twentieth century was: What are the stars and why are they as they are? In this volume, the story of how the answer to this fundamental question was unravelled is narrated in an informal style, with emphasis on the underlying physics. Although the foundations of astrophysics were laid down by 1870, and the edifice was sufficiently built up by 1920, the definitive proof of many of the prescient conjectures made in the 1920s and 1930s came to be established less than ten years ago. This book discusses these recent developments in the context of discussing the nature of the stars, their stability and the source of the energy they radiate. Reading this book will get young students excited about the presently unfolding revolution in astronomy and the challenges that await them in the world of physics, engineering and technology. General readers will also find the book appealing for its highly accessible narrative of the physics of stars. "... The reade...
Polarimetry of symbiotic stars
Piirola, V.
Five symbiotic stars have been observed for linear polarization (UBVRI) in September 1981. Three systems, CH Cyg, CI Cyg and AG Peg show intrinsic polarization while in the case of Z And and AX Per the observed polarization seems to be mostly of interstellar origin. The position angle of polarization of CI Cyg and AG Peg rotates strongly vs. wavelength, as observed also for CH Cyg in 1977-80. The polarization of CH Cyg has decreased since May 1980, especially in the I, R and U bands, so that the maximum polarization is now in the blue (P_B ≈ 0.3%). Probably one is monitoring the formation, growth and disappearance of dust particles in the atmosphere of this star. Two related systems, PU Vul (Nova Vul 1979) and R Aql (Mira) have polarization behaviour rather similar to that of symbiotic stars which suggests that the M type giant present in these systems is responsible for most of the intrinsic polarization. (Auth.)
Aerosol absorption coefficient and Equivalent Black Carbon by parallel operation of AE31 and AE33 aethalometers at the Zeppelin station, Ny Ålesund, Svalbard
Eleftheriadis, Konstantinos; Kalogridis, Athina-Cerise; Vratolis, Sterios; Fiebig, Markus
Light absorbing carbon in atmospheric aerosol plays a critical role in radiative forcing and climate change. Despite the long term measurements across the Arctic, comparing data obtained by a variety of methods across stations requires caution. A method for extracting the aerosol absorption coefficient from data obtained over the decades by filter-based instruments is still under development. An IASOA Aerosol working group has been initiated to address this and other cross-site aerosol comparison opportunities. Continuous ambient measurements of EBC/light attenuation by means of a Magee Sci. AE-31 aethalometer operating at the Zeppelinfjellet station (474 m asl; 78°54'N, 11°53'E), Ny Ålesund, Svalbard, have been available since 2001 (Eleftheriadis et al, 2009), while a new aethalometer model (AE33, Drinovec et al, 2014) has been installed to operate in parallel from the same inlet since June 2015. Measurements are recorded by a Labview routine collecting all available parameters reported by the two instruments via RS232 protocol. Data are reported at 1 and 10 minute intervals as averages for EBC (μg m-3) and aerosol absorption coefficients (Mm-1) by means of a routine designed to report Near Real Time NRT data at the EBAS WDCA database (ebas.nilu.no). Results for the first 6 month period are reported here in an attempt to evaluate comparative performance of the two instruments in terms of their response with respect to the variable aerosol load of light absorbing carbon during the warm and cold seasons found in the high arctic. The application of available conversion schemes for obtaining the absorption coefficient by the two instruments is found to demonstrate a marked difference in their output. During clean periods of low aerosol load (EBC origin was also conducted. Drinovec, L., Močnik, G., Zotter, P., Prévôt, A. S. H., Ruckstuhl, C., Coz, E., Rupakheti, M., Sciare, J., Müller, T., Wiedensohler, A., and Hansen, A. D. A. The "dual-spot" Aethalometer: an
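For reference, filter-based aethalometer data are commonly converted from attenuation to an absorption coefficient and an EBC mass concentration. A minimal sketch of such a conversion is given below, assuming a Weingartner-type multiple-scattering correction; the flow, spot area, C and sigma_ATN values are placeholder assumptions, not the conversion schemes evaluated in this study.

```python
import numpy as np

def attenuation_to_absorption(atn, t_s, flow_lpm, spot_area_cm2,
                              C=2.14, sigma_atn=16.6):
    """Convert an aethalometer attenuation time series to absorption and EBC.

    atn           : attenuation values ATN = -100*ln(I/I0) at one wavelength
    t_s           : timestamps in seconds (same length as atn)
    flow_lpm      : sample flow in litres per minute
    spot_area_cm2 : filter spot area in cm^2
    C             : multiple-scattering correction (placeholder value)
    sigma_atn     : mass attenuation cross-section in m^2/g (placeholder value)
    """
    atn = np.asarray(atn, dtype=float)
    t_s = np.asarray(t_s, dtype=float)
    area_m2 = spot_area_cm2 * 1e-4
    flow_m3s = flow_lpm * 1e-3 / 60.0
    # attenuation coefficient b_ATN (Mm^-1) from the rate of change of ATN
    d_atn = np.diff(atn) / 100.0
    dt = np.diff(t_s)
    b_atn = (area_m2 / flow_m3s) * (d_atn / dt) * 1e6   # Mm^-1
    b_abs = b_atn / C                                    # aerosol absorption coefficient, Mm^-1
    ebc_ugm3 = b_atn / sigma_atn                         # EBC in ug m^-3 (b_atn in Mm^-1, sigma in m^2/g)
    return b_abs, ebc_ugm3
```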
Karim Shahbazi; Mohammad Eshghi; Reza Faghih Mirzaee
In this paper, a new 32-bit ASIP-based crypto processor for AES, IDEA, and MD5 is designed. The instruction-set consists of both general purpose and specific instructions for the above cryptographic algorithms. The proposed architecture has nine function units and two data buses. It also has two types of 32-bit instruction formats for executing Memory Reference (M.R.), Register Reference (R.R.), and Input/Output Reference (I/O R.) instructions. The maximum achieved frequency is 166.916 MHz. T...
A comparative examination of sample treatment procedures for ICAP-AES analysis of biological tissue
De Boer, J. L. M.; Maessen, F. J. M. J.
The objective of this study was to contribute to the evaluation of existing sample preparation procedures for ICAP-AES analysis of biological material. Performance characteristics were established for current digestion procedures comprising extraction, solubilization, pressure digestion, and wet and dry ashing methods. Apart from accuracy and precision, a number of criteria of special interest for analytical practice were applied. SRM bovine liver served as a test sample. In this material six elements were determined simultaneously. Results showed that every procedure has its defects and advantages. Hence, an unambiguous recommendation of standard digestion procedures can be made only when taking into account the specific analytical problem.
Neural network AE source location apart from structure size and material
Chlada, Milan; Převorovský, Zdeněk; Blaháček, Michal
Vol. 28 (2010), pp. 99-107, ISSN 0730-0050. [European Conference on Acoustic Emission Testing 2010 /29./, Vienna, 08.09.2010-10.09.2010] R&D Projects: GA ČR(CZ) GAP104/10/1430; GA MPO(CZ) FR-TI1/274 Institutional research plan: CEZ:AV0Z20760514 Keywords: AE source location * artificial neural network * arrival time profiles Subject RIV: BI - Acoustics http://www.ndt.net/search/abstract.php3?AbsID=10828
Determination of Boron in Zircaloy by using ICP-AES and Colorimetry
Kim, Jong-Goo; Pyo, Hyung-Ryul; Choi, Kwang-Soon; Han, Sun-Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
Zircaloy has been widely used in the nuclear industry because of the low cross section of zirconium against thermal neutrons. Accurate composition data of Zircaloy for elements such as Hf and B, which have a high cross section against thermal neutrons, are important for its use as a nuclear material. Accordingly, proper methods for determining these elements in Zircaloy are needed. In this study, the application of two methods, ICP-AES and colorimetry using methylene blue, was investigated in order to establish a proper analysis method for boron in the range from tens to hundreds of ug B/g of Zircaloy sample.
Uncertainty assessing of measure result of tungsten in U3O8 by ICP-AES
Du Guirong; Nie Jie; Tang Lilei
Following the determination method and the assessment criteria, the uncertainty of the measurement result of tungsten in U3O8 by ICP-AES is assessed. With each component assessed in detail, the result shows that the uncertainty contributions rank as u_rel(sc) > u_rel(c) > u_rel(F) > u_rel(m). The remaining uncertainty is random and is calculated from repeated measurements. Since u_rel(sc) is the main contributor to the uncertainty, the overall uncertainty is reduced by strict operation that reduces u_rel(sc). (authors)
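Assuming the components quoted above are combined in quadrature in the usual GUM manner (an assumption; the paper's exact combination rule is not reproduced here, and k = 2 is simply the customary coverage factor), the combined relative uncertainty would take the form:

```latex
u_{\mathrm{rel}}(W) \;=\; \sqrt{\,u_{\mathrm{rel}}^{2}(sc) + u_{\mathrm{rel}}^{2}(c)
      + u_{\mathrm{rel}}^{2}(F) + u_{\mathrm{rel}}^{2}(m) + u_{\mathrm{rel,rep}}^{2}\,},
\qquad U_{\mathrm{rel}} = k\, u_{\mathrm{rel}}(W), \quad k = 2 .
```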
Towards the problem of residing conditions in the ChAES alienation zone for watch-shift personnel
Akimenko, V.Ya.; Yanko, N.M.; Semashko, P.V.; Yarygin, A.V.
The article presents some results of a hygienic study concerning the residing conditions of watch-shift workers in the ChAES 30-km zone. An evaluation is given of the priority parameters of the residing environment: air ventilation, hostel occupancy conditions, and the acoustic regime. Indoor aerosol pollution levels associated with different kinds of domestic activity are revealed. The installation of electrostatic air cleaners as devices for controlling fine radioactive aerosols is studied. Some tasks for further research are substantiated with a view to optimizing the residing environment of watch-shift workers in the alienation zone.
Synthesis of hybrid sol-gel coatings for corrosion protection of WE54-AE magnesium alloy
Hernández-Barrios, C A; Peña, D Y; Coy, A E; Duarte, N Z; Hernández, L M; Viejo, F
The present work shows some preliminary results related to the synthesis, characterization and corrosion evaluation of different hybrid sol-gel coatings applied on the WE54-AE magnesium alloy, attending to two experimental variables, i.e. the precursor ratio and the aging time, which may affect the quality and the electrochemical properties of the resultant coatings. The experimental results confirmed that, under some specific experimental conditions, it was possible to obtain homogeneous, uniform, porous coatings with good corrosion resistance that also permit the accommodation of corrosion inhibitors
Determination of traces of thorium in ammonium/sodium diuranate by ICP-AES method
Nair, V.R.; Kartha, K.N.M.
Full text: Indian Rare Earths Ltd., Alwaye, produces ammonium diuranate from the thorium concentrate, obtained during monazite processing. This process involves a series of steps. The final uranium product obtained always contains microgram amounts of thorium as impurity. An analytical procedure has been standardised for the estimation of microgram amounts of thorium in ammonium/sodium diuranate. The method involves solvent extraction of uranium by using a tertiary amine followed by the determination of thorium by ICP-AES method in the raffinate. The recoveries of thorium were checked by standard addition to the uranium matrix. Limit of detection is adequate for the analysis of nuclear grade material
Determination of Nb and Zr in U-Nb-Zr alloys by ICP-AES
Wang Cuiping; Dong Shizhe; Li Lin; He Meiying
The U-Nb-Zr alloy sample is dissolved by HNO3, H2O2 and HF, and the contents of Nb and Zr in the sample are determined on the JY-70 II type ICP-AES by using the internal standard synchronous dilution method. The range of determination is 1%-10% and 0.33%-3.33%, respectively for Nb and Zr. The relative standard deviation is better than 3.2% for Nb, and 2.5% for Zr. The method is rapid and convenient for determining Nb and Zr in U-Nb-Zr alloy sample
Study on CNPEC's nuclear AE organization, its characteristics and industrial value
Zhao Jianguang; Kuang Wei
The paper studies and analyzes CNPEC's AE organizational operation model and its characteristics in detail to explore its value and contribution to the reform of Chinese state-owned enterprises. By building the design and construction integration platform, CNPEC integrates the resources of the nuclear industry chain to effectively ensure the whole performance, the safety and high quality of nuclear power plants under construction; by establishing the total quality partnership which focuses on the cross-border quality management and control, CNPEC enhances the quality management level of enterprises in the nuclear industry chain; by promoting the technology development cooperation, CNPEC improves the technological advancement of the whole nuclear industry chain. (authors)
Improved autonomous star identification algorithm
Luo Li-Yan; Xu Lu-Ping; Zhang Hua; Sun Jing-Rong
The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by the star identification algorithm using LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to construct the feature vector of the navigation star, which enhances the robustness of star identification. In addition, some effort is made to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. (paper)
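A minimal sketch of the kind of rotation-invariant feature described above (the logarithms of the plane distances between a navigation star and its nearest neighbours, matched against a pre-built catalogue) follows; the function names, the choice of k, and the nearest-feature matching are illustrative assumptions, not the authors' exact algorithm, which additionally uses the LPT.

```python
import numpy as np

def star_feature_vector(nav_xy, neighbour_xy, k=8):
    """Build a feature vector for one navigation star from its k nearest neighbours.

    nav_xy       : (2,) image-plane position of the navigation star
    neighbour_xy : (N, 2) positions of candidate neighbour stars
    Returns the logarithms of the k smallest plane distances, sorted ascending,
    so the pattern does not change when the stellar image is rotated about the star.
    """
    nav_xy = np.asarray(nav_xy, dtype=float)
    neighbour_xy = np.asarray(neighbour_xy, dtype=float)
    d = np.linalg.norm(neighbour_xy - nav_xy, axis=1)
    d = np.sort(d)[:k]                 # k nearest neighbours
    return np.log(d)                   # log distances as the feature vector

def identify(feature, database):
    """Return the index of the catalogue star whose stored feature is closest."""
    diffs = np.linalg.norm(database - feature, axis=1)
    return int(np.argmin(diffs))
```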
First stars X. The nature of three unevolved carbon-enhanced metal-poor stars
Sivarani, T.; Beers, T.C.; Bonifacio, P.
Stars: abundances, stars: population II, Galaxy: abundances, stars: AGB and post-AGB. Publication date: Nov.
StarDOM: From STAR format to XML
Linge, Jens P.; Nilges, Michael; Ehrlich, Lutz
StarDOM is a software package for the representation of STAR files as document object models and the conversion of STAR files into XML. This allows interactive navigation by using the Document Object Model representation of the data as well as easy access by XML query languages. As an example application, the entire BioMagResBank has been transformed into XML format. Using an XML query language, statistical queries on the collected NMR data sets can be constructed with very little effort. The BioMagResBank/XML data and the software can be obtained at http://www.nmr.embl-heidelberg.de/nmr/StarDOM/
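To illustrate the kind of query the abstract mentions, once a STAR entry has been converted to XML a statistical query takes only a few lines with any XPath-capable library; the element and attribute names below are invented placeholders, not the actual BioMagResBank/XML schema.

```python
from lxml import etree

# A tiny stand-in for a converted STAR/NMR entry (the real BioMagResBank/XML schema differs).
xml = b'<entry><chem_shift value="4.71"/><chem_shift value="8.23"/><chem_shift value="1.05"/></entry>'
tree = etree.fromstring(xml)
values = [float(v) for v in tree.xpath("//chem_shift/@value")]
print(len(values), sum(values) / len(values))   # count and mean of the queried values
```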
Machine learning techniques to select Be star candidates. An application in the OGLE-IV Gaia south ecliptic pole field
Pérez-Ortiz, M. F.; García-Varela, A.; Quiroz, A. J.; Sabogal, B. E.; Hernández, J.
Context. Optical and infrared variability surveys produce a large number of high quality light curves. Statistical pattern recognition methods have provided competitive solutions for variable star classification at a relatively low computational cost. In order to perform supervised classification, a set of features is proposed and used to train an automatic classification system. Quantities related to the magnitude density of the light curves and their Fourier coefficients have been chosen as features in previous studies. However, some of these features are not robust to the presence of outliers and the calculation of Fourier coefficients is computationally expensive for large data sets. Aims: We propose and evaluate the performance of a new robust set of features using supervised classifiers in order to look for new Be star candidates in the OGLE-IV Gaia south ecliptic pole field. Methods: We calculated the proposed set of features on six types of variable stars and also on a set of Be star candidates reported in the literature. We evaluated the performance of these features using classification trees and random forests along with the K-nearest neighbours, support vector machines, and gradient boosted trees methods. We tuned the classifiers with a 10-fold cross-validation and grid search. We then validated the performance of the best classifier on a set of OGLE-IV light curves and applied this to find new Be star candidates. Results: The random forest classifier outperformed the others. By using the random forest classifier and colours criteria we found 50 Be star candidates in the direction of the Gaia south ecliptic pole field, four of which have infrared colours that are consistent with Herbig Ae/Be stars. Conclusions: Supervised methods are very useful in order to obtain preliminary samples of variable stars extracted from large databases. As usual, the stars classified as Be stars candidates must be checked for the colours and spectroscopic characteristics
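A minimal sketch of the classification set-up described above (a random forest tuned by grid search with 10-fold cross-validation, as implemented in scikit-learn); the synthetic feature matrix, class labels and parameter grid are placeholders rather than the paper's actual features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Placeholder data: rows are light curves, columns are robust light-curve features.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
y = rng.integers(0, 6, size=300)        # six variable-star classes, as in the training set described

param_grid = {"n_estimators": [100, 300], "max_features": ["sqrt", 0.5]}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=10, scoring="f1_macro")
search.fit(X, y)                        # 10-fold cross-validation with grid search
print(search.best_params_, search.best_score_)
```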
Secure Data Encryption Through a Combination of AES, RSA and HMAC
E. S. I. Harba
Secure file transfer based upon well-designed file encryption and authorization systems expends considerable effort to protect passwords and other credentials from being stolen. Transferring and storing passwords in plaintext form leaves them at risk of exposure to attackers, eavesdroppers and spyware. In order to avoid such exposure, powerful encryption/authentication systems use various mechanisms to minimize the possibility that unencrypted credentials will be exposed, as well as to ensure that any authentication data that does get transmitted and stored will be of minimal use to an attacker. In this paper we propose a method to protect data transfer with three hybrid encryption techniques: the symmetric AES algorithm is used to encrypt files, asymmetric RSA is used to encrypt the AES password, and HMAC is used to protect the symmetric password and/or data, ensuring secure transmission between server-client or client-client with verification between client and server, and making it hard to attack by common attack methods.
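A minimal sketch of the hybrid scheme described above (AES for the file contents, RSA-OAEP to wrap the symmetric keys, and HMAC for integrity), using the Python cryptography package; key management, IV handling and message framing are simplified assumptions rather than the authors' exact protocol.

```python
import os
from cryptography.hazmat.primitives import hashes, hmac, padding
from cryptography.hazmat.primitives.asymmetric import rsa, padding as asym_padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def hybrid_encrypt(plaintext: bytes, rsa_public_key):
    aes_key, hmac_key, iv = os.urandom(32), os.urandom(32), os.urandom(16)
    # 1) AES-256-CBC encrypts the file contents
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(aes_key), modes.CBC(iv)).encryptor()
    ciphertext = enc.update(padded) + enc.finalize()
    # 2) RSA-OAEP wraps the symmetric keys for transport
    wrapped_keys = rsa_public_key.encrypt(
        aes_key + hmac_key,
        asym_padding.OAEP(mgf=asym_padding.MGF1(hashes.SHA256()),
                          algorithm=hashes.SHA256(), label=None))
    # 3) HMAC-SHA256 authenticates IV + ciphertext
    tag = hmac.HMAC(hmac_key, hashes.SHA256())
    tag.update(iv + ciphertext)
    return wrapped_keys, iv, ciphertext, tag.finalize()

# usage: generate a key pair and encrypt a short message
priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
out = hybrid_encrypt(b"example file contents", priv.public_key())
```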
AES Cardless Automatic Teller Machine (ATM) Biometric Security System Design Using FPGA Implementation
Ahmad, Nabihah; Rifen, A. Aminurdin M.; Helmy Abd Wahab, Mohd
An Automated Teller Machine (ATM) is an electronic banking outlet that allows bank customers to complete banking transactions without the aid of any bank official or teller. Several problems are associated with the use of ATM cards, such as card cloning, card damage, card expiry, card skimming, the cost of issuance and maintenance, and access to the customer account by third parties. The aim of this project is to give freedom to the user by replacing the card with a biometric security system to access the bank account, using the Advanced Encryption Standard (AES) algorithm. The project is implemented on a Field Programmable Gate Array (FPGA) DE2-115 board with a Cyclone IV device, a fingerprint scanner, and a Multi-Touch Liquid Crystal Display Second Edition (MTL2), using the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL). This project used 128-bit AES; the recommended design achieves a throughput of around 19.016 Gbps and utilizes around 520 slices. This design offers secure banking transactions with low area and high performance, and is well suited for restricted-space environments with small amounts of RAM or ROM where either encryption or decryption is performed.
Brown, Jonathan S.; Shappee, Benjamin J.; Holoien, T. W.-S.; Stanek, K. Z.; Kochanek, C. S.; Prieto, J. L.
We present late-time optical spectroscopy taken with the Large Binocular Telescope's Multi-Object Double Spectrograph, an improved All-Sky Automated Survey for SuperNovae pre-discovery non-detection, and late-time Swift observations of the nearby (d = 193 Mpc, z = 0.0436) tidal disruption event (TDE) ASASSN-14ae. Our observations span from ˜20 d before to ˜750 d after discovery. The proximity of ASASSN-14ae allows us to study the optical evolution of the flare and the transition to a host-dominated state with exceptionally high precision. We measure very weak Hα emission 300 d after discovery (L_Hα ≃ 4 × 10^39 erg s^-1) and the most stringent upper limit to date on the Hα luminosity ˜750 d after discovery (L_Hα ≲ 10^39 erg s^-1), suggesting that the optical emission arising from a TDE can vanish on a time-scale as short as 1 yr. Our results have important implications for both spectroscopic detection of TDE candidates at late times, as well as the nature of TDE host galaxies themselves.
A Very Compact AES-SPIHT Selective Encryption Computer Architecture Design with Improved S-Box
Jia Hao Kong
The "S-box" algorithm is a key component in the Advanced Encryption Standard (AES) due to its nonlinear property. Various implementation approaches have been researched and discussed meeting stringent application goals (such as low power, high throughput, low area), but the ultimate goal for many researchers is to find a compact and small hardware footprint for the S-box circuit. In this paper, we present our version of a minimized S-box with two separate proposals and improvements in the overall gate count. The compact S-box is adopted with a compact and optimum processor architecture specifically tailored for the AES, namely, the compact instruction set architecture (CISA). To further justify and strengthen the purpose of the compact crypto-processor's application, we have also presented a selective encryption architecture (SEA), which incorporates the CISA as a part of the encryption core, accompanied by the set partitioning in hierarchical trees (SPIHT) algorithm as a complete selective encryption system.
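For orientation, the reference (non-compact) AES S-box can be generated directly from its definition: the multiplicative inverse in GF(2^8) followed by the fixed affine map, which is the baseline that compact hardware realisations approximate with fewer gates. The sketch below is purely illustrative and unrelated to the paper's circuit.

```python
def gf_mul(a, b):
    """Multiply in GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1 (0x11B)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def gf_inv(a):
    """Multiplicative inverse in GF(2^8); by convention inv(0) = 0."""
    return next((x for x in range(1, 256) if gf_mul(a, x) == 1), 0) if a else 0

def affine(b):
    """AES affine transformation applied bitwise, with constant 0x63."""
    out = 0
    for i in range(8):
        bit = ((b >> i) ^ (b >> ((i + 4) % 8)) ^ (b >> ((i + 5) % 8)) ^
               (b >> ((i + 6) % 8)) ^ (b >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
        out |= bit << i
    return out

SBOX = [affine(gf_inv(x)) for x in range(256)]
assert SBOX[0x00] == 0x63 and SBOX[0x53] == 0xED   # spot checks against FIPS-197
```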
aeGEPUCI: a database of gene expression in the dengue vector mosquito, Aedes aegypti
James Anthony A
Background: Aedes aegypti is the principal vector of dengue and yellow fever viruses. The availability of the sequenced and annotated genome enables genome-wide analyses of gene expression in this mosquito. The large amount of data resulting from these analyses requires efficient cataloguing before it becomes useful as the basis for new insights into gene expression patterns and studies of the underlying molecular mechanisms for generating these patterns. Findings: We provide a publicly-accessible database and data-mining tool, aeGEPUCI, that integrates (1) microarray analyses of sex- and stage-specific gene expression in Ae. aegypti, (2) functional gene annotation, (3) genomic sequence data, and (4) computational sequence analysis tools. The database can be used to identify genes expressed in particular stages and patterns of interest, and to analyze putative cis-regulatory elements (CREs) that may play a role in coordinating these patterns. The database is accessible from the address http://www.aegep.bio.uci.edu. Conclusions: The combination of gene expression, function and sequence data coupled with integrated sequence analysis tools allows for identification of expression patterns and streamlines the development of CRE predictions and experiments to assess how patterns of expression are coordinated at the molecular level.
ICP-AES determination of rare earths in zirconium with prior chemical separation of the matrix
Rajeswari, B.; Dhawale, B.A.; Page, A.G.; Sastry, M.D.
Zirconium, being one of the most important materials in the nuclear industry, used as fuel cladding in reactors and as an additive in advanced fuels, necessitates characterization of its trace metallic contents. Zirconium, as refractory in nature as the rare earth elements, has a complex spectrum comprising several emission lines. Rare earths, which are high neutron absorbers, have to be analysed at very low limits. Hence, to achieve the desired limits, the major matrix has to be separated prior to rare earth determination. The present paper describes the method developed for the separation of rare earths from zirconium by solvent extraction using trioctyl phosphine oxide (TOPO) as the extractant, followed by their determination using the Inductively Coupled Plasma - Atomic Emission Spectrometry (ICP-AES) method. Initially, radiochemical studies were carried out using known amounts of gamma-active tracers of 141Ce, 152-154Eu, 153Gd and 95Zr for optimisation of the extraction conditions using a Tl-activated NaI detector. The optimum condition of 0.5 M TOPO/xylene in 6 M HCl, chosen so as to achieve quantitative recovery of the rare earth analytes along with near-total extraction of zirconium into the organic phase, was further extended to carry out the studies using the ICP-AES method. The recovery of rare earths was found to be quantitative within experimental error, with a precision better than 10% RSD. (author)
A new approach for the beryl mineral decomposition: elemental characterisation using ICP-AES and FAAS
Nathan, Usha; Premadas, A.
A new approach for beryl mineral sample decomposition and a solution preparation method suitable for elemental analysis using ICP-AES and FAAS are described. For complete sample decomposition, four different decomposition procedures are employed: (i) ammonium bi-fluoride alone, (ii) a mixture of ammonium bi-fluoride and ammonium sulphate, (iii) a powdered mixture of NaF and KHF2 in a 1:3 ratio, and (iv) acid digestion treatment using a hydrofluoric acid and nitric acid mixture, with the residue fused with a powdered mixture of NaF and KHF2. Elements like Be, Al, Fe, Mn, Ti, Cr, Ca, Mg, and Nb are determined by ICP-AES, and Na, K, Rb and Cs are determined by the FAAS method. Fusion with 2 g of ammonium bifluoride flux alone is sufficient for the complete decomposition of a 0.400 gram sample. The values obtained by this decomposition procedure agree well with the reported method. The accuracy of the proposed method was checked by analyzing synthetic samples prepared in the laboratory by mixing high-purity oxides having a chemical composition similar to natural beryl mineral. This indicates that the accuracy of the method is very good, and the reproducibility is characterized by an RSD of 1 to 4% for the elements studied. (author)
Electron stimulated carbon adsorption in ultra high vacuum monitored by Auger Electron Spectroscopy (AES)
Scheuerlein, C
Electron stimulated carbon adsorption at room temperature (RT) has been studied in the context of radiation induced surface modifications in the vacuum system of particle accelerators. The stimulated carbon adsorption was monitored by AES during continuous irradiation by 2.5 keV electrons and simultaneous exposure of the sample surface to CO, CO2 or CH4. The amount of adsorbed carbon was estimated by measuring the carbon Auger peak intensity as a function of the electron irradiation time. Investigated substrate materials are technical OFE copper and TiZrV non-evaporable getter (NEG) thin film coatings, which are saturated either in air or by CO exposure inside the Auger electron spectrometer. On the copper substrate electron induced carbon adsorption from gas phase CO and CO2 is below the detection limit of AES. During electron irradiation of the non-activated TiZrV getter thin films, electron stimulated carbon adsorption from gas phase molecules is detected when either CO or CO2 is injected, whereas the CH4 ...
Quantification of immobilized Candida antarctica lipase B (CALB) using ICP-AES combined with Bradford method.
Nicolás, Paula; Lassalle, Verónica L; Ferreira, María L
The aim of this manuscript was to study the application of a new method of protein quantification in Candida antarctica lipase B commercial solutions. Error sources associated with the traditional Bradford technique were demonstrated. Eight biocatalysts based on C. antarctica lipase B (CALB) immobilized onto magnetite nanoparticles were used. Magnetite nanoparticles were coated with chitosan (CHIT) and modified with glutaraldehyde (GLUT) and aminopropyltriethoxysilane (APTS). CALB was then adsorbed on the modified support. The proposed protein quantification method included the determination of sulfur (from protein in the CALB solution) by means of atomic emission by inductively coupled plasma (AE-ICP). Four different protocols were applied combining AE-ICP and classical Bradford assays, besides carbon, hydrogen and nitrogen (CHN) analysis. The calculated error in protein content using the "classic" Bradford method with bovine serum albumin as standard ranged from 400 to 1200% when protein in the CALB solution was quantified. These errors were calculated taking as "true" protein content values the amounts of immobilized protein obtained with the improved method. The optimum quantification procedure involved the combination of the Bradford method, ICP and CHN analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
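The 400-1200% errors quoted above are relative deviations of the Bradford estimates from the values taken as true (those from the improved AE-ICP/CHN-based procedure). A minimal sketch of that relative-error calculation is shown below; the numerical values are hypothetical and only illustrate the arithmetic.

```python
def relative_error_percent(estimate: float, reference: float) -> float:
    """Deviation of an estimate from a reference value, in percent of the reference."""
    return 100.0 * abs(estimate - reference) / reference

# Hypothetical values (mg protein per g of biocatalyst), not taken from the paper.
bradford_estimate = 60.0   # classic Bradford assay with a BSA standard
reference_value = 10.0     # improved AE-ICP / CHN based result, taken as "true"
error = relative_error_percent(bradford_estimate, reference_value)
print(f"relative error = {error:.0f} %")   # 500 %, i.e., within the 400-1200 % range quoted
```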
Quench detection/protection of an HTS coil by AE signals
Yoneda, M.; Nanato, N.; Aoki, D.; Kato, T.; Murase, S.
A quench detection/protection system based on measuring AE signals was developed. The system was tested on a Bi2223 coil. The temperature rise after a quench occurrence was restrained to a very low value. The normal zone propagation velocities in high-Tc superconductors are slow at high operating temperatures; therefore, a local and excessive temperature rise develops at quench occurrence, and the superconductors can be degraded or burned. It is therefore essential to detect the temperature rise in high-Tc superconducting (HTS) coils as soon as possible and protect them. The authors have presented a quench detection method for HTS coils based on time-frequency visualization of AE signals and have shown its usefulness for an HTS coil with height and outer diameter of 40-50 mm. In this paper, the authors present a quench detection/protection system based on a method with a shorter quench detection time than the previous one and show its usefulness for a larger HTS coil (height and outer diameter: 160-190 mm) than the previous coil.
Characterization of fold defects in AZ91D and AE42 magnesium alloy permanent mold castings
Bichler, L.; Ravindran, C.
Casting premium-quality magnesium alloy components for aerospace and automotive applications poses unique challenges. Magnesium alloys are known to freeze rapidly prior to filling a casting cavity, resulting in misruns and cold shuts. In addition, melt oxidation, solute segregation and turbulent metal flow during casting contribute to the formation of fold defects. In this research, formation of fold defects in AZ91D and AE42 magnesium alloys cast via the permanent mold casting process was investigated. Computer simulations of the casting process predicted the development of a turbulent metal flow in a critical casting region with abrupt geometrical transitions. SEM and light optical microscopy examinations revealed the presence of folds in this region for both alloys. However, each alloy exhibited a unique mechanism responsible for fold formation. In the AZ91D alloy, melt oxidation and velocity gradients in the critical casting region prevented fusion of merging metal front streams. In the AE42 alloy, limited solubility of rare-earth intermetallic compounds in the α-Mg phase resulted in segregation of Al2RE particles at the leading edge of a metal front and created microstructural inhomogeneity across the fold.
A scheme for the complete elemental characterisation of brannerite mineral using ICP-AES
Premadas, A.; Bose, Roopa
Brannerite occurs along with other refractory minerals in pegmatite veins. A literature survey indicates a lack of systematic and detailed chemical analysis procedures for brannerites. This paper reports suitable sample decomposition procedures, yielding stable sample solutions suitable for the ICP-AES determination of major, minor and certain trace elements normally associated with the brannerite mineral. Three different simple decomposition procedures, (i) acid digestion, (ii) fusion with lithium metaborate, and (iii) fusion with a mixture of tetrasodium pyrophosphate and monosodium dihydrogen phosphate, are used to obtain the sample solution for elemental analysis. Because uranium and titanium, the major elements in the brannerite mineral, are present at high concentrations, a detailed study of their influence on the ICP-AES determination of the other elements was carried out for three commonly used emission lines of each element. The REEs, Y, Sc and Th have been determined after the removal of the major matrix elements using oxalate precipitation. The accuracy of the proposed procedures is established by analyzing synthetic brannerite samples prepared by mixing high-purity oxides or chlorides in proportions similar to natural brannerite samples. The results indicate the method is accurate. The reproducibility study carried out on one sample shows that the %RSD varied from 2 to 5%. (author)
Experimental study on the Kaiser effect of AE under multiaxial loading in granite
Watanabe, Hidehiko; Hiroi, Takehiro
Knowledge of the in-situ stresses is essential for underground excavation design, particularly in evaluating the stability of an excavation. The acoustic emission method, which utilizes the Kaiser effect, is one of the simpler methods for measuring in-situ stresses. Experiments on the Kaiser effect have been carried out under uniaxial compression and triaxial compression (σ1 > σ2 = σ3), but not under three different principal stresses (σ1 > σ2 > σ3). In this study, we performed two experiments on the Kaiser effect under multiaxial loading, using a hollow cylindrical granite specimen. The point at which the cumulative AE event count increases rapidly was determined as the peak point of the AE event count rate increment (AERI). The main results are summarized as follows. (1) In the case of cyclic incremental σ1 loading under σ2 ≠ σ3, the large peak point of the AERI appeared just before the pre-stress level, and the estimation error tended to increase for stresses estimated from points earlier than this peak. (2) In the case of re-loading under σ2 and σ3 lower than during pre-loading, the stresses estimated using the three peak points of the AERI corresponded to the pre-differential stresses (σ1 − σ2) and (σ1 − σ3) and the pre-axial stress σ1. The magnitudes of the three principal stresses were thus estimated under multiaxial loading from the Kaiser effect, using only one specimen. (author)
The calibration of XRF polyethylene reference materials with k0-NAA and ICP-AES
Swagten, Josefien; Bossus, Daniel; Vanwersch, Hanny
Due to the lack of commercially available polyethylene reference materials for the calibration of X-ray fluorescence (XRF) spectrometers, DSM Resolve, in cooperation with PANalytical, prepared and calibrated such a set of standards in 2005. The reference materials were prepared by adding additives to virgin polyethylene; such additives are added to improve the performance of the polymers, and the elements they contain serve as tracers for the additives used. The reference materials contain the following elements: F, Na, Mg, Al, Si, P, S, Ca, Ti and Zn, in the concentration range of 5 mg/kg for Ti up to 600 mg/kg for Mg. The calibration of the reference materials, including a blank, was performed using inductively coupled plasma atomic emission spectrometry (ICP-AES) and neutron activation analysis (k0-NAA). ICP-AES was used to determine the elements Na, Mg, Al, P, Ca, Ti and Zn, whereas k0-NAA was used for F, Na, Mg, Al, Ca, Ti and Zn. Over the complete concentration range, good agreement was found between the results of the two techniques. This project has shown that, within DSM Resolve, it is possible to develop and calibrate homogeneous reference materials for XRF.
Investigation of reordered (001) Au surfaces by positive ion channeling spectroscopy, LEED and AES
Appleton, B.R.; Noggle, T.S.; Miller, J.W.; Schow, O.E. III; Zehner, D.M.; Jenkins, L.H.; Barrett, J.H.
As a consequence of the channeling phenomenon of positive ions in single crystals, the yield of ions Rutherford scattered from an oriented single crystal surface is dependent on the density of surface atoms exposed to the incident ion beam. Thus, the positive ion channeling spectroscopy (PICS) technique should provide a useful tool for studying reordered surfaces. This possibility was explored by examining the surfaces of epitaxially grown thin Au single crystals with the combined techniques of LEED-AES and PICS. The LEED and AES investigations showed that when the (001) surface was sputter cleaned in ultra-high vacuum, the normal (1 x 1) symmetry of the (001) surfaces reordered into a structure which gave a complex (5 x 20) LEED pattern. The yield and energy distributions of 1 MeV He ions scattered from the Au surfaces were used to determine the number of effective monolayers contributing to the normal and reordered surfaces. These combined measurements were used to characterize the nature of the reordered surface. The general applicability of the PICS technique for investigations of surface and near surface regions is discussed
The L7Ae protein binds to two kink-turns in the Pyrococcus furiosus RNase P RNA
Lai, Stella M.; Lai, Lien B.; Foster, Mark P.; Gopalan, Venkat
The RNA-binding protein L7Ae, known for its role in translation (as part of ribosomes) and RNA modification (as part of sn/oRNPs), has also been identified as a subunit of archaeal RNase P, a ribonucleoprotein complex that employs an RNA catalyst for the Mg2+-dependent 5′ maturation of tRNAs. To better understand the assembly and catalysis of archaeal RNase P, we used a site-specific hydroxyl radical-mediated footprinting strategy to pinpoint the binding sites of Pyrococcus furiosus (Pfu) L7Ae on its cognate RNase P RNA (RPR). L7Ae derivatives with single-Cys substitutions at residues in the predicted RNA-binding interface (K42C/C71V, R46C/C71V, V95C/C71V) were modified with an iron complex of EDTA-2-aminoethyl 2-pyridyl disulfide. Upon addition of hydrogen peroxide and ascorbate, these L7Ae-tethered nucleases were expected to cleave the RPR at nucleotides proximal to the EDTA-Fe–modified residues. Indeed, footprinting experiments with an enzyme assembled with the Pfu RPR and five protein cofactors (POP5, RPP21, RPP29, RPP30 and L7Ae–EDTA-Fe) revealed specific RNA cleavages, localizing the binding sites of L7Ae to the RPR's catalytic and specificity domains. These results support the presence of two kink-turns, the structural motifs recognized by L7Ae, in distinct functional domains of the RPR and suggest testable mechanisms by which L7Ae contributes to RNase P catalysis. PMID:25361963
Local environmental and meteorological conditions influencing the invasive mosquito Ae. albopictus and arbovirus transmission risk in New York City.
Little, Eliza; Bajwa, Waheed; Shaman, Jeffrey
Ae. albopictus, an invasive mosquito vector now endemic to much of the northeastern US, is a significant public health threat both as a nuisance biter and vector of disease (e.g. chikungunya virus). Here, we aim to quantify the relationships between local environmental and meteorological conditions and the abundance of Ae. albopictus mosquitoes in New York City. Using statistical modeling, we create a fine-scale spatially explicit risk map of Ae. albopictus abundance and validate the accuracy of spatiotemporal model predictions using observational data from 2016. We find that the spatial variability of annual Ae. albopictus abundance is greater than its temporal variability in New York City but that both local environmental and meteorological conditions are associated with Ae. albopictus numbers. Specifically, key land use characteristics, including open spaces, residential areas, and vacant lots, and spring and early summer meteorological conditions are associated with annual Ae. albopictus abundance. In addition, we investigate the distribution of imported chikungunya cases during 2014 and use these data to delineate areas with the highest rates of arboviral importation. We show that the spatial distribution of imported arboviral cases has been mostly discordant with mosquito production and thus, to date, has provided a check on local arboviral transmission in New York City. We do, however, find concordant areas where high Ae. albopictus abundance and chikungunya importation co-occur. Public health and vector control officials should prioritize control efforts to these areas and thus more cost effectively reduce the risk of local arboviral transmission. The methods applied here can be used to monitor and identify areas of risk for other imported vector-borne diseases.
aes, the gene encoding the esterase B in Escherichia coli, is a powerful phylogenetic marker of the species
Tuffery Pierre
Abstract Background Previous studies have established a correlation between electrophoretic polymorphism of esterase B, and virulence and phylogeny of Escherichia coli. Strains belonging to the phylogenetic group B2 are more frequently implicated in extraintestinal infections and include esterase B2 variants, whereas phylogenetic groups A, B1 and D contain less virulent strains and include esterase B1 variants. We investigated esterase B as a marker of phylogeny and/or virulence, in a thorough analysis of the esterase B-encoding gene. Results We identified the gene encoding esterase B as the acetyl-esterase gene (aes) using gene disruption. The analysis of aes nucleotide sequences in a panel of 78 reference strains, including the E. coli reference (ECOR) strains, demonstrated that the gene is under purifying selection. The phylogenetic tree reconstructed from aes sequences showed a strong correlation with the species phylogenetic history, based on multi-locus sequence typing using six housekeeping genes. The unambiguous distinction between variants B1 and B2 by electrophoresis was consistent with Aes amino-acid sequence analysis and protein modelling, which showed that substituted amino acids in the two esterase B variants occurred mostly at different sites on the protein surface. Studies in an experimental mouse model of septicaemia using mutant strains did not reveal a direct link between aes and extraintestinal virulence. Moreover, we did not find any genes in the chromosomal region of aes to be associated with virulence. Conclusion Our findings suggest that aes does not play a direct role in the virulence of E. coli extraintestinal infection. However, this gene acts as a powerful marker of phylogeny, illustrating the extensive divergence of B2 phylogenetic group strains from the rest of the species.
Ultracompact X-ray binary stars
Haaften, L.M. van
Ultracompact X-ray binary stars usually consist of a neutron star and a white dwarf, two stars bound together by their strong gravity and orbiting each other very rapidly, completing one orbit in less than one hour. Neutron stars are extremely compact remnants of the collapsed cores of massive stars
Numerical study of rotating relativistic stars
Wilson, J.R.
The equations of structure for rotating stars in general relativity are presented and put in a form suitable for computer calculations. The results of equilibrium calculations for supermassive stars, neutron stars, and magnetically supported stars are reported, as are calculations of collapsing, rotating, and magnetized stars in the slowly changing gravitational field approximation. (auth)
The Spacelab IPS Star Simulator
Wessling, Francis C., III
The cost of doing business in space is very high. If errors occur while in orbit, the costs grow and desired scientific data may be corrupted or even lost. The Spacelab Instrument Pointing System (IPS) Star Simulator is a unique test bed that allows star trackers to interface with simulated stars in a laboratory before going into orbit. This hardware-in-the-loop testing of equipment on earth increases the probability of success while in space. The IPS Star Simulator provides three fields of view, 2.55 x 2.55 degrees each, for input into star trackers. The fields of view are produced on three separate monitors. Each monitor has 4096 x 4096 addressable points and can display 50 stars (pixels) maximum at a given time. The pixel refresh rate is 1000 Hz. The spectral output is approximately 550 nm. The available relative visual magnitude range is 2 to 8 visual magnitudes. The star size is less than 100 arc seconds. The minimum star movement is less than 5 arc seconds and the relative position accuracy is approximately 40 arc seconds. The purpose of this paper is to describe the IPS Star Simulator design and to provide an operational scenario so others may gain from the approach and possible use of the system.
Origin of faint blue stars
Tutukov, A.; Iungelson, L.
The origin of field faint blue stars that are placed in the HR diagram to the left of the main sequence is discussed. These include degenerate dwarfs and O and B subdwarfs. Degenerate dwarfs belong to two main populations, with helium and carbon-oxygen cores. The majority of the hot subdwarfs are most probably nondegenerate helium stars produced by mass exchange in close binaries of moderate mass (3-15 solar masses). The theoretical estimates of the numbers of faint blue stars of different types brighter than certain stellar magnitudes agree with star counts based on the Palomar Green Survey. 28 references.
Statistical properties of barium stars
Hakkila, J.E.
Barium stars are G- and K-giant stars with atmospheric excesses of s-process elements and a broadband spectral depression in the blue portion of the spectrum. The strength of the λ4554 Ba II line is used as a classification parameter known as the barium intensity. They have a mean absolute magnitude of 1.0 and a dispersion of 1.2 magnitudes (assuming a Gaussian distribution in absolute magnitude), as measured from secular and statistical parallaxes. These stars apparently belong to a young-disk population, from analyses of both the solar reflex motion and their residual velocity distribution, which implies that they have an upper mass limit of around three solar masses. There is no apparent correlation of barium intensity with either luminosity or kinematic properties. The barium stars appear to be preferentially distributed in the direction of the local spiral arm, but show no preference to associate with or avoid the direction of the galactic center. They do not appear related to either the carbon or S-stars because of these tendencies and because of the stellar population to which each type of star belongs. The distribution in absolute magnitude combined with star count analyses implies that these stars are slightly less numerous than previously believed. Barium stars show infrared excesses that correlate with their barium intensities.
The birth of star clusters
All stars are born in groups. The origin of these groups has long been a key question in astronomy, one that interests researchers in star formation, the interstellar medium, and cosmology. This volume summarizes current progress in the field, and includes contributions from both theorists and observers. Star clusters appear with a wide range of properties, and are born in a variety of physical conditions. Yet the key question remains: How do diffuse clouds of gas condense into the collections of luminous objects we call stars? This book will benefit graduate students, newcomers to the field, and also experienced scientists seeking a convenient reference.
Cloud Screening and Quality Control Algorithm for Star Photometer Data: Assessment with Lidar Measurements and with All-sky Images
Ramirez, Daniel Perez; Lyamani, H.; Olmo, F. J.; Whiteman, D. N.; Navas-Guzman, F.; Alados-Arboledas, L.
This paper presents the development and setup of a cloud screening and data quality control algorithm for a star photometer based on a CCD camera as detector. These algorithms are necessary for passive remote sensing techniques to retrieve the columnar aerosol optical depth, δAe(λ), and precipitable water vapor content, W, at nighttime. The cloud screening procedure consists of calculating moving averages of δAe(λ) and W under different time windows, combined with a procedure for detecting outliers. Additionally, to avoid undesirable δAe(λ) and W fluctuations caused by atmospheric turbulence, the data are averaged over 30 min. The algorithm is applied to the star photometer deployed in the city of Granada (37.16 N, 3.60 W, 680 m a.s.l.; south-east of Spain) for the measurements acquired between March 2007 and September 2009. The algorithm is evaluated with correlative measurements registered by a lidar system and also with all-sky images obtained at the sunset and sunrise of the previous and following days. Promising results are obtained in detecting cloud-affected data. Additionally, the cloud screening algorithm has been evaluated under different aerosol conditions, including Saharan dust intrusion, biomass burning and pollution events.
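The screening procedure just described combines moving averages over different time windows with outlier rejection, followed by 30-min averaging to suppress turbulence-induced scatter. The following Python sketch illustrates that general idea only; it is not the authors' implementation, and the 10-min window and 3-sigma threshold are assumptions made for the example.

```python
import numpy as np

def screen_series(times_min, values, window_min=10.0, n_sigma=3.0):
    """Flag samples that deviate strongly from a running local mean as cloud-affected."""
    keep = np.ones(values.size, dtype=bool)
    for i, t in enumerate(times_min):
        in_win = np.abs(times_min - t) <= window_min / 2.0
        in_win[i] = False                      # exclude the point itself from the local statistics
        local = values[in_win]
        if local.size >= 3:
            mu, sigma = local.mean(), local.std()
            if sigma > 0 and abs(values[i] - mu) > n_sigma * sigma:
                keep[i] = False
    return keep

def average_30min(times_min, values):
    """Average the screened data in 30-minute bins."""
    bins = np.floor(times_min / 30.0)
    return np.array([values[bins == b].mean() for b in np.unique(bins)])

# Hypothetical nighttime aerosol optical depth series (not real star-photometer data).
rng = np.random.default_rng(0)
t = np.arange(0.0, 120.0, 2.0)                 # minutes
aod = 0.15 + 0.01 * rng.standard_normal(t.size)
aod[25] = 0.60                                 # simulate one cloud-contaminated sample
mask = screen_series(t, aod)
print("flagged samples:", np.where(~mask)[0])  # expected to include index 25
print("30-min means   :", average_30min(t[mask], aod[mask]))
```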
The Double Star mission
The Double Star Programme (DSP) was first proposed by China in March 1997 at the Fragrant Hill Workshop on Space Science, Beijing, organized by the Chinese Academy of Science. It is the first mission in collaboration between China and ESA. The mission is made of two spacecraft to investigate the magnetospheric global processes and their response to the interplanetary disturbances in conjunction with the Cluster mission. The first spacecraft, TC-1 (Tan Ce means "Explorer"), was launched on 29 December 2003, and the second one, TC-2, on 25 July 2004, on board two Chinese Long March 2C rockets. TC-1 was injected into an equatorial orbit of 570x79000 km altitude with a 28° inclination and TC-2 into a polar orbit of 560x38000 km altitude. The orbits have been designed to complement the Cluster mission by maximizing the time when both Cluster and Double Star are in the same scientific regions. The two missions allow simultaneous observations of the Earth's magnetosphere from six points in space. To facilitate the comparison of data, half of the Double Star payload is made of spares or duplicates of the Cluster instruments; the other half is made of Chinese instruments. The science operations are coordinated by the Chinese DSP Scientific Operations Centre (DSOC) in Beijing and the European Payload Operations Service (EPOS) at RAL, UK. The spacecraft and ground segment operations are performed by the DSP Operations and Management Centre (DOMC) and DSOC in China, using three ground stations, in Beijing, Shanghai and Villafranca.
Symbiotic star AG Dra
Ipatov, A.P.; Yudin, B.F.; Moskovskij Gosudarstvennyj Univ.
The results obtained from photometric (in the UBVRJHKLM system) and spectrophotometric (in the range 0.33-0.75 μm) observations of the symbiotic star AG Dra are presented. The cool component of this star is a red giant of approximately constant brightness (ΔJ ≤ 0.3 mag) classified as K4-K5. This red giant fills its Roche lobe and is probably on the asymptotic giant branch of the HR diagram. The presence of an IR excess at 5 μm, associated with radiation from a gaseous envelope with a mass of M ≅ 10^-6 M_sun, has been detected. Observations of AG Dra indicate that the growth of the bolometric flux of the hot component is accompanied by a decreasing effective temperature. The hot component of the system is probably an accreting red dwarf with a mass M ≅ 0.4 M_sun and disk accretion of matter from the cool star at a rate ≳ 10^-4 M_sun per year in the equatorial region. An increase of the accretion rate during an outburst of AG Dra leads to an increase of the stellar wind from the red dwarf surface and a decrease of its effective temperature. The hot component of AG Dra may also be considered as a white dwarf with luminosity L ≳ 3 L_sun and R_eff ≳ 0.2 R_sun. In this case the gravitational energy of matter accreted at a rate ≳ 10^-6 M_sun per year would be the source of the hot component outbursts. The luminosity between outbursts is determined by energy generation from the hydrogen-burning layer source.
Stars of heaven
Pickover, Clifford A
Do a little armchair space travel, rub elbows with alien life forms, and stretch your mind to the furthest corners of our uncharted universe. With this astonishing guidebook, you don't have to be an astronomer to explore the mysteries of stars and their profound meaning for human existence. Clifford A. Pickover tackles a range of topics from stellar evolution to the fundamental reasons why the universe permits life to flourish. He alternates sections that explain the mysteries of the cosmos with sections that dramatize mind-expanding concepts through a fictional dialog between futuristic human
Elemental diffusion in stars
Michaud, Georges; Montmerle, Thierry
This paper deals with the origin of the elements in the universe. The nucleosynthesis scheme is retained to explain the stellar generation of helium, carbon, etc. from the initial hydrogen, but a nonlinear theory is then elaborated to account for the anomalous abundances that were observed. The chemical elements would diffuse throughout the outer layers of a star under the action of the opposing forces of gravitation and radiation. This theory, complementing nucleosynthesis, would contribute to a consistent scheme of elemental origins and abundances.
Hadronic Resonances from STAR
Wada Masayuki
The results of resonance particle production (ρ0, ω, K*, ϕ, Σ*, and Λ*) measured by the STAR collaboration at RHIC from various colliding systems and energies are presented. The measured mass, width, 〈pT〉, and yield of these resonances are reviewed. No significant mass shifts or width broadening beyond the experimental uncertainties are observed. New measurements of ϕ and ω from leptonic decay channels are presented. The yields from leptonic decay channels are compared with the measurements from hadronic decay channels, and the two results are consistent with each other.
O3 stars
Walborn, N.R.
A brief review of the 10 known objects in this earliest spectral class is presented. Two new members are included: HD 64568 in NGC 2467 (Puppis OB2), which provides the first example of an O3 V((f*)) spectrum; and Sk −67°22 in the Large Magellanic Cloud, which is intermediate between types O3 If* and WN6-A. In addition, the spectrum of HDE 269810 in the LMC is reclassified as the first of type O3 III(f*). The absolute visual magnitudes of these stars are rediscussed.
Burn out or fade away? On the X-ray and magnetic death of intermediate mass stars
Drake, Jeremy J.; Kashyap, Vinay; Günther, H. Moritz; Wright, Nicholas J. [Smithsonian Astrophysical Observatory, MS-3, 60 Garden Street, Cambridge, MA 02138 (United States); Braithwaite, Jonathan, E-mail: [email protected] [Argelander Institut für Astronomie, Auf dem Hügel 71, D-53121 Bonn (Germany)
The nature of the mechanisms apparently driving X-rays from intermediate mass stars lacking strong convection zones or massive winds remains poorly understood, and the possible role of hidden, lower mass close companions is still unclear. A 20 ks Chandra HRC-I observation of HR 4796A, an 8 Myr old main sequence A0 star devoid of close stellar companions, has been used to search for a signature or remnant of magnetic activity from the Herbig Ae phase. X-rays were not detected and the X-ray luminosity upper limit was L_X ≤ 1.3 × 10^27 erg s^-1. The result is discussed in the context of various scenarios for generating magnetic activity, including rotational shear and subsurface convection. A dynamo driven by natal differential rotation is unlikely to produce observable X-rays, chiefly because of the difficulty in getting the dissipated energy up to the surface of the star. A subsurface convection layer produced by the ionization of helium could host a dynamo that should be effective throughout the main sequence but can only produce X-ray luminosities of the order of 10^25 erg s^-1. This luminosity lies only moderately below the current detection limit for Vega. Our study supports the idea that X-ray production in Herbig Ae/Be stars is linked largely to the accretion process rather than the properties of the underlying star, and that early A stars generally decline in X-ray luminosity at least 100,000 fold in only a few million years.
An Optimal Image-Based Method for Identification of Acoustic Emission (AE) Sources in Plate-Like Structures Using a Lead Zirconium Titanate (PZT) Sensor Array
Yan, Gang; Zhou, Li
This paper proposes an innovative method for identifying the locations of multiple simultaneous acoustic emission (AE) events in plate-like structures from the view of image processing. By using a linear lead zirconium titanate (PZT) sensor array to record the AE wave signals, a reverse-time frequency-wavenumber (f-k) migration is employed to produce images displaying the locations of AE sources by back-propagating the AE waves. Lamb wave theory is included in the f-k migration to consider the dispersive property of the AE waves. Since the exact occurrence time of the AE events is usually unknown when recording the AE wave signals, a heuristic artificial bee colony (ABC) algorithm combined with an optimal criterion using minimum Shannon entropy is used to find the image with the identified AE source locations and occurrence time that mostly approximate the actual ones. Experimental studies on an aluminum plate with AE events simulated by PZT actuators are performed to validate the applicability and effectiveness of the proposed optimal image-based AE source identification method. PMID:29466310
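The selection criterion above rests on Shannon entropy: an image in which the back-propagated AE energy is sharply focused at the true source concentrates intensity in few pixels and therefore has low entropy. The Python sketch below illustrates only this selection step; the candidate images are synthetic placeholders, and the f-k migration and artificial bee colony search described in the abstract are not reproduced.

```python
import numpy as np

def shannon_entropy(image: np.ndarray) -> float:
    """Shannon entropy of a non-negative image, treating normalized intensity as a distribution."""
    p = np.abs(image).ravel()
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def pick_most_focused(candidates: list) -> int:
    """Index of the candidate image with minimum entropy (i.e., the most sharply focused one)."""
    return int(np.argmin([shannon_entropy(img) for img in candidates]))

# Synthetic example: a focused image (energy in one pixel) versus a defocused one.
focused = np.zeros((64, 64)); focused[32, 32] = 1.0
blurred = np.ones((64, 64))
print(pick_most_focused([blurred, focused]))   # -> 1, the focused image
```

In the actual method, each candidate image corresponds to a trial AE occurrence time, and the heuristic search looks for the time whose migrated image minimizes this entropy.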
Hypolipidemic effects of starch and γ-oryzanol from wx/ae double-mutant rice on BALB/c.KOR-Apoe(shl) mice.
Nakaya, Makoto; Shojo, Aiko; Hirai, Hiroaki; Matsumoto, Kenji; Kitamura, Shinichi
waxy/amylose-extender (wx/ae) double-mutant japonica rice (Oryza sativa L.) produces resistant starch (RS) and a large amount of γ-oryzanol. Our previous study has shown the hypolipidemic effect of wx/ae brown rice on mice. To identify the functional constituents of the hypolipidemic activity in wx/ae rice, we prepared pure wx/ae starch and γ-oryzanol from wx/ae rice and investigated their effect on the lipid metabolism in BALB/c.KOR/Stm Slc-Apoe(shl) mice. The mice were fed for 3 weeks a diet containing non-mutant rice starch, non-mutant rice starch plus γ-oryzanol, wx/ae starch, or wx/ae starch plus γ-oryzanol. γ-Oryzanol by itself had no effect on the lipid metabolism, and wx/ae starch prevented an accumulation of triacylglycerol (TAG) in the liver. Interestingly, the combination of wx/ae starch plus γ-oryzanol not only prevented a TAG accumulation in the liver, but also partially suppressed the rise in plasma TAG concentration, indicating that wx/ae starch and γ-oryzanol could have a synergistic effect on the lipid metabolism.
Star-forming galaxy models: Blending star formation into TREESPH
Mihos, J. Christopher; Hernquist, Lars
We have incorporated star-formation algorithms into a hybrid N-body/smoothed particle hydrodynamics code (TREESPH) in order to describe the star forming properties of disk galaxies over timescales of a few billion years. The models employ a Schmidt law of index n approximately 1.5 to calculate star-formation rates, and explicitly include the energy and metallicity feedback into the Interstellar Medium (ISM). Modeling the newly formed stellar population is achieved through the use of hybrid SPH/young star particles which gradually convert from gaseous to collisionless particles, avoiding the computational difficulties involved in creating new particles. The models are shown to reproduce well the star-forming properties of disk galaxies, such as the morphology, rate of star formation, and evolution of the global star-formation rate and disk gas content. As an example of the technique, we model an encounter between a disk galaxy and a small companion which gives rise to a ring galaxy reminiscent of the Cartwheel (AM 0035-35). The primary galaxy in this encounter experiences two phases of star forming activity: an initial period during the expansion of the ring, and a delayed phase as shocked material in the ring falls back into the central regions.
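The star-formation recipe described here is a Schmidt law of index n ≈ 1.5 applied to the local gas density of SPH particles. The short Python sketch below shows the basic form of such a rate; the efficiency constant and density values are hypothetical and are not the parameters used in the TREESPH models.

```python
import numpy as np

def schmidt_sfr(rho_gas: np.ndarray, n: float = 1.5, c_eff: float = 0.05) -> np.ndarray:
    """Star-formation rate density proportional to rho_gas**n (Schmidt law of index n)."""
    return c_eff * rho_gas**n

# Hypothetical gas densities for a few SPH particles (arbitrary units).
rho = np.array([0.1, 1.0, 10.0])
print(schmidt_sfr(rho))   # the rate grows as rho**1.5, so denser gas forms stars much faster
```

In the full models, the rate computed per particle drives the gradual conversion of the hybrid SPH/young-star particles from gaseous to collisionless form, with energy and metals returned to the ISM.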
I-Love relations for incompressible stars and realistic stars
Chan, T. K.; Chan, AtMa P. O.; Leung, P. T.
In spite of the diversity in the equations of state of nuclear matter, the recently discovered I-Love-Q relations [Yagi and Yunes, Science 341, 365 (2013), 10.1126/science.1236462], which relate the moment of inertia, tidal Love number (deformability), and the spin-induced quadrupole moment of compact stars, hold for various kinds of realistic neutron stars and quark stars. While the physical origin of such universality is still a current issue, the observation that the I-Love-Q relations of incompressible stars can well approximate those of realistic compact stars hints at a new direction to approach the problem. In this paper, by establishing recursive post-Minkowskian expansion for the moment of inertia and the tidal deformability of incompressible stars, we analytically derive the I-Love relation for incompressible stars and show that the so-obtained formula can be used to accurately predict the behavior of realistic compact stars from the Newtonian limit to the maximum mass limit.
TRIGGERED STAR FORMATION SURROUNDING WOLF-RAYET STAR HD 211853
Liu Tie; Wu Yuefang; Zhang Huawei [Department of Astronomy, Peking University, 100871 Beijing (China); Qin Shengli, E-mail: [email protected] [I. Physikalisches Institut, Universitaet zu Koeln, Zuelpicher Str. 77, 50937 Koeln (Germany)
The environment surrounding the Wolf-Rayet (W-R) star HD 211853 is studied in molecular, infrared, radio, and H I emission. The molecular ring consists of well-separated cores, which have a volume density of 10^3 cm^-3 and a kinetic temperature of ~20 K. Most of the cores are under gravitational collapse due to external pressure from the surrounding ionized gas. From the spectral energy distribution modeling toward the young stellar objects, sequential star formation is revealed on a large scale in space, spreading from the W-R star to the molecular ring. A small-scale sequential star formation is revealed toward core 'A', which harbors a very young star cluster. Triggered star formation is thus suggested. The presence of the photodissociation region, the fragmentation of the molecular ring, the collapse of the cores, and the large-scale sequential star formation indicate that the 'collect and collapse' process functions in this region. The star-forming activities in core 'A' seem to be affected by the 'radiation-driven implosion' process.
Mechanical force-induced morphology changes in a human fungal pathogen
Charles Puerner,
Nino Kukhaleishvili,
Darren Thomson,
Sebastien Schaub,
Xavier Noblin,
Agnese Seminara,
Martine Bassilana &
Robert A. Arkowitz (ORCID: orcid.org/0000-0002-5216-5013)
BMC Biology volume 18, Article number: 122 (2020)
The initial step of a number of human or plant fungal infections requires active penetration of host tissue. For example, active penetration of intestinal epithelia by Candida albicans is critical for dissemination from the gut into the bloodstream. However, little is known about how this fungal pathogen copes with resistive forces upon host cell invasion.
In the present study, we have used PDMS micro-fabrication to probe the ability of filamentous C. albicans cells to penetrate and grow invasively in substrates of different stiffness. We show that there is a threshold for penetration that corresponds to a stiffness of ~ 200 kPa and that invasive growth within a stiff substrate is characterized by dramatic filament buckling, along with a stiffness-dependent decrease in extension rate. We observed a striking alteration in cell morphology, i.e., reduced cell compartment length and increased diameter during invasive growth, that is not due to depolarization of active Cdc42, but rather occurs at a substantial distance from the site of growth as a result of mechanical compression.
Our data reveal that in response to this compression, active Cdc42 levels are increased at the apex, whereas active Rho1 becomes depolarized, similar to that observed in membrane protrusions. Our results show that cell growth and morphology are altered during invasive growth, suggesting stiffness dictates the host cells that C. albicans can penetrate.
Polar tip growth, in which extension is limited to the apical surface, enables walled cells such as fungi and plants to explore their environment for nutrients and mating partners, while maintaining their surface to volume ratio [1]. Campas and Mahadevan [2] have derived simple scaling laws for cell geometry and identified a single dimensionless parameter that is sufficient to describe variation in the shape of tip growing cells using turgor pressure, cell wall elastic properties, and secretion rate. However, little is known with respect to the response of tip growing cells to mechanical stress. There are five fundamental types of mechanical stress: tension, compression, shear, torsion, and bending. Human and plant fungal pathogens can penetrate host tissue, and it is likely that they encounter compressive stress upon penetration and subsequent invasive growth. In addition to tissue penetration, growth within a spatially confined environment is critical for infection and successful dissemination of such fungal pathogens.
Fungal pathogens take advantage of different strategies to interact with their environment, of which tip growth is a common theme. Penetration of host tissue is critical for both human and plant fungal pathogens and requires not only the generation of sufficient force but also adhesion to the host cells to counter this force [3, 4]. Fungal pathogens have turgor pressures in the MPa range [3, 4], and for human fungal pathogens, this turgor pressure exceeds host cell resistance to penetration. Such host cells have elastic moduli that are in the 1–100 kPa range [5,6,7], although the critical stress for material rupture is the determining factor. Both the human fungal pathogens Candida albicans [8] and Aspergillus fumigatus [9] can actively penetrate host tissue, which is a critical step in the infection process [10,11,12,13,14]. Previous studies have revealed that C. albicans can invade cells via host-induced endocytosis and/or active penetration [8]. C. albicans invasion of the intestinal epithelia (small intestinal enterocytes) occurs almost exclusively by active penetration [8, 15, 16], whereas both endocytosis and active penetration are important for invasion of the oral epithelia [17]. However, even with oral epithelia, at the early stages of infection, active penetration is the major route for tissue invasion [17]. Hence, a better understanding of active penetration should provide insight into the initial step of tissue damage for mucosal infections. Translocation of C. albicans through intestinal epithelial layers is facilitated by the fungal peptide toxin candidalysin [16, 18]. Previous studies have shown that C. albicans hyphal tips are asymmetrically positioned during growth on a stiff surface, i.e., a "nose down" morphology, and that perpendicular growth and contact to a stiff topographical ridge (less than the hyphal radius) results in an indentation of the ridge [19].
To investigate the relationship between substrate stiffness and C. albicans penetration and invasive growth, we have used micro-fabrication, together with time-lapse microscopy. We show that there is a threshold for penetration that corresponds to a stiffness of ~ 200 kPa and that invasive growth within a stiff substrate is characterized by dramatic filament buckling along with a stiffness-dependent decrease in extension rate. Nonetheless, a small percentage of cells are able to invade 200 kPa PDMS, suggesting that these cells may play a key role in infection, similar to that of the persister cells in biofilms. Furthermore, we observed a striking alteration in cell morphology during invasive growth, which is not due to depolarization of active Cdc42, but rather occurs at a substantial distance from the site of growth, as a result of mechanical forces. Our data reveal that in response to mechanical forces, C. albicans has increased active Cdc42 at the apex while active Rho1 is depolarized, similar to what is observed in PDGF-induced fibroblast membrane protrusions [20].
Monitoring Candida albicans filamentous growth in micro-fabricated chambers
To investigate C. albicans hyphal growth, we took advantage of micro-fabrication approaches using the elastomer polydimethylsiloxane (PDMS) that, in particular, have been reported as single-cell force sensors for fission yeast cells [21]. We generated PDMS arrays with approximately 10^5 microchambers, which were cylindrical in shape with a diameter of 10 μm, a depth of 5 μm, and 15 μm spacing between adjacent chambers (Fig. 1a). C. albicans cells in micro-fabricated PDMS chamber arrays were visualized with inverted microscopes; imaging was carried out through an upright array of 150–200-μm-thick PDMS. Figure 1b shows an XZ confocal reflectance scan through the PDMS microarray with the chambers and media at the top (highest position) and the coverslip below for support (a zoom of chambers is shown in Fig. 1c). C. albicans cells were mixed with fetal calf serum, added to the PDMS array, incubated for ~ 1 h, and subsequently, filamentous growth was followed over time (Fig. 1d). With low-stiffness PDMS (a high polymer to cross-linker ratio of 40:1) we observed two predominant filamentous growth modes: non-invasive growth on the PDMS surface and invasive growth within PDMS (Fig. 1e, f). By examination of the focal plane of the PDMS surface and the fungal filaments, using DIC optics, we were able to distinguish between non-invasive (surface) and invasive growth, referring to whether the filament tip is on or within the PDMS, respectively. Invasive growth was also confirmed by labeling the PDMS surface and filamentous cells (see below). Furthermore, we observed that the blastospore (round cell) portion of the filamentous cells, which grew in the microchambers, pushed back against the chamber wall upon PDMS filament penetration and the filament frequently buckled within PDMS, presumably due to the resistive force during growth within the elastomer (Figs. 1f and 2a). These results indicate that, in addition to having ideal optical properties, PDMS is compatible with C. albicans filamentous growth.
Filamentous growth in and on PDMS. a Schematic of PDMS microchamber array. Top and side views are shown with dimensions indicated. b Characterization of PDMS microchamber array. XY transmission view (top) with the location of XZ confocal reflection scan (bottom) of upright PDMS mounted on a coverslip (indicated by dotted red line). Bars are 10 μm in XY and XZ. c Enlarged XZ scan of PDMS microchambers. XZ confocal reflection microscopy of 3 microchambers with dotted lines, indicating top and bottom. Bar is 10 μm. d PDMS microchambers with C. albicans cells entrapped. C. albicans cells were mixed with serum (75%) and added to PDMS microchamber arrays, and DIC image is shown. Note the cell with a germ tube (top row). Bar is 10 μm. e Non-invasive filamentous growth on PDMS micro-arrays. DIC images of growth on PDMS surface from 3 independent experiments on 40:1 PDMS to cross-linker, with indicated times (h:min). f Invasive filamentous growth within PDMS microarrays. DIC images of growth within PDMS from 2 independent experiments using 40:1 PDMS to cross-linker with times indicated. The initial PDMS chamber is outlined with a dotted yellow line to highlight deformation upon invasive filament extension
Filament subapical bending or buckling during invasive growth or confinement in PDMS chamber. a Filament buckling during invasive growth. DIC images of representative cells growing within PDMS at indicated times; red arrowheads indicate filament buckling. b Filament buckling upon confinement in stiff PDMS chambers. Cells entrapped in the PDMS chamber at indicated times visualized by DIC (top panel). Time-lapse of a cell grown on PDMS, probing the surface (bottom) followed by filament buckling and release, at indicated times. c Schematic of bending and buckling cells. Top cell bending in the microchamber and bottom filament buckling within PDMS (lighter green indicates part within PDMS). d PDMS invasion and filament bending are inversely correlated. Three independent time-lapse experiments were carried out at indicated PDMS to cross-linker ratios. Filament curvature upon contact or penetration of PDMS was scored as bending (bending or buckling within PDMS was not scored). Bars indicate SD, and points indicate experimental mean (n = 20–60 cells per experiment and 90–140 per condition). e Filament buckling increases with increasing PDMS stiffness. Independent time-lapse experiments (4–5) were carried out at indicated PDMS ratio, with n = 20–50 invasive cells per experiment (90–125 per condition). Bars indicate SD, and points indicate experimental means. Invasive filaments were considered buckling if the angle of the filament was 90° or greater, compared to filament tip. Note that no buckling was observed in filaments growing on the PDMS surface. f Young's modulus of PDMS preparations. Young's modulus was determined by a viscoanalyzer for 10–30 preparations (except 35:1, only 2 preparations) at indicated PDMS to cross-linker ratios
Growth modes depend on substrate stiffness
We followed C. albicans filamentous growth in PDMS of different stiffness, i.e. the extent to which an object resists deformation in response to an applied force, by varying the ratio of polymer to cross-linker. We observed two main growth modes from cells initially in chambers depending on PDMS stiffness: invasive growth, which predominated with less stiff PDMS (40:1) (Fig. 2a) and dramatic bending in the stiffer PDMS (30:1) chambers, which was predominantly subapical (Fig. 2b, top panel). In contrast to Schizosaccharomyces pombe [21], extensive deformation of the chambers was not observed with C. albicans, which we attribute to the different sizes, geometries, and growth modes of these fungi; fission yeast has a radius of ~ 2 μm, compared to C. albicans hyphal filaments with a radius of ~ 1 μm [22] (and our observations), resulting in a greater than 4-fold difference in the cross-sectional area. At most, a slight chamber deformation was observed with C. albicans, as cells frequently popped out of the chambers comprised of stiff PDMS or penetrated this material, when less stiff. Occasionally, at intermediate PDMS stiffness, we observed filamentous cells growing on the PDMS surface that appeared to be probing the surface with a "nose down" growth (Fig. 2b, bottom panel), as similarly observed [19]. This type of growth was suggestive of the filaments attempting to penetrate into PDMS and, consistent with this, we observed these filaments buckling and/or bending subapically at each attempt, prior to the tip popping out and forward. Buckling is defined as a sudden change in the shape of a component under load, i.e. change in the shape of the filament due to the physical forces it experiences. Subapical bending is, additionally, defined as a change in the direction of growth that results in curved filaments (Fig. 2c). In buckling, it is expected that the shape changes are largely reversed upon removal of the external forces, whereas in bending, the shape changes are not a result of the mechanical forces directly. Buckling can occur with a filament initially straight or bent/curved.
We next examined whether cells, initially in chambers, which were unable to invade, underwent bending. Figure 2d shows that as the stiffness of PDMS increased (from 50:1 to 30:1 PDMS to cross-linker), there was an increase in the percentage of cells undergoing bending, concomitant with a decrease in those invading PDMS. During invasive growth, we also frequently observed buckling of the filament (Fig. 2a red arrowheads, c), i.e. a growth-dependent curvature that typically occurred at the portion of the filament within PDMS. Figure 2e shows that such buckling was dependent on the PDMS stiffness, with over half of invasive filaments buckling in the two stiffest PDMS (40:1 and 35:1).
To determine the mechanical properties of the different PDMS preparations, we used dynamic mechanical analysis for which measurements were reproducible over a range of PDMS stiffness. Oscillating strain at a frequency of 10 Hz was applied to PDMS samples, and stress (σ)-strain (ε) curves were obtained (Additional file 1: Figure S1), from which Young's modulus was determined (initial dσ/dε). Young's modulus is the quantitation of the stiffness, i.e. the ratio stress/strain for a uniaxial load, where stress is the force per unit area and strain is the proportional deformation (change in length divided by original length) and is dimensionless. Figure 2f shows the Young's modulus of different ratios of PDMS to cross-linker, which are in good agreement with published values [23,24,25,26]. The stiffness of lower ratio samples (10:1 and 20:1), intermediate ratios (30:1 to 40:1), and higher ratios (45:1 and 50:1) was similar to that of medical silicone implants [27], with a Young's modulus of ~ 1 MPa; stiff tissues such as the myocardium [7], with a Young's modulus of ~ 0.1–0.2 MPa; and less stiff tissues such as the epithelia [5, 6], with a Young's modulus of ~ 40–70 kPa, respectively.
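Young's modulus is obtained here as the initial slope dσ/dε of the measured stress-strain curve. The Python sketch below shows one simple way to estimate that slope from digitized (ε, σ) points; the data and the 5% small-strain cutoff are hypothetical, and a real dynamic mechanical analysis would rely on the viscoanalyzer's own processing.

```python
import numpy as np

def youngs_modulus(strain: np.ndarray, stress_pa: np.ndarray, max_strain: float = 0.05) -> float:
    """Young's modulus estimated as the least-squares slope of the small-strain part of the curve."""
    small = strain <= max_strain
    slope, _intercept = np.polyfit(strain[small], stress_pa[small], 1)
    return float(slope)

# Hypothetical stress-strain points for a soft PDMS sample with a ~100 kPa modulus.
eps = np.linspace(0.0, 0.10, 11)                 # dimensionless strain
sigma = 1.0e5 * eps + 2.0e5 * eps**2             # stress in Pa, mildly nonlinear at larger strain
print(f"E ~ {youngs_modulus(eps, sigma) / 1e3:.0f} kPa")
```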
Penetration into and escape from PDMS
Given that active penetration is critical during the process of C. albicans epithelium invasion [10,11,12, 14], we examined in further detail this process in PDMS. Figure 3a shows a filamentous cell that penetrates PDMS after 4 min (II; note that I, not shown, is prior to the filament contacting the chamber wall); subsequently grows invasively within PDMS (III); deforms the adjacent chamber (IV), resulting in a dramatic invagination; and exits PDMS into the adjacent well at 2:04 (V), followed by penetration into the opposing chamber at 2:08 (VI) and subsequent invasive growth (2:12; VII). The resistive force revealed by buckling of the filament, as well as deformation of the initial chamber during invasive growth (III), likely increases upon deformation and subsequent piercing into the adjacent well (IV), as the portion of the filament within PDMS buckled during this time (1:22–2:02), resulting in an S-shaped filament (Fig. 3a). The tension on the filament was released upon exiting PDMS into the adjacent well (V), as the tip of the filament appears to jump forward (2:04). The resistive force from the final step of growth (VII) also resulted in buckling of the filament (portion in the well) leading to an M shape (2:42–3:00). This escape from PDMS is analogous, in some respects, to filaments bursting out of a macrophage [28,29,30,31]. Here, the filament pushes into a circle resulting in a deformation that does not require expansion of the surface area but rather local invagination of the chamber, which is easier to detect (Fig. 3a). Indeed, such a bursting out of PDMS was observed a number of times, and Additional file 1: Figure S2 shows such examples in different PDMS stiffness (40:1, 110 kPa; 35:1, 150 kPa; and 30:1, 250 kPa).
The filament tip shape is not substantially altered upon invasive growth and burst out. a Filament buckling and release upon invasive growth and penetration. Time-lapse experiment at PDMS to cross-linker ratio of 40:1 (Young's modulus ~ 100 kPa) with DIC images acquired every 2 min. b, c Filament tip shape does not substantially change during PDMS burst out. b Typical time-lapse experiment at 40:1 PDMS to cross-linker ratio (Young's modulus ~ 100 kPa) with DIC images every 5 min. Red arrowheads indicate filament buckling. c Close up of sum projections of 13 × 0.5 μm z-sections GFP-CtRac1 fluorescent images from 3B (top), with tip curvature (bottom) at indicated times and ± 45° curvature indicated by open lines and ± 90° indicated by solid lines. d The radius of the curvature of invasive and surface growing cells is indistinguishable. The radius of the curvature is the average of 12–24 × 5 min time points (n = 11–16 cells) from 3 to 4 independent experiments for surface (PDMS ratio 40:1; Young's modulus ~ 100 kPa) or invasively growing cells (with PDMS ratio 35:1–40:1; Young's modulus 150–100 kPa). Bars indicate SD
In order to better visualize the invasive growth within PDMS during these different steps, we followed cells in which GFP was targeted to the plasma membrane [32], by confocal spinning disk microscopy acquisition over a range of z-positions. Figure 3b and c show a typical time-lapse acquisition in which the analysis of the cell outline did not reveal a substantial change in the shape of the filament tip during invasive growth and bursting into the next well (Fig. 3c, d; Additional file 1: Figures S3A and S3B). Indeed, the radius of the curvature of the cell tip was identical to that of surface-growing cells, and there were no changes upon burst out of PDMS. Buckling of the filament was evident upon invasive growth and occurred over 35–45 min, prior to the appearance of a septum (Fig. 3b (red arrowheads) and Fig. 4a, two examples). Analyses of the angle of the filament at which the septum formed ultimately, indicate that during invasive growth, cytokinesis occurs the majority of the time after the filament buckles (Fig. 4b).
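The tip radius of curvature reported above was estimated from the cell outline. One minimal way to obtain such a number, sketched below, is an algebraic (Kåsa) least-squares circle fit to outline points near the apex; the outline points here are synthetic, and this is not the analysis pipeline used by the authors.

```python
import numpy as np

def fit_circle_radius(x: np.ndarray, y: np.ndarray) -> float:
    """Algebraic (Kasa) least-squares circle fit to 2D points; returns the fitted radius."""
    # Circle model: x^2 + y^2 + D*x + E*y + F = 0, which is linear in (D, E, F).
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    return float(np.sqrt(cx**2 + cy**2 - F))

# Synthetic tip outline: points on a 1 um radius arc (within +/- 45 deg of the apex) plus noise.
rng = np.random.default_rng(1)
theta = np.linspace(-np.pi / 4, np.pi / 4, 25)
x = 1.0 * np.cos(theta) + 0.01 * rng.standard_normal(theta.size)
y = 1.0 * np.sin(theta) + 0.01 * rng.standard_normal(theta.size)
print(f"tip radius of curvature ~ {fit_circle_radius(x, y):.2f} um")
```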
Cell division occurs at the site of filament buckling during invasive growth. a Examples (upper and lower panels) of time-lapse images (GFP sum projections) during invasive growth in 40:1 PDMS ratio, showing cell division site at the location of filament buckling with red lines highlighting the buckling angle. b Cell division site occurs where the filament is buckled. Cells were grown either invasively within PDMS (40:1; Young's modulus ~ 100 kPa) or on the surface of PDMS (30:1–40:1; Young's modulus 250–100 kPa), n = 26 and 15, respectively. GFP sum projections were analyzed over time from 5 independent experiments and Sin of angle when the septum formed is shown. Septum formation was observed on average 47 ± 20 min after buckling was evident in invasive cells. Bars indicate SD with p < 0.0001
To analyze the physical constraints during penetration, invasive growth, and tip escape from the PDMS matrix, we established a physical experimental model, which consisted of a steel probe that mimics the filament shape, continuously advancing up to and into a cylinder of PDMS of different stiffness (Fig. 5a, b). The steel probe tip approximated the shape of the filament tip (Fig. 5c) with a radius of curvature (± 45°) of 1.1 μm when normalized to the hyphal filament, compared to that of 1.0 ± 0.1 μm for the surface and invasively growing hyphal filaments. Figure 5d shows the probe prior to PDMS rupture and subsequent to exiting from the PDMS. Experiments were carried out with a range of probe displacement rates encompassing that of the filament extension rates (~ 0.3 μm/min [33]), when scaled down to the filament diameter. Figure 6a shows an example of such a force versus displacement curve. The initial phase of increasing force corresponds to the elastic compression of PDMS (Felast compr; analogous to growth stage II, Fig. 3a), culminating in the force required to break the PDMS surface, Fcrit. The next phase corresponds to the extension within PDMS, analogous to invasive growth, Finvas (analogous to growth stage III, Fig. 3a) culminating in PDMS exit (Fexiting; analogous to the end of growth stage IV, Fig. 3a). The final phase corresponds to when the end of the probe has emerged from the PDMS (Fout; analogous to growth stage V, Fig. 3a). Figure 6b shows the Fcrit from the physical probe experiments as a function of PDMS stiffness. Filaments are able to penetrate PDMS of ratio 35:1, for which we measured Fcrit of ~ 7 N with a 1-mm-diameter probe. Scaled to the diameter of a hyphal filament, this would correspond to a force of ~ 31 μN, indicating that hyphal filaments generate forces larger than 31 μN to penetrate PDMS. Scaling to the diameter of a hyphal filament was done assuming a constant critical stress, given the filament diameter is several orders of magnitude greater than the PDMS mesh size, and was calculated by taking the ratio of the metal probe/filament radius squared. Furthermore, our results indicate that at a stiffness of 200 kPa, for which little PDMS penetration is observed (Fig. 2d), the Fcrit is ~ 8 N with a 1-mm-diameter probe, i.e. ~ 35 μN when scaled to a hyphal filament, which would correspond to the growth stalling force.
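As a sanity check on this scaling, the constant-critical-stress assumption amounts to multiplying the measured probe force by the squared ratio of the filament radius to the probe radius. The short sketch below reproduces the quoted figures, assuming a probe radius of 0.5 mm and a hyphal radius of ~ 1.04 μm (the surface-growth radius used later in the text); the exact radius used in the published calculation may differ slightly.

```python
# Minimal sketch (not the authors' code): scale the macroscopic rupture force Fcrit,
# measured with a 1-mm-diameter steel probe, to the cross-section of a hyphal filament
# assuming a constant critical stress, i.e. force scales with radius squared.
def scale_to_filament(F_probe_N, r_probe_m=0.5e-3, r_filament_m=1.04e-6):
    return F_probe_N * (r_filament_m / r_probe_m) ** 2

for label, F in [("35:1 PDMS (penetrated)", 7.0), ("~200 kPa PDMS (stalling)", 8.0)]:
    print(f"{label}: {F} N -> {scale_to_filament(F) * 1e6:.0f} uN at the hyphal scale")
# Prints roughly 30 uN and 35 uN, in line with the ~31 uN and ~35 uN quoted above
# (the exact value depends on the filament radius assumed).
```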
Physical model of PDMS invasive growth. a Schematic of the physical model. The distance probe traveled is d; F is the force detected by the sensor; L is the PDMS length, and R is the PDMS radius. b Image of the physical model setup. Force sensor attached to 1-mm-diameter probe (left) and PDMS on the right as indicated in a. c Shape of the steel probe tip. Probe is 1 mm diameter. d PDMS penetration. PDMS (40:1; Young's modulus ~ 100 kPa) cylinder (1.75 cm length and radius) with 1 mm probe displacement of 1.6 μm/s. Upper image, prior to rupture of PDMS; lower image, probe exit from PDMS
Forces in physical model of PDMS invasive growth. a Forces encountered upon probe displacement into and out of PDMS. A 1-mm-diameter steel probe was advanced perpendicularly into the base of the cylindrical piece of cured PDMS (10:1 PDMS to cross-linker ratio, Young's modulus ~ 2 MPa). Probe displacement rate was 63 μm/s, and force was determined during different stages, as indicated, i.e., compression, penetration, and after the probe tip emerged from PDMS. Analogous stage of hyphal filament invasive growth indicated in insets along with the growth stage from time-lapse in Fig. 3a (I–V). b, c Resistive forces as a function of PDMS to cross-linker ratio. b Fcrit was determined using a 1-mm-diameter steel probe, with either 3.5- or 8-cm-diameter PDMS cylinders of indicated PDMS to cross-linker ratio, with probe displacement set to 1.6–3.2 μm/s. Bars are SD for on average 10 independent determinations for each PDMS ratio. c Fin was determined by subtracting Fout from Finvas from the probe penetration experiment described in a. Bars are SD
Resistive force affects hyphal extension and morphology
The buckling of the filaments, as well as the deformation of the PDMS wells during invasive growth, indicated that these filaments were responding to resistive force, whose magnitude we have measured in the physical model. The percentage of cells that penetrate PDMS is dependent on Young's modulus (Fig. 2d, f), and analyses of percent of PDMS invasion at two stiffness values indicate that the threshold for invasion is between 120 and 200 kPa (Fig. 7a). Hence, to investigate the effects of resistive force on filamentous growth, we determined the length of the filaments over time from cells growing on and within PDMS in this range of stiffness. Figure 7b shows that cells extend at a constant rate, which is reduced by ~ 30% within PDMS from an average of 0.28 ± 0.07 μm/min (n = 29) for surface growth to 0.19 ± 0.05 μm/min (n = 32) when filaments were growing within PDMS with a stiffness of ~ 100 kPa (Fig. 7c). This filament extension rate was further reduced to 0.15 ± 0.08 μm/min (n = 23) upon growth in PDMS with a stiffness of ~ 150 kPa (Fig. 7c). To confirm that the reductions in extension rate during invasive growth were not due to substantial changes in the z-position of the growing filament apex, cells were grown in PDMS chambers that had been stained with fluorescent ConA and z-section images were projected onto the XZ plane (Fig. 7d). These projections show that the filaments grow slightly downward in PDMS, below the bottom of the chamber, with maximally 5 μm displacement in the z-axis for a 25-μm filament, resulting in at most a 2% reduction in extension rate upon projection in the XY plane. In contrast, Fig. 8a shows that the mean filament extension rate of cells grown on the surface is not dependent on the substrate's stiffness, with indistinguishable rates on PDMS with Young's modulus from 100 to 200 kPa. Furthermore, the extension rate of invasive growth normalized to that of surface growth from each experiment correlates with substrate stiffness (Fig. 8b). Extrapolation to the point where the invasive extension rate equals the surface rate (y = 1) indicates a substrate stiffness of ~ 20 kPa, suggesting that during filamentous growth on PDMS, the cells experience this resistive force from adhesion. Consistently, the surface extension rate was slightly reduced compared to that in liquid media (0.26 ± 0.09 μm/min compared to 0.32 ± 0.01 μm/min; p = 0.001) (Fig. 8a). Of note, very few cells invaded PDMS at 200 kPa, on average ~ 5% (Fig. 7a), but, strikingly, the filament extension rate for these "escapers" was similar to that of less stiff PDMS, raising the attractive possibility that these cells may play a critical role in tissue invasion.
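The claim that the downward trajectory costs at most ~ 2% of the apparent extension follows from simple geometry; the sketch below, using the 5 μm z-displacement over a 25-μm filament quoted above, illustrates this.

```python
# Minimal sketch: foreshortening of a filament that descends at most 5 um in z
# over 25 um of growth, when measured in an XY projection. Numbers are those
# quoted in the text.
import math

true_length = math.hypot(25.0, 5.0)   # length along the tilted filament (um)
projected = 25.0                      # length measured in the XY projection (um)
print(f"apparent shortening: {(true_length - projected) / true_length * 100:.1f}%")  # ~1.9%
```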
Filament extension rate is decreased during invasive growth. a A PDMS substrate stiffness limit for filament penetration. Percentage of invading cells from time-lapse experiments (4 independent experiments for each PDMS stiffness, Young's modulus indicated, 40:1 and 30:1) was quantified (n = 50–60 cells per determination from a total of 900–1200 cells) using wide-field microscopy. Bars indicate SD and ****p < 0.0001. b Filament extension rate is linear during surface and invasive growth. Cells grown on or in PDMS at indicated stiffness (35:1 and 40:1) over 2 h, with images every 5 min. Filament length was determined from the sum projection GFP images and normalized to an initial length of 10–12 μm. c Filament extension rate is reduced upon invasive growth. The extension rate was determined from the time-lapse acquisition of cells grown within PDMS at indicated stiffness (35:1 and 40:1). Surface growth was carried out on PDMS with a stiffness 190-85 kPa (30:1–40:1). Quantification from 2 to 4 independent experiments, n = 20–30 cells per PDMS stiffness. Bars indicate SD with ****p < 0.0001 and *p < 0.05. d Invasively growing filaments do not undergo substantial changes in the z-position. Cells expressing plasma membrane GFP were imaged (green; middle panel) after growth in ~ 150 kPa PDMS (35:1), chambers that were labeled with Alexa-633 ConA (magenta; left panel), and 100 × 0.2 μm z-sections were acquired with DIC image shown (right panel). XY (upper panels) and XZ maximum projections are shown with dotted lines indicating the upper and lower limits of the microchamber and the green arrow indicating the center in the z-axis of the filament tip
Filament extension rate on the PDMS surface is slightly reduced compared to that in liquid media. a Filament extension rate on the PDMS surface is independent of stiffness and reduced compared to that in liquid media. The extension rate was determined as in Fig. 7c. Filament lengths of cells grown in serum-containing liquid media (n = 75–80 cells) were measured every 30 min with mean and SEM shown in red. Surface extension rates determined as in Fig. 7c (n = 30 cells; 3–9 experiments per PDMS stiffness). Bars indicate SD. The mean of all surface extension rates (dotted black line) and SEM (gray zone) are indicated. b Cells experience a resistive force on the PDMS surface. Invasive extension rates determined as in Fig. 7c (n = 53 cells; 5–10 experiments per PDMS stiffness) and normalized to surface extension rates, with bars indicating SD and dotted line the best fit. At Y = 1, X is ~ 20 kPa
The difference in filament extension rate could either be due to an overall reduction in cell growth or a reduction in polarized, apical growth. To differentiate between these possibilities, we determined the length and diameter of compartments (between 2 septa) for cells growing on the surface and within PDMS (100 kPa). Figure 9a and b show that the compartment length decreased ~ 30%, from 24.6 ± 3.0 μm (n = 100) for surface growing cells to 16.6 ± 1.8 μm (n = 120) during invasive growth, and the filament diameter increased concomitantly from 2.1 ± 0.2 μm for surface growing cells to 2.5 ± 0.2 μm during invasive growth. As a result, the compartment volume remained constant (83 ± 17 μm3 for surface growing cells compared to 80 ± 18 μm3 for invasively growing cells), indicative of altered polarized growth. Consistently, analyses of filamentous cells grown within stiffer PDMS (150 kPa) revealed a further decrease in compartment length (14.6 ± 2.5 μm) and an increased diameter (2.8 ± 0.3 μm). This altered morphology is dependent on growth against a resistive force in PDMS as the diameter in the part of the filament outside PDMS was similar to that of surface growing cells (Fig. 10a). In addition to the comparison of cells growing on the surface and within PDMS, we examined the relatively rare occurrence of cells transitioning between these growth modes. Figure 10b and c show an example of such a transition, in which the extension rate is reduced during invasive growth in PDMS and increases exiting PDMS. Measurements of the filament diameter just before and after the filament exited PDMS revealed an increased filament diameter during invasive growth that was significantly reduced upon exiting PDMS (2.7 ± 0.2 μm compared to 2.3 ± 0.3 μm, Fig. 10d).
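Approximating a compartment as a cylinder shows how the measured length and diameter changes offset one another; the sketch below uses the mean values quoted above (the cylinder approximation is ours, not necessarily the exact volume calculation used in the paper).

```python
# Minimal sketch: cylinder approximation of compartment volume, V = pi * (d/2)^2 * L,
# with the mean lengths and diameters quoted in the text.
import math

def compartment_volume(length_um, diameter_um):
    return math.pi * (diameter_um / 2) ** 2 * length_um

print(f"surface:  {compartment_volume(24.6, 2.1):.0f} um^3")   # ~85 um^3 (text: 83 +/- 17)
print(f"invasive: {compartment_volume(16.6, 2.5):.0f} um^3")   # ~81 um^3 (text: 80 +/- 18)
```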
Filament morphology is altered during invasive growth. a Sum projection images of cells expressing GFP-CtRac1 grown in the presence of FCS on or within PDMS of 115 kPa stiffness, for 3h15 or 2h15, respectively. Arrowheads indicate the septa that delimit the measured compartment. b Compartment length is reduced and diameter is increased when cells are grown within a substrate of increasing stiffness (40:1 and 35:1). The length between two septa (left panel) along with the filament diameter (right panel), either the 2nd or the 3rd compartment from the tip, was measured in n = 20–115 cells, 2–22 experiments. SD are indicated and p < 0.0001
Filament diameter increases during growth in PDMS and decreases upon bursting out. a Filament diameter inside and outside PDMS at two stiffness values. The diameter of the filament was measured as in Fig. 9a and b (surface and portion inside PDMS). The diameter of the filament adjacent to the mother cell neck in the chamber was measured (portion outside indicated in the schematic). Bars indicate SD, with p < 0.0001 between inside and outside for each PDMS stiffness value (40:1 and 35:1). b Images of filament bursting out of PDMS. Cells grown in PDMS at indicated times over 2 h, with DIC images every 5 min (~ 250 kPa; 30:1) or 10 min (90 kPa; 40:1) shown. c Filament extension rate increases upon exiting PDMS. Filament length was measured from the sum projection GFP images prior to (green and magenta-filled symbols) and following emerging from PDMS (green and magenta-open symbols). d Filament tip diameter is slightly reduced upon emerging from PDMS. The tip diameter was determined at 4–5 times before or after exiting PDMS from b and c. Bars indicate SD and *p = 0.01
Reduced filament extension rate in response to a resistive force suggested that similar effects could be observed in cells undergoing non-invasive, dramatic subapical bending in chambers of stiff PDMS (Additional file 1: Figure S4A). Additional file 1: Figure S4B shows that, in such conditions, there was indeed a dramatic reduction in filament extension rate with an average (over 100–150 min) of 0.10 ± 0.01 μm/min. Surface growth rates were constant over time, whereas extension rates of cells undergoing dramatic subapical bending decreased concomitantly with the cell filling the well (Additional file 1: Figure S4C); initial rates of extension were 3-fold reduced from surface growth, and these were further reduced 3-fold after 2 h of growth. Due to the complex geometries during such a growth mode, we were unable to determine the resistive force that the filament experiences while it fills up the chamber; however, the initial extension rate is similar to that of filaments growing invasively in 150 kPa PDMS.
Determination of the effective turgor pressure
From the comparison of extension rates within PDMS of different stiffness (Fig. 7c), we determined the effective turgor pressure in C. albicans hyphae, using the viscoplastic growth model [21]. This determination makes use of Finvas values, measured from the physical model (Fig. 6), after scaling these forces from a cylinder with a radius of 0.5 mm to that of 1.04 μm. In order to correctly extrapolate the macroscopic measurements to the microscopic scale of filamentous cells, we analyzed the physical forces at play. The mode of extension during hyphal growth and in this physical experimental model is different, as new material is incorporated into the hyphal tip, i.e. growth occurs via apical extension, whereas in the physical experimental model, the probe is pushed into the PDMS from the back. Given that only a small portion of the filamentous cell apex extends in the PDMS, we removed the contribution from friction/adhesion due to the displacement of a 1-mm-diameter probe within PDMS by subtracting the Fout value from the Finvas (Fig. 6c). These corrected Finvas values, i.e. Fin, were largely independent of probe displacement rates over a range equivalent to cell filament extension rates when scaled down (0.2–0.4 μm/min). We used the equation that was established for S. pombe by Minc and colleagues;
$$ \frac{V_{(F)}}{V_o}=\left(1-\frac{F_{\left(\mathrm{PDMS}\right)}}{\pi {R}^2\Delta P}\right) $$
V(F) and Vo are the filament extension rates within PDMS and on the surface, respectively; F(PDMS) is the resistive force of PDMS during filament displacement within this material (Fin); R is the filament radius; and ∆P is the effective turgor pressure. The F(PDMS) was 1.5 ± 0.7 N and 0.7 ± 0.3 N at PDMS to cross-linker 35:1 (Young's modulus of 150 kPa) and 40:1 (Young's modulus of 100 kPa), respectively; scaling to the size of the hyphal filament (radius 1.04 μm for surface growth) yielded 6 ± 3 μN and 3.2 ± 1.4 μN, respectively. From these values, we determined the effective turgor pressure, ∆P, to be 6 ± 3 MPa and 2 ± 1 MPa in these two conditions, respectively. Given that the hyphal filament diameter increases during invasion (radius 1.42 μm at Young's modulus of 150 kPa and 1.24 μm at Young's modulus of 100 kPa), the calculated ∆P are 3.1 ± 1.5 MPa and 1.5 ± 0.7 MPa, suggesting that the hyphal turgor pressure is 1–3 MPa. This value for turgor pressure is within the range reported both for planktonic and biofilm C. albicans cells, ~ 1.2 MPa [19] and ~ 2 MPa [34], respectively, as well as S. pombe, 0.85–1.5 MPa [21, 35]. Nonetheless, it must be noted that these values are effective turgor pressure. In other words, ∆P is the turgor pressure exceeding the critical stress needed to deform the cell wall [21]. Hence, a combination of local compartment turgor pressure alteration, difference in cell wall deformability, or potentially finer adjustments in tip geometry may play important roles in penetration and invasion.
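Rearranging the equation above gives ∆P = Fin/(πR²(1 − V(F)/Vo)). The sketch below evaluates this with the mean rates, forces, and surface radius quoted in the text; because the published values were obtained with extension-rate ratios normalized per experiment, this illustrative calculation reproduces only the order of magnitude.

```python
# Minimal sketch: effective turgor pressure from Eq. (1), rearranged as
# dP = F_in / (pi * R^2 * (1 - V_F / V_o)). Inputs are mean values quoted in the
# text (filament-scale F_in in uN, extension rates in um/min, radius in um);
# the published dP values used per-experiment normalized rates, so these numbers
# are only indicative.
import math

def effective_turgor_MPa(F_in_uN, V_F, V_o, R_um):
    F = F_in_uN * 1e-6                 # N
    R = R_um * 1e-6                    # m
    return F / (math.pi * R**2 * (1 - V_F / V_o)) / 1e6   # MPa

print(f"~100 kPa PDMS: dP ~ {effective_turgor_MPa(3.2, 0.19, 0.28, 1.04):.1f} MPa")
print(f"~150 kPa PDMS: dP ~ {effective_turgor_MPa(6.0, 0.15, 0.28, 1.04):.1f} MPa")
# Both estimates fall in the low-MPa range, the same order as the values reported above.
```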
Resistive force affects cell polarity
The change in morphology during invasion, resulting in shorter and wider cells, could be explained by tip growth becoming more isotropic in response to a resistive force, raising the possibility that cell polarity is adversely affected. In S. pombe, it was observed that reducing the growth rate chemically, genetically, or mechanically destabilized active Cdc42 polarization, a cell polarity master regulator [36]. To investigate whether cell polarity was altered in hyphal filaments growing invasively in PDMS, we examined the distribution of active Cdc42 (Cdc42•GTP), using a CRIB-GFP reporter [37]. Surprisingly, we observed a striking increase in polarized active Cdc42 at the filament tip throughout invasive growth, compared to surface growth (Fig. 11a, b). We determined, using a tailor-made MATLAB program, that this results from an increase in the concentration of Cdc42•GTP at the tip, rather than an alteration in the position of the maximum signal or spread of active Cdc42 further down the filament (Additional file 1: Figure S5A-D). These results suggest that, in response to a resistive force, there is an increase in cell polarization, perhaps reflecting a direct response to such external forces. We speculate that this higher level of active Cdc42 during invasive growth is due to the increased recruitment of the Cdc42 activator, Cdc24 [38]. We next examined active Rho1, as cell wall stress mediated by the cell surface mechanosensors Wsc1/Mid2 results in Rho1 depolarization in S. cerevisiae [39, 40]. Figure 11c and d show that, in contrast to the increase in tip localized active Cdc42, active Rho1 is depolarized during invasive growth. We attribute this depolarization of active Rho1 to the mechanical properties of PDMS, which are likely to impose a uniform force over the hyphal filament surface, in addition to the resistive force in response to the tip extension.
Invasively growing filaments have increased levels of active Cdc42 at the tip. a, b Tip-localized Cdc42•GTP is increased during invasive growth. a Representative sum projection and DIC images of cells expressing CRIB-GFP on or in 35:1 PDMS (~ 150 kPa). False colored sum projection of 23 × 0.4 μm z-sections (LUT, top) with schematic indicating regions quantitated at the tip and 5–10 μm back from the tip (bottom). b Mean Cdc42•GTP at the tip and 5–10 μm subapically determined from 4 independent time-lapse experiments (images every 5 min for ~ 2 h and sum projections of 23 × 0.4 μm z-sections; n = 16–18 cells). Polarized Cdc42•GTP is the tip signal divided by the subapical signal, 5–10 μm behind the tip (3.5-fold enrichment apically for surface growing cells, normalized to 1). Bars indicate SD and ***p = 0.0002. c, d Active Rho1 is delocalized during invasive growth. c Representative sum projections and DIC images as in a of cells expressing GFP-RID after growth on or in 35:1 PDMS (~ 150 kPa). d Mean Rho1•GTP at the tip and at 5–10 μm subapically was determined from 3 independent time-lapse experiments, as in b (n = 14–20 cells). Polarized Rho1•GTP is the tip signal divided by the subapical signal (1.4-fold enrichment apically for surface growing cells, normalized to 1, as in b). Bars indicate SD and **p = 0.001
The increase in tip-localized active Cdc42 during invasive growth suggests that the increase in filament diameter does not result from growth becoming more isotropic in response to resistive force. To examine whether this morphological change results from mechanical forces, we compared cell morphology over time during growth on the surface and within PDMS. Figure 12a and b show that the relative filament diameter (D1) was not altered during surface growth (mean diameter 2.26 ± 0.15 μm initially compared to 2.44 ± 0.06 μm after 2 h growth), in contrast to the invasive growth where there was a striking increase (2.38 ± 0.22 μm initially compared to 2.98 ± 0.15 μm, p < 0.0001). Specifically, the diameter of the filament compartment increased even > 10 μm back from the tip (Fig. 12b). This ~ 25% increase in diameter during invasive growth could either occur upon tip growth or subsequent to tip growth. The hyphal tip diameter was constant over 2 h of invasive growth and only slightly wider than that of cells growing on the surface (2.53 ± 0.11 μm compared to 2.24 ± 0.09 μm; p = 0.0003) (Fig. 12c, d). In contrast, Fig. 12d shows that there was a significant difference between the mean diameter at the tip of the apical cell and that of the cell proximal to the apical cell during invasive growth (mean proximal cell diameter 2.99 ± 0.17 μm; p < 0.0001). Together, these results indicate that relatively small changes in the tip morphology are not sufficient to explain the altered morphology of the filament, back from the tip, which are due to external mechanical forces.
Mechanical forces are critical for filament morphology changes. a Schematic of filament during invasive growth. Lighter green indicates a portion of the filament within PDMS; D1, compartment or proximal cell diameter; and D2, tip diameter. b The diameter of the cell compartment increases as it becomes further away from the filament tip during invasive growth. Cell compartment diameter (measured at 5 equidistant positions between 2 septa), initially on average 5 μm from tip, was determined from 3 independent experiments acquired as in Fig. 11b (n = 5–10 cells) grown on or within 35:1 PDMS (~ 150 kPa). Distance from the compartment center to the tip was determined at each time. The diameter was normalized to the diameter of the last time point for each cell and further normalized by the mean difference between the final compartment diameters of invasive cells compared to surface cells. Each color represents a cell. c Filament tip diameter does not vary as a function of filament length for surface and invasive growth. Tip diameter (2 μm back from the apex) measured for one cell growing on PDMS surface and one cell growing within PDMS, over a 2-h time course. d Tip diameter increase does not underlie proximal cell diameter increase during invasive growth. Tip and proximal cell diameters were measured from the same cells as in a. Proximal cell diameter was measured at the center of the cell proximal to the apical cell over the 2nd hour of the time course (filaments > 30 μm long). Bars indicate SD, with ****p < 0.0001 between the tip and the proximal cell during invasive growth and ***p = 0.0003 between the invasive and surface tips
We used PDMS micro-fabrication to probe the relationship between substrate stiffness and growth of C. albicans filamentous cells. Below a stiffness threshold of ~ 200 kPa, C. albicans can penetrate and grow within PDMS. The chemical inertness of this polymer, as well as the observed well deformation, suggest that turgor pressure-driven active penetration is critical for this invasive growth. C. albicans filamentous growth within a stiff substrate is characterized by dramatic filament buckling, which correlates with the position of cell division sites, in addition to a stiffness-dependent decrease in extension rate. Growth within a substrate also resulted in a striking alteration in morphology, i.e. reduced cell compartment length and increased diameter. Our results reveal that changes in morphology are not due to a depolarization of active Cdc42, but rather to the mechanical forces from the substrate.
Growth behavior as a function of substrate stiffness
Substrate stiffness determines whether C. albicans will grow on a surface or within it, below a threshold corresponding to a stiffness of 200 kPa. During growth within PDMS, a substantial fraction of filaments buckle, with greater than 50% buckling in a substrate with a Young's modulus of ~ 100 kPa. Cells trapped in stiff (200 kPa or greater) microchambers undergo dramatic subapical bending. While both filament buckling and subapical bending depend on filament extension, the former occurs at least 5–10 μm away from the apex whereas the latter occurs at the filament tip. In a number of cases, filaments buckled as the tip (within PDMS) approached a well, followed by partial release upon penetration into this well. This bursting out event is similar, in some respects, to damaging of or escaping from host cells, such as macrophages [28,29,30,31].
Despite the estimated resistive force during invasive growth of 3–10 μN at the hyphal scale, we did not observe dramatic changes in the filament tip shape. Interestingly, the location of cell division sites appeared to be affected by filament buckling, in that there was a high correlation between the division site location and the site of buckling. Septins have been shown to be enriched preferentially at locations of high curvature in Ashbya gossypii hyphal filaments, i.e. at branch points [41]. An attractive possibility is that the location of the cell division site at the buckle, or vice versa, may minimize physical stress or damage to the filament.
Effects of substrate stiffness on invasive growth
Filament extension rate is reduced, and cell compartment length decreases with increasing substrate stiffness; however, cell volume is largely unaffected during growth within 100 kPa PDMS, due to an increase in filament diameter. These results suggest that the overall growth rate is unaffected in this condition, raising the possibility that the cell tip growth area is altered by resistive force. Given that altering growth in S. pombe by chemical, genetic, or mechanical means, results in destabilization of the active Cdc42 cluster at the growth site, we investigated the distribution of this key GTPase during invasive growth. Surprisingly, the distribution of active Cdc42 was not altered during invasive growth, but rather there was an increase in the concentration of active Cdc42 at the tip. In contrast, active Rho1, which is critical for glucan synthesis, was less polarized and observed throughout the filament during invasive growth. Together, these results suggest that, during invasive growth, cells sense resistive force from the PDMS and polarization increases at the tip, while remodeling their cell walls throughout the filament to counter this resistive force and/or the increase in turgor pressure. A likely possibility is that this resistive force is sensed by the mechano-sensing pathway, via a module composed of the mucin Msb2 and the cell wall protein Sho1 [42] critical for S. cerevisiae survival during compressive stress. In this yeast, Msb2 localizes to sites of growth and binds active Cdc42 [43], and cell wall stress has been shown to depolarize Rho1, as well as to hyperactivate this GTPase [39, 40] via the cell surface sensors, Wsc1 and Mid2. Indeed, S. cerevisiae cells sense compressive stress via Mid2, which localizes uniformly at the plasma membrane [44]. We speculate that the increased compression around the hyphal filament leads to depolarized active Rho1 via Mid2, in contrast to localized active Cdc42 via Msb2.
We have used a viscoplastic model for fungal growth that was originally established to describe cell shape control in plants [45, 46] and used, more recently, to analyze cell growth in fission yeast [21]. In such models, viscoplastic deformation of the cell wall underlies growth, which is driven by high turgor pressure. Specifically, the pressure (P) has to exceed the threshold plastic yield strain (Pc) of the cell wall—the cell growth rate is proportional to the wall strain that exceeds this threshold: \( {V}_o\propto \frac{\left(P-{P}_c\right)}{E_{\mathrm{CW}}} \). Hence, we have used Eq. (1) to derive the effective turgor pressure in C. albicans hyphae. Here, we used the differences in growth rates on PDMS surface and within PDMS, as well as an approximation of the external force. This latter force was calculated based on the displacement within PDMS of a 1-mm-diameter steel probe, minus the contribution from friction/adhesion and scaled to the hyphal diameter, assuming scaling with respect to the cross-sectional area and the absence of dramatic changes in geometry. From this analysis, we determined the effective turgor pressure (∆P = P − Pc) to be 1.5 and 3.1 MPa, in 100 kPa and 150 kPa PDMS, respectively. Either increasing filament diameter or decreasing extension rate results in a lower ∆P. The physical experimental model, however, does not take into account the geometry of the hyphal tip, which has a different curvature than the metal probe, which could result in an overestimation of the forces in the physical model. More precise measurements of forces relevant to hyphae will require a more accurate description of the hyphal tip size and geometry. One possibility is that, upon growth in stiffer PDMS, there is an increase in turgor pressure; our results are consistent with such a scenario as ∆P increases from 1.5 to 3.1 MPa, with an increase in PDMS stiffness from 100 to 150 kPa. This increase in turgor pressure could be responsible for the increased volume of the invasive cell compartment within 150 kPa PDMS (92 μm3 compared to 83 μm3 for surface growing cells, p = 0.045). Previous studies on cell wall expansion in S. pombe [35] have shown that \( \frac{\Delta P}{Y}=\frac{\left({R}^{\ast}\right)t}{R_1} \); where Y is the cell wall Young's modulus; R* is the expansion ratio in width ([R1 – R0]/R0); R1 and R0 are the cell radius within the PDMS and on the surface, respectively; and t is the cell wall thickness (~ 0.2 μm). Assuming Y is constant at these two PDMS stiffness conditions, \( \frac{\Delta P}{Y}\left(35:1\right) \) is 1.7-fold greater than \( \frac{\Delta P}{Y}\left(40:1\right) \), consistent with the observed increase in turgor pressure we derived from the growth rate difference. The slightly reduced extension rate on PDMS surface compared to that in liquid, together with the extrapolation of normalized extension rate as a function of PDMS stiffness to where \( \frac{V_{(F)}}{V_o}=1 \), indicates that during growth on a surface, the hyphae experience a small resistive force, equivalent to growth within less stiff PDMS (~ 20 kPa), which we attribute to adherence/friction. It is likely that depending on the cellular surface, the contribution of adherence/friction will vary.
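The ~ 1.7-fold statement can be checked directly from the quoted radii and a cell wall thickness of ~ 0.2 μm; a minimal sketch follows.

```python
# Minimal sketch: dP/Y = (R* t) / R1 with R* = (R1 - R0) / R0, using the radii
# quoted in the text (R0 ~ 1.04 um on the surface) and t ~ 0.2 um wall thickness.
def dP_over_Y(R1_um, R0_um=1.04, t_um=0.2):
    return ((R1_um - R0_um) / R0_um) * t_um / R1_um

print(f"ratio (35:1 vs 40:1): {dP_over_Y(1.42) / dP_over_Y(1.24):.2f}")  # ~1.7, as stated above
```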
Cell morphology is dependent on substrate stiffness
Our results indicate that effects on cell morphology are only observed in filaments within the PDMS, suggesting that resistive forces from the PDMS lead to an alteration in morphology. Strikingly, during growth in the stiffest PDMS, we observed a progressive increase in compartment diameter, even when this compartment was more than 10 μm back from the hyphal tip. This could be due to the additional modification of the cell wall in this proximal compartment, i.e. resulting in a less stiff cell wall, an increase in turgor pressure, and/or mechanical deformation of the filament. Although we cannot rule out that the cell wall in this proximal compartment is less stiff during invasive growth, we favor the latter two possibilities, as the cell compartment volume increases and ~ 60% of invasively growing hyphae buckle in this PDMS stiffness (150 kPa). The dramatic alteration in filament morphology was not solely due to a widened tip, as it only increased slightly during invasive growth, compared to the proximal compartment that increased 25% more than the tip diameter. It is likely that these morphology changes are due to the mechanical forces from growing against a resistive substrate together with an increase in turgor pressure.
A number of fungal pathogens penetrate host tissues, including medically relevant C. albicans [8, 10, 12] and A. fumigatus [9], and plant pathogens [47, 48], including Colletotrichum sp. [49], Ustilago maydis [50], Magnaporthe sp. [51, 52], and Fusarium sp. [53]. For the latter plant pathogens, high turgor pressure is generated inside a specialized cell, called an appressorium, that generates pressures in excess of 8 MPa [51]. Blocking host cell endocytosis of the human fungal pathogens C. albicans [8, 15,16,17] and A. fumigatus [9] with Cytochalasin D has revealed that both of these fungi can enter epithelial tissue by active penetration. Young's moduli for mammalian host cells are in the 1–100 kPa range. Hence, the effects we observe on filament extension rate and morphology are likely to be relevant during active penetration of epithelial cells, and it is attractive to speculate that turgor pressure, in particular the osmolytes critical for generating this force, might be a target for antifungal drugs.
Our results suggest that the stiffness of host cells dictates which cell type C. albicans can penetrate. Interestingly, even with stiffer PDMS (~ 200 kPa), we observed a small percentage of cells that were able to invade the substrate, suggesting these cells have specific properties that could be an advantage during epithelium invasion.
Strains, media, and genetic methods
Standard methods were used for C. albicans cell culture, molecular, and genetic manipulations as described. Derivatives of the BWP17 strain were used in this study and are listed in Table S1. Strains were grown in rich media (yeast extract peptone dextrose) at 30 °C for all experiments, and induction of filamentous growth was carried out with fetal calf serum (FCS) at 37 °C. Oligonucleotides and synthesized DNA used in this study are listed in Tables S2 and S3. pDUP5-mScarlet-CtRac1 was generated by PCR amplification of CamScarlet with a unique 5′ AscI site and a 3′ CtRac1 followed by a unique MluI site (oligonucleotides CamScarletAscIp and yemChmCtRacMluI) and cloned into pDUP5-ADH1p-CibN-CtRac1-ACT1t [33] resulting in pDUP5-ADH1p-CamScarlet-CtRac1-ACT1t. The nucleotide sequence for RFP mScarlet [54] was codon optimized for C. albicans and commercially synthesized (Genscript). mScarlet was PCR amplified with unique PstI and AscI sites (oligonucleotides GA3CamScarPstIp and CamScarletmAscI) and cloned into a pFA-GFPγ-URA3 backbone [55] resulting in pFA-CamScarlet-URA3. The RID, CRIB, and CtRac1 plasmids were linearized with StuI or NgoMIV and transformed into strains.
Micro-fabrication
Microchambers were fabricated using standard soft-lithography methods [56, 57]. Chambers were 5 μm deep and 10 μm in diameter with 15 μm spacing between them. The overall thickness of microchamber preparations was approximately 150 μm. PDMS microchambers of varying stiffness were generated by varying the ratios of polymer and cross-linker (Sylgard 184; Dow Corning). Polymer-cross-linker mixtures were spin coated (Laurell Technologies Corporation) on molds (5 s 100 rpm followed by 50 s 500 rpm) to achieve ~ 150 μm thickness and then cured for 1 h at 60 °C. Thick PDMS (10:1) frames were then placed on top of the partially cured chambers, which were baked for an additional 2 h at 90 °C, in order to peel the thin chambers off the mold. For viscoanalyzer measurements, 20 × 20 × 5 mm PDMS pieces were similarly cured and samples subjected to an amplitude of oscillation of 5–150 μm at 10 Hz frequency using a DMA3007000 Metravib 52 (Areva) with Dynaset20 software. Young's moduli were calculated using the following formula, \( E=\frac{\sigma }{\varepsilon }=\frac{F\times h}{D\times l\times e} \); with F being the force, D the deformation amplitude, h the sample height, e the sample thickness, and l the sample length. For analysis of each PDMS preparation, measurements at 5 different deformation values (5–150 μm) were carried out, and the mean of 3 of these measurements was used for Young's modulus determination. For indentation experiments, varying ratios of polymer and cross-linker were poured into two cylindrical aluminum molds of different sizes, 3.5 (d) × 1.75 cm (h) and 8 (d) × 2 cm (h), and cured as described above.
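For clarity, the Young's modulus calculation from the viscoanalyzer data can be written out as below; the force and deformation amplitude in the example are hypothetical placeholders, not measured values.

```python
# Minimal sketch of E = sigma / epsilon = (F * h) / (D * l * e) for a
# 20 x 20 x 5 mm PDMS piece. F and D below are hypothetical placeholders.
def youngs_modulus_kPa(F_N, D_m, h_m=20e-3, l_m=20e-3, e_m=5e-3):
    """F: force, D: deformation amplitude, h: height, l: length, e: thickness."""
    return (F_N * h_m) / (D_m * l_m * e_m) / 1e3

print(f"E ~ {youngs_modulus_kPa(0.05, 100e-6):.0f} kPa")  # ~100 kPa for these example values
```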
Physical model of penetration
Half of the PDMS cylinders were adhered to an aluminum plate with a 5-mm-diameter hole. For the other half of the PDMS cylinders, the bottom of the mold was removed such that they were fixed around their circumference. Cylindrical PDMS pieces from both conditions were indented in their center with a hemi-spherical 1-mm-diameter steel probe, which was highly polished. This probe tip was machined with a slight conical shape ending in a spherical cap to approximate a hyphal filament tip and had a 2-daN force sensor (Kistler), as well as a displacement sensor (Fastar Data Instruments) attached. This probe was extended using a moving deck with a motor (Cerclet) at speeds of 1.6–3.2 μm/s. Measurements were made on an average of 10 samples at each PDMS ratio. From force versus displacement curves, the following forces were derived: the maximum value (Fcrit), the mean of the first plateau following penetration (Finvas), and the mean of the second plateau, following exit from the PDMS (Fout). Forces were then averaged and scaled to the cross-sectional area of the hyphal filament. We assumed that in the physical model of penetration and in the C. albicans growth experiments, the PDMS behaved as a homogeneous material, and hence, scaling in terms of size was carried out with a constant critical stress. This assumption is justified, as the diameter of the filament, ~ 2 μm, is ~ 10³-fold greater than the PDMS mesh size, which is 2–3 nm with 40:1 PDMS. Hence, scaling of the force, as stress during rupture is constant, was achieved by dividing by the squared ratio of the metal probe to filament radii.
Microscopy sample preparation
PDMS microchambers were activated by plasma treatment (Harrick Plasma Cleaner) for 14 s at 500 mTorr on low setting submerged in DH2O until usage. Prior to usage, PDMS samples were dried with nitrogen gas, treated with poly-d-lysine (1 mg/ml) and subsequently treated with concanavalin A (0.4 mg/ml), each incubated for 20 min followed by drying with a stream of nitrogen gas. Exponentially growing cells were mixed with FCS media (75% FCS, 0.6 × minimal media and 2% dextrose) and spotted onto the microchambers and were sealed with a coverslip. Typically, cells on PDMS microchambers were incubated for ~ 1 h at 37 °C prior to microscopy to initiate filamentation.
Microscopy and image analysis
Cells were imaged as described [33] using either spinning disk confocal microscopy or wide-field fluorescence microscopy with a UPLANAPO 1.2 NA × 60 or a Plan-Neofluar 0.75 NA × 40 objective, respectively. Images were acquired at indicated times, with 0.4 or 0.5 μm z-sections (13 to 24) to capture the entire filament. For growth depth experiments, 100 × 0.2 μm z-sections were acquired. For the analyses of extension rate, compartment morphology, tip morphology, and cell depth, images were deconvolved with the Huygens Professional software version 18.04 (Scientific-Volume Imaging) with recommended settings and a signal-to-noise ratio of 10, and sum projections were used. Reflection images in XZ were acquired on an upright Leica DM5500 TCS SPE laser-scanning microscope (Leica Microsystems, Mannheim, Germany) equipped with a galvanometric stage, using an APO 0.3 NA × 10 objective. Reflection images were acquired in XZ reflection mode using a 488-nm laser and an 80/20 dichroic filter. Scale bars, unless otherwise indicated, are 5 μm.
Image analysis for extension rate, compartment/tip diameter, and active Rho GTPase polarization was carried out with Fiji (version 1.51) [58]. For the determination of the extension rate, the filament lengths (mother cell neck to filament tip) were determined over time using plasma membrane fluorescence signal and values are from a curve fit of minimally 1 h acquisition (> 12 time points). For liquid extension rates, cells induced with 50% FCS at 37 °C were fixed every 30 min and average cell lengths for each time point (0–90 min) were used. Compartment lengths were measured using the edge-to-edge fluorescence signal between two formed septa. Only invasive compartments that were entirely within the PDMS were used for analyses. Active Cdc42 and Rho1 polarization was determined using a fixed size ROI to quantitate the signal intensity at the tip and in a subapical region. The ratio of the tip to subapical signal was averaged over the time-lapse, and the mean for invasive cells was normalized to the mean ratio for non-invasive cells. The percentage of cells undergoing subapical bending within a chamber and filament buckling within PDMS was assessed using DIC images. For the determination of errors from values derived from multiple measurements, classical error propagation was carried out [59]. Statistical significance was determined with Student's t test.
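As an illustration of the extension-rate determination, the sketch below fits filament length versus time with a straight line and reports the slope; the length values are synthetic placeholders standing in for the plasma-membrane GFP measurements.

```python
# Minimal sketch (synthetic data): extension rate as the slope of a linear fit of
# filament length versus time, as done for >= 1 h of acquisition (> 12 time points).
import numpy as np

rng = np.random.default_rng(0)
time_min = np.arange(0, 65, 5)                                           # images every 5 min
length_um = 11.0 + 0.28 * time_min + rng.normal(0, 0.2, time_min.size)   # placeholder lengths

slope, intercept = np.polyfit(time_min, length_um, 1)
print(f"extension rate ~ {slope:.2f} um/min")                            # ~0.28 um/min here
```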
For the representation of 3D DIC images, an algorithm was applied to image z-stacks using a custom-developed MATLAB program called InFocus, inspired by the Extended Depth of Field plugin [60]. The program extracts the local contrast per channel from any 3D multi-channel acquisition. From these maps, it determines the Z of highest contrast per pixel, from one channel or a combination of channels, and then extracts a smoothened plane to obtain a 2D multichannel image. Parameters can be adapted through the graphical user interface. For quantitation of filament tip curvature over time, we developed another MATLAB program called TipCurve, with an intuitive interface dedicated to morphological analyses along the major axis of filamentous cells. For such analyses, 3D images were converted into 2D images by sum projection and, similar to the HyphalPolarity program [33], a backbone was extracted from images over time, together with the cell contour. The tip was located first, estimated as the extremity of the backbone minus the tip radius. We then estimate the local curvature only along the tip (either − 45° to + 45° or − 90° to + 90°) from the curvilinear function f(s) extracted from the cell contour:
$$ C(s)=\frac{1}{R(s)}=\frac{f^{{\prime\prime} }(s)}{{\left(f\prime (s)\right)}^2} $$
The distribution of active Cdc42 and Rho1 at the filament tip was determined by TipCurve. For these analyses, images that had a polarized fluorescent signal together with a uniform cytoplasmic signal were used; the latter was used to identify the cell and extract the backbone. Then, an additive projection onto the backbone was carried out to obtain kymograph curves per time point and per fluorescence channel. To estimate the distribution of Cdc42 and Rho1 intensity at the tip, we fitted the kymograph to estimate xmax, the distance of the intensity maximum from the tip, and modeled the decay of intensity beyond xmax as an exponential function. Both InFocus and TipCurve can be provided as executables on demand.
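The kymograph-fitting step can be illustrated as below; this is a Python analogue of the MATLAB analysis, run on a synthetic intensity profile, and the parameter names are ours.

```python
# Minimal sketch (synthetic data): locate xmax, the distance of the intensity maximum
# from the tip, and fit the decay of intensity beyond xmax as an exponential.
import numpy as np
from scipy.optimize import curve_fit

x = np.linspace(0, 10, 200)                                          # distance from tip (um)
profile = np.exp(-((x - 0.6) ** 2) / 0.5) + 0.3 * np.exp(-x / 2.0)   # synthetic kymograph slice

xmax = x[np.argmax(profile)]

def decay(d, A, lam, c):
    return A * np.exp(-d / lam) + c

mask = x > xmax
(A, lam, c), _ = curve_fit(decay, x[mask] - xmax, profile[mask], p0=(1.0, 1.0, 0.0))
print(f"xmax = {xmax:.2f} um, decay length = {lam:.2f} um")
```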
All the data on which the conclusions of the paper are based are presented in the paper and its additional files.
Campas O, Rojas E, Dumais J, Mahadevan L. Strategies for cell shape control in tip-growing cells. Am J Bot. 2012;99(9):1577–82.
Campas O, Mahadevan L. Shape and dynamics of tip-growing cells. Curr Biol. 2009;19(24):2102–7.
Lew RR. How does a hypha grow? The biophysics of pressurized growth in fungi. Nat Rev Microbiol. 2011;9(7):509–18.
Mendgen K, Hahn M, Deising H. Morphogenesis and mechanisms of penetration by plant pathogenic fungi. Annu Rev Phytopathol. 1996;34:367–86.
Akhtar R, Sherratt MJ, Cruickshank JK, Derby B. Characterizing the elastic properties of tissues. Mater Today (Kidlington). 2011;14(3):96–105.
Alonso JL, Goldmann WH. Feeling the forces: atomic force microscopy in cell biology. Life Sci. 2003;72(23):2553–60.
Mathur AB, Collinsworth AM, Reichert WM, Kraus WE, Truskey GA. Endothelial, cardiac muscle and skeletal muscle exhibit different viscous and elastic properties as determined by atomic force microscopy. J Biomech. 2001;34(12):1545–53.
Dalle F, Wachtler B, L'Ollivier C, Holland G, Bannert N, Wilson D, Labruere C, Bonnin A, Hube B. Cellular interactions of Candida albicans with human oral epithelial cells and enterocytes. Cell Microbiol. 2010;12(2):248–71.
Bertuzzi M, Schrettl M, Alcazar-Fuoli L, Cairns TC, Munoz A, Walker LA, Herbst S, Safari M, Cheverton AM, Chen D, et al. The pH-responsive PacC transcription factor of Aspergillus fumigatus governs epithelial entry and tissue invasion during pulmonary aspergillosis. PLoS Pathog. 2014;10(10):e1004413.
Basmaciyan L, Bon F, Paradis T, Lapaquette P, Dalle F. Candida Albicans interactions with the host: crossing the intestinal epithelial barrier. Tissue Barriers. 2019;7(2):1612661.
Richardson JP, Ho J, Naglik JR. Candida-epithelial interactions. J Fungi (Basel). 2018;4(1).
Swidergall M, Filler SG. Oropharyngeal candidiasis: fungal invasion and epithelial cell responses. PLoS Pathog. 2017;13(1):e1006056.
Westman J, Hube B, Fairn GD. Integrity under stress: host membrane remodelling and damage by fungal pathogens. Cell Microbiol. 2019;21(4):e13016.
Wilson D, Naglik JR, Hube B. The missing link between Candida albicans hyphal morphogenesis and host cell damage. PLoS Pathog. 2016;12(10):e1005867.
Goyer M, Loiselet A, Bon F, L'Ollivier C, Laue M, Holland G, Bonnin A, Dalle F. Intestinal cell tight junctions limit invasion of Candida albicans through active penetration and endocytosis in the early stages of the interaction of the fungus with the intestinal barrier. PLoS One. 2016;11(3):e0149159.
Allert S, Forster TM, Svensson CM, Richardson JP, Pawlik T, Hebecker B, Rudolphi S, Juraschitz M, Schaller M, Blagojevic M, et al. Candida albicans-induced epithelial damage mediates translocation through intestinal barriers. mBio. 2018;9(3):e00915–18.
Wachtler B, Citiulo F, Jablonowski N, Forster S, Dalle F, Schaller M, Wilson D, Hube B. Candida albicans-epithelial interactions: dissecting the roles of active penetration, induced endocytosis and host factors on the infection process. PLoS One. 2012;7(5):e36952.
Moyes DL, Wilson D, Richardson JP, Mogavero S, Tang SX, Wernecke J, Hofs S, Gratacap RL, Robbins J, Runglall M, et al. Candidalysin is a fungal peptide toxin critical for mucosal infection. Nature. 2016;532(7597):64–8.
Thomson DD, Wehmeier S, Byfield FJ, Janmey PA, Caballero-Lima D, Crossley A, Brand AC. Contact-induced apical asymmetry drives the thigmotropic responses of Candida albicans hyphae. Cell Microbiol. 2015;17(3):342–54.
Martin K, Reimann A, Fritz RD, Ryu H, Jeon NL, Pertz O. Spatio-temporal co-ordination of RhoA, Rac1 and Cdc42 activation during prototypical edge protrusion and retraction dynamics. Sci Rep. 2016;6:21901.
Minc N, Boudaoud A, Chang F. Mechanical forces of fission yeast growth. Curr Biol. 2009;19(13):1096–101.
Sevilla MJ, Odds FC. Development of Candida albicans hyphae in different growth media-variations in growth rates, cell dimensions and timing of morphogenetic events. J Gen Microbiol. 1986;132(11):3083–8.
Brown XQ, Ookawa K, Wong JY. Evaluation of polydimethylsiloxane scaffolds with physiologically-relevant elastic moduli: interplay of substrate mechanics and surface chemistry effects on vascular smooth muscle cell response. Biomaterials. 2005;26(16):3123–9.
Demichelis A, Pavarelli S, Mortati L, Sassi G, Sassi M. Study on the AFM force spectroscopy method for elastic modulus measurement of living cells. J Phys Conf Ser. 2013;459:012050.
Xie J, Zhang Q, Zhu T, Zhang Y, Liu B, Xu J, Zhao H. Substrate stiffness-regulated matrix metalloproteinase output in myocardial cells and cardiac fibroblasts: implications for myocardial fibrosis. Acta Biomater. 2014;10(6):2463–72.
Wang Z, Volinsky A, Gallant ND. Crosslinking effect on polydimethylsiloxane elastic modulus measured by custom-built compression instrument. J Appl Polym Sci. 2014;131(22):41050.
Feng L, Li S, Feng S. Preparation and characterization of silicone rubber with high modulus via tension spring-type crosslinking. RSC Adv. 2017;7:13130–7.
Lorenz MC, Bender JA, Fink GR. Transcriptional response of Candida albicans upon internalization by macrophages. Eukaryot Cell. 2004;3(5):1076–87.
McKenzie CG, Koser U, Lewis LE, Bain JM, Mora-Montes HM, Barker RN, Gow NA, Erwig LP. Contribution of Candida albicans cell wall components to recognition by and escape from murine macrophages. Infect Immun. 2010;78(4):1650–8.
Rudkin FM, Bain JM, Walls C, Lewis LE, Gow NA, Erwig LP. Altered dynamics of Candida albicans phagocytosis by macrophages and PMNs when both phagocyte subsets are present. mBio. 2013;4(6):e00810–3.
Westman J, Moran G, Mogavero S, Hube B, Grinstein S. Candida albicans hyphal expansion causes phagosomal membrane damage and luminal alkalinization. mBio. 2018;9(5):e01226–18.
Vauchelles R, Stalder D, Botton T, Arkowitz RA, Bassilana M. Rac1 dynamics in the human opportunistic fungal pathogen Candida albicans. PLoS One. 2010;5(10):e15400.
Silva PM, Puerner C, Seminara A, Bassilana M, Arkowitz RA. Secretory vesicle clustering in fungal filamentous cells does not require directional growth. Cell Rep. 2019;28(8):2231–45 e2235.
Desai JV, Cheng S, Ying T, Nguyen MH, Clancy CJ, Lanni F, Mitchell AP. Coordination of Candida albicans invasion and infection functions by phosphoglycerol phosphatase Rhr2. Pathogens. 2015;4(3):573–89.
Atilgan E, Magidson V, Khodjakov A, Chang F. Morphogenesis of the fission yeast cell through cell wall expansion. Curr Biol. 2015;25(16):2150–7.
Haupt A, Ershov D, Minc N. A positive feedback between growth and polarity provides directional persistency and flexibility to the process of tip growth. Curr Biol. 2018;28(20):3342–51 e3343.
Corvest V, Bogliolo S, Follette P, Arkowitz RA, Bassilana M. Spatiotemporal regulation of Rho1 and Cdc42 activity during Candida albicans filamentous growth. Mol Microbiol. 2013;89(4):626–48.
Bassilana M, Hopkins J, Arkowitz RA. Regulation of the Cdc42/Cdc24 GTPase module during Candida albicans hyphal growth. Eukaryot Cell. 2005;4(3):588–603.
Delley PA, Hall MN. Cell wall stress depolarizes cell growth via hyperactivation of RHO1. J Cell Biol. 1999;147(1):163–74.
Philip B, Levin DE. Wsc1 and Mid2 are cell surface sensors for cell wall integrity signaling that act through Rom2, a guanine nucleotide exchange factor for Rho1. Mol Cell Biol. 2001;21(1):271–80.
Bridges AA, Jentzsch MS, Oakes PW, Occhipinti P, Gladfelter AS. Micron-scale plasma membrane curvature is recognized by the septin cytoskeleton. J Cell Biol. 2016;213(1):23–32.
Delarue M, Poterewicz G, Hoxha O, Choi J, Yoo W, Kayser J, Holt L, Hallatschek O. SCWISh network is essential for survival under mechanical pressure. Proc Natl Acad Sci U S A. 2017;114(51):13465–70.
Cullen PJ, Sabbagh W Jr, Graham E, Irick MM, van Olden EK, Neal C, Delrow J, Bardwell L, Sprague GF Jr. A signaling mucin at the head of the Cdc42- and MAPK-dependent filamentous growth pathway in yeast. Genes Dev. 2004;18(14):1695–708.
Mishra R, van Drogen F, Dechant R, Oh S, Jeon NL, Lee SS, Peter M. Protein kinase C and calcineurin cooperatively mediate cell survival under compressive mechanical stress. Proc Natl Acad Sci U S A. 2017;114(51):13471–6.
Boudaoud A. Growth of walled cells: from shells to vesicles. Phys Rev Lett. 2003;91(1):018104.
Lockhart JA. An analysis of irreversible plant cell elongation. J Theor Biol. 1965;8(2):264–75.
Demoor A, Silar P, Brun S. Appressorium: the breakthrough in Dikarya. J Fungi (Basel). 2019;5(3).
Ryder LS, Talbot NJ. Regulation of appressorium development in pathogenic fungi. Curr Opin Plant Biol. 2015;26:8–13.
De Silva DD, Crous PW, Ades PK, Hyde KD, Taylor PWJ. Life styles of Colletotrichum species and implications for plant biosecurity. Fungal Biol Rev. 2017;31(3):155–68.
Matei A, Doehlemann G. Cell biology of corn smut disease-Ustilago maydis as a model for biotrophic interactions. Curr Opin Microbiol. 2016;34:60–6.
Howard RJ, Ferrari MA, Roach DH, Money NP. Penetration of hard substrates by a fungus employing enormous turgor pressures. Proc Natl Acad Sci U S A. 1991;88(24):11281–4.
Tanaka E. Appressorium-mediated penetration of Magnaporthe oryzae and Colletotrichum orbiculare into surface-cross-linked agar media. FEMS Microbiol Lett. 2015;362(10):fnv066.
Parry DW, Pegg GF. Surface colonization, penetration and growth of three Fusarium species in lucerne. Trans Br Mycol Soc. 1985;85(3):495–500.
Bindels DS, Haarbosch L, van Weeren L, Postma M, Wiese KE, Mastop M, Aumonier S, Gotthard G, Royant A, Hink MA, et al. mScarlet: a bright monomeric red fluorescent protein for cellular imaging. Nat Methods. 2017;14(1):53–6.
Zhang C, Konopka JB. A photostable green fluorescent protein variant for analysis of protein localization in Candida albicans. Eukaryot Cell. 2010;9(1):224–6.
Minc N. Microfabricated chambers as force sensors for probing forces of fungal growth. Methods Cell Biol. 2014;120:215–26.
Whitesides GM, Ostuni E, Takayama S, Jiang X, Ingber DE. Soft lithography in biology and biochemistry. Annu Rev Biomed Eng. 2001;3:335–73.
Schindelin J, Arganda-Carreras I, Frise E, Kaynig V, Longair M, Pietzsch T, Preibisch S, Rueden C, Saalfeld S, Schmid B, et al. Fiji: an open-source platform for biological-image analysis. Nat Methods. 2012;9(7):676–82.
Bevington P, Robinson DK. Data reduction and error analysis for the physical sciences. New York: University of California: McGraw-Hill Education; 2003.
Forster B, Van De Ville D, Berent J, Sage D, Unser M. Complex wavelets for extended depth-of-field: a new method for the fusion of multichannel microscopy images. Microsc Res Tech. 2004;65(1–2):33–42.
Wilson RB, Davis D, Mitchell AP. Rapid hypothesis testing with Candida albicans through gene disruption with short homology regions. J Bacteriol. 1999;181(6):1868–74.
Bassilana M, Blyth J, Arkowitz RA. Cdc24, the GDP-GTP exchange factor for Cdc42, is required for invasive hyphal growth of Candida albicans. Eukaryot Cell. 2003;2(1):9–18.
We thank J. Konopka for reagents, Y. Izmaylov, S. Bogliolo, O. Domenge, and S. Lachambre for the assistance and N. Minc for stimulating discussion.
This work was supported by the CNRS, INSERM, Université Nice-Sophia Antipolis, Université Côte d'Azur and ANR (ANR-15-IDEX-01, ANR-11-LABX-0028-01, and ANR-16-CE13-0010-01), and EU H2020 (MSCA-ITN- 2015-675407) grants and the Platforms Resources in Imaging and Scientific Microscopy facility (PRISM) and Microscopy Imaging Côte d'Azur (MICA).
Charles Puerner, Nino Kukhaleishvili, and Darren Thomson are co-first authors; order was determined by contribution to figures.
Université Côte d'Azur, CNRS, INSERM, Institute of Biology Valrose (iBV), Parc Valrose, Nice, France
Charles Puerner, Nino Kukhaleishvili, Darren Thomson, Sebastien Schaub, Martine Bassilana & Robert A. Arkowitz
Université Côte d'Azur, CNRS, Institute Physics of Nice (INPHYNI), Ave. J. Vallot, Nice, France
Nino Kukhaleishvili, Xavier Noblin & Agnese Seminara
Present Address: Manchester Fungal Infection Group, School of Biological Sciences, University of Manchester, Manchester, UK
Darren Thomson
Present Address: Sorbonne University, CNRS, Developmental Biology Laboratory (LBDV), Villefranche-sur-mer, France
Sebastien Schaub
Conceptualization: X.N., A.S., M.B., and R.A.A. Methodology: C.P., N.K., D.T., S.S., X.N., and R.A.A. Software: S.S. Validation: X.N., A.S., and R.A.A. Formal analysis: C.P., N.K., A.S., and R.A.A. Investigation: C.P., N.K., D.T., and X.N. Data curation: S.S., A.S., and R.A.A. Writing—original draft: M.B. and R.A.A. Writing—review and editing: C.P., N.K., D.T., X.N., A.S., M.B., and R.A.A. Visualization: R.A.A. Supervision: X.N., M.B., and R.A.A. Project administration: X.N., M.B., and R.A.A. Funding acquisition: X.N., A.S., M.B., and R.A.A. All authors read and approved the final manuscript.
Correspondence to Xavier Noblin or Robert A. Arkowitz.
Additional file 2: Movie S1. Invasive growth and penetration into adjacent chamber. Cells grown with indicated stiffness PDMS and followed over time either by DIC optics or fluorescence of labeled with plasma membrane GFP.
Additional file 3: Movie S2. Invasively growing filaments have increased levels of active Cdc42 at the tip. False colored sum projections of cells expressing CRIB-GFP reporter for active Cdc42.
Additional file 1: Figure S1. Strain versus stress dependence of PDMS. Analyses carried out using a Viscoanalyzer, with oscillation at 10 Hz of PDMS at cross-linker ratio of 40:1. Figure S2. Invasive growth and penetration into adjacent chamber in PDMS of different stiffness. DIC time-lapse experiments at indicated PDMS:cross-linker ratio and measured stiffness (Young's modulus). The adjacent chamber is highlighted with a dotted yellow line and deformation of this chamber lasted ~40 min with 40:1 PDMS ratio and 80-90 min for the two stiffer PDMS substrates. Figure S3. The shape of the filament tip is not substantially altered during invasive growth in PDMS. A) Radius of curvature over time is constant in surface and invasively growing cells. Radius of curvature with an arc of ± 90° or ± 45° at the filament tip. B) Shape of filament tip of surface growing cells over time. Cells were grown on PDMS (30:1; 250 kPa) and 31 × 5 min GFP sum projections were analyzed. Radius of curvature with ± 45° indicated by open lines and ± 90° indicated by solid lines. Figure S4. Cells confined within a stiff PDMS chamber have reduced filament extension rates. A) Constricted growth within a PDMS chamber. Typical time-lapse experiment using 160 kPa PDMS, with DIC images every 5 min shown. B) Filament extension rate within a stiff chamber is not linear. Filament length was determined from images every 5 min for ~ 2 h and GFP sum projections (n = 9 cells). C) Filament extension rate is substantially reduced as chamber fills up. Initial (filament length 10-20 μm) and final (filament length > 20 μm) extension rates were determined from fits to 6 × 5 min GFP sum projections (colors represent individual cells). Bars indicate SD and **** p < 0.0001. Figure S5. Distribution of active Cdc42 is not altered during invasive growth. A) Schematic indicating fluorescence signal over the filament long axis. Quantitation of slope of Gaussian farthest from tip in red (Max Slope, in relative units), distance maximum signal to tip (xmax in μm), and half width half max of the Gaussian farthest from tip in red (xSpread-xmax), i.e. the signal spread (Spread in μm). Signal is denoted by I and distance from tip by x. B) Distribution of active Cdc42 during surface and invasive filamentous growth. Experiment described in Figure 11a and 11b with the mean signal for each cell (colors represent individual cells), normalized to the mean signal for tip Cdc42•GTP in surface growing cells. Bars indicate SD. C) Distribution of active Cdc42 is not altered upon invasive growth. Relative maximum slope (left), distance from maximum signal to the tip (middle) and spread of signal (right) determined from 6-8 cells, using a tailor-made MATLAB program. Bars indicate SD; surface and invasive cells were not significantly different. D) Apical and subapical active Cdc42 signals are stable over time. Relative signals from apical and subapical region of sum projections, normalized to maximum invasive subapical signal.
Table S1. Strains used in the study [61, 62]. Table S2. Oligonucleotides used in the study. Table S3. Synthesized DNA used in the study.
Puerner, C., Kukhaleishvili, N., Thomson, D. et al. Mechanical force-induced morphology changes in a human fungal pathogen. BMC Biol 18, 122 (2020). https://doi.org/10.1186/s12915-020-00833-0
Received: 08 April 2020
Accepted: 22 July 2020
Cell invasion
Mechanical force
Cell morphology
Student distribution
with $ f $ degrees of freedom, $ t $- distribution
The probability distribution of the random variable
$$ t _ {f} = \frac{U}{\sqrt {\chi _ {f} ^ {2} / f } } , $$
where $ U $ is a random variable subject to the standard normal law $ N( 0, 1) $ and $ \chi _ {f} ^ {2} $ is a random variable not depending on $ U $ and subject to the "chi-squared" distribution with $ f $ degrees of freedom. The distribution function of the random variable $ t _ {f} $ is expressed by the formula
$$ {\mathsf P} \{ t _ {f} \leq x \} = S _ {f} ( x) = \frac{1}{\sqrt {\pi f } } \frac{\Gamma ( ( f+ 1 ) / 2 ) }{\Gamma ( f / 2 ) } \int\limits _ {- \infty } ^ { x } \left ( 1 + \frac{u ^ {2} }{f} \right ) ^ {- ( f+ 1 ) / 2 } du ,\ \ | x | < \infty . $$
In particular, if $ f= 1 $, then
$$ S _ {1} ( x) = \frac{1}{2} + \frac{1}{\pi} \mathop{\rm arctan} x $$
is the distribution function of the Cauchy distribution. The probability density of the Student distribution is symmetric about 0, therefore
$$ S _ {f} ( t) + S _ {f} (- t) = 1 \ \textrm{ for any } t \in \mathbf R ^ {1} . $$
The moments $ \mu _ {r} = {\mathsf E} t _ {f} ^ {r} $ of a Student distribution exist only for $ r < f $; the odd moments are equal to 0, and in particular $ {\mathsf E} t _ {f} = 0 $. The even moments of a Student distribution are expressed by the formula
$$ \mu _ {2r} = f ^ { r } \frac{\Gamma ( r + 1/2 ) \, \Gamma ( f / 2 - r ) }{\sqrt \pi \; \Gamma ( f / 2 ) } ,\ \ 2 \leq 2r < f ; $$
in particular, $ \mu _ {2} = {\mathsf D} \{ t _ {f} \} = f/( f- 2) $. The distribution function $ S _ {f} ( x) $ of the random variable $ t _ {f} $ is expressed in terms of the beta-distribution function in the following way:
$$ S _ {f} ( x) = 1 - \frac{1}{2} I _ {f/( f+ x ^ {2} ) } \left ( \frac{f}{2} , \frac{1}{2} \right ) , $$
where $ I _ {z} ( a, b) $ is the incomplete beta-function, $ 0 \leq z \leq 1 $; the formula above applies for $ x \geq 0 $, while for $ x < 0 $ one uses the symmetry $ S _ {f} ( x) = 1 - S _ {f} ( - x) $. If $ f \rightarrow \infty $, then the Student distribution converges to the standard normal law, i.e.
$$ \lim\limits _ {f\rightarrow \infty } S _ {f} ( x) = \ \Phi ( x) = \ \frac{1}{\sqrt {2 \pi } } \int\limits _ {- \infty } ^ { x } e ^ {- t ^ {2} /2 } dt. $$
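A minimal numerical sketch of this limit, assuming SciPy's scipy.stats module is available, compares $S_f(x)$ with $\Phi(x)$ for growing $f$:

```python
# Sketch: the Student t CDF S_f(x) approaches the standard normal CDF Phi(x) as f grows.
from scipy.stats import t, norm

x = 1.5
for f in (1, 5, 30, 1000):
    print(f, t.cdf(x, df=f), norm.cdf(x))
```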
Example. Let $ X _ {1} \dots X _ {n} $ be independent, identically distributed random variables, each subject to the normal law $ N( a, \sigma ^ {2} ) $, where the parameters $ a $ and $ \sigma ^ {2} $ are unknown. Then the statistics
$$ \overline{X} = \frac{1}{n} \sum _ {i=1} ^ { n } X _ {i} \ \ \textrm{ and } \ \ s ^ {2} = \frac{1}{n-1} \sum _ {i=1} ^ { n } ( X _ {i} - \overline{X} ) ^ {2} $$
are the best unbiased estimators of $ a $ and $ \sigma ^ {2} $; here $ \overline{X}\; $ and $ s ^ {2} $ are stochastically independent. Since the random variable $ \sqrt n ( \overline{X}\; - a)/ \sigma $ is subject to the standard normal law, while
$$ \frac{n-1}{\sigma ^ {2} } \, s ^ {2} = \chi _ {n-1} ^ {2} $$
is distributed according to the "chi-squared" law with $ f= n- 1 $ degrees of freedom, then by virtue of their independence, the fraction
$$ \frac{\sqrt n ( \overline{X} - a) / \sigma }{\sqrt {\chi _ {n-1} ^ {2} / ( n- 1) } } = \frac{\sqrt n ( \overline{X} - a) }{s} $$
is subject to the Student distribution with $ f= n- 1 $ degrees of freedom. Let $ t _ {f} ( P) $ and $ t _ {f} ( 1- P) = - t _ {f} ( P) $ be the solutions of the equations
$$ S _ {f} ( t _ {f} ( P) ) = P \ \ \textrm{ and } \ \ S _ {f} ( t _ {f} ( 1- P) ) = 1- P ,\ \ 0.5 < P < 1 ,\ \ f = n- 1 . $$
Then the statistics $ \overline{X}\; - ( s/ \sqrt n ) t _ {f} ( P) $ and $ \overline{X}\; + ( s/ \sqrt n ) t _ {f} ( P) $ are the lower and upper bounds of the confidence set for the unknown mathematical expectation $ a $ of the normal law $ N( a, \sigma ^ {2} ) $, and the confidence coefficient of this confidence set is equal to $ 2P- 1 $, i.e.
$$ {\mathsf P} \left \{ \overline{X}\; - \frac{s}{\sqrt n } t _ {f} ( P) < a < \overline{X}\; + \frac{s}{\sqrt n } t _ {f} ( P) \right \} = 2P- 1. $$
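A minimal sketch of this confidence interval in Python (NumPy and SciPy assumed available; the sample values are illustrative only):

```python
# Sketch: two-sided confidence interval for the mean of a normal sample,
# based on the Student distribution with f = n - 1 degrees of freedom.
import numpy as np
from scipy.stats import t

X = np.array([4.1, 3.9, 4.3, 4.0, 4.2, 3.8])   # illustrative sample
n = len(X)
P = 0.975                                       # so that 2P - 1 = 0.95
x_bar, s = X.mean(), X.std(ddof=1)
t_P = t.ppf(P, df=n - 1)                        # quantile t_f(P)
lower = x_bar - s / np.sqrt(n) * t_P
upper = x_bar + s / np.sqrt(n) * t_P
print(lower, upper)                             # 95% confidence bounds for a
```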
The Student distribution was first used by W.S. Gosset (pseudonym Student).
[1] H. Cramér, "Mathematical methods of statistics" , Princeton Univ. Press (1946)
[2] L.N. Bol'shev, N.V. Smirnov, "Tables of mathematical statistics" , Libr. math. tables , 46 , Nauka (1983) (In Russian) (Processed by L.S. Bark and E.S. Kedrova)
[3] "Student" (W.S. Gosset), "The probable error of a mean" Biometrika , 6 (1908) pp. 1–25
Student distribution. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Student_distribution&oldid=49611
This article was adapted from an original article by M.S. Nikulin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
Find expression for power series
Video: Using differentiation to find a power series expression for ...
In this section we discuss how the formula for a convergent geometric series can be used to represent some functions as power series. To use the geometric series formula, the function must be able to be put into a specific form, which is often impossible. However, use of this formula does quickly illustrate how functions can be represented as a power series.

Homework statement: I need to find an expression for the sum of the given power series. The attempt at a solution: I think that one has to use a known Maclaurin series, for example the series of $e^x$. I know that I can rewrite it, which makes the expression even more similar to the ...
Find Explicit Formula for Power Series. $\sum_{n=0}^{\infty}\left((-1)^{n}\frac{x^{n+1}}{n+1}\right)$. Step 1: I took the derivative of the power series term by term and got $\sum_{n=0}^{\infty}(-1)^{n} x^{n}$.
The calculator will find the Taylor (or power) series expansion of the given function around the given point, with steps shown. You can specify the order of the Taylor polynomial. If you want the Maclaurin polynomial, just set the point to `0`.
Formal power series can be used to solve recurrences occurring in number theory and combinatorics. For an example involving finding a closed-form expression for the Fibonacci numbers, see the article on examples of generating functions. One can use formal power series to prove several relations familiar from analysis in a purely algebraic setting.

Let us represent the translated (shifted) logarithmic function $f(x) = \ln(x + 1)$ by a power series. The translated logarithmic function is infinitely differentiable and defined for all $-1 < x < \infty$. We use a polynomial with infinitely many terms, in the form of a power series, to represent the given function.

Taylor series: if a function \(f\left( x \right)\) has continuous derivatives up to \(\left( {n + 1} \right)\)th order inclusive, then this function can be expanded in a power series about the point \(x = a\) by the Taylor formula. In this section we will give the definition of the power series as well as the definition of the radius of convergence and interval of convergence for a power series. We will also illustrate how the Ratio Test and Root Test can be used to determine the radius and interval of convergence for a power series.
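As a brief illustration of representing $\ln(1+x)$ by a power series, here is a sketch using SymPy (assumed available) to generate the first terms of the expansion about 0:

```python
# Sketch: expand ln(1 + x) as a power series about x = 0 using SymPy.
import sympy as sp

x = sp.symbols('x')
print(sp.series(sp.log(1 + x), x, 0, 6))   # x - x**2/2 + x**3/3 - x**4/4 + x**5/5 + O(x**6)
```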
Find a power series expression for (2) x X1 n=0 a n (x 1) n Although we can again bring the factor x through the summation sign, the resulting expression X1 n=0 xa n (x 1) n is not a power series expression (because the individual terms are not simply a constant times a power of (x 1)) Power series, in mathematics, an infinite series that can be thought of as a polynomial with an infinite number of terms, such as 1 + x + x 2 + x 3 +⋯. Usually, a given power series will converge (that is, approach a finite sum) for all values of x within a certain interval around zero—in particular, whenever the absolute value of x is less than some positive number r, known as the radius. Substitute the power series expressions into the differential equation. Re-index sums as necessary to combine terms and simplify the expression. Equate coefficients of like powers of \(x\) to determine values for the coefficients \(a_n\) in the power series. Substitute the coefficients back into the power series and write the solution
Power Series Lecture Notes. A power series is a polynomial with infinitely many terms. Here is an example: $f(x) = x + x^2 + x^3 + \cdots$. Like a polynomial, a power series is a function of $x$. That is, we can substitute in different values of $x$ to get different results. For example, $f(0) = 0 + 0 + 0 + \cdots = 0$.

... where $0! = 1$, $f^{(0)}(x_0) = f(x_0)$ and $f^{(n)}(x_0)$ is the $n$th derivative of $f$ at $x_0$, represents an infinitely differentiable function and is called the Maclaurin series and the Taylor series, respectively. The power series expansion of the hyperbolic sine and hyperbolic cosine functions ...
When an AC source of emf e = E 0 sin (1 0 0 t) is connected across a circuit, the phase difference between the emf e and the current i in the circuit is observed to be π / 4, as shown in the diagram. lf the circuit consists possibly only of R − C or R − L or L − C in series, find the relationship between the two elements A power series is a type of series with terms involving a variable. More specifically, if the variable is \(x\), then all the terms of the series involve powers of \(x\). As a result, a power series can be thought of as an infinite polynomial. Power series are used to represent common functions and also to define new functions Homework Statement Find a closed form expression for the function f(x) which the power series Σn=0..∞ n(-1)nxn+1 converges to and determine the values of x for which f(x) equals the given power series. Homework Equations N/A The Attempt at a Solution I'm actually not sure how to start. First..
Calculus II - Power Series and Functions
Answer to: Use differentiation or integration to find a power series representation of $f(x) = 1/(1+x)^3$.
30.7 Expressions for Coefficients of a Power Series. We have for the most part so far discussed what to do when confronted with a series. You can test its convergence, estimate its limit, and try to find the function it represents, if it is a power series
Series[f, {x, x0, n}] generates a power series expansion for f about the point x = x0 to order (x - x0) n, where n is an explicit integer. Series[f, x -> x0] generates the leading term of a power series expansion for f about the point x = x0. Series[f, {x, x0, nx}, {y, y0, ny},] successively finds series expansions with respect to x, then y, etc
Substitution of Power Series. We can find the power series of $e^{-t^2}$ by starting with the power series for $e^x$ and making the substitution $x = -t^2$:
$$ e^{x} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots \quad (R = \infty) $$
$$ e^{-t^2} = 1 + (-t^2) + \frac{(-t^2)^2}{2!} + \frac{(-t^2)^3}{3!} + \cdots $$

In a series LCR circuit connected to an a.c. source of voltage $v = v_m \sin\omega t$, use a phasor diagram to derive an expression for the current in the circuit. Hence, obtain the expression for the power dissipated in the circuit. Show that the power dissipated at resonance is maximum.
Find the expression for the sum of this power series
If you want to find the sum of a sequence, you can use the series calculator / alternating series calculator with steps given in the section below. To get the sum, first choose the series variable and the lower and upper bounds, and then input the expression for the general term of the sequence you are working with.
These issues are settled by the theory of power series and analytic functions. 1.2. Power series and analytic functions. A power series about a point $x_0$ is an expression of the form
$$ \sum_{n=0}^{\infty} a_n (x - x_0)^n = a_0 + a_1 (x - x_0) + a_2 (x - x_0)^2 + \cdots \tag{24} $$
Following our previous discussion, we want to know whether this infinite sum indeed converges.
Find the first few derivatives of the function until you recognize a pattern. Substitute 0 for x into each of these derivatives. Plug these values, term by term, into the formula for the Maclaurin series. If possible, express the series in sigma notation. For example, suppose that you want to find the Maclaurin series for $e^x$.
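A small sketch of this procedure for $e^x$ (SymPy assumed available), comparing the derivatives at 0 with the familiar coefficients $1/k!$:

```python
# Sketch: Maclaurin coefficients of e^x from repeated differentiation at 0.
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)
coeffs = [sp.diff(f, x, k).subs(x, 0) / sp.factorial(k) for k in range(6)]
print(coeffs)   # [1, 1, 1/2, 1/6, 1/24, 1/120]
```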
A power series in $x$ about the point $x_0$ is an expression of the form $c_0 + c_1 (x - x_0) + c_2 (x - x_0)^2 + \cdots$, where the coefficients $c_n$ are constants. This is concisely written using summation notation as $\sum_{n=0}^{\infty} c_n (x - x_0)^n$. Attention will be restricted to $x_0 = 0$; such series are simply called power series in $x$. A series is useful only if it converges (that is, if it approaches a finite limiting sum), so the natural question is, for what values of $x$ the series converges.
Functions as Power Series. A power series $\displaystyle\sum_{n=0}^\infty c_n x^n$ can be thought of as a function of $x$ whose domain is the interval of convergence
Power series are used for the approximation of many functions. It is possible to express any polynomial function as a power series. However, a power series representation is often limited by its interval of convergence, whereas actual values of the function may lie outside that interval, so it is important to evaluate a function with a power series only within its interval of convergence.
The idea is to relate this expression to the known power series expansion $\frac{1}{1-x} = \sum_{n=0}^{\infty} x^n$. Temporarily disregard the $x^2$ and consider $f(x) = x^2 \cdot \frac{1}{(1-2x)^2}$.
calculus - Find Explicit Formula for Power Series
Before, we only considered power series over $\mathbf R$, but now we will consider power series over $\mathbf C$ as well. To differentiate these two cases, a power series over the reals will be denoted $f(x)$, and over the complex numbers, $f(z)$.

An 80 Ω $X_C$ and a 60 Ω resistance are in series with a 120 V source, as shown in the figure (Series R-C Circuit). Find: impedance $Z$, current $I_T$, power factor pf, true power $P$, reactive power $Q$, apparent power $S$. Solution: 1. Calculate $Z = \sqrt{R^2 + X_C^2}$.

1) Find at least the first four nonzero terms in a power series expansion about $x = 0$ for a general solution to the given differential equation: $y'' - 2y' + y = 0$. 2) Find the power series expansion about $x = 0$ for a general solution to the given differential equation. Your answer should include a general formula for the coefficients.

How do you find a power series representation for $\frac{x}{1-x^2}$ and what is the radius of convergence? Use the Maclaurin series for $\frac{1}{1-t}$ and substitution to find: $\frac{x}{1-x^2} = \sum_{n=0}^{\infty} x^{2n+1}$.
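A quick SymPy sketch (library assumed available) confirming this expansion, whose radius of convergence is $|x| < 1$:

```python
# Sketch: series expansion of x / (1 - x^2) about 0; the terms are x, x^3, x^5, ...
import sympy as sp

x = sp.symbols('x')
print(sp.series(x / (1 - x**2), x, 0, 8))   # x + x**3 + x**5 + x**7 + O(x**8)
```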
Power in an RC Series Circuit. If the alternating voltage applied across the circuit is given by equation (1), and the resulting current by equation (2), then the instantaneous power is given by $p = vi$. Putting the values of $v$ and $i$ from equations (1) and (2) into $p = vi$, the average power consumed in the circuit over a complete cycle follows.

Trigonometry/Power Series for Cosine and Sine. Applying Maclaurin's theorem to the cosine and sine functions for angle $x$ (in radians), we get their power series expansions.

An interesting rule for total power versus individual power is that it is additive for any configuration of the circuit: series, parallel, series/parallel, or otherwise. Power is a measure of the rate of work, and since the power dissipated must equal the total power applied by the source(s) (as per the law of conservation of energy in physics), circuit configuration has no effect on the mathematics.
Power Series Calculator - Symbolab
Because power series resemble polynomials, they're simple to integrate using a simple three-step process that uses the Sum Rule, Constant Multiple Rule, and Power Rule. For example, take a look at the following integral. At first glance, this integral of a series may look scary. But to give it a chance to show its softer side ...
In the LCR circuit shown in figure unknown resistance and alternating voltage source are connected. When switch ′ S ′ is closed then there is a phase difference of 4 π between current and applied voltage and voltage across resister is 2 1 0 0 V.When switch is open current and applied voltage are in same phase
Question: Use the substitution method and a known power series to find a power series for $e^{\frac{3x-1}{x}}$; express the answer in one sigma notation.
Power Series. Power series are one of the most useful types of series in analysis. For example, we can use them to define transcendental functions such as the exponential and trigonometric functions (and many other less familiar functions). 6.1. Introduction. A power series (centered at 0) is a series of the form $\sum_{n=0}^{\infty} a_n x^n = a_0 + a_1 x + a_2 x^2 + \cdots$
4. Find a power series representation of $F(x) = \int_0^x f(t)\,dt$, where $f(x) = \begin{cases} \dfrac{\ln(1+x)}{x}, & x \neq 0 \\ 1, & x = 0 \end{cases}$. For what values of $x$ is the representation valid?
5. Prove that the function $C(x) = \sum_{k=0}^{\infty} (-1)^k \dfrac{x^{2k}}{(2k)!}$ satisfies the differential equation $y'' + y = 0$.
6. Find a closed form expression for the function with power series representation $\sum_{k} \ldots$
Explanation of Each Step. Step 1. The Maclaurin series coefficients $a_k$ can be calculated using the formula $a_k = \frac{f^{(k)}(0)}{k!}$ (which comes from the definition of a Taylor series), where $f$ is the given function, in this case $\sin(x)$. In step 1, we are only using this formula to calculate the first few coefficients.
Taylor and Maclaurin (Power) Series Calculator - eMathHelp
You may remember from geometric series that $\sum_{n=0}^{\infty} a r^n = \frac{a}{1-r}$ for appropriate values of $r$. Similarly, this tells us from a power series perspective that $\sum_{n=0}^{\infty} x^n = \frac{1}{1-x}$ when $x$ is between $-1$ and $1$. So, the function $1/(1-x)$ can be represented as a power series for part of its domain. In similar ways, other functions can be represented by power series.
This particular technique will, of course, work only for this specific example, but the general method for finding a closed-form formula for a power series is to look for a way to obtain it (by differentiation, integration, etc.) from another power series whose sum is already known (such as the geometric series, or a series you can recognize as the Taylor series of a known function)
Determine its radius of convergence. Complete solution: before starting this problem, note that the Taylor series expansion of any function about the point c = 0 is the same as finding its Maclaurin series expansion.
Solution for: Find the power series representation for g centered at 0 by differentiating or integrating the power series for f (perhaps more than once).
The fundamental condition for resonance in a series RLC AC circuit is that the inductive and capacitive reactances cancel; working on it we get $\omega_0 = 1/\sqrt{LC}$. Here, $\omega_0$ is the resonant frequency. Note that it depends on the inductance and capacitance only. Now we derive expressions for the half-power frequencies, which I am trying to get.
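For reference, a standard derivation (under the usual convention that the half-power points are where $|Z| = \sqrt{2}\,R$, i.e. where the dissipated power drops to half its resonant value) gives, for the series RLC circuit,

$$ \omega_{1,2} = \mp \frac{R}{2L} + \sqrt{\left(\frac{R}{2L}\right)^{2} + \frac{1}{LC}} , \qquad \omega_{2} - \omega_{1} = \frac{R}{L} , $$

so the bandwidth depends only on $R$ and $L$.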
WolframAlpha Widgets: Power Series - Free Mathematics
To find an expression for the equivalent parallel resistance R p, IR Drop, Current, and Power Dissipation: Combining Series and Parallel Circuits. Figure 5 shows the resistors from the previous two examples wired in a different way—a combination of series and parallel
Power series definition is - an infinite series whose terms are successive integral powers of a variable multiplied by constants
The series solutions method is mainly used to find power series solutions of differential equations whose solutions cannot be written in terms of familiar functions such as polynomials. In such cases, all we can try to do is to come up with a general expression for the coefficients of the power series solutions. As another introductory example, ...
Quantitative aptitude questions often ask for the last digit, or the last two digits, of a power or a large expression. This article explains different tools that serve as shortcuts for finding the last digits of an expanded power. To find the last digit of a number raised to a power, first identify the repeating pattern of the last digit (units place) for powers of that number.
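A small sketch of this shortcut in Python (standard library only), using the fact that the last digits of powers repeat with a short cycle, checked against modular exponentiation:

```python
# Sketch: last digit of a^b via the repeating cycle of last digits, verified with pow(a, b, 10).
def last_digit(a: int, b: int) -> int:
    if b == 0:
        return 1
    cycle = []
    d = a % 10
    while d not in cycle:                 # collect the repeating cycle of last digits
        cycle.append(d)
        d = (d * (a % 10)) % 10
    return cycle[(b - 1) % len(cycle)]

print(last_digit(7, 2023), pow(7, 2023, 10))   # both give 3
```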
Formal power series - Wikipedia
If the series RLC circuit is driven by a variable frequency at a constant voltage, then the magnitude of the current, I, is inversely proportional to the impedance, Z; therefore at resonance, where Z is smallest, the power absorbed by the circuit must be at its maximum value, as $P = I^2 Z$.
In this video I have discussed the mathematical expression of the half-power frequency. This is very important for filter design.
Expression Calculator evaluates an expression in a given context. Context of evaluation is specified by a comma separated list of equations. Both symbolical and numerical computations are supported
This result is a (simpler) re-expression of how to calculate a signal's power than with the real-valued Fourier series expression for power. Let's calculate the Fourier coefficients of the periodic pulse signal shown in Fig. 4.2.1 below. Fig. 4.2.1 Periodic Pulse Signal
This calculator will find the infinite sum of arithmetic, geometric, power, and binomial series, as well as the partial sum, with steps shown (if possible). It will also check whether the series converges
Expression for the Current in an LR Series Circuit. Where: V is in volts, R is in ohms. Then we can find the total power in an RL series circuit by multiplying by i, and it is therefore: ... where the first $I^2 R$ term represents the power dissipated by the resistor as heat.
The power series expansion of the logarithmic function
A power series is a sum of terms of the general form $a_n (x-a)^n$. Whether the series converges or diverges, and the value it converges to, depend on the chosen $x$-value, which makes a power series a function.

Therefore, to calculate the series sum, one needs somehow to find an expression for the partial sum $S_n$. In our case the series is a decreasing geometric progression with ratio 1/3. It is known that the sum of the first $n$ elements of a geometric progression can be calculated by the formula $S_n = \frac{b_1 (1 - q^n)}{1 - q}$.

(i) Derive an expression for the equivalent resistance of three resistors $R_1$, $R_2$ and $R_3$ connected in series. (ii) Fuses of 3 A, 5 A and 10 A are available. Calculate and select the fuse for operating an electric iron of 1 kW power on a 220 V line.
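A quick sketch of part (ii) of that exercise in Python (plain standard library; the series-resistance values in the last line are illustrative only):

```python
# Sketch: equivalent series resistance and fuse selection for a 1 kW iron on a 220 V line.
def series_resistance(*resistances):
    return sum(resistances)          # R = R1 + R2 + R3 for resistors in series

P, V = 1000.0, 220.0                 # rated power (W) and supply voltage (V)
I = P / V                            # operating current, about 4.5 A
fuse = min(f for f in (3, 5, 10) if f > I)
print(series_resistance(10, 20, 30), I, fuse)   # -> 60, 4.545..., 5 (choose the 5 A fuse)
```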
So I have the series-- negative 5/3 plus 25 over 6 minus 125 over 9 plus-- and it just keeps going on and on and on forever. So this right over here is an infinite sum or an infinite series, and what I want you to do right now is to pause this video and try to express this infinite series using sigma notation 2. Maclaurin Series. By M. Bourne. In the last section, we learned about Taylor Series, where we found an approximating polynomial for a particular function in the region near some value x = a.. We now take a particular case of Taylor Series, in the region near `x = 0` Find the Maclaurin series expansion for cos x. This time f(x) = cos x. The first term is simply the value with x = 0, therefore cos 0 = 1. The derivative of cos x is -sin x. When x = 0, -sin 0 = 0. The derivative of -sin x is -cos x, and when x = 0, -cos 0 = -1. The derivative of -cos x is sin x, and when x = 0, sin 0 = Power series and Taylor series Computation of power series. We can use the identity: along with the power series for the cosine function, to find the power series for . The power series for the cosine function converges to the function everywhere, and is: The power series for is: The power series for is: Dividing by 2, we get the power series for Definition: binomial . A binomial is an algebraic expression containing 2 terms. For example, (x + y) is a binomial. We sometimes need to expand binomials as follows: (a + b) 0 = 1(a + b) 1 = a + b(a + b) 2 = a 2 + 2ab + b 2(a + b) 3 = a 3 + 3a 2 b + 3ab 2 + b 3(a + b) 4 = a 4 + 4a 3 b + 6a 2 b 2 + 4ab 3 + b 4(a + b) 5 = a 5 + 5a 4 b + 10a 3 b 2 + 10a 2 b 3 + 5ab 4 + b 5Clearly, doing this by.
Power Series Expansions - Math2
Deduce the expressions for the power of their combination when they are, in turn, connected in (i) series and (ii) parallel across the same voltage supply. (All India 2008) Answer: When two resistances $R_1$ and $R_2$ are operated at a constant voltage supply $V$, their consumed powers will be $P_1$ and $P_2$. When they are connected in series, the power will be ...
Power Series In discussing power series it is good to recall a nursery rhyme: \There was a little girl Who had a little curl Right in the middle of her forehead When she was good She was very, very good But when she was bad She was horrid. (Robert Strichartz [14]) Power series are one of the most useful type of series in analysis. For example
Series combination of resistances: If a number of resistances are joined end to end so that the same current flows through each of them in succession, then the resistances are said to be connected in series. (Fig.: Resistances connected in series.) As shown in the figure, consider three resistances $R_1$, $R_2$ and $R_3$ connected in series. Suppose a current $I$ flows through the circuit when a cell of voltage $V$ is connected.
Print the Fibonacci series. If you need to find the power of a number with any real number as an exponent, you can use the pow() function.
Such expressions are called power series with center 0; the numbers are called its coefficients. Slightly more generally, an expression of the form $a_0 + a_1 (x - c) + a_2 (x - c)^2 + \cdots$ is called a power series with center $c$. Using the summation symbol we can write this as $\sum_{n=0}^{\infty} a_n (x - c)^n$. Try it yourself.
(Answered) Proceed as in Example 3 in Section 6.1 to rewrite the given expression using a single power series whose general term involves x.
Power Dissipated in a Resistor. Convenient expressions for the power dissipated in a resistor can be obtained by the use of Ohm's law. These relationships are valid for AC applications also, if the voltages and currents are rms or effective values. The resistor is a special case, and the AC power expression for the general case includes another term called the power factor, which accounts for the phase difference between voltage and current. A circuit element dissipates or produces power according to P = IV, where I is the current through the element and V is the voltage across it. Since the current and the voltage both depend on time in an AC circuit, the instantaneous power is also time dependent. A plot of p(t) for various circuit elements is shown in the figure. For a resistor, i(t) and v(t) are in phase and therefore always have the same sign.
Calculus II - Power Series - Lamar University
Power Series Calculator is a free online tool that displays the infinite series of the given function. BYJU'S online power series calculator tool makes the calculation faster, and it displays the expanded form of a given function in a fraction of seconds.

BSc Engineering Sciences, A.Y. 2017/18. Written exam (call II) of the course Mathematical Analysis 2, February 21, 2018. 1. (6 points) Find a power series expression for the solution y(x) of the differential equation ...
Find a power series representation for f(x)=1/(1+x)^3? **Without using binomial expansion. Answer Save. 2 Answers. Relevance. sahsjing. Lv 7. 1 decade ago. Favorite Answer. find the solution of the following boundary value problem? Laplace equation ? dit khuai lok Power calculator to find the product of an exponential expression (such as a raised to the power b, a^b). Code to add this calci to your website Just copy and paste the below code to your webpage where you want to display this calculator
Sequences and series are most useful when there is a formula for their terms. For instance, if the formula for the terms a n of a sequence is defined as a n = 2n + 3, then you can find the value of any term by plugging the value of n into the formula. For instance, a 8 = 2(8) + 3 = 16 + 3 = 19.In words, a n = 2n + 3 can be read as the n-th term is given by two-enn plus three For the same RLC series circuit having a 40.0 Ω resistor, a 3.00 mH inductor, a 5.00 μF capacitor, and a voltage source with a V rms of 120 V: (a) Calculate the power factor and phase angle for f = 60. 0 Hz. (b) What is the average power at 50.0 Hz? (c) Find the average power at the circuit's resonant frequency. Strategy and Solution for (a Free Sequences calculator - find sequence types, indices, sums and progressions step-by-step This website uses cookies to ensure you get the best experience. By using this website, you agree to our Cookie Policy (Power) series: Solved problems °c pHabala 2010 2 d). We have a series with non-negative numbers again, so convergence and absolute convergence coincide and we can use our favorite tests. There are only powers in expressions for a k, so both root and ratio tests might work. However, since even and odd terms are of different types and th We're currently working with Power series and Taylor series in Calculus. One particularity pretty derivation is going from the series for to the series for Even better you can use this formula to calculate pi, since , so . How quickly does this converge to pi? Let's find out. Here's the first ten partial sums: n= 0 and the partial sum is 4.
Find function in Power Apps. 11/07/2015; 2 minutes to read; In this article. Finds a string of text, if it exists, within another string. Description. The Find function looks for a string within another string and is case sensitive. To ignore case, first use the Lower function on the arguments.. Find returns the starting position of the string that was found. . Position 1 is the first. This is the last article of Shell Script Series and it does not means that no article on Scripting language will be here again, it only means the shell scripting tutorial is over and whenever we find an interesting topic worth knowing or a query from you people, we will be happy to continue the series from here Series Calculator computes sum of a series over the given interval. It is capable of computing sums over finite, infinite (inf) and parametrized sequencies (n).In the cases where series cannot be reduced to a closed form expression an approximate answer could be obtained using definite integral calculator.For the finite sums series calculator computes the answer quite literally, so if you. Power series are in many ways the algebraic analog of limited-precision numbers. The Wolfram Language can generate series approximations to virtually any combination of built-in mathematical functions. It will then automatically combine series, truncating to the correct order. The Wolfram Language supports not only ordinary power series, but also Laurent series and Puiseux series, as well as.
The waveform and power curve of the RL series circuit is shown below: The various points on the power curve are obtained by the product of voltage and current. If you analyze the curve carefully, it is seen that the power is negative between angle 0 and ϕ and between 180 degrees and (180 + ϕ) and during the rest of the cycle the power is positive A series is sometimes called a progression, as in Arithmetic Progression. or geometric progression is one where the ratio, r, between successive terms is a constant. Each term of a geometric series, therefore, involves a higher power than the previous term. we get an expression for (1-r)- After putting the expression of back emf in the above expression of power in the armature, we can write. Now in the above expression of torque Z, P and A are constant for a particular dc motor, hence we can write. Armature Torque in DC Series Motor. In dc series motor, the armature current also flows through the field circuit The harmonic content in electrical power systems is an increasingly worrying issue since the proliferation of nonlinear loads results in power quality problems as the harmonics is more apparent. In this paper, we analyze the behavior of the harmonics in the electrical power systems such as cables, transmission lines, capacitors, transformers, and rotating machines, the induction machine being.
Power series | mathematics | Britannica
DAX has many functions to write conditional expressions. For example, you might want to calculate the sum of sales amount for all Red products. You can achieve it by using SUMX or CALCULATE, and functions such as IF or FILTER, to write a conditional expression for product color equal to Red.

Answer to: Find the power series about the origin for the given function. Hint: $(4 - z)^{-2} = \frac{d}{dz}\,(4 - z)^{-1}$. Find a closed form (that is, a simple expression).

For an RLC series circuit, the voltage amplitude and frequency of the source are 100 V and 500 Hz, respectively; $R = 500\ \Omega$ and $L = 0.20$ H. Find the average power dissipated in the resistor for the following values of the capacitance: (a) $C = 2.0\ \mu$F and (b) $C = 0.20\ \mu$F.
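A sketch of the average-power computation for parts (a) and (b), assuming the standard series-RLC relation $P = V_{\mathrm{rms}}^2 R / Z^2$ with $V_{\mathrm{rms}} = V_0/\sqrt{2}$ (plain Python, numbers taken from the problem statement):

```python
# Sketch: average power in a series RLC circuit, P = Vrms^2 * R / Z^2.
import math

V0, f, R, L = 100.0, 500.0, 500.0, 0.20           # amplitude (V), frequency (Hz), ohms, henries
Vrms = V0 / math.sqrt(2)
w = 2 * math.pi * f

for C in (2.0e-6, 0.20e-6):                        # cases (a) and (b)
    Z = math.sqrt(R**2 + (w * L - 1 / (w * C))**2)
    print(C, Vrms**2 * R / Z**2)                   # average power in watts
```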
Power in an RL Circuit. In a series RL circuit, some energy is dissipated by the resistor and some energy is alternately stored and returned by the inductor. The instantaneous power delivered by the voltage source V is P = VI (watts). The power dissipated by the resistor in the form of heat is P = I^2 R (watts). The rate at which energy is stored in the inductor follows. The power triangle is geometrically similar to the impedance triangle and the series RL circuit vector diagram. Figure 6: Series RL circuit power triangle. Power calculations in an RL series circuit, Example 3. Problem: For the series RL circuit shown in Figure 7, determine: true power, inductive reactive power, apparent power.

Expression-based titles in Power BI Desktop. You can create dynamic, customized titles for your Power BI visuals. By creating Data Analysis Expressions (DAX) based on fields, variables, or other programmatic elements, your visuals' titles can automatically adjust as needed.

A power series may converge for some values of x and diverge for others, so it can be viewed as a function whose domain is the set of all numbers for which it converges. This leads us to the following two questions: Question 1.2. For what values does a power series converge? Question 1.3. How do we find what values a power series converges for?
17.4: Series Solutions of Differential Equations ..
A series LCR circuit is connected to an ac source. Using the phasor diagram, derive the expression for the impedance of the circuit. Plot a graph to show the variation of current with frequency of the source, explaining the nature of its variation Average power calculated in the time domain equals the power calculated in the frequency domain. 1. T ∫ T. 0. s. 2 (t)d. t =∑ k = −∞∞ (| c. k |) 2 (9) This result is a (simpler) re-expression of how to calculate a signal's power than with the real -valued Fourier series expression for power. Let's calculate the Fourier coefficients of.
Power series expansion of hyperbolic sine function, Power
The usual trick is to find a closed form expression for B(x) and tweak it. 2.1 Closed-form expression Some power series are able to converge to an expression which does not include infinite summation anymore. For instance, an infinite sequence of the numbers (dear to our hearts): 1, 2, 4, 8, 16, 32, 64, 128, 256.
So now we use a simple approach and calculate the value of each element of the series and print it. $\binom{n}{r} = \frac{n!}{(n-r)!\,r!}$. Below is the value of the general term: $T_{r+1} = \binom{n}{r} A^{n-r} X^{r}$. So at each position we have to find the value of the general term and print that term.
I want regular expression for indian mobile numbers which consists of 10 digits. The numbers which should match start with 9 or 8 or 7. For example: 9882223456 8976785768 7986576783 It should..
That's where the power of Data Analysis Expressions (DAX) in Power BI comes into play. This is a handy tool to learn for any data science professional, not just an aspiring business intelligence one. It saves us a ton of time we would otherwise be spending in churning out the code
Use the built-in sum function on a generator expression for the power series, e.g. sum((2**x)/x for x in range(1, 5)) in Python 3. Don't forget to from __future__ import division (and use xrange instead of range) if you are on Python 2.
Find the Maclaurin series expansion for f = sin(x)/x. The default truncation order is 6. Taylor series approximation of this expression does not have a fifth-degree term, so taylor approximates this expression with the fourth-degree polynomial
Find A Closed Form Expression For A Function F Suc
Find the values of x for which the series converges. Find the sum of the series for those values of x. 63. $\sum_{n=0}^{\infty} e^{nx}$.

Provides worked examples of typical introductory exercises involving sequences and series. Demonstrates how to find the value of a term from a rule, how to expand a series, how to convert a series to sigma notation, and how to evaluate a recursive sequence. Shows how factorials and powers of -1 can come into play.

A series R-L circuit in which the inductive impedance is b times the resistance in the circuit. Calculate the value of the power factor of the circuit in each case. [All India 2008C] Ans. 3 Marks Questions. 16. A voltage $V = V_0 \sin \omega t$ is applied to a series L-C-R circuit. Derive the expression for the average power dissipated over a cycle.
Derive an expression for impedence and current in the
In the expression for the complex power, $S = V I^{*}$, $V$ and $I$ are phasors and represent sinusoidal variables, respectively voltages and currents. That is, if $V = V_{ef} \, e^{j\varphi_v}$ ...

The series is $-\frac{x^4}{3!} + \frac{x^6}{5!} - \frac{x^8}{7!} + \cdots$, with general term $\frac{(-1)^n x^{2n+2}}{(2n+1)!}$. Sorry if this is sloppy as hell. Possible answers: a. $x^3 e^x - x^2$; b. $x \ln x - x^2$; c. $\tan^{-1} x - x$; d. $x \sin x - x^2$. Don't even know how to go about it.
10.1: Power Series and Functions - Mathematics LibreText
A voltage $V = V_0 \sin \omega t$ is applied to a series LCR circuit. Derive the expression for the average power dissipated over a cycle. Under what conditions is (i) no power dissipated even though the current flows through the circuit, (ii) maximum power dissipated in the circuit?
The summation variable is determined by symvar as the summation index. If f is a constant, then the default variable is x.
The general prescription for expansion in a basis is illustrated by the Fourier series method. In the present case, our basis is the set of all Legendre polynomials, $P_n(x)$. Then, if $f(x)$ is an arbitrary function in $-1 < x < 1$, we write the Legendre series $f(x) = \sum_{n} c_n P_n(x)$. To find the coefficients, multiply both sides by $P_n(x)$ and integrate over $x$. Due to the orthogonality and norms of the Legendre polynomials ...
Team:Valencia UPV/Modeling/diffusion
Video: NETLOGO simulation of the field (sexyplants, female moths, pheromone diffusion and male moths), available at http://www.youtube.com/watch?v=URZgjbfEUwc (see Figure 3 below).
Pheromone Diffusion and Moth Response
Sexual communication among moths is accomplished chemically by the release of an "odor" into the air. This "odor" consists of sexual pheromones.
Figure 1. Female moth releasing sex pheromones and male moth.
Pheromones are molecules that easily diffuse in the air. During the diffusion process, the random movement of gas molecules transports the chemical away from its source [1]. Diffusion processes are complex, and modeling them analytically and with accuracy is difficult, even more so when the geometry is not simple. For this reason, we decided to consider a simplified model in which pheromone chemicals obey the heat diffusion equation. The equation is then solved using the Euler numeric approximation in order to obtain the spatial and temporal distribution of pheromone concentration.
Moths seem to respond to gradients of pheromone concentration to be attracted towards the source. Yet, there are other factors that lead moths to sexual pheromone sources, such as optomotor anemotaxis [2]. Moreover, increasing the pheromone concentration to unnaturally high levels may disrupt male orientation [3].
Using a modeling environment called NetLogo, we simulated the approximate moth behavior during the pheromone dispersion process. This will help us predict the moth response when the moths are also in the presence of Sexy Plant.
Sol I. Rubinow, Mathematical Problems in the Biological Sciences, chap. 9, SIAM, 1973
J. N. Perry and C. Wall , A Mathematical Model for the Flight of Pea Moth to Pheromone Traps Through a Crop, Phil. Trans. R. Soc. Lond. B 10 May 1984 vol. 306 no. 1125 19-48
W. L. Roelofs and R. T. Carde, Responses of Lepidoptera to synthetic sex pheromone chemicals and their analogues, Annual Review of Entomology Vol. 22: 377-405, 1977
Since pheromones are chemicals released into the air, we have to consider both the motion of the fluid and the one of the particles suspended in the fluid.
The motion of fluids can be described by the Navier-Stokes equations. But the typical nonlinearity of these equations, especially when there may be turbulence in the air flow, makes most problems difficult or impossible to solve. Thus, focusing on the particles suspended in the fluid, a simpler effective option for pheromone dispersion modeling consists in assuming a diffusive-like behavior for pheromones. That is, pheromones are molecules that can undergo a diffusion process in which the random movement of gas molecules transports the chemical away from its source [1].
There are two ways to introduce the notion of diffusion: either using a phenomenological approach starting with Fick's laws of diffusion and their mathematical consequences, or a physical and atomistic one, by considering the random walk of the diffusing particles [2].
In our case, we decided to model our diffusion process using Fick's laws. Thus, it is postulated that the flux goes from regions of high concentration to regions of low concentration, with a magnitude that is proportional to the concentration gradient. However, diffusion processes are complex, and modeling them analytically and with accuracy is difficult, even more so when the geometry is not simple (e.g., consider the potential final distribution of our plants in the crop field). For this reason, we decided to consider a simplified model in which pheromone chemicals obey the heat diffusion equation.
The diffusion equation is a partial differential equation that describes density dynamics in a material undergoing diffusion. It is also used to describe processes exhibiting diffusive-like behavior, like in our case. The equation is usually written as:
$$\frac{\partial \phi (r,t) }{\partial t} = \nabla \cdot [D(\phi,r) \, \nabla \phi(r,t)]$$
where $\phi(r, t)$ is the density of the diffusing material at location r and time t, and $D(\phi, r)$ is the collective diffusion coefficient for density $\phi$ at location $r$; and $\nabla$ represents the vector differential operator.
If the diffusion coefficient does not depend on the density then the equation is linear and $D$ is constant. Thus, the equation reduces to the linear differential equation: $$\frac{\partial \phi (r,t) }{\partial t} = D \nabla^2 \phi(r,t)$$
also called the heat equation. Making use of this equation we can write the pheromones chemicals diffusion equation with no wind effect consideration as:
$$\frac{\partial c }{\partial t} = D \nabla^2 c = D \, \Delta c$$
where c is the pheromone concentration, $\Delta$ is the Laplacian operator, and $D$ is the pheromone diffusion constant in the air.
If we consider the wind, we face a diffusion system with drift, and an advection term is added to the equation above.
$$\frac{\partial c }{\partial t} = D \nabla^2 c - \nabla \cdot (\vec{v} c )$$
where $\vec{v}$ is the average velocity. Thus, $\vec{v}$ would be the velocity of the air flow in or case.
For simplicity, we are not going to consider the third dimension. In $2D$ the equation would be:
$$\frac{\partial c }{\partial t} = D \left(\frac{\partial^2 c }{\partial x^2} + \frac{\partial^2 c }{\partial y^2}\right) - \left(v_{x} \cdot \frac{\partial c }{\partial x} + v_{y} \cdot \frac{\partial c }{\partial y} \right) = D \left( c_{xx} + c_{yy}\right) - \left(v_{x} \cdot c_{x} + v_{y} \cdot c_{y}\right) $$
In order to determine a numeric solution for this partial differential equation, the so-called finite difference methods are used. With finite difference methods, partial differential equations are replaced by its approximations as finite differences, resulting in a system of algebraic equations. This is solved at each node $(x_i,y_j,t_k)$. These discrete values describe the temporal and spatial distribution of the particles diffusing.
Although implicit methods are unconditionally stable, so time steps could be larger and make the calculus process faster, the tool we have used to solve our heat equation is the Euler explicit method, for it is the simplest option to approximate spatial derivatives.
The equation gives the new value of the pheromone level in a given node in terms of initial values at that node and its immediate neighbors. Since all these values are known, the process is called explicit.
$$c(t_{k+1}) = c(t_k) + dt \cdot c'(t_k),$$
Now, applying this method for the first case (with no wind consideration) we followed the next steps:
1. Split time $t$ into $n$ slices of equal length dt: $$ \left\{ \begin{array}{c} t_0 &=& 0 \\ t_k &=& k \cdot dt \\ t_n &=& t \end{array} \right. $$
2. Considering the forward difference in time (which is what makes the Euler method explicit), the expression that gives the pheromone level at the current time step in terms of the previous one is:
$$c (x, y, t) \approx c (x, y, t - dt ) + dt \cdot c'(x, y, t - dt)$$
3. And now considering the spatial dimensions, central differences are applied to the Laplace operator $\Delta$, and backward differences are applied to the vector differential operator $\nabla$ (in 2D and assuming equal steps in the x and y directions), with all spatial derivatives evaluated at the previous time step:
$$c (x, y, t) \approx c (x, y, t - dt ) + dt \left( D \cdot \nabla^2 c (x, y, t - dt) - \nabla \cdot (\vec{v} \, c) (x, y, t - dt) \right)$$ $$ D \cdot \nabla^2 c = D \left( c_{xx} + c_{yy}\right) = D \, \frac{c_{i,j-1} + c_{i,j+1} + c_{i-1,j } + c_{i+1,j} - 4 c_{i,j}}{s} $$ $$ \nabla \cdot (\vec{v} \, c) = v_{x} \cdot c_{x} + v_{y} \cdot c_{y} = v_{x} \frac{c_{i,j} - c_{i-1,j}}{h} + v_{y} \frac{c_{i,j} - c_{i,j-1}}{h} $$
With respect to the boundary conditions, they are null since we are considering an open space. Attending to the implementation and simulation of this method, dt must be small enough to avoid instability.
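A minimal sketch of this explicit update in Python (NumPy assumed available; the grid size, step sizes, wind and diffusion values below are illustrative only, not the ones used in the NetLogo model):

```python
# Sketch: one explicit Euler step of 2D diffusion with drift (upwind advection).
# The first array index plays the role of x, the second of y; boundaries stay at zero (open field).
import numpy as np

D, vx, vy = 0.05, 0.1, 0.0      # illustrative diffusion coefficient (cm^2/s) and wind (cm/s)
h, dt = 1.0, 0.1                # grid spacing and time step (dt small enough for stability)
c = np.zeros((50, 50))
c[25, 25] = 100.0               # an initial puff of pheromone at the source patch

def step(c):
    new = c.copy()
    lap = (c[1:-1, :-2] + c[1:-1, 2:] + c[:-2, 1:-1] + c[2:, 1:-1] - 4 * c[1:-1, 1:-1]) / h**2
    adv = vx * (c[1:-1, 1:-1] - c[:-2, 1:-1]) / h + vy * (c[1:-1, 1:-1] - c[1:-1, :-2]) / h
    new[1:-1, 1:-1] = c[1:-1, 1:-1] + dt * (D * lap - adv)
    return new

for _ in range(100):
    c = step(c)
```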
J. Philibert. One and a half century of diffusion: Fick, Einstein, before and beyond. Diffusion Fundamentals, 2,1.1-1.10, 2005.
When one observes moths behavior, they apparently move with erratic flight paths. This is possibly to avoid predators. This random flight is modified by the presence of sex pheromones. Since these are pheromones released by females in order to attract an individual of the opposite sex, it makes sense that males respond to gradients of sex pheromone concentration, being attracted towards the source. As soon as a flying male randomly enters into a conical pheromone-effective sphere of sex pheromone released by a virgin female, the male begins to seek the female following a zigzag way. The male approaches the female, and finally copulates with her [1].
In Sexy Plant we approximate the resulting moth movement as a vectorial combination of a gradient vector and a random vector. The magnitude of the gradient vector depends on the change in the pheromone concentration level between points separated by a differential stretch in space. More precisely, the gradient points in the direction of the greatest rate of increase of the function, and its magnitude is the slope of the graph in that direction. The random vector is constrained in this 'moth response' model by a fixed angle upper bound, assuming that the turning movement is relatively continuous. For example, one can assume that the moth cannot turn 180 degrees from one time instant to the next.
Our synthetic plants are supposed to release enough sexual pheromone so as to be able to saturate moth perception. In this sense the resulting moth vector movement will depend ultimately on the pheromone concentration levels in the field and the moth ability to follow better or worse the gradient of sex pheromone concentration.
The three classes of male moth behavior we consider are described in Table 1.
Table 1. Male moths behaviour characterization.
This ensemble of behaviors can be translated into a sum of vectors in which the random vector has constant module and changing direction within a range, whereas the module of the gradient vector is a function of the gradient in the field. The question now is how to include the saturation effect in the resulting moth shift vector. With this in mind, and focusing on the implementation process, our approach consists of the following:
To model chemoattraction, the gradient vector will always have fixed unit magnitude, and its direction is that of the greatest rate of increase of the pheromone concentration.
To model the random flight, instead of using a random direction vector with constant module, we consider a random turning angle starting from the gradient vector direction.
Thus, how do we include the saturation effect in the resulting moth shift vector? This is key to achieve sexual confusion. Our answer: the behaviour dependence on the moth saturation level --in turn related to the pheromone concentration in the field-- will be included in the random turning angle.
Table 1. Approximation of the male moths behaviour.
This random turning angle will not follow a uniform distribution, but a Poisson distribution in which the mean is zero (no angle detour from the gradient vector direction) and the standard-deviation will be inversely proportional to the intensity of the gradient of sex pheromone concentration in the field. This approach leads to 'sexual confusion' of the insect as the field homogeneity increases. This is because the direction of displacement of the moth will equal the gradient direction with certain probability which depends on how saturated it is.
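A minimal sketch of this movement rule in Python (NumPy assumed available). Here the turning angle is drawn from a zero-mean normal distribution as a stand-in for the zero-mean distribution described above, and the scaling constant k and the small eps are illustrative parameters, not values from the model:

```python
# Sketch: one moth displacement = unit gradient vector rotated by a random turning angle
# whose spread grows as the local pheromone gradient gets weaker (sexual confusion).
import numpy as np

def moth_step(grad, k=1.0, eps=1e-9, rng=np.random.default_rng()):
    gnorm = np.linalg.norm(grad)
    direction = grad / (gnorm + eps)                  # unit vector along the gradient
    sigma = k / (gnorm + eps)                         # weaker gradient -> wider turning angle
    theta = rng.normal(0.0, sigma)                    # random detour from the gradient direction
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return rot @ direction                            # displacement of unit length

print(moth_step(np.array([0.5, 0.1])))
```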
Yoshitoshi Hirooka and Masana Suwanai. Role of Insect Sex Pheromone in Mating Behavior I. Theoretical Consideration on Release and Diffusion of Sex Pheromone in the Air. J. Ethol, 4, 1986
Using a modeling environment called NetLogo, we simulate the approximate moth population behavior while the pheromone diffusion process takes place.
The Netlogo simulator can be found in its website at Northwestern University. To download the source file of our Sexy plant simulation in Netlogo click here: sexyplants.nlogo
We consider three agents: male and female moths, and sexy plants.
We have two kinds of sexual pheromone emission sources: female moths and sexyplants.
Our scenario is an open crop field where sexy plants are intercropped, and moths fly following different patterns depending on its sex.
Females, apart from emitting sexual pheromones, move following erratic random flight paths. After mating, females do not emit pheromones for a period of 2 hours.
Males also move randomly while they are below their detection threshold. But when they detect a certain pheromone concentration, they start to follow the pheromone concentration gradients until their saturation threshold is reached.
Sexy plants act as continuously- emitting sources, and their activity is regulated by a Switch.
The pheromone diffusion process is simulated in NetLogo by implementing the explicit Euler method.
Figure 1. NETLOGO Simulation environment.
When sexy plants are switched-off, males move randomly until they detect pheromone traces from females. In that case they follow them.
When sexy plants are switched-on, the pheromone starts to diffuse from them, rising up the concentration levels in the field. At first, sexy plants have the effect of acting as pheromone traps on the male moths.
Figure 2. On the left: sexy plants are switched off and a male moth follows the pheromone trace from a female. On the right: sexy plants are switched on and a male moth goes towards the static source, as happens with synthetic pheromone traps.
As the concentration rises in the field, it becomes more homogeneous. Remember that the random turning angle of the insect follows a Poisson distribution in which the standard deviation is inversely proportional to the intensity of the gradient. Thus, the probability that the insect takes a bigger detour from the gradient vector direction it faces is higher. This means that it is less able to follow pheromone concentration gradients, so sexual confusion is induced.
Figure 3. NETLOGO Simulation of the field: sexyplants, female moths, pheromone diffusion and male moths.
The parameters of this model are not as well characterized as we expected at first. Finding accurate values for these parameters is not a trivial task; in the literature it is difficult to find experimentally obtained numbers. So we decided to take a reverse-engineering approach. The parameter ranges we found in the literature are:
Diffusion coefficient
Range of physical search: 0.01-0.2 cm^2/s
References: [1], [2], [3], [5]
Release rate (female)
Range of physical search: 0.02-1 µg/h
References: [4], [5], [8]
Release rate (Sexy Plant)
The range of search that we have considered is a little wider than the one for the release rate of females.
References: Primary sex pheromone components are approximately defined as those emitted by the calling insect that are obligatory for trap catch in the field at component emission rates similar to that used by the insect [4].
Detection threshold
Range of physical search: 1000 molecules/cm^3
Saturation threshold
Range of physical search: 1-5 [Mass]/[Distance]^2
References: It generally has been found that pheromone dispensers releasing the chemicals above a certain emission rate will catch fewer males. The optimum release rate or dispenser load for trap catch varies greatly among species [4].
Moth sensitivity
This parameter refers to the capability of the insect to detect changes in pheromone concentration between the patch where it is located and the neighboring patch. When the field becomes more homogeneous, an insect with higher sensitivity is better able to follow the gradients.
Range: 0 - 10 m/s
References: [7]
The number of males and females can be selected by the observer.
One can modify the number of patches that make up the field so as to analyze one's own case. In our case we used a field of 50x50 patches.
Wilson et al., 1969; Hirooka and Suwanai, 1976.
Monchick and Mason, 1961; Lugg, 1968.
G. A. Lugg. Diffusion Coefficients of Some Organic and Other Vapors in Air.
W. L. Roelofs and R. T. Carde. Responses of Lepidoptera to Synthetic Sex Pheromone Chemicals and their Analogues, Page 386.
R.W. Mankin, K.W. Vick, M.S. Mayer, J.A. Coffelt and P.S. Callahan (1980) Models for Dispersal of Vapors in Open and Confined Spaces: Applications to Sex Pheromone Trapping in a Warehouse, pages 932, 940.
Tal Hadad, Ally Harari, Alex Liberzon, Roi Gurka (2013) On the correlation of moth flight to characteristics of a turbulent plume.
Average Weather For Valencia, Manises, Costa del Azahar, Spain.
The aim is to reduce the possibility of encounters between moths of opposite sex. Thus, we analyze the number of encounters in the following three cases:
When sexy plants are switched-off and males only interact with females.
When sexy plants are switched-on and have the effect of trapping males.
When sexy plants are switched-on and males get confused as the level of pheromone concentration is higher than their saturation threshold.
It is also interesting to analyze a fourth case: what happens if females do not emit pheromones and males just move randomly through the field? This gives an idea of the minimum number of male-female encounters that we should expect in a fully random scenario, with no pheromones at play.
Males and females move randomly. How much would our results differ from the other cases?
If Sexy Plant works, the first scenario should give a higher number of encounters than the second and third ones.
With all values fixed except the number of males and females, we started the simulations. Each test was run more than once in order to account for the stochastic nature of the process. Again, we considered different sub-scenarios for each of the cases mentioned above; in particular, we considered male and female subpopulations of equal size, or one larger than the other.
What happens when the number of females is equal to the number of males? (F=M)
T_{0} : Start
T_{1000}: Switch-ON
T_{2000}: End
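A minimal sketch of how the encounter counts can be split between the switch-off and switch-on windows of this timeline is shown below; the tick values and the assumption that encounters are logged as a list of tick numbers are ours, not part of the NetLogo model.

    def encounters_per_phase(encounter_ticks, switch_on=1000, end=2000):
        # Count male-female encounters before and after the sexy plants are switched on.
        # encounter_ticks: iterable of simulation ticks at which an encounter was logged.
        off_phase = sum(1 for t in encounter_ticks if t < switch_on)
        on_phase = sum(1 for t in encounter_ticks if switch_on <= t < end)
        return off_phase, on_phase

    # Example: encounters_per_phase([120, 450, 980, 1300, 1720]) returns (3, 2).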
The results show that the number of encounters during the time the sexy plants are switched on is almost the same as, and in most cases lower than, during the time they are switched off.
The time at which the insects start to get confused and move randomly becomes shorter as the population increases. For high numbers, males even get confused before the sexy plants are switched on, because there are so many females that they saturate the field by themselves. This rarely happens in nature, so when it occurs in our simulation we should consider that we are outside realistic scenarios and adjust the remaining parameter values. In these experiments we see that with a population of 12 we start to reach this limit (the insects get confused just as the sexy plants are about to be switched on).
Another aspect to consider is the time at which the insects get confused across experiments (when the number of females is the same). One could think that this "saturation" time depends on the number of encounters that happen before it: since females stop emitting pheromones after mating, males should get confused later if the previous number of encounters is larger. However, the results are not decisive in this matter.
Based on the results of Experiment 1, we fixed 10 as the maximum number of females for the next tests. The number of females is kept constant in each test.
It is observed that the number of encounters is higher if the number of males increases (this makes sense).
In all cases it can be deduced that as the number of males increases relative to the number of females, the time required for them to get confused is longer. This possibly has its origin in the number of encounters, which is higher according to the first point: when females are mated they stop emitting pheromones for a certain period of time, so their contribution to the field saturation decreases.
In contrast with Experiment 1, it is observed that as the number of males increases, the number of encounters is considerably higher when the sexy plants are switched off than when they are switched on. This is seen more clearly when the number of males is larger. We believe that with more experiments this effect could be confirmed.
Comparing Experiments 1 and 2
Experiment 1: F=10 M=10
In this experiment we did not see the result we are looking for. We are interested in obtaining a high proportion in the third column when the sexy plants are working. The graphs counting the number of encounters (purple for switch-off, green for switch-on) are very similar, so the effect is not achieved satisfactorily.
In this experiment we do see the result we are looking for. We are interested in obtaining a high proportion in the third column when the sexy plants are working. The graphs counting the number of encounters (purple for switch-off, green for switch-on) differ visibly, so the effect is achieved.
Females don't emit pheromones. Thus, males and females move randomly. How much would our results differ from the ones with females emitting?
In almost every case, the number of encounters is higher when females emit pheromones. This means that in our model males can follow females guided by pheromone concentration gradients; moreover, this can be seen in the interface during simulations. The results shown below for "pheromone emission" are an average over a number of experiments.
Also note the contribution of the pheromone supply to the environment, which depends on the number of females (directly related) and on the number of encounters (inversely related). For a 1-to-1 population and the given end time, no more than 2 encounters were observed, in contrast with the purely random movement, for which no encounters were observed in the range of experiments we checked.
We have used a methodology for comparing results in which experiments are repeated several times, and the interpretation of the performance is based on the values obtained. Nevertheless, a more exhaustive replication of the same realizations would give us more accurate values.
The experiments with the same number of males as females gave results we had not expected. Perhaps by changing the model parameter values one would obtain a different kind of behaviour.
Another aspect we have taken into account is that some of the encounters during the time males are following pheromone traces from females may also be due to random coincidence.
We have used a procedure that is useful for discarding scenarios and contrasting different realizations, from which logical conclusions can be derived. It is thus a way of leading a potential user of this application to widen the parameter search and improve our model, which could be useful for understanding the limitations of our system and helpful for deciding the final distribution of our synthetic plants in the field.
For a closed-closed vessel, forward diffusion produces a discontinuity in the tracer concentration at the entering boundary: C_t(0-) = C_t0 > C_t(0+).
Open sets in the real line are generally easy to deal with, while closed sets can be very complicated. The basic open (or closed) sets in the real line are the intervals, and they are certainly not complicated. An open set contains none of its boundary points, and a closed set contains all of its boundary points; a set F is called closed if its complement R \ F is open. Every finite union of closed sets is again closed, every intersection of closed sets is again closed, and every finite intersection of open sets is again open.
For a string with tension τ and linear mass density μ (mass per unit length), the wave equation is ∂_tt ψ(x,t) − (τ/μ) ∂_xx ψ(x,t) = 0. It has solutions of the form ψ(x,t) = A e^{i(±k·x ± ωt)}, where A is some amplitude and the phase speed of the wave is ω/k = √(τ/μ) ≡ C; we then want to find solutions of the form f(x − Ct).
For sound there are two major types of boundary conditions that we can impose at a boundary. At an open boundary condition the air inside the medium is simply open to the atmosphere at large, and the wave just stands in the medium, which is why we call them standing waves.
An open system is defined as one in which the mass is not fixed: mass can cross the boundary of the system, and heat energy can also be exchanged with the surroundings. This is also called a control-volume system.
A channel with an immovable bed and sides is known as a rigid boundary channel (for example, lined canals, sewers, and non-erodible unlined canals); if the channel boundary is composed of loose sedimentary particles moving under the action of flowing water, the channel is called a mobile boundary channel.
In order to solve a system of equations mathematically, boundary conditions need to be set. In ocean models, boundary conditions on open boundaries can be specified or prescribed in a number of ways, and grid points outside the physical area of the model can be used to help implement open boundary conditions. In a nested grid, values at the grid points from the larger model are used as boundary conditions at the appropriate locations in the smaller nested model. The Mediterranean Sea SWAFS domain, for example, uses a prescribed flow through the Straits of Gibraltar; this suppresses dispersion but permits advection through the open boundary. In addition to lateral (side) boundary conditions, conditions are needed at the ocean surface and bottom, which is addressed in another section.
A boundary is a real or imaginary line that separates two things: those two trees mark the boundary of our property; the river forms the country's western boundary. Rivers are common boundaries between nations, states, and smaller political units such as counties, and the Rio Grande forms a large part of the boundary between Mexico and the United States. The boundary between Northern Ireland (part of the United Kingdom) and the Republic of Ireland (an independent state) is an example of a religious boundary: the population of Northern Ireland is overwhelmingly Protestant, whereas the population of the Republic of Ireland is overwhelmingly Catholic.
For the IP address range boundary type, specify the Starting IP address and Ending IP address for the range; the range can include part of an IP subnet or multiple IP subnets, and this boundary type can be used to support a supernet. For the IPv6 prefix boundary type, you specify a prefix, for example 2001:1111:2222:3333.
The Content-Type field for multipart entities requires one parameter, "boundary", which is used to specify the encapsulation boundary; a simple example of a multipart message appears in this section, and a more complex example is given in Appendix C.
open boundary example
All of the previous sections were, in effect, based on the natural numbers: those numbers were postulated as existing, and all other properties, including other number systems, were deduced from them and a few principles of logic. We will now proceed in a similar way: first we define the basic objects we want to deal with, together with their most elementary properties, and then we develop a theory of those objects and call it topology. How complicated can an open or closed set really be? The worst-case scenario for the open sets will be given in the next result, and we will concentrate on closed sets for much of the rest of this chapter.
Open sets are the fundamental building blocks of topology. In the familiar setting of a metric space, the open sets have a natural description, which can be thought of as a generalization of an open interval on the real number line: intuitively, an open set is a set that does not contain its boundary, in the same way that the endpoints of an interval are not contained in the interval. On the other hand, if a set U does not contain any of its boundary points, that is enough to show that it is open: for every point x in U, since x is not a boundary point, there is some ball around x that is contained in U. Let S be an arbitrary set in the real line R; a point b in R is called a boundary point of S if every non-empty neighborhood of b intersects both S and the complement of S, and the set of all boundary points of S is called the boundary of S, denoted bd(S). Every non-isolated boundary point of a set is an accumulation point, and an accumulation point is never an isolated point. In mathematics, specifically in topology, the interior of a subset S of a topological space X is the union of all subsets of S that are open in X, and a point in the interior of S is an interior point of S. For any subset G of a metric space (X, d), the union G° of all open subsets of G is itself open and is clearly the largest open subset of G. Since the boundary of a set is closed, ∂∂S = ∂∂∂S for any set S; moreover ∂S ⊇ ∂∂S, with equality holding if and only if the boundary of S has no interior points, which will be the case, for example, if S is either closed or open.
The boundary of an open disk viewed as a manifold is empty, as is its topological boundary viewed as a subset of itself, while its topological boundary viewed as a subset of the real plane is the circle surrounding the disk. The simplest example is the real projective plane: mathematicians would say, "The projective plane is not a boundary". The 3-sphere, for instance, has no boundary but is the boundary of a closed ball of one higher dimension, and, incredibly, there are compact manifolds without boundary that cannot be made into the boundary of another manifold. As another example, one could try to solve the heat equation with Dirichlet conditions in the closed unit disk, that is, on D × R+; the boundary of that region, D × {0} ∪ ∂D × R+, is an "open boundary" too.
Radiative or sponge boundaries use an additional set of grid points outside the actual physical area of the model; in a sponge boundary the idea is to absorb outward propagating waves and energy rather than have them reflect back into the model domain. Open boundaries can also be set to climatological values, held constant or interpolated from, say, monthly values to the time step of the model; observations obtained on a continuing basis can be used as well, which is discussed further in the data assimilation section. Prescribed forcing can also be applied on the open boundary: PCTides, for example, varies sea level on the open boundary of the outermost domain according to tidal constants derived from a global tidal model. When using and interpreting model results, results near a boundary may be questionable; if available, use a model where the area of interest is in the interior of the model domain, well away from any boundary. Nine boundary conditions of the 3D SMART model are relevant for the concerned sea-atmosphere simulation; its horizontal open boundary condition ∂c/∂n = 0, with n ⊥ ∂Ω, defines the gradient at the boundary to be zero for both water and air.
As of 25Aug2013, FEMM includes a wizard for implementing a new open boundary method, Improvised Asymptotic Boundary Conditions (IABCs). Although this class of open boundary condition can be implemented without any special coding, the wizard automatically constructs the boundary region for you, saving time and avoiding implementation errors. The main basic boundary condition types available in OpenFOAM are summarised using a patch field: fixedValue, where the value of the field is specified by value, and fixedGradient, where the normal gradient is specified by gradient; the fixedValue condition supplies a fixed value constraint and is the base class for a number of other boundary conditions. This is not a complete list; for all types see $FOAM_SRC/finiteVolume/fields/fvPatchFields/basic. Some literature writes that the EFIE-induced surface current for an open boundary represents the vector sum of the equivalent current densities on the opposite sides of the surface, which raises the question of whether, for an open structure such as a metal patch, the current distribution obtained from MoM represents the actual current or just the equivalent one.
polyout = boundaryshape(TR) creates a polyshape object from the boundary of a 2-D triangulation, where TR can be either a triangulation object or a delaunayTriangulation object; [polyout, vertexID] = boundaryshape(TR) also returns a vector vertexID that maps the vertices of the polyshape to the vertices of the triangulation. Once you create the polyshape object, you can analyze its properties (for example, the vertices in polyout.Vertices) or plot the shape using plot(polyout).
Because boundaries can be rendered both from relations and from individual ways, tagging the ways is, in the strictest sense, optional, although source=* is always recommended. Boundary ways should have boundary=administrative and the admin_level=* of the highest border (when a country, state, and county are on the same way, the admin_level would be 2). There was a render issue (see the GitHub discussion), but this was resolved.
In geography, boundaries separate different regions of the Earth. The Mississippi River is the defining boundary between many of the states it winds through, including Iowa and Illinois, and Arkansas and Tennessee. There are hundreds of disputes between states today that derive from disagreements over shared boundaries or territory that each claims; these arise for many reasons, but the desire for territorial expansion, irredentism, an historic lack of cartographic precision, or disagreements over formal, written documents are common causes. Heavily fortified, militarized boundaries discourage the crossing of traffic, people, and/or information (the US-Mexico border is slowly becoming such a boundary), whereas an open boundary is one in which there are no establishments and one is free to move from one side to the other. Nationalism is the ideology that maintains that members of a nation should be allowed to form their own sovereign state. At a personal level, declining physical contact from a coworker is setting an important boundary, one that is just as crucial as an emotional boundary such as asking that same coworker not to make unreasonable demands on your time or emotions.
If you want to do work on a wall that is on a boundary, you will need to follow certain steps before you can do any work on it, for example giving written notice; the wall is likely to be a 'party wall' whether it is outdoors or an internal wall, and you can check whether it is a party wall on GOV.UK.
Boundary value analysis is a type of black-box or specification-based testing technique in which tests are performed using the boundary values. Example: an exam has a pass boundary at 50 percent, merit at 75 percent, and distinction at 85 percent; the valid boundary values for this scenario are 49 and 50 for pass, 74 and 75 for merit, and 84 and 85 for distinction.
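As a small illustration of boundary value analysis for the exam example above (the thresholds come from the text; the function and variable names are ours):

    def grade(score):
        # Map an exam score to a grade using the pass/merit/distinction boundaries.
        if score >= 85:
            return "distinction"
        if score >= 75:
            return "merit"
        if score >= 50:
            return "pass"
        return "fail"

    # Boundary value analysis exercises the values just below and at each boundary.
    for score in (49, 50, 74, 75, 84, 85):
        print(score, grade(score))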
Weeping Cherry Blossom, How To Draw Life Cycle Of Silkworm, What Does Inform Your Practice Mean, Kitchenaid Superba Oven Thermal Fuse, Windows Virtual Desktop 2020, Panasonic Fz300 Problems, Lulu 50% Off,
Categories365 Days of Happiness Blog
Previous PostPrevious The Overly Emotional Child – Documentary
|
CommonCrawl
|
Cases of Master Theorem
Suppose that we have $T(n)=\begin{cases} c, & \text{if } n<d\\ aT\left( \frac{n}{b} \right)+f(n), & \text{if } n \geq d \end{cases}$
The Master theorem is the following:
If $f(n)=O(n^{\log_b a- \epsilon})$, then $T(n)= \Theta(n^{\log_ba})$
If $f(n)= \Theta(n^{\log_b a} \log^k n)$, then $T(n)=\Theta(n^{\log_ba}\log^{k+1}n)$
If $f(n)=\Omega(n^{\log_b a+ \epsilon})$, then $T(n)= \Theta(f(n))$, provided $af\left(\frac{n}{b} \right) \leq \delta f(n)$ for some $\delta<1$ and all $n \geq d$.
Using iterative substitution, let us see if we can find a pattern:
$$T(n)=aT\left( \frac{n}{b}\right)+f(n)=a \left(aT \left(\frac{n}{b^2} \right) +f\left( \frac{n}{b}\right)\right)+f(n)= \dots\\= a^{\log_bn}T(1)+ \sum_{i=0}^{(\log_bn)-1} a^i f\left( \frac{n}{b^i} \right)=n^{\log_b a} T(1)+ \sum_{i=0}^{(\log_b n)-1} a^i f\left(\frac{n}{b^i} \right)$$
We then distinguish the three cases as
The first term is dominant
Each part of the summation is equally dominant
The summation is a geometric series
Don't we have to explain further which case corresponds to which of the following cases:
The first term is dominant.
Each part of the summation is equally dominant.
and justify why it is like that? How could we justify it?
proof-techniques recurrence-relation master-theorem
Mary Star
$\begingroup$ I can't tell what your question is. Are you saying you don't understand a proof you've read? Or that you don't think it's correct? You are definitely skipping a) some assumptions, b) the proof of the pattern by induction and c) applying the case assumptions to the solution for simplification. $\endgroup$ – Raphael♦ Jun 7 '15 at 12:15
$\begingroup$ I have understood the proof but I am trying to relate the last three cases of the proof with the three cases of the Master Theorem. Can we explain further that when the first term is dominant, we have the first case of the Master Theorem, and so on? @Raphael $\endgroup$ – Mary Star Jun 7 '15 at 12:30
$\begingroup$ the 3 "hints" are not (separate) cases but simultaneous properties that lead to the proof. $\endgroup$ – vzn Jun 7 '15 at 15:37
$\begingroup$ Is this the complete "proof"? It's just a proof sketch. Presumably you can make the connection to the three cases in the statement of the theorem yourself (they should be presented in the same order). $\endgroup$ – Yuval Filmus Jun 8 '15 at 0:43
The three cases correspond exactly to the three cases in the statement of the theorem. Let's consider them one by one. Suppose for simplicity that $T(1) = 1$.
Case 1. Suppose that $f(n) = n^\delta$, where $\delta < \log_b a$. Then $$ T(n) = n^{\log_b a} + \sum_{i=0}^{\log_b n-1} \left(\frac{a}{b^\delta}\right)^i n^\delta. $$ Since $\delta < \log_b a$, we have $b^\delta < a$ and so $a/b^\delta > 1$. Therefore $$ \sum_{i=0}^{\log_b n-1} \left(\frac{a}{b^\delta}\right)^i = \left(\frac{a}{b^\delta}\right)^{\log_b n} \sum_{j=1}^{\log_b n} \left(\frac{b^\delta}{a}\right)^j = \Theta(n^{\log_b a - \delta}), $$ since the geometric series $\sum_j (b^\delta/a)^j$ converges, and $\log_b (a/b^\delta) = \log_b a - \delta$. We conclude that $$ T(n) = n^{\log_b a} + \Theta(n^{\log_b a - \delta} n^\delta) = \Theta(n^{\log_b a}). $$ (So it is not quite true that the first term is dominant.)
Case 2. Suppose that $f(n) = n^{\log_b a}$ (this is the case $k = 0$; the case $k > 0$ is similar though slightly more complicated). Writing $\delta = \log_b a$, we get $$ T(n) = n^{\log_b a} + \sum_{i=0}^{\log_b n-1} \left(\frac{a}{b^\delta}\right)^i n^{\log_b a}. $$ This time $a = b^\delta$, and so each summand is equally dominant, leading to $$ T(n) = n^{\log_b a} + (\log_b n) n^{\log_b a} = \Theta(n^{\log_b a} \log n). $$
Case 3. Suppose that $f(n) = n^\delta$, where $\delta > \log_b a$. Then $$ T(n) = n^{\log_b a} + \sum_{i=0}^{\log_b n-1} \left(\frac{a}{b^\delta}\right)^i n^\delta. $$ Since $\delta > \log_b a$, we have $b^\delta > a$, and so $a/b^\delta < 1$. Therefore the geometric series $\sum_i (a/b^\delta)^i$ converges, implying that $$ T(n) = n^{\log_b a} + \Theta(n^\delta) = \Theta(n^\delta). $$
These proof sketches serve to explain the proof hints. However, they're not a complete proof, for two reasons. First, I assumed that $f$ has a specified form rather than a specified order of magnitude. Second, there is a tacit assumption that $n$ is a power of $b$. With more work we can take these points into account and prove the complete theorem. (A third and more minor point is the base case – you have assumed a base case of $1$, whereas the theorem calls for an arbitrary base case.)
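If it helps to see the three regimes concretely, here is a small numeric illustration (mine, not part of the proof): it evaluates $T(n)$ directly from the recurrence at powers of $b$ and divides by the growth order claimed by the theorem, for one example per case. The particular choices of $a$, $b$ and $f$ are just picked so that each example falls into one case; the ratios should level off at a constant.

```python
# Illustrative numeric check of the three Master-theorem regimes (not a proof).
# T(n) is evaluated directly from the recurrence at powers of b and divided by
# the order of growth the theorem predicts; the ratio should approach a constant.
import math

def T(n, a, b, f, c=1.0, d=2):
    """T(n) = c for n < d, else a*T(n/b) + f(n); n is assumed to be a power of b."""
    if n < d:
        return c
    return a * T(n // b, a, b, f, c, d) + f(n)

examples = [
    # (a, b, f, predicted order of growth, label)
    (4, 2, lambda n: n,    lambda n: n**2,            "case 1: f(n)=n,   T(n)=Theta(n^2)"),
    (2, 2, lambda n: n,    lambda n: n * math.log(n), "case 2: f(n)=n,   T(n)=Theta(n log n)"),
    (2, 2, lambda n: n**2, lambda n: n**2,            "case 3: f(n)=n^2, T(n)=Theta(n^2)"),
]

for a, b, f, pred, label in examples:
    ratios = [T(b**k, a, b, f) / pred(b**k) for k in range(5, 16)]
    print(label, " T/prediction:", [round(r, 3) for r in ratios])
```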
Yuval Filmus
|
CommonCrawl
|
BMC Medical Research Methodology
Estimation of delay to diagnosis and incidence in HIV using indirect evidence of infection dates
Oliver T. Stirrup1 ORCID: orcid.org/0000-0002-8705-3281 &
David T. Dunn1
BMC Medical Research Methodology volume 18, Article number: 65 (2018)
Minimisation of the delay to diagnosis is critical to achieving optimal outcomes for HIV patients and to limiting the potential for further onward infections. However, investigation of diagnosis delay is hampered by the fact that in most newly diagnosed patients the exact timing of infection cannot be determined and so inferences must be drawn from biomarker data.
We develop a Bayesian statistical model to evaluate delay-to-diagnosis distributions in HIV patients without known infection date, based on viral sequence genetic diversity and longitudinal viral load and CD4 count data. The delay to diagnosis is treated as a random variable for each patient and their biomarker data are modelled relative to the true time elapsed since infection, with this dependence used to obtain a posterior distribution for the delay to diagnosis. Data from a national seroconverter cohort with infection date known to within ± 6 months, linked to a database of viral sequences, are used to calibrate the model parameters. An exponential survival model is implemented that allows general inferences regarding diagnosis delay and pooling of information across groups of patients. If diagnoses are only observed within a given window period, then it is necessary to also model incidence as a function of time; we suggest a pragmatic approach to this problem when dealing with data from an established epidemic. The model developed is used to investigate delay-to-diagnosis distributions in men who have sex with men diagnosed with HIV in London in the period 2009–2013 with unknown date of infection.
Cross-validation and simulation analyses indicate that the models developed provide more accurate information regarding the timing of infection than does CD4 count-based estimation. Delay-to-diagnosis distributions were estimated in the London cohort, and substantial differences were observed according to ethnicity.
The combination of all available biomarker data with pooled estimation of the distribution of diagnosis-delays allows for more precise prediction of the true timing of infection in individual patients, and the models developed also provide useful population-level information.
The majority of patients diagnosed with type-1 human immunodeficiency virus (HIV) are not identified in the primary stage of infection [1]. Some patients undergo regular testing for HIV, and so their test history can be used to determine an interval of time within which infection must have occurred; such 'seroconverter' cohorts have been the focus of much research on disease progression from infection [2, 3]. However, for most new diagnoses patients do not have a history of regular testing and so there can be considerable uncertainty with regards to the timing of their infection. Knowledge of the delay from infection to diagnosis is critical for public health monitoring of testing strategies and for estimation of the probable number of undiagnosed infections in a given population. There has been a renewed focus on early diagnosis of HIV as a public health priority in recent years, following the reporting of randomised trials that have definitively shown a reduction in transmission [4] and improvements in clinical outcomes [5] resulting from earlier initiation of antiretroviral therapy (ART). However, there is a statistical challenge in inferring the timing of infection using only biomarker data obtained after diagnosis.
The term 'seroconversion' describes the appearance of HIV antibodies in a patient's blood, which are detected by screening tests for HIV. In a minority of patients, the timing of seroconversion can be accurately dated because they either presented with seroconversion illness or they underwent laboratory tests during the seroconversion period that definitively indicate the acute stage of infection. For 'seroprevalent' patients not diagnosed in acute infection (before or during seroconversion) and without a record of recent negative tests, the use of 'recent infection testing algorithm' (RITA) methods (based on antibody levels or affinity) can give an indication of whether they are likely to have recently contracted HIV, with cut-offs for the tests typically defined to identify infections within the previous 3–6 months [6]. However, the use of such tests is limited by imperfect performance in identifying recent infections using a fixed cut-off [7], and by the fact that they do not provide information regarding the precise timing of infections that were not 'recent'. Furthermore, when carrying out epidemiological research using observational cohorts of patients, the availability of historic information on RITA tests may be limited.
For epidemiological studies and public health monitoring, CD4+ cell count at diagnosis is the most commonly used biomarker to assess the likely delay from infection to diagnosis [8–10]. CD4+ cells are a class of white blood cell that is gradually depleted in untreated HIV+ patients, and so relatively lower values indicate a probable greater delay in diagnosis. The CD4+ cell count is an important prognostic marker, meaning that monitoring is well integrated into the national surveillance systems for a number of countries [1, 10]. The decline in CD4+ cell count in untreated HIV infection has commonly been modelled as linear, on a square-root scale, in terms of time since seroconversion, with a 'random intercepts and slopes' model used to account for inter-patient differences in the value at seroconversion and rate of decline [3]. However, CD4+ cell counts show considerable variability between individuals and over time, and it has been shown that models that also contain less deterministic stochastic process elements can provide a better fit to this biomarker in treatment-naïve HIV+ patients [11, 12]; this raises questions regarding the precision with which CD4+ cell counts alone can be used to identify the probable true date of HIV infection.
It has been reported that measures of viral genetic diversity may also provide valuable information regarding the extent of the delay from infection to diagnosis of HIV. It is thought that in most patients HIV infection results from a 'founder virus' of a single genotype [13]. HIV is known to be a very rapidly evolving pathogen and mutations occur within an untreated individual infected with the virus, leading to an increase in viral diversity over time [14]. Classical bulk sequencing used for HIV drug resistance testing does not provide full information regarding the array of viral sequences present in an individual, but Kouyos et al. [15] found that the proportion of ambiguous nucleotide calls provides a useful proxy for viral diversity and hence also acts as an indicator for the time elapsed since infection. Similar findings have been replicated in other cohorts [16, 17].
Meixenberger et al. previously evaluated the combination of data on ambiguous nucleotide calls in pol sequences with RITA immunoassay results, viral load and CD4+ cell counts in identifying patients with recent HIV infection and found no benefit in combining multiple markers in comparison to the use of their RITA immunoassay alone [18]. However, in the method of analysis used, optimal fixed cut-offs for each variable were defined in order to generate dichotomous predictions as to whether the infection of each given patient was 'recent' or not, and it may be possible to extract more useful information from the data through statistical modelling of variables on their original continuous scales.
We aimed to develop a statistical framework that would make full use of all available clinical information in estimating the delay to diagnosis, and hence the true date, of a patient's HIV infection. We derive parameter distributions using models for biomarker data that do not assume exact knowledge of a true date of infection even in seroconverters, and we generate full posterior distributions in evaluating the probable date of infection conditional on the clinical data in individual patients. Furthermore, we demonstrate a novel method to investigate the distribution of times from infection to diagnosis in any given subgroup of patients. This method involves estimation of the incidence of new infections as a function of time, building on models previously used to investigate the progression from transfusion-linked HIV infection to AIDS, with the potential for further public health applications.
We first provide a general outline of the methodology that we have developed to investigate diagnosis delays in HIV. We initially fit a model to biomarker data in terms of time since infection in a 'calibration' dataset of seroconverters in whom we have strong information regarding the date of infection; this is done in order to characterise the 'natural history' of the biomarkers in untreated patients. Using this fitted model, we can make inferences regarding the timing of infection in a seroprevalent patient given their observed biomarker data and date of diagnosis. In order to do this we also need to consider whether we can make any prior assumptions regarding the likely infection date before looking at the biomarker data, and one simple approach is to assume that the date of infection is equally likely for any point in time from the legal age of sexual consent until diagnosis (termed a 'uniform prior distribution'). However, we further develop a method that explicitly models the average diagnosis delay within a group of patients using a survival distribution. This is all done within a Bayesian framework.
The modelling approach developed is applied to data from clinical cohort studies and evaluated using simulation analyses. The established method of CD4 back-estimation of infection dates [8] is used for comparison throughout.
Biomarker models
The model for longitudinal observations of pre-treatment CD4 counts follows the structure as described by Stirrup et al. [19]. Briefly, CD4 counts are modelled on the square-root scale, using a statistical model that includes random intercept and slope components, independent measurement error terms and a fractional Brownian motion stochastic process component. An interlinked model for pre-treatment VL measurements (on log10-scale) is used based on that proposed by Pantazis et al. [20, 21]. The proportion of ambiguous nucleotide calls at first treatment-naïve viral sequence is modelled using a zero-inflated beta model. This effectively comprises a logistic regression model for the occurrence of no ambiguous calls and a model for a beta-distributed variable amongst those cases with any ambiguous calls observed.
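As a rough illustration of this structure, the sketch below simulates a single patient's square-root CD4 trajectory from a random intercept and slope, a fractional Brownian motion term and independent measurement error. All numerical values (Hurst parameter, variances, mean intercept and slope) are invented for illustration and are not the fitted parameters of the model described here.

```python
# Sketch: one patient's pre-treatment CD4 trajectory on the square-root scale,
# combining a random intercept and slope, fractional Brownian motion (fBM) and
# independent measurement error. All parameter values are illustrative only.
import numpy as np

def simulate_sqrt_cd4(times, rng, hurst=0.8, sigma_fbm=1.0, sigma_err=1.0,
                      mean_intercept=25.0, mean_slope=-1.5):
    times = np.asarray(times, dtype=float)
    intercept = mean_intercept + rng.normal(0.0, 2.0)   # patient-specific intercept
    slope = mean_slope + rng.normal(0.0, 0.5)           # patient-specific slope
    # fBM covariance: Cov(B_s, B_t) = 0.5 * (s^2H + t^2H - |s - t|^2H)
    s, t = np.meshgrid(times, times)
    cov = 0.5 * (s**(2 * hurst) + t**(2 * hurst) - np.abs(s - t)**(2 * hurst))
    cov += 1e-9 * np.eye(len(times))                    # jitter for numerical stability
    fbm = rng.multivariate_normal(np.zeros(len(times)), sigma_fbm**2 * cov)
    err = rng.normal(0.0, sigma_err, size=len(times))
    return intercept + slope * times + fbm + err

rng = np.random.default_rng(7)
t = np.array([0.0, 0.25, 0.5, 1.0, 1.5, 2.0, 3.0])      # years since infection
print(np.round(simulate_sqrt_cd4(t, rng)**2))            # back-transform to the CD4 scale
```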
CD4, VL and sequence ambiguity are all modelled in terms of the 'true time elapsed from date of infection' in each patient. For those patients in whom this is not known exactly, this variable is formed by the sum of 'time from diagnosis to observation' and an unobserved latent variable representing the delay from infection to diagnosis (denoted $\tau_i$ for the $i$th patient). For the calibration dataset $\tau_i$ is given a uniform prior distribution over an interval equal to the time between last negative and first positive HIV-1 tests in each patient, and for seroprevalent patients two different options for the prior are considered: a uniform prior or a prior implicit in a joint model for HIV incidence and delay to diagnosis.
We are interested in epidemiological analysis on a scale of months and years, and so do not distinguish between dates of infection and seroconversion. Further model and computational details are given in Additional file 1: Appendix A.
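To make the zero-inflated beta structure concrete, the sketch below writes out a log-likelihood of the kind described above. The logit-linear dependence on time since infection, the mean-precision parameterisation and all parameter names are illustrative assumptions rather than the exact specification given in Additional file 1: Appendix A.

```python
# Sketch of a zero-inflated beta log-likelihood for the proportion of ambiguous
# nucleotide calls y (in [0, 1)) observed at times-since-infection t (years).
# Links and parameterisation are illustrative, not the paper's exact model.
import numpy as np
from scipy.special import expit, gammaln

def zib_loglik(y, t, theta):
    """theta = (a0, a1, b0, b1, log_phi):
       P(y = 0)     = expit(a0 + a1*t)   (probability of no ambiguous calls)
       E[y | y > 0] = expit(b0 + b1*t)   (mean of the beta part)
       phi          = exp(log_phi)       (beta precision)"""
    a0, a1, b0, b1, log_phi = theta
    p_zero = expit(a0 + a1 * t)
    mu = expit(b0 + b1 * t)
    phi = np.exp(log_phi)
    y_safe = np.where(y > 0, y, 0.5)          # dummy value where y == 0 (branch unused there)
    beta_logpdf = (gammaln(phi) - gammaln(mu * phi) - gammaln((1 - mu) * phi)
                   + (mu * phi - 1) * np.log(y_safe)
                   + ((1 - mu) * phi - 1) * np.log1p(-y_safe))
    ll = np.where(y == 0, np.log(p_zero), np.log1p(-p_zero) + beta_logpdf)
    return ll.sum()
```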
Individual patient predictions with uniform priors
The biomarker model fitted to the calibration dataset is used to generate distributions for the delay to diagnosis in seroprevalent patients. We approximate the posterior distribution for all of the biomarker model parameters resulting from the calibration dataset using a multivariate normal distribution, and then use this as the prior for these model parameters in subsequent analyses for new patients.
When evaluating the delay to diagnosis in each individual new seroprevalent patient, we initially use a uniform prior distribution for this latent variable ($\tau_i$), defined between zero and an upper limit equal to the time elapsed between the patient's 16th birthday (or 1st Jan. 1980, whichever is later) and the date of their HIV diagnosis. The model for the observed CD4 counts, VL measurements and sequence ambiguity in each new patient is dependent on the value of $\tau_i$ as for the model fitted to the calibration dataset, although the range of possible values is wider. Information regarding the probable diagnosis delay is obtained by generating the posterior distribution of $\tau_i$ for each patient given their observed biomarker data. We employ this approach to generate predictions for one patient at a time (i.e. separate statistical models are generated and processed for each patient, although this can be run in parallel for cohorts of patients using cluster computers).
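A minimal sketch of this step is given below, assuming some biomarker log-likelihood is available as a function of the candidate delay: under a uniform prior the posterior is simply the normalised likelihood over a grid of delays. The placeholder likelihood stands in for the full CD4/VL/ambiguity model, which in practice is handled by MCMC in Stan rather than on a grid.

```python
# Sketch: grid approximation of the posterior for the delay to diagnosis under a
# uniform prior. The log-likelihood passed in is a placeholder for the full
# biomarker model (in the paper this is fitted by MCMC in Stan, not on a grid).
import numpy as np

def posterior_delay_grid(loglik_fn, max_delay, n_grid=2000):
    """loglik_fn(tau): log p(observed biomarkers | delay tau). Returns a grid of
    delays and normalised posterior weights under a Uniform(0, max_delay) prior."""
    taus = np.linspace(0.0, max_delay, n_grid)
    logp = np.array([loglik_fn(t) for t in taus])   # flat prior adds only a constant
    logp -= logp.max()                              # stabilise before exponentiating
    w = np.exp(logp)
    w /= w.sum()
    return taus, w

# Toy example with a likelihood favouring a delay of about 3 years:
taus, w = posterior_delay_grid(lambda t: -0.5 * ((t - 3.0) / 1.2) ** 2, max_delay=15.0)
post_mean = np.sum(taus * w)
ci = taus[np.searchsorted(np.cumsum(w), [0.025, 0.975])]
print(round(post_mean, 2), ci.round(2))             # posterior mean and 95% interval
```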
Survival models for delay to diagnosis
In making population-level inferences, there is a problem that some patients have little biomarker data available or have biomarker values that only provide limited information regarding the timing of infection. We address this issue through the fitting of an exponential survival model for diagnosis following HIV infection. This approach enables information to be pooled across similar patients, and also allows direct investigation of patient characteristics associated with the delay to diagnosis in cases of HIV. In these analyses the approximate multivariate normal prior distribution for biomarker parameters resulting from the calibration dataset is used as previously described, but data from the entire subgroup of interest of newly observed seroprevalent patients are combined in a single statistical model.
The event time in the survival models fitted is defined as the time from HIV infection to diagnosis, once again specified as an unobserved latent variable ($\tau_i$) with value restricted to lie between zero and an upper limit equal to the time difference between the patient's 16th birthday (or 1st Jan. 1980) and the date of their HIV diagnosis. However, the prior distribution of $\tau_i$ is implicit in a statistical model for HIV incidence and diagnosis. As when using a uniform prior distribution for $\tau_i$, for each seroprevalent patient biomarker data are modelled in terms of the true time elapsed from date of infection, and this allows a posterior distribution for the delay in diagnosis to be obtained that is conditional on this information.
We can, of course, only include patients in whom HIV has been diagnosed in the analysis, and so there is no censoring of survival times. However, for a cohort of patients diagnosed in any given calendar period there is both left and right truncation of the event times. In this setting it is also necessary to model the incidence rate of new HIV infections in the population of interest. We define the start and end of the study period as $T_L$ and $T_R$, respectively, and denote the point in calendar time of HIV infection in the $i$th patient as $t_i$. The left truncation results from the fact that any given patient can only be included in the cohort if $T_L < t_i + \tau_i$. The right truncation results from the fact that a patient will only be observed if $t_i + \tau_i < T_R$. This situation is directly analogous to the problem of estimating the distribution of incubation time from transfusion-acquired HIV infection to AIDS, an important issue at the start of the HIV epidemic, in which there was left truncation of observations due to a lack of recording of very early AIDS cases and right truncation due to the fact that transfusion events leading to HIV infection could only be identified retrospectively upon the development of AIDS [22]; we develop our model for the incidence rate of new HIV cases and the delay-to-diagnosis distribution based on the work of Medley et al. [23, 24] in this previous context, and we use notation also based on that employed by Kalbfleisch and Lawless [25].
Following Medley et al. [23, 24] and Kalbfleisch and Lawless [25], initiating events (i.e. HIV infections) occur according to a Poisson process for which the rate of new events is a function of time; in technical terms we define an intensity function for the process $h(x;\alpha)$, $x > -\infty$, where $x$ is a variable representing calendar time and the intensity function $h(x)$ is determined by parameter vector $\alpha$. We assume that the delay to diagnosis $\tau$ is independent of the time of infection $x$, with cumulative distribution function $F(\tau)$ and density function $f(\tau)=dF(\tau)/d\tau$. Medley et al. [23, 24] and Kalbfleisch and Lawless [25] considered the situation at the start of an epidemic, with observation of diagnoses at any point in time up to the end of the analysis (i.e. the period $(-\infty, T_R]$) and the first non-zero incidence at a defined point in time (set to 0). However, we are interested in modelling populations in later stages of the HIV epidemic and so only consider diagnoses occurring within a defined period $[T_L, T_R]$, without specifying a start time for the epidemic. We do not consider the possibility that a new HIV infection is never diagnosed (e.g. due to death before diagnosis), but believe that the proportion of such cases would be very small in the population of interest. The joint log-likelihood ($\ell$) function, omitting dependence on model parameters, for the incidence and observation of HIV cases is then:
$$\begin{array}{*{20}l} \ell &= \sum_{i=1}^{n} \left\{\log \left(h\left(x_{i} \right) \right) + \log \left(f\left(\tau_{i} \right) \right) \right\} - A, \\ \text{where}\quad A &= \int_{-\infty}^{T_{R}} h\left(x \right) \left\{ F\left(T_{R}-x \right) - F\left(T_{L}-x \right) \right\} dx \\ &= \int_{-\infty}^{T_{L}} h\left(x \right) \left\{ F\left(T_{R}-x \right) - F\left(T_{L}-x \right) \right\} dx + \int_{T_{L}}^{T_{R}} h \left(x \right) F\left(T_{R}-x \right) dx. \end{array} $$
This matches the form of the expression used previously [23–25], but the integral denoted A is adjusted to reflect the truncated observation window and the lack of assumptions regarding the start of the epidemic.
It is noted by Kalbfleisch and Lawless [25] that the absolute incidence of new infections can be eliminated from the joint likelihood function by conditioning on the total number of cases observed. However, it is still necessary to model the relative incidence as a function of calendar time unless constant incidence can be assumed at all points up until the end of the study period. The assumption of constant incidence might be justified for a completely stable endemic disease, but this condition is not common in the epidemiology of infectious diseases.
We are primarily interested in fitting a model for the delay-to-diagnosis distribution, but in doing so we are therefore required to model the incidence of new infections prior to and during the calendar period under investigation. Ideally the function for the incidence of new HIV cases, $h(x)$, would be chosen so as to provide a plausible representation of the entire epidemic. However, when attempting to fit models to data from patients diagnosed decades after the start of the epidemic, this is not a practical objective. Instead, we propose a pragmatic approach in which the incidence $h(x)$ is assumed to be either exponentially increasing or decreasing prior to the calendar period of interest (i.e. for $x < T_L$), and to be either constant or in a separately defined state of exponential change during the period itself (i.e. for $T_L < x < T_R$). We therefore define the incidence rate function as either:
$$\begin{array}{*{20}l} \text{1:} &h\left(x \right) = e^{\left(c + \delta_{1} \left(x \right) b\left(x-T_{L} \right) \right)}, \text{ or} \\ \text{2:} &h\left(x \right) = e^{\left(c + \delta_{1} \left(x \right) b\left(x-T_{L} \right)+ \delta_{2} \left(x \right) d\left(x-T_{L} \right) \right)}, \end{array} $$
where the function $\delta_1(x)=1$ if $x < T_L$ and 0 otherwise, $\delta_2(x)=1$ if $x > T_L$ and 0 otherwise, and $c$, $b$ and $d$ are model parameters: $\exp(c)$ is the incidence rate at $T_L$, $b$ determines the rate of decay ($b<0$) or growth ($b>0$) of incidence prior to this, and $d$ (in 'Option 2') determines the change in incidence after $T_L$. For an exponential model for the delay-to-diagnosis distribution with rate parameter $\lambda$, the integral required for the log-likelihood function can be solved analytically in each case provided $b+\lambda>0$ (results in Additional file 1: Appendix B).
The functions that we have suggested for h(x) clearly cannot provide a full description of the HIV epidemic. However, we propose that allowing for an increasing or decreasing trend in HIV incidence directly prior to the period of interest will appropriately adjust for truncation of diagnosis dates as long as the function h(x) provides an adequate description across the probable range of infection dates of the patients included in the analysis. The first option presented assumes constant incidence of new HIV infections during the observation period, which may be appropriate for short analysis windows, whilst the second option also allows a change in incidence following the start of the observation period. Further computational details are given in Additional file 1: Appendix B.
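The sketch below writes out this joint log-likelihood for incidence 'Option 1' combined with an exponential delay distribution, evaluating the truncation integral A by numerical quadrature rather than with the analytic results of Appendix B. In the model itself the infection times are latent (diagnosis date minus $\tau_i$) and everything is fitted in Stan, so this is only intended to show the structure of the expression.

```python
# Sketch of the joint log-likelihood for diagnosed cases under incidence 'Option 1'
# (exponential change before T_L, constant on [T_L, T_R]) and an exponential
# delay-to-diagnosis distribution with rate lam; requires b + lam > 0 so that the
# truncation integral A is finite. A is computed numerically here, whereas the
# paper uses the analytic form from Additional file 1: Appendix B.
import numpy as np
from scipy.integrate import quad

def joint_loglik(x, tau, c, b, lam, T_L, T_R):
    """x, tau: arrays of infection times and delays for the n diagnosed patients."""
    h = lambda t: np.exp(c + b * (t - T_L) * (t < T_L))           # incidence intensity
    F = lambda u: np.where(u > 0, 1.0 - np.exp(-lam * u), 0.0)    # delay CDF

    # Per-patient contributions: log h(x_i) + log f(tau_i), with f the Exp(lam) density
    ll = np.sum(np.log(h(np.asarray(x))) + np.log(lam) - lam * np.asarray(tau))

    # A = integral of h(x) * P(diagnosis date falls in [T_L, T_R] | infection at x)
    integrand = lambda t: h(t) * (F(T_R - t) - F(T_L - t))
    A, _ = quad(integrand, -np.inf, T_R)
    return ll - A
```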
Datasets and software used
We present analyses that make use of viral sequences of the protease and reverse transcriptase regions of the pol gene collected by the UK HIV Drug Resistance Database [26] that can be linked to pseudo-anonymised clinical records of patients enrolled in the UK Collaborative HIV Cohort (UK CHIC) [27] and UK Register of Seroconverters (UKR) cohort [2]. The statistical methodology was developed using a 'calibration' dataset comprising 1299 seroconverter patients from the UKR cohort who can be linked to a treatment-naïve partial pol sequence. All patients included from the UKR cohort have an interval between last negative and first positive HIV tests of less than 1 year, and some patients were identified during primary infection, meaning that their date of infection can be treated as fixed and known. Injecting drug users were excluded from the analysis.
The methodology developed was then applied to a seroprevalent cohort of men who have sex with men (MSM) diagnosed with HIV in London over a 5-year period spanning 2009–2013 and enrolled in the UK CHIC study. We only included men with an age of at least 18 years at time of diagnosis with a treatment-naïve partial pol sequence stored in the UK HIV Drug Resistance Database. We also excluded any men enrolled in the UKR study. This led to a sample size of 3521 patients. Pre-treatment CD4 counts and VL measurements were included in the analysis, but were not considered as part of the inclusion criteria.
We employ a fully Bayesian approach, implemented in the Stan probabilistic programming language [28]. We carried out all Bayesian modelling using a Linux cluster computer, although fitting individual models using a modern desktop computer would be feasible. The authors acknowledge the use of the UCL Legion High Performance Computing Facility (Legion@UCL), and associated support services, in the completion of this work. Maximum likelihood estimation of random effects models was performed using the lme4 package for R; these were used in the CD4 back-estimation of infection dates performed for comparison.
Cross-validation analysis
We performed a cross-validation analysis using the calibration dataset of seroconverters in order to evaluate the performance of our methodology. We split the calibration dataset into five test groups of nearly equal size (i.e. 259 or 260 patients per group) and refitted the biomarker model five times, excluding one of the test groups on each occasion. The resulting biomarker model fit was then used to generate predictions regarding the timing of HIV seroconversion in the excluded group for each iteration as if they were seroprevalent patients, i.e. disregarding any knowledge regarding the precise timing of infection or history of negative HIV tests. We initially used a uniform prior distribution for time of infection (from the patient's 16th birthday to date of HIV diagnosis) when generating predictions, and also fitted exponential survival models for the delay to diagnosis pooled across the test group in each case (without the need to account for truncation of observations). Maximum likelihood estimation of a standard 'random intercepts and slopes' model for CD4 counts was also carried out alongside Bayesian fitting of the biomarker model at each iteration, and predictions were generated in the test group using simple CD4 back-estimation as described by Rice et al. [8].
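Schematically, the cross-validation loop has the form sketched below; the two functions passed in are placeholders for the Stan model fit and the per-patient prediction step, not functions from the study's actual code.

```python
# Sketch of the five-fold cross-validation loop described above. fit_biomarker_model
# and predict_delays are placeholders (assumed to return a fitted model object and a
# dict of per-patient delay predictions, respectively).
import numpy as np

def five_fold_cv(patient_ids, fit_biomarker_model, predict_delays, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    ids = rng.permutation(patient_ids)
    folds = np.array_split(ids, 5)                  # five test groups of near-equal size
    predictions = {}
    for k, test_ids in enumerate(folds):
        train_ids = np.concatenate([f for j, f in enumerate(folds) if j != k])
        fit = fit_biomarker_model(train_ids)        # refit calibration model without fold k
        predictions.update(predict_delays(fit, test_ids))  # treat fold k as seroprevalent
    return predictions
```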
Simulation analyses
To further investigate the properties of the methodology developed, we carried out several simulation analyses. Firstly, we generated data for 2000 hypothetical patients with unknown date of HIV infection without considering the truncation of observation times. For this purpose we set distributional parameters equal to the posterior mean values obtained when our model was fitted to the calibration dataset without the inclusion of lab-specific random effects and, to further simplify matters, data were only generated for white MSM with subtype-B HIV acquired at the age of 32. The delay from infection to diagnosis was set to follow an exponential distribution with rate parameter of 0.5 (on the scale of years). Nucleotide ambiguity proportions were simulated at the time of diagnosis, and CD4 counts and VL measurements were generated at time of infection, after 1.5 months, 3 months and subsequently at 6-month intervals from 6 months to 3 years. If a negative CD4 count was generated, then this and all subsequent simulated clinic visits were censored; this meant that a few simulated patients were excluded completely and so a new patient was generated to replace them. The limit of detection for VL was set to 50 copies/mL in the simulations. We initially used a uniform prior distribution for time of infection (from the patient's 16th birthday to date of HIV diagnosis) when generating predictions, and also fitted an exponential survival model for the delay to diagnosis pooled across simulated patients. Simple CD4 back-estimation based on a fitted 'random intercepts and slopes' model was used for comparison [8].
Two additional simulation analyses were carried out with time-varying incidence and truncation of observation times. Patients were generated with characteristics, delays to diagnosis and a scheduled set of viral sequence, CD4 and VL observations as described for the simulation of observations without truncation. However, incidence was varied over a measure of calendar time, and only patients whose simulated date of diagnosis fell within a specified analysis window were selected for analysis. Models were fitted with estimation of both the rate of diagnosis parameter (λ) and the incidence rate over calendar time, allowing the latter to vary before and during the analysis window. Firstly, a simulated cohort was generated with increasing incidence from zero for 10 years prior to the analysis window, and a constant incidence rate of 200/year during the analysis window of 5 years' duration. Secondly, a simulated cohort was generated with a constant incidence rate of 300/year for 10 years, followed by a decrease to 150/year over the 5 years prior to the analysis window and a further decrease to 100/year over the 5 years of the analysis window itself.
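A stripped-down sketch of the first truncated scenario is given below. The pre-window rise in incidence is taken to be linear purely as an illustrative assumption, delays are exponential with a 2-year mean as in the earlier simulation, and biomarker generation is omitted.

```python
# Sketch of simulating a truncated cohort: incidence rises (here linearly, as an
# assumption) from zero over 10 years to 200/year, then stays constant during a
# 5-year analysis window; delays ~ Exponential(rate 0.5); only patients whose
# diagnosis date falls inside the window are retained.
import numpy as np

rng = np.random.default_rng(1)
T_L, T_R = 10.0, 15.0                          # analysis window on the study clock (years)

rate = lambda t: np.where(t < T_L, 200.0 * t / T_L, 200.0)   # infections per year
rate_max = 200.0
# Thin a homogeneous Poisson process to obtain the time-varying incidence
cand = np.sort(rng.uniform(0.0, T_R, rng.poisson(rate_max * T_R)))
infections = cand[rng.uniform(0.0, rate_max, cand.size) < rate(cand)]

delays = rng.exponential(scale=1 / 0.5, size=infections.size)  # mean 2-year delay
diagnosis = infections + delays
keep = (diagnosis > T_L) & (diagnosis < T_R)                   # truncation to the window
print(f"{keep.sum()} of {infections.size} simulated infections diagnosed in the window")
```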
Biomarker model in calibration dataset
When the biomarker model was fitted to the calibration dataset it was found that most patient and viral characteristics did not show any clear association with the proportion of ambiguous nucleotide calls on viral sequencing, with the 90% credibility intervals (CrI) for these parameters including zero, and so most of these parameters were dropped from the model to simplify it. The one exception was that male patients infected via heterosexual sex were more likely to have zero ambiguous nucleotide calls (95% CrI for the parameter on the logit scale: 0.06–1.36, in the model without lab effects), and so this effect was retained in the model reported and used for subsequent analysis.
Some substantial inter-lab differences were observed in the probability of observing a viral sequence with zero ambiguity calls, and so lab-specific random effects (as described in Additional file 1: Appendix A) are retained in all analyses. We describe the relationship between time elapsed from HIV infection to viral sequencing and the proportion of ambiguous nucleotide calls for a 'typical' lab, with the three lab-specific random effect terms set to zero. Immediately following HIV infection there is an estimated probability of zero ambiguous nucleotide calls on viral sequencing of just below 0.5, but this probability drops to close to zero for sequences obtained beyond around 5 years from the date of infection (Fig. 1a). There is a corresponding increase in the mean percentage of ambiguous nucleotide calls, amongst those patients in whom any are observed, from around 0.5% immediately following infection to around 1.2% 10 years after infection (Fig. 1b). Further details and summaries of the posterior distributions for all model parameters, including those for CD4 counts and VL, are provided in Additional file 1: Appendix C.
Viral sequence ambiguity relative to time since infection. Plot of (a) the probability of zero ambiguous calls and (b) the mean ambiguity percentage if not zero, as a function of time from true date of infection to sequencing (in years). The functions plotted are defined for men who have sex with men or heterosexual women with viral sequencing at a 'typical' lab (with lab-specific variations set to zero). Plots are displayed of the expected function value (black line) and 95% credibility interval (dashed line) over the joint posterior distribution of relevant model parameters as resulting from the calibration dataset.
Investigation of delay to diagnosis in seroprevalent cohort
The delay-to-diagnosis distribution was investigated in the cohort of 3521 seroprevalent MSM. We first generated predictions for the delay to diagnosis of each individual patient using a uniform prior (in combination with the biomarker model). The overall mean of the posterior expectation in each patient was 4.12 years, and divided by ethnicity it was 3.99 years for white (n=2577), 4.58 years for black (n=239) and 4.46 years for mixed/other/unknown (n=705) patients. For those patients in whom at least one CD4 count was available, the overall mean diagnosis delay estimated by CD4 back-estimation was 2.87 years (n=3414), and divided by ethnicity it was 2.59 years for white (n=2501), 3.97 years for black (n=233) and 3.56 years for mixed/other/unknown (n=680) patients.
The use of 'incidence and delay-to-diagnosis' models fitted to the entire dataset of seroprevalent patients revealed a similar pattern of differences between subgroups defined by ethnicity as found using uniform priors or CD4 back-estimation, but the estimates of average delay were consistently lower (Table 1). When a constant incidence of HIV was assumed during the window period and a single delay-to-diagnosis distribution was fitted across all ethnic groups (Fig. 2a), the posterior mean estimate of average time (1/λ) from infection to diagnosis was 1.82 years (95% CrI 1.64–2.04 years), and allowing the incidence of HIV to change during the window period led to only a small change in the estimate to 1.77 years (95% CrI 1.59–1.96 years) even though a change in incidence was found during this period (Fig. 2b). The second model was further extended to allow differences in the delay-to-diagnosis distribution according to ethnicity, and patients of black (2.91, 95% CrI 1.92–4.76 years) or other (2.68, 95% CrI 2.04–3.45 years) ethnicity were found to have substantially higher average time-to-diagnosis than white patients (1.57, 95% CrI 1.41–1.75 years). The ethnic classifications of patients also showed differences in incidence trends over time (Fig. 2c). Further computational details and examples of predictions for the date of infection in individual patients are presented in Additional file 1: Appendix D.
Plots of estimated incidence rates (/year) of new HIV infections. Results from models fitted to the cohort of 3521 seroprevalent men who have sex with men. In (a) the incidence of new HIV infections is assumed to be constant in the window period, whereas in (b) and (c) it is allowed to vary. In (a) and (b), a single delay-to-diagnosis distribution is fitted across all patients, whereas in (c) it is split according to white (black line), black (blue line) and other (red line) ethnic classifications. 95% credibility intervals are shown as dotted lines.
Table 1 Estimates of mean diagnosis delay (years) in the cohort of 3521 seroprevalent men who have sex with men diagnosed in London in the period 2009–2013 using the models developed in this paper and by CD4 back-estimation
Results of cross-validation analysis
The cross-validation analysis made use of the calibration dataset of seroconverters and so the maximum possible true diagnosis delay in these patients is 1 year. However, when our methodology was used with a uniform prior distribution for the delay to diagnosis of each patient the mean estimated diagnosis delay was 2.45 years (taking the mean of posterior expectations) and the interquartile range was 1.16–3.26 years (Fig. 3a). This was worse than the performance of simple CD4 back-estimation for which the mean estimated delay to diagnosis was 1.71 years with an interquartile range of 0.00–2.90 years (Fig. 3c). Plots are presented of the estimated diagnosis delay against patient age at diagnosis because the time period from start of sexual activity to date of diagnosis represents the maximum possible delay from infection to diagnosis, and so there is the potential for greater delays amongst patients who are older at diagnosis.
Plot of predictions of delay to diagnosis resulting from the cross-validation analysis. Results are shown in relation to patient age at diagnosis, and are presented using our methodology (full biomarker (bio.) model; for which ∙ is the posterior expectation) with (a) uniform priors or (b) a pooled exponential survival (surv.) model in each test group, and (c) using standard CD4 back-estimation. The diagonal black line shows the 'expected' diagnosis delay for a uniform prior distribution from the age of 16 to the date of diagnosis in each patient. LOESS regression curves are also shown (blue line) with 95% CI (shaded grey). The maximum true diagnosis delay in these patients is 1 year.
When an exponential survival model was used for the delay to diagnosis, individual patient estimates for the diagnosis delay were appropriately corrected into the range 0–1 years (Fig. 3b). This performance was mediated by high values for the posterior distribution of the rate of diagnosis parameter (λ), with posterior mean ranging from 9.2–12.6 across the test groups (corresponding to a mean delay to diagnosis of 4–6 weeks). The mean estimated diagnosis delay was 0.01 years and the interquartile range was 0.07–0.12 years; average delays around this level are not likely to be observed in patients with unknown date of infection, but these results demonstrate how the methodology developed can successfully pool information across a group of patients.
Results of simulation analyses
Without truncation of observation times
The results of the simulation analysis for 2000 patients without truncation of observation times are summarised in Fig. 4 and Table 2. The use of our methodology with a correctly defined exponential survival model showed the best accuracy for individual patient predictions for the delay to diagnosis: taking the posterior mean for each individual as a point estimate gave a mean absolute error of 1.04 years and a mean squared error of 2.15, with values of 2.22 years and 8.39 for the use of our methodology with uniform priors and 2.12 years and 8.40 for CD4 back-estimation.
Predictions of delay to diagnosis (Dx) against true delay from the simulation analysis without truncation. Results are presented using our methodology (for which ∙ is the posterior expectation with 95% credibility interval in grey) with (a) uniform priors or (b) a pooled exponential survival (surv.) model, and (c) using standard CD4 back-estimation. The diagonal green line shows the line of equality for perfect predictions in each patient. LOESS regression curves are also shown (blue line) with 95% CI (shaded grey).
Table 2 Summary of accuracy of delay-to-diagnosis predictions for the simulation analysis of 2000 patients without truncation of observation times. Our methodology was applied using either uniform priors or an exponential survival model for diagnosis delays, and CD4 back-estimation was used for comparison
For the model with exponential survival distribution individual patient predictions show 'shrinkage' towards the population average of 2 years, explaining the consistent overestimation for smaller true delays and underestimation for larger true delays, but the mean bias was very close to zero (-0.02 years). However, consistent overestimation was observed when uniform priors were used (reflecting a larger prior expected value for the delay), with a mean bias of 1.97 years. Furthermore, the posterior 95% CrIs for the delay to diagnosis in each patient showed correct coverage when the exponential model was used (94.1%) but not when uniform priors were used (89.5%). Whilst ad hoc procedures have been proposed, simple CD4 back-estimation does not incorporate a coherent method for generating confidence or credibility intervals, but the mean bias was 0.68 years reflecting an overestimation of the average diagnosis delay. The exponential survival model recovered an appropriate estimate and CrI for the rate of HIV diagnosis following infection (posterior expectation: $\hat{\lambda}=0.52$, 95% CrI 0.48–0.57; true value = 0.5).
This simulation analysis shows that our method for estimating the delay-to-diagnosis distribution across a group of patients can lead to smaller prediction errors on a per patient basis and more accurate group-level inferences than the use of CD4 back-estimation. When our methodology was used with uniform priors for the delay-to-diagnosis distribution in each patient the performance was poor both at the individual-patient and group level, despite the use of all available pre-treatment biomarker information within the model; this shows that the use of uniform priors in the analysis of diagnosis delays can lead to very inaccurate inferences. Results have been described for a single simulated cohort for simplicity of presentation, but this simulation analysis was repeated 100 times to confirm the performance of the exponential survival model (Additional file 1: Appendix E).
With truncation of observation times
For the first simulated cohort with a truncated observation window, generated with increasing and then constant incidence, there was a total of 877 patients diagnosed during the analysis period. The incidence of new infections during the window period was estimated correctly, and the estimated trend prior to the analysis window also reflected that used to generate the data (Fig. 5). The 95% CrI for the posterior distribution for the rate of diagnosis parameter included the true value (0.42–0.64, posterior expectation =0.53; true value =0.5). When a delay-to-diagnosis model was fitted without accounting for any potential change in incidence, a posterior expectation of 0.59 (95% CrI 0.52–0.67) was obtained for this parameter. Code to generate an equivalent dataset and refit the 'incidence and delay-to-diagnosis' model is provided online (Additional files 2, 3, 4, and 5).
Plot of estimated incidence rate (/year) of new infections for first simulation with analysis window. The estimated incidence rate (/year) of new HIV infections (black line) is plotted with 95% credibility interval (dotted line) for simulation with increasing incidence prior to and constant incidence during the analysis window. Horizontal grey lines show the true incidence rate used to generate the data, and vertical grey lines show the limits of the analysis window
For the second simulated cohort with a truncated observation window, generated with decreasing incidence, there was a total of 721 patients diagnosed during the analysis period. The incidence of new infections during the window period was estimated correctly (Fig. 6). The incidence prior to the window period was not captured perfectly due to the constraints of the model used, but the trend estimated by the posterior mean of the model parameters did reflect that used to generate the data (with the 95% CrI indicating considerable uncertainty). The 95% CrI for the posterior distribution for the rate of diagnosis parameter included the true value (0.39–0.55, posterior expectation =0.47; true value =0.5). When a delay-to-diagnosis model was fitted without accounting for any potential change in incidence, a posterior expectation of 0.44 (95% CrI 0.39–0.48) was obtained for this parameter. These simulations demonstrate that our methodology can correctly identify trends in incidence, assuming that the model is appropriate to the data, and that it can be used to estimate the average diagnosis delay with adjustment for changes in incidence.
Plot of estimated incidence rate (/year) of new infections for second simulation with analysis window. The estimated incidence rate (/year) of new HIV infections (black line) is plotted with 95% credibility interval (dotted line) for simulation with decreasing incidence prior to and during the analysis window. Horizontal grey lines show the true incidence rate used to generate the data, and vertical grey lines show the limits of the analysis window
In this paper we have developed novel statistical methodology to derive probability distributions for the date of HIV infection in individual patients and to investigate the characteristics of delay-to-diagnosis distributions within a population of interest. The use of a fully Bayesian framework for statistical modelling allows the combination of multiple sources of available information and also means that uncertainty in parameter estimates can be incorporated in all stages of the analysis, without the need for bootstrapping to generate credible or confidence intervals. We have included viral nucleotide ambiguity, CD4 counts and VL measurements in the models developed, but the framework could also be readily extended to incorporate other biomarkers where available. The information that can be gained through our methodology is of direct use for public health monitoring and planning, and it may also provide a useful contribution to research into HIV transmission networks and dynamics.
In demonstrating our methods we have investigated how diagnosis delays vary with patient ethnicity among MSM in London, finding substantially greater delays to diagnosis in non-white individuals. This finding is consistent with those reported based on a crude definition of late diagnosis as a CD4 count <350 within 3 months of diagnosis [29], and a similar pattern of differences was observed when we used CD4 back-estimation for comparison in our analysis. However, the average diagnosis delay for all groups was found to be lower when it was estimated using a survival model pooled across patients. Explicit estimation of the diagnosis delay distribution in subgroups of interest could be very useful for public health monitoring and in the planning of interventions such as targeted outreach testing. We should note that we have analysed a selected cohort with inclusion conditional on enrolment into the UK CHIC study and availability of a treatment-naïve viral sequence, and so the findings that we have observed cannot be used for any specific public health conclusions without further research.
There are some conceptual similarities between our approach and that developed by Sommen et al. [30] using immunological markers for recent HIV infection. Sommen et al. [30] fitted longitudinal models for HIV antigen biomarkers by maximum likelihood estimation with integration over the known possible interval of infection times in a cohort of seroconverters, and the parameter estimates obtained were then used to derive posterior distributions for the true time of infection in seroprevalent patients given their observed biomarker data using an exponential prior distribution for the delay to diagnosis; however, the parameter determining the shape of the exponential prior distribution was treated as fixed and known. Romero-Severson et al. [31] further developed the approach of Sommen et al. [30], and used a prior distribution fitted to point estimates of delay to diagnosis based on BED-enzyme immunoassay results. We have taken this approach a step further and have estimated the characteristics of the delay distribution using data from seroprevalent patients, without using intermediate point estimates, through the use of a fully Bayesian modelling framework. We could not directly apply the models developed by Sommen et al. [30] and Romero-Severson et al. [31] to our cohort, as we did not have immunoassay data available for the patients.
Both Sommen et al. [30] and Romero-Severson et al. [31] derived posterior distributions for the date of HIV infection in individual patients as a step towards generating incidence estimates within a population of interest. In the present work we have shown that modelling of incidence is required in order to appropriately estimate the delay-to-diagnosis distribution across a subgroup of patients, unless constant incidence can be assumed. The estimates of incidence resulting from our study relate to a highly selected group of patients and so are not of direct interest with regards to public health planning. However, it would be possible to apply our methodology to an unselected cohort of patients, even if some individuals have very little or no biomarker data available, in order to estimate the total incidence in a population of interest.
Several established methodological approaches to the estimation of HIV incidence from surveillance data have been developed from a Bayesian multi-state model proposed by Sweeting et al. [32], with Birrell et al. [33] and van Sighem et al. [9] providing examples of the application of such models to national cohorts. In these analyses, the rate of diagnosis following HIV infection is estimated, but data are modelled in terms of discrete time points and discrete disease stages are defined in terms of CD4 count. The models that we have developed are defined on the original continuous time scale, allowing more detailed analysis of the delay from infection to diagnosis in individual patients or across a population. Our models may allow changes in incidence to be quantified over a smaller time period, given the use of all available biomarker data in continuous form, but this requires further investigation.
There has been much interest recently in the use of phylogenetic analysis to investigate HIV transmission networks and dynamics, which requires the direction of infection in transmission pairs to be identified. For example, Ratmann et al. [34] conducted an analysis of Dutch MSM and estimated that in over 70% of infections in their selected cohort, transmission had occurred from an undiagnosed man, indicating a need for more targeted testing for HIV. Ratmann et al. used CD4 back-estimation to derive infection times in assessing direction of transmission, which could be further refined using our methodology; given the clinical and viral sequence data of a transmission pair, it would be possible to explicitly derive the probabilities for which patient was the first to be infected, and of onward transmission having occurred within a defined period of time following the initial infection. Another potential application for our methods would be to estimate the probability of infection having been acquired in another country among patients in migrant populations, as averaging over patient-specific probabilities might give different results to the use of point estimates in individual patients as has previously been performed [8].
A limitation of the present study is that RITA/antigen biomarkers were not included in the analysis. This was because of only very limited available data in the calibration dataset. However, as we have noted, it would be straightforward to incorporate such data into the framework developed. RITA/antigen biomarkers alone are generally only used to provide a dichotomous classification into recent or non-recent infection, but in combination with other clinical and genetic data it would be possible to further refine the interval within which infection is likely to have occurred in any given seroprevalent patient.
Another limitation of this study is that we only considered classical bulk Sanger sequencing of a limited segment of the viral genome, and did not include any data from 'next generation sequencing' (NGS) methods. This was due to the fact that historical data were used for the analysis, although it is worth noting that, in the UK, NGS has still only been introduced for clinical use in HIV in a few centres. NGS techniques can provide complete sequencing of the viral genome and also enable more in-depth assessment of viral diversity than classical Sanger sequencing, allowing phylogenetic techniques to be employed to reconstruct viral evolution from one or more founder viruses, which in turn can be used to predict the date of infection [35]. These techniques would, however, require greater computational resources for the processing of sequence data for each patient, and so measures of ambiguous nucleotide proportions may still be useful for carrying out population-level analyses.
The modelling strategy developed in this paper builds on prior work but includes a novel combination of features within a single coherent framework. We have developed this approach with the aim of making full use of all available relevant data in assessing timing of infection in seroprevalent HIV patients, whilst appropriately incorporating and quantifying uncertainty in both model parameters and true dates of HIV infection at each stage of the analysis. Cross-validation and simulation analyses indicate that the models developed provide more accurate information regarding the timing of infection than does CD4 count-based estimation, and they also provide useful population-level information. The focus of the present paper is on investigation of delays to diagnosis, but we plan to further develop the application of our framework for incidence estimation.
AIDS: Acquired immune deficiency syndrome
ART: Antiretroviral therapy
CrI: Credibility interval
HIV: Human immunodeficiency virus
MSM: Men who have sex with men
NGS: Next generation sequencing
RITA: Recent infection testing algorithm
UCL: University College London
UK CHIC: UK Collaborative HIV Cohort
UKR: UK Register of Seroconverters
VL: Viral load
Rice BD, Yin Z, Brown AE, Croxford S, Conti S, De Angelis D, Delpech VC. Monitoring of the HIV epidemic using routinely collected data: The case of the United Kingdom. AIDS Behav. 2017; 21:83–90.
UK Register of HIV Seroconverters Steering Committee. The AIDS incubation period in the UK estimated from a national register of HIV seroconverters. AIDS. 1998; 12:659–67.
Touloumi G, Pantazis N, Pillay D, Paraskevis D, Chaix M-L, Bucher HC, Kücherer C, Zangerle R, Kran A-MB, Porter K. Impact of HIV-1 subtype on CD4 count at HIV seroconversion, rate of decline, and viral load set point in European seroconverter cohorts. Clin Infect Dis. 2013; 56:888–97.
Cohen MS, Chen YQ, McCauley M, Gamble T, Hosseinipour MC, Kumarasamy N, Hakim JG, Kumwenda J, Grinsztejn B, Pilotto JHS, Godbole SV, Mehendale S, Chariyalertsak S, Santos BR, Mayer KH, Hoffman IF, Eshleman SH, Piwowar-Manning E, Wang L, Makhema J, Mills LA, de Bruyn G, Sanne I, Eron J, Gallant J, Havlir D, Swindells S, Ribaudo H, Elharrar V, Burns D, Taha TE, Nielsen-Saines K, Celentano D, Essex M, Fleming TR. Prevention of HIV-1 infection with early antiretroviral therapy. N Engl J Med. 2011; 365:493–505.
INSIGHT START Study Group. Initiation of antiretroviral therapy in early asymptomatic HIV infection. N Engl J Med. 2015; 373:795–807.
Carlin E, Taha Y. Using recent infection testing algorithm tests in clinical practice. Sex Transm Infect. 2012; 88:304–6.
Guy R, Gold J, Calleja JMG, Kim AA, Parekh B, Busch M, Rehle T, Hargrove J, Remis RS, Kaldor JM. Accuracy of serological assays for detection of recent infection with HIV and estimation of population incidence: a systematic review. Lancet Infect Dis. 2009; 9:747–59.
Rice BD, Elford J, Yin Z, Delpech VC. A new method to assign country of HIV infection among heterosexuals born abroad and diagnosed with HIV. AIDS. 2012; 26:1961–6.
van Sighem A, Nakagawa F, De Angelis D, Quinten C, Bezemer D, Op de Coul E, Egger M, de Wolf F, Fraser C, Phillips A. Estimating HIV incidence, time to diagnosis, and the undiagnosed HIV epidemic using routine surveillance data. Epidemiol. 2015; 26:653–60.
Song R, Hall HI, Green TA, Szwarcwald CL, Pantazis N. Using CD4 data to estimate HIV incidence, prevalence, and percent of undiagnosed infections in the United States. JAIDS J Acquir Immune Defic Syndr. 2017; 74:3–9.
Taylor JMG, Cumberland WG, Sy JP. A stochastic model for analysis of longitudinal AIDS data. J Am Stat Assoc. 1994; 89:727–36.
Stirrup OT, Babiker AG, Carpenter JR, Copas AJ. Fractional Brownian motion and multivariate-t models for longitudinal biomedical data, with application to CD4 counts in HIV-patients. Stat Med. 2016; 35:1514–32.
Keele BF, Giorgi EE, Salazar-Gonzalez JF, Decker JM, Pham KT, Salazar MG, Sun C, Grayson T, Wang S, Li H, Wei X, Jiang C, Kirchherr JL, Gao F, Anderson JA, Ping L-H, Swanstrom R, Tomaras GD, Blattner WA, Goepfert PA, Kilby JM, Saag MS, Delwart EL, Busch MP, Cohen MS, Montefiori DC, Haynes BF, Gaschen B, Athreya GS, Lee HY, Wood N, Seoighe C, Perelson AS, Bhattacharya T, Korber BT, Hahn BH, Shaw GM. Identification and characterization of transmitted and early founder virus envelopes in primary HIV-1 infection. Proc Natl Acad Sci. 2008; 105:7552–7.
Shankarappa RAJ, Margolick JB, Gange SJ, Rodrigo AG, Upchurch D, Farzadegan H, Gupta P, Rinaldo CR, Learn GH, He XI, Huang X-L, Mullins JI. Consistent viral evolutionary changes associated with the progression of human immunodeficiency virus type 1 infection. J Virol. 1999; 73:10489–502.
Kouyos RD, von Wyl V, Yerly S, Böni J, Rieder P, Joos B, Taffé P, Shah C, Bürgisser P, Klimkait T, Weber R, Hirschel B, Cavassini M, Rauch A, Battegay M, Vernazza PL, Bernasconi E, Ledergerber B, Bonhoeffer S, Günthard HF. Ambiguous nucleotide calls from population-based sequencing of HIV-1 are a marker for viral diversity and the age of infection. Clin Infect Dis. 2011; 52:532.
Ragonnet-Cronin M, Aris-Brosou S, Joanisse I, Merks H, Vallé D, Caminiti K, Rekart M, Krajden M, Cook D, Kim J, Malloch L, Sandstrom P, Brooks J. Genetic diversity as a marker for timing infection in HIV-infected patients: evaluation of a 6-month window and comparison with BED. J Infect Dis. 2012; 206:756–64.
Andersson E, Shao W, Bontell I, Cham F, Wondwossen A, Morris L, Hunt G, Sönnerborg A, Bertagnolio S, Maldarelli F, Jordan MR. Evaluation of sequence ambiguities of the HIV-1 pol gene as a method to identify recent HIV-1 infection in transmitted drug resistance surveys. Infect Genet Evol. 2013; 18:125–31.
Meixenberger K, Hauser A, Jansen K, Yousef KP, Fiedler S, von Kleist M, Norley S, Somogyi S, Hamouda O, Bannert N, Bartmeyer B, Kücherera C. Assessment of ambiguous base calls in HIV-1 pol population sequences as a biomarker for identification of recent infections in HIV-1 incidence studies. J Clin Microbiol. 2014; 52:2977–83.
Stirrup OT, Babiker AG, Copas AJ. Combined models for pre- and post-treatment longitudinal biomarker data: an application to CD4 counts in HIV-patients. BMC Med Res Methodol. 2016; 16:121.
Pantazis N, Touloumi G, Walker AS, Babiker AG. J R Stat Soc: Ser C (Applied Statistics). 2005; 54:405–23.
Stirrup O, Copas A, Phillips A, Gill M, Geskus R, Touloumi G, Young J, Bucher H, Babiker A. Predictors of CD4 cell recovery following initiation of antiretroviral therapy among HIV-1 positive patients with well-estimated dates of seroconversion. HIV Med. 2018; 19:184–94.
Lui K-J, Lawrence DN, Morgan WM, Peterman TA, Haverkos HW, Bregman DJ. A model-based approach for estimating the mean incubation period of transfusion-associated acquired immunodeficiency syndrome. Proc Natl Acad Sci. 1986; 83:3051–5.
Medley GF, Anderson RM, Cox DR, Billard L. Incubation period of AIDS in patients infected via blood transfusion. Nature. 1987; 328:719–21.
Medley GF, Billard L, Cox DR, Anderson RM. The distribution of the incubation period for the acquired immunodeficiency syndrome (AIDS). Proc R Soc Lond B Biol Sci. 1988; 233:367–77.
Kalbfleisch JD, Lawless JF. Inference based on retrospective ascertainment: an analysis of the data on transfusion-related AIDS. J Am Stat Assoc. 1989; 84:360–72.
Cane P, Chrystie I, Dunn D, Evans B, Geretti AM, Green H, Phillips A, Pillay D, Porter K, Pozniak A, Sabin C, Smit E, Weber J, Zuckerman M. Time trends in primary resistance to HIV drugs in the United Kingdom: multicentre observational study. BMJ. 2005; 331:1368–8.
The UK Collaborative HIV Cohort Steering Committee. The creation of a large UK-based multicentre cohort of HIV-infected individuals: The UK Collaborative HIV Cohort (UK CHIC) Study. HIV Medicine. 2004; 5:115–24.
Carpenter B, Lee D, Brubaker MA, Riddell A, Gelman A, Goodrich B, Guo J, Hoffman M, Betancourt M, Li P. Stan: A Probabilistic Programming Language. J Stat Softw. 2017; 76(1):1–32.
Public Health England. HIV in the UK: 2016 report. [Online; accessed 30 May 2017].
Sommen C, Commenges D, Vu SL, Meyer L, Alioum A. Estimation of the distribution of infection times using longitudinal serological markers of HIV: implications for the estimation of HIV incidence. Biometrics. 2011; 67:467–75.
Romero-Severson EO, Petrie CL, Ionides E, Albert J, Leitner T. Trends of HIV-1 incidence with credible intervals in Sweden 2002–09 reconstructed using a dynamic model of within-patient IgG growth. Int J Epidemiol. 2015; 44:998–1006.
Sweeting MJ, De Angelis D, Aalen OO. Bayesian back-calculation using a multi-state model with application to HIV. Stat Med. 2005; 24:3991–4007.
Birrell PJ, Gill ON, Delpech VC, Brown AE, Desai S, Chadborn TR, Rice BD, De Angelis D. HIV incidence in men who have sex with men in England and Wales 2001–10: a nationwide population study. Lancet Infect Dis. 2013; 13:313–8.
Ratmann O, van Sighem A, Bezemer D, Gavryushkina A, Jurriaans S, Wensing A, de Wolf F, Reiss P, Fraser C. Sources of HIV infection among men having sex with men and implications for prevention. Sci Transl Med. 2016; 8:320–2.
Lee HY, Giorgi EE, Keele BF, Gaschen B, Athreya GS, Salazar-Gonzalez JF, Pham KT, Goepfert PA, Kilby JM, Saag MS, Delwart EL, Busch MP, Hahn BH, Shaw GM, Korber BT, Bhattacharya T, Perelson AS. Modeling sequence evolution in acute HIV-1 infection. J Theor Biol. 2009; 261:341–60.
OTS made use of notes provided by David Dolling in developing the zero-inflated beta model for nucleotide ambiguity, and used Stata code written by Ellen White to process sequence data and obtain ambiguity proportions. Andrew Copas provided comments during the revision process. We thank the clinicians and technical staff who have contributed to the databases used in this analysis.
This work was supported by the UK Medical Research Council (Award Number 164587).
Access to the UK HIV Drug Resistance Database and UK CHIC data requires submission of a project proposal to the steering committee. Further information can be found at 'www.hivrdb.org.uk/' and 'www.ukchic.org.uk/'.
Centre for Clinical Research in Infection and Sexual Health, Institute for Global Health, University College London, Gower Street, London, WC1E 6BT, UK
Oliver T. Stirrup & David T. Dunn
OTS and DTD developed the modelling strategy presented. OTS coded the analysis and drafted the initial manuscript. DTD and OTS both made revisions to the manuscript. All authors read and approved the final manuscript.
Correspondence to Oliver T. Stirrup.
The UK Register of HIV Seroconverters study has research ethics approval (Research Ethics Committee West Midlands - South Birmingham: 04/Q2707/155). The UK Collaborative HIV Cohort Study (West Midlands – Edgbaston: 00/7/047) and the UK HIV Drug Resistance Database (London - Central: 01/2/010) have separate MREC approvals, which waived the requirement for individual patient consent.
Additional file 1
Supplementary Appendices. Contains further details of model specifications, computational notes, parameter summaries in the calibration dataset and examples of predictions in individual patients. (PDF 206 kb)
R script (viewable as plain text) to read in parameter estimates for the calibration dataset and to then simulate a cohort of patients with increasing incidence of HIV and a limited observation window for diagnoses. The same R script fits an 'incidence and delay-to-diagnosis' model to the generated data using a Stan model template file provided. (R 17 kb)
Comma-separated value file containing posterior mean values for parameters of model fitted to the calibration dataset (without accounting for inter-lab variation in nucleotide ambiguity proportions). (CSV 4 kb)
Comma-separated value file containing posterior covariance matrix for parameters of model fitted to the calibration dataset (without accounting for inter-lab variation in nucleotide ambiguity proportions). (CSV 52 kb)
Stan model template file (which is annotated and can be viewed as plain text) to fit the 'incidence and delay-to-diagnosis' model with change in incidence prior to and during observation window. (STAN 17 kb)
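As a rough illustration of how the posterior mean and covariance files above could be used outside the provided R/Stan workflow, the sketch below draws parameter vectors from a multivariate-normal approximation to the posterior. This is a minimal example under stated assumptions (purely numeric CSVs with no header row; hypothetical file names), not the authors' own code.

```python
# Minimal sketch: propagate posterior uncertainty via a multivariate-normal
# approximation built from the posterior mean and covariance summaries.
# File names are hypothetical; a header row would require skiprows=1.
import numpy as np

mean = np.loadtxt("posterior_mean.csv", delimiter=",")       # vector of posterior means
cov = np.loadtxt("posterior_covariance.csv", delimiter=",")  # posterior covariance matrix

rng = np.random.default_rng(seed=1)
draws = rng.multivariate_normal(mean, cov, size=1000)        # 1000 plausible parameter vectors

# Each row of `draws` can then drive a simulated cohort, so that downstream
# summaries average over parameter uncertainty rather than conditioning on a
# single point estimate.
print(draws.shape)
```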
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Stirrup, O.T., Dunn, D.T. Estimation of delay to diagnosis and incidence in HIV using indirect evidence of infection dates. BMC Med Res Methodol 18, 65 (2018) doi:10.1186/s12874-018-0522-x
DOI: https://doi.org/10.1186/s12874-018-0522-x
Diagnosis delay
Incidence estimation
Latent variables
Data analysis, statistics and modelling
nitrogen Fertilizer sand OM246 ClipVol
A pound of nitrogen. That's a substantial amount of N, right? Apply a pound of N in a single application and that's going to create a lot of growth—way too much growth in most cases—and may lead to some thatch production. Certainly the application of that much N is going to require some grooming and verticutting and sand topdressing in order to keep the turf from getting too puffy.
By a pound of nitrogen, I mean 1 pound of N applied per 1000 ft². That is equivalent to 5 g N/m², or 50 kg N/ha. Two-tenths of a pound of nitrogen per 1000 ft² is 1 g/m² or 10 kg/ha.
But two tenths of a pound—1 gram—that much N doesn't really register, does it? The average amount of N applied to putting greens in the United States, across all regions of the country, was 3.1 lb/1000 ft²/yr based on a 2021 survey.1 That's about 16 g/m². What if this was 2.9 lb (15 g), or 3.3 lb (17 g)? That doesn't seem like such a big difference, that little bit of N, that change of $\frac{2}{10}$ of a pound.
We would all agree that a difference of 1 pound would be meaningful. The thing is, if you change the N by $\frac{2}{10}$ of a pound per year, then after five years, it's a pound.
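Here's a quick back-of-the-envelope version of that arithmetic in Python. The conversion factor is the exact one (above I round it to 5); everything else just restates the numbers in this post.

```python
# Converting N rates and accumulating small annual differences.
LB_PER_1000FT2_TO_G_PER_M2 = 453.592 / 92.903   # exact factor, ~4.88 (rounded to 5 above)

annual_rate_lb = 3.1                             # average putting-green N, lb/1000 ft2/yr
print(annual_rate_lb * LB_PER_1000FT2_TO_G_PER_M2)   # ~15 g/m2/yr (about 16 with the rounded factor)

extra_per_year_lb = 0.2                          # "two-tenths of a pound" more each year
years = 5
print(extra_per_year_lb * years)                 # 1.0 lb/1000 ft2 accumulated over 5 years
```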
Adding even a little more N than needed will lead to more growth, and that growth produces aboveground and belowground plant material. The cumulative effect of small excesses of N must be a reason for some thatch production.
From a pound of N, if that all got taken up with 100% efficiency by creeping bentgrass, I'd expect that much N to produce a clipping volume of about 2.1 L/m². For reference, that's about an entire growing season's worth of above ground growth in a place like Chicago.
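And that clipping-volume figure can be re-expressed as a per-gram ratio and scaled to other amounts of extra N. This is just a restatement of the numbers above, not a growth model.

```python
# ~2.1 L/m2 of clippings per 1 lb N/1000 ft2 (~5 g N/m2), assuming 100% uptake.
clip_volume_per_g_n = 2.1 / 5.0          # ~0.42 L of clippings per g of N taken up

for extra_n_g in (1, 2, 5):              # extra N applied, g N/m2
    print(extra_n_g, "g N/m2 ->", round(clip_volume_per_g_n * extra_n_g, 2), "L/m2 clippings")
```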
What I'm getting at is that every little bit of N is meaningful, and a little N can go a long way, and what seems like a little bit of N can over the long term become an amount of N that would contribute to thatch buildup.
See what if it was never there for more discussion of this topic.
That's from Table 5 in Nutrient Use and Management Practices on United States Golf Courses by Shaddox et al. ↩︎
Why do α-amino acids have a C-H bond at the α-carbon?
I've just started studying biochemistry and I read that a general $\alpha$-amino acid looks like (ignore the ionisation for now):
My question is: why isn't the general $\alpha$-amino acid formula like $\ce{CRR'NH_2COOH}$ instead of $\ce{CHRNH_2COOH}$? Coming from a physics background, the latter feels more "natural" as it is more general.
Some possible reasons I've thought of (and disqualified):
Maybe the $\ce{C-H}$ bond dissociation energy is too high: I looked at a paper [1] but the bond energies do not seem too high (relative to, say, $\ce{C-C}$ bonds).
Steric factors: I don't think this applies as having a small group, like say $\ce{CH_3}$, won't create a lot more steric repulsion relative to $\ce{H}$.
If the answer is along the lines of "it is defined this way because we actually see repeating units of these structures in proteins, so we naturally define the smallest subunit accordingly", then provide chemical grounds (if possible) to justify the fact that structures like $\ce{CRR'NH_2COOH}$ are not prevalent in proteins.
[1]: Moore, B. N., & Julian, R. R. (2012). Dissociation energies of X–H bonds in amino acids. Physical Chemistry Chemical Physics, 14(9), 3148-3154.
biochemistry amino-acids
Pritt Balagopal
theindigamer
In the most general sense, the term α-amino acid does encompass doubly substituted structures like those you mention. After all, the name doesn't specify any particular substitution, or lack thereof, at the α-carbon. The name only really means that there is an amino group α to a carboxylic acid, i.e. one carbon away. Similar nomenclature is used to describe molecules where the amino group is 2, 3, ... carbons away from the carboxylic acid, also without any specification of substitution.
The disubstituted compounds you described, $\ce{H2N-C(R)(R')-COOH}$, can be synthesised in the laboratory. Pharmaceutical companies seem to have an interest in these compounds. Here is one example from a collaboration between Eli Lilly and IUPUI1 (there are many more - pharmaceutical companies like Pfizer2 even have some patents on these compounds):
Wikipedia actually has an example of a naturally occurring α-amino acid without a hydrogen at the α-carbon. Apparently, it is produced by some fungi:
However, the term α-amino acid is nearly always used to refer to the proteinogenic amino acids, which do happen to have a hydrogen at the α-carbon.
These proteinogenic amino acids were discovered many years ago. Generally, they are obtained by treatment of proteins with acid (which hydrolyses proteins into their constituent amino acids). Their chemical formulae were painstakingly identified by a series of decomposition reactions - back in the old days, there was no NMR. The history of the discovery of the proteinogenic amino acids is described in a review article3 from 1931(!) - which tells you how long ago these were discovered.
So, part of the reason why these aren't disubstituted is simply because we found them to not be so. However, I can understand if one complains of this conclusion being intellectually unsatisfying. So, here is some extra speculation.
One major step involved in the biosynthesis of amino acids is transamination. I don't have a list on me right now, but I think most amino acids go through such a step at one point in time.
The mechanism (described on Wikipedia, or in any biochemistry textbook) necessitates that one hydrogen is added to the α-carbon. [For the organic chemists out there, this is essentially an enzyme-mediated reductive amination.]
It turns out that, on top of this, it is also possible that such disubstituted amino acids are actually harmful. According to Weber and Miller, writing in the Journal of Molecular Evolution:4
Replacement of the α-hydrogen by a larger substituent, such as a methyl group, would also increase significantly steric hindrance around the amino and carboxyl groups. Steric difficulties have been encountered in the chemical synthesis of peptides with α-aminoisobutyric acid.
Also, the severely constrained stereochemical behavior of peptides of α-methyl-α-amino-n-butyric acid suggests that peptides composed of α-methyl substituted amino acids would not have the structural versatility of peptides derived from α-hydrogen substituted amino acids.
An additional factor involves the ribosomal peptidyl transferase, which would develop a specificity for either an α-hydrogen or an α-methyl of the amino acid. This would prevent the synthesis of proteins containing both types of amino acids.
Green, J. E.; Bender, D. M.; Jackson, S.; O'Donnell, M. J.; McCarthy, J. R. Mitsunobu Approach to the Synthesis of Optically Active α,α-Disubstituted Amino Acids. Org. Lett. 2009, 11 (4), 807–810. DOI: 10.1021/ol802325h.
Graham, S. R.; Mantell, S. J.; Rawson, D. J.; Schwarz, J. B. (Pfizer, Inc.) Amino acid derivatives. U.S. Patent 7612226 B2, Nov 3, 2009.
Vickery, H. B.; Schmidt, C. L. A. The History of the Discovery of the Amino Acids. Chem. Rev. 1931, 9 (2), 169–318. DOI: 10.1021/cr60033a001.
Weber, A. L.; Miller, S. L. Reasons for the occurrence of the twenty coded protein amino acids. J. Mol. Evol. 1981, 17 (5), 273–284. DOI: 10.1007/BF01795749.
orthocresol ♦
Antiviral activity of a novel mixture of natural antimicrobials, in vitro, and in a chicken infection model in vivo
Igori Balta1,2,
Lavinia Stef3,
Ioan Pet3,
Patrick Ward4,
Todd Callaway5,
Steven C. Ricke6,
Ozan Gundogdu7 &
Nicolae Corcionivoschi1,2,3
Scientific Reports volume 10, Article number: 16631 (2020)
Chemokines
Viral host response
The aim of this study was to test in vitro the ability of a mixture of citrus extract, maltodextrin, sodium chloride, lactic acid and citric acid (AuraShield L) to inhibit the virulence of infectious bronchitis, Newcastle disease, avian influenza, porcine reproductive and respiratory syndrome (PRRS) and bovine coronavirus viruses. Secondly, in vivo, we investigated its efficacy against infectious bronchitis using a broiler infection model. In vitro, these antimicrobials exhibited antiviral activity against all five viruses through all phases of the infection process of the host cells. In vivo, the antimicrobial mixture reduced the virus load in the tracheal and lung tissue and significantly reduced the clinical signs of infection and the mortality rate in the experimental group E2 receiving AuraShield L. All these effects were accompanied by a significant reduction in the levels of pro-inflammatory cytokines and an increase in IgA levels and short chain fatty acids (SCFAs) in both trachea and lungs. Our study demonstrated that mixtures of natural antimicrobials, such as AuraShield L, can prevent viral infection of cell cultures in vitro. Secondly, in vivo, the efficiency of vaccination was improved by preventing secondary viral infections through a mechanism involving significant increases in SCFA production and increased IgA levels. As a consequence, the clinical signs of secondary infections were significantly reduced, resulting in recovered production performance and lower mortality rates in the experimental group E2.
Viral infections are increasingly frequent in humans and farm animals, and because viruses can undergo genetic change over relatively short periods of time there is a constant need to develop novel pharmaceuticals that increase the efficacy of treatment1. Some of the most common viral diseases in farmed animals are infectious bronchitis, Newcastle disease, avian influenza, porcine reproductive and respiratory syndrome (PRRS) and bovine coronavirus. Currently, vaccination for these diseases is only partially efficacious; therefore, additional preventive measures and treatments are needed.
Natural antimicrobials, including plant extracts and organic acids, are currently being tested for their ability to inhibit viruses and prevent their pathogenic impact on the host. Plant phytochemicals, in particular flavonoids and polyphenols, constitute an abundant pool of potent antiviral molecules; for example, vitexin, a flavonoid isolated from leaves of the pink coral tree Erythrina speciosa, demonstrated antiviral activity against Herpes Simplex Virus type 1 (HSV-1) and Hepatitis A virus-H10 (HAV H10)2. Improving the efficiency of antimicrobial compounds represents a crucial step in developing alternative strategies, but this depends on a better understanding of their mode of action. For example, pre-treatment of the gammacoronavirus (IBV) with elderberry (Sambucus nigra) fruit extract damaged the molecular structure of the virus and eliminated its cytotoxicity towards Vero cells3. The mechanism of elderberry extract efficacy was attributed to altered virion envelopes and membrane vesicles4.
The identification of the active compounds with antiviral effects will lead to a better understanding of how natural antimicrobials inhibit viruses and disease. Tangeretin, a polymethoxylated flavone found in citrus fruit peels, inhibits viral entry into cells by blocking viral fusion5, and citrus extracts are active against avian influenza virus (AIV), Newcastle disease virus (NDV) and infectious bursal disease virus (IBDV) in different environments6. Moreover, vaccines containing citrus-derived molecules were more efficient in stimulating the immune system and had fewer side effects7. Maltodextrins are plant polysaccharides commonly used as food additives; formulated as vaccine nanoparticles, they increased activity against influenza virus8. Lactic acid also inhibited influenza A virus replication8. Citric acid is another potent natural antimicrobial with antiviral activity, inhibiting foot-and-mouth disease virus (FMDV)9.
An additional antiviral mechanism by which natural antimicrobials can prevent viral infections relates to their ability to modulate cellular oxidative events. For example, manganese superoxide dismutase (MnSOD) has been shown to be crucial in the fight against viral infections owing to its roles as a superoxide scavenger and an anti-inflammatory agent10. The pro-inflammatory response, including the release of cytokines to avoid apoptosis, is partially dependent on the level of cellular hydrogen peroxide production, which stimulates neutrophils and macrophages and leads to the elimination of microbes or viruses11. Natural antimicrobials and plant extracts (phenolics, flavonoids, tannins) can reduce oxidative stress, helping the organism protect itself from the detrimental effects of reactive oxygen species12.
Combinations of natural antimicrobials similar to AuraShield L (e.g. Auranta 3001) have previously been described as efficient in preventing bacterial infections in chicken and mouse models13,14,15,16. These results show that Auranta 3001 reduces the expression of virulence genes (e.g. the hcp gene) in Campylobacter spp.13 and improves the immune response in challenged broilers. In infections caused by Listeria spp., these antimicrobial mixtures reduced the bacterial load in the liver and spleen of infected mice and significantly reduced inflammation and the mortality rate15. It is thought that such mixtures of natural biochemical antimicrobials can act directly on viral capsids or modify the host cell receptors involved in adhesion, and a more comprehensive understanding of their antiviral mechanisms is therefore needed17.
Understanding the mechanisms involved in the antiviral and anti-infectious activity of biochemical compounds requires extensive in vivo testing in order to prove their efficiency in reducing the clinical signs of disease without a negative impact on animal health. Because natural antimicrobials in combination can act antagonistically against each other18, a detailed scientific evaluation is often required to allow the end user to design informed interventions at farm level. In the present study we examined the in vitro activity of a mixture of citrus extract, maltodextrin, sodium chloride, lactic acid and citric acid (AuraShield L) against viruses causing infectious bronchitis (B1648), Newcastle disease (ATCC, 699), avian influenza (H9N2, ATCC, VR-1642), porcine reproductive and respiratory syndrome (ATCC, VR-2386) and bovine coronavirus (ATCC, VR-874). We also investigated the effectiveness of this antimicrobial mixture, supplemented via the drinking water, against infectious bronchitis virus (B1648) in challenged broilers.
The in vitro antiviral effect of AuraShield L
First, we investigated in vitro, as described in Materials and Methods, the antiviral effect of AuraShield L against five of the most common infectious viruses of livestock: infectious bronchitis (B1648), Newcastle disease (ATCC, 699), avian influenza (H9N2, ATCC, VR-1642), porcine reproductive and respiratory syndrome (ATCC, VR-2386) and bovine coronavirus (ATCC, VR-874). The 50% effective concentration (EC50) of AuraShield L was between 0.004 and 0.007 μl/ml, with the highest concentration (0.007 μl/ml) required against VR-2386 (Table 1). Next, we examined the effect of the antimicrobial mixture on viral replication by treating the cells and the viruses individually with AuraShield L prior to infection. Additionally, the antimicrobial mixture was tested by inclusion during or after the adsorption period. Infected cells in the absence of AuraShield L were used as a control. The percent reduction in viral replication was calculated relative to the amount of virus produced in controls, and a non-cytotoxic concentration was used in all assays. Pre-treatment with AuraShield L decreased the number of plaques in all experimental settings and for all viruses tested (Fig. 1A). Pre-treatment of cells with AuraShield L (0.01%) led to over 80% reduction in B1648, VR-699 and VR-2386 virus production compared to the control. For VR-1642 (Fig. 1A) the reduction was approximately 70%, and for VR-874 approximately 50%. The impact was even more dramatic when the viruses themselves were pre-treated with AuraShield L, with over 90% reduction for all viruses except VR-874 (65%). When AuraShield L was added during the adsorption period, virus titres were reduced by 52% for VR-874. Similar results were observed during viral replication, when the antimicrobial mixture was added only to the overlay medium. Collectively, these results suggest that the antiviral effect of AuraShield L is exerted through all phases of host cell infection.
Table 1 Cytotoxicity and efficiency of AuraShield L.
(A) The antiviral effect of AuraShield L against B1648, ATCC, 699, VR-1642, VR-2386 and VR-874. AuraShield L was added at the non-cytotoxic concentration of 0.01%. Cells were pre-treated with AuraShield L prior to virus infection (pre-treatment cells), viruses were pre-treated prior to infection (pre-treated virus), AuraShield L was added during the adsorption period (adsorption) or after penetration of the viruses into cells (replication). Two-way and three-way ANOVA were used to analyse the significance of the percentage decrease. (B) Viral RNA copies in the trachea and lungs, measured by RT-qPCR. (C) The average percentage of mortality in groups C, E1 and E2 (birds per group at each time point). Asterisks indicate significant differences as indicated on the graph. Error bars represent the standard deviation of means from three replicate experiments.
AuraShield L reduces the viral load in the trachea and lungs of broilers challenged in vivo with the avian infectious bronchitis virus (B1648)
We next examined the antiviral effect of AuraShield L in an in vivo broiler model of infectious bronchitis, using the B1648 virus to cause infection. Viral RNA was measured in tracheal and lung samples using RT-qPCR (Fig. 1B). Viral RNA in the lung was approximately 7.1 log10 copies/g tissue in group E1, with a significant decrease in group E2 (p = 0.003) to levels similar to the uninfected control group C. Inclusion of AuraShield L in the broilers' drinking water reduced the viral load in the tracheal tissue from 8.7 log10 copies/g in the untreated group E1 to 0.8 log10 copies/g in the treated group E2 (p = 0.001). The virus levels were reduced below those recorded in the untreated and unchallenged group C. Significance was calculated with group E1 as the reference. These results indicate that the antimicrobial mixture reduced the virus load in the tracheal tissue and thereby impeded virus transition into the lung tissue.
AuraShield L reduces pro-inflammatory cytokine levels, stimulates immunity and reduces oxidative inflammation in challenged broilers
To better understand why low levels of infection were detected in the trachea and lungs of the experimental group, we designed experiments to investigate the effect of AuraShield L on the main pro-inflammatory cytokines, the immune system and cellular oxidation. Inclusion of AuraShield L in the drinking water of the infected broilers (E2) reduced the levels of the pro-inflammatory cytokines TGF-β3 (p = 0.0008), INFα (p = 0.01), INFβ (p < 0.0001) and INFγ (p = 0.0008) in the trachea (Fig. 2A). Similarly to the trachea, all pro-inflammatory factors in the lung were also decreased by AuraShield L compared with the control and group E1 (Fig. 2B). Significance was calculated against group E1 in order to account for the effect of AuraShield L. Overall, inclusion of AuraShield L in the drinking water of group E2 broilers reduced the expression levels of all pro-inflammatory cytokines investigated, indicating that this antimicrobial mixture mitigated the lung and tracheal inflammation caused by infection. High serum IgA concentrations were detected at 5 days after AuraShield L inclusion in the drinking water (Fig. 2C). Secreted IgA levels were higher in group E2 than in group E1 (405 vs 300 ng/ml, p = 0.02), and levels in group E1 were significantly higher than in group C (p = 0.0008). The latter is probably a result of vaccination and the response to infection, but AuraShield L treatment in group E2 boosted IgA levels above those in E1 (p = 0.02).
Pro-inflammatory cytokine (TGF-β3, TNF-α, INFα, INFβ and INFγ) production in the trachea and lung tissue of B1648-infected/uninfected broilers which did or did not receive AuraShield L in the drinking water for 5 days post-infection: (A) in the trachea and (B) in the lungs. (C) IgA levels in broiler serum and (D) SCFA concentrations in the cecal contents. The levels of MnSOD (pg/ml) in the lung and trachea following exposure to AuraShield L are presented in (E). Asterisks indicate significant differences, with the respective p values indicated on the graph. Error bars represent the standard deviation of means from three different experiments. Student's t test was used to calculate significance (**p < 0.005).
We next investigated the association between SCFA production and manganese superoxide dismutase (MnSOD) expression as indicators of an improved immune response during viral infection. The levels of short-chain fatty acids (SCFA) produced in the broiler gut (cecum) were investigated because of their association with improved immunity. Total SCFA levels in the cecal contents were significantly increased in group E2 (p < 0.01), while pH remained similar between the three groups (Table 2). The concentrations of succinic acid, lactic acid, formic acid and butyric acid were significantly increased following inclusion of AuraShield L in the drinking water (p < 0.05) (Fig. 2D). Inclusion of AuraShield L in the drinking water of the infected broilers also resulted in a significant increase in pulmonary and tracheal MnSOD activity (Fig. 2E). MnSOD levels increased significantly following the inclusion of AuraShield L, from 380 pg/ml to 720 pg/ml in the lung (p = 0.0003) and to 676 pg/ml in the trachea (p = 0.01). These findings indicate that increased MnSOD expression in the pulmonary and tracheal tissue contributes to a better immune response and more efficient virus elimination.
Table 2 Effect of AuraShield L on total SCFA production and pH levels in the cecum (mg/cecum).
AuraShield L reduces the clinical signs of disease in broilers challenged with the avian bronchitis virus (B1648) and reduces mortality
In order to quantify the beneficial effects described above, we closely monitored clinical signs in the challenged broilers during AuraShield L administration. Symptoms were mild in both groups except for tracheal rales, which affected 33% of the E1 broilers and 24% of those in E2 (Table 3). Pulmonary rales were the second most obvious clinical sign measurable on the first day of investigation, with 11% of broilers affected in group E1 and 18% in group E2. Beginning on day 2, clear signs of improvement were observed in the AuraShield L-treated group (E2). The efficacy of AuraShield L treatment was compared between groups E1 and E2 on day 5, excluding the negative control group C. The number of broilers displaying no ruffled feathers increased by 150% in group E2 when AuraShield L was included in the drinking water. Sneezing and coughing behaved similarly, with the number of broilers showing no clinical signs increasing by 129.4% and 136.8%, respectively, in group E2. Similarly, AuraShield L treatment decreased the number of broilers with tracheal and pulmonary rales. Dyspnoea was lower in the AuraShield L-treated group (7% vs. 17%). No clinical nephritis was recorded during the course of the experiment in any group, and clinical symptoms were detected in a small percentage of control broilers, although at levels higher than in group E2. The presence or absence of these clinical signs is also reflected in the mortality rate recorded during the experiment. The inclusion of AuraShield L in the drinking water of broilers in group E2 maintained the survival rate at 93.7%. In contrast, broilers in group E1 had a survival rate of 66.6% and the control group 87.5% (Fig. 1c). The addition of AuraShield L to the drinking water also resulted in performance recovery in the infected broilers (supplementary Table 2).
Table 3 Clinical signs in infected and uninfected broilers treated with AuraShield L.
Ensuring animal health and welfare also requires efficient technologies to reduce the incidence of infectious diseases in order to safeguard national and international food supplies19. A wide range of natural antimicrobial substances have been investigated and have shown activity against various viruses20. Organic acids, including citric acid, are known not only for their role in increasing feed utilisation efficiency but also for improving the immune response in infected animals21. In this study, we tested a mixture of natural antimicrobials containing citrus extract, maltodextrin, sodium chloride, lactic acid, and citric acid (AuraShield L) for its in vitro activity against viruses responsible for avian infectious bronchitis, Newcastle disease, avian influenza, porcine reproductive and respiratory syndrome (PRRS), and bovine coronavirus. In vivo, the antimicrobial mixture was also tested for its ability to reduce the disease signs of infectious bronchitis using a broiler infection model.
Viruses need to penetrate the cellular cytosol by breaking the endosomal membrane, and they require undamaged surface structures to cause infection22. AuraShield L acts both on the host and directly on the pathogen, indicating a possible disruption of the early stages of viral infection. Experiments evaluating the toxicity of AuraShield L indicated moderately toxic behaviour towards viruses in cell culture, with effective concentrations between 0.004 and 0.007 µl/ml. Our study shows that the antiviral action was greatest when cells were treated with AuraShield L before infection or when viruses were incubated with a non-cytotoxic concentration of AuraShield L prior to infection. We also observed that AuraShield L maintained its antiviral properties during adsorption and after penetration into the host cells.
Chickens are an important natural host of infectious bronchitis virus, which targets the upper respiratory tract, and intensive virus replication leads to the signs characteristic of the clinical manifestation of this respiratory disease23. When AuraShield L was included in the drinking water of broilers artificially challenged with IBV (B1648), viral mRNA was detected in the trachea and lungs of infected broilers but was significantly decreased by AuraShield L treatment. The reduced presence of IBV in the trachea of infected birds confirmed the ability of AuraShield L to slow respiratory disease progression24.
AuraShield L treatment reduced the expression of pro-inflammatory cytokines during infection. The transforming growth factors (TGF-β) are polypeptides with roles in cell growth and differentiation, and all five examined cytokines had similar biological activities, including important pro-inflammatory and immunoregulatory activities25. In our study, the impact on TGF-β3 expression in AuraShield L-treated broilers was significant in both tracheal and lung tissue, with a similar pattern observed for most of the measured cytokines. The suppressive effect on pro-inflammatory cytokines can also be explained by the presence of maltodextrin in AuraShield L, as maltodextrin has been shown to increase the production of IgA, an antibody that plays a crucial role in mucosal immune function26. In our experiments, IgA levels were significantly increased in the serum of broilers treated with AuraShield L, offering protection to the host27. The anti-infectious role of IgA has been observed in broiler coccidiosis, where it plays an essential role in the immune response to E. tenella28. Overall, the levels of pro-inflammatory cytokines were reduced by AuraShield L to the level of the uninfected control, emphasizing its anti-inflammatory effect.
Intestinal immunity is often involved in shaping lung immunity and preventing lung inflammation29 through the production of short-chain fatty acids (SCFA). SCFAs are known for their role in reducing allergic airway inflammation30,31. This anti-inflammatory effect of SCFAs against viral infection has been specifically described for equine herpesvirus 1 (EHV1), where the effect was observed at physiological concentrations32. In our study, the inclusion of AuraShield L in the drinking water significantly increased SCFA (lactate, succinate, formate, acetate, propionate, and butyrate) production, which is consistent with the observed effect on the immune system and the positive effects on viral pathogenicity. Another reason for the stimulated immune response observed in our study was the increased level of manganese superoxide dismutase (MnSOD) in the lung and tracheal tissue. It has been shown previously that in infectious bronchitis (IB) the oxidative status is improved by increasing MnSOD levels, with a significant impact on virus elimination via an increased immune response33. A similar effect was observed in infections caused by influenza A virus, where administration of MnSOD in aerosols led to a mild improvement of the disease. Other natural antimicrobials are also known for their antioxidant effect12, including when included in the feed of farmed poultry34, with a direct effect in controlling MnSOD levels35.
At present, the primary method of protecting chickens from infectious bronchitis is vaccination36. However, vaccination does not eliminate all clinical signs, which can have a detrimental impact on production efficiency, economic return, and animal welfare37. Vaccination of broilers coupled with inclusion of AuraShield L in the drinking water reduced the clinical signs associated with infectious bronchitis and reduced mortality rates.
In countries with intensive poultry production the incidence of infectious bronchitis virus (IBV) is very high, with vaccination only partially diminishing the occurrence of infection and the appearance of clinical signs38. Identification and testing of novel, natural antimicrobial mixtures designed to reduce the severity of clinical manifestations and possibly to prevent viral infections are crucial for the industry to enhance animal health and welfare while maintaining performance. Our study clearly shows that antimicrobial compounds and mixtures such as AuraShield L can reduce viral pathogenicity in vitro and can improve the efficacy of vaccination in vivo by boosting the immune system through a mechanism that involves SCFA production and increased MnSOD expression.
Viruses and antimicrobial mixture
Infectious bronchitis virus (B1648)39, Newcastle disease virus (ATCC, 699), avian influenza virus (H9N2, ATCC, VR-1642)40, porcine reproductive and respiratory syndrome virus (ATCC, VR-2386) and bovine coronavirus (ATCC, VR-874) were used. Viruses were propagated in pathogen-free 10-day-old embryonated chicken eggs, and viruses from the second passage were used in the present experiments. At 48 h post-infection the allantoic fluids were harvested by low-speed centrifugation and stored at −70 °C. In this study we used a commercial product (AuraShield L), a mixture of maltodextrin (4%) and sodium chloride (0.5%) with lactic acid (30%), citric acid (15%) and citrus extract (8%) included as additives, the balance to 100% being made up with dH2O. This commercially available mixture, identified in the manuscript as AuraShield L, was supplied by Auranta LTD.
Cytotoxicity assay (CC50, EC50 and SI)
AuraShield L was used for the determination of EC50 and SI (selectivity index). The mixture was titrated from 1 to 1:128 of the CC50 and used for virus exposure. After the addition of 10 μl MTT reagent (Sigma Aldrich), the samples were incubated for 4 h at 37 °C. The EC50 values were calculated from the response against extract concentration. In order to determine the cytotoxic activity of AuraShield L, the antimicrobial product was dissolved in water and added to the cell culture medium at a final concentration of 2% to quantify the effect on monolayer cultures. Hep-2 cells (ATCC CCL-23) were grown in Dulbecco's Modified Eagle Medium (DMEM, Sigma-Aldrich) supplemented with foetal bovine serum to a final concentration of 10%. Cells were incubated at 37 °C and the medium was renewed every 48 h. The Hep-2 cells were grown in medium containing 0.001–0.2% AuraShield L. The antimicrobial mixture was tested in triplicate on three different occasions. The absorbance of each well was measured at 620 nm in a microplate reader (Fluostar Omega, BMG Labtech) and the percentage of cell survival was calculated.
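The following is a minimal sketch, under stated assumptions, of how the 620-nm readings could be turned into a percentage of the untreated control and a 50% endpoint read off by interpolation. The concentrations and readings shown are illustrative, and the interpolation approach is an assumption rather than the authors' exact procedure.

```python
# Percentage readout relative to control from absorbance, then the concentration
# at which the readout crosses 50% (an EC50/CC50-style endpoint) by interpolation.
def percent_of_control(a_treated, a_control, a_blank):
    return 100.0 * (a_treated - a_blank) / (a_control - a_blank)

def endpoint_50(concentrations, readouts):
    """concentrations ascending; readouts are % of control at each concentration."""
    points = list(zip(concentrations, readouts))
    for (c1, s1), (c2, s2) in zip(points, points[1:]):
        if (s1 - 50.0) * (s2 - 50.0) <= 0:                  # 50% crossed in this interval
            return c1 + (50.0 - s1) * (c2 - c1) / (s2 - s1)  # linear interpolation
    return None                                              # 50% never reached

# Illustrative numbers only:
conc = [0.001, 0.002, 0.004, 0.007, 0.01]   # % AuraShield L in the medium
resp = [95.0, 80.0, 55.0, 40.0, 20.0]       # % of untreated control
print(endpoint_50(conc, resp))              # 0.005, i.e. between 0.004 and 0.007
```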
Direct plaque assay
The plaque reduction assay was performed as previously described41. Briefly, 2 × 10³ plaque-forming units were incubated with different concentrations of AuraShield L for 2 h at room temperature. Serial dilutions of the treated viruses were adsorbed to the cells for 2 h at 37 °C. All assays were performed in triplicate. Incubation was performed for 4 days at 37 °C, followed by 10% formalin fixation. Plaque counting was performed after staining with 1% crystal violet.
In vitro antiviral activity
The antiviral activity assay was performed as previously described41, except that the Hep-2 cell line was used in our experiments. Briefly, Hep-2 cells were pre-treated with AuraShield L before viral infection and, separately, the viruses were incubated with AuraShield L before infection. In order to test the effect on viral adsorption or penetration, the cells and viruses were incubated together. AuraShield L was always used at the non-toxic concentration of 0.09%; it was added to the cells, followed by incubation for 2 h at 37 °C. The supernatant was then removed, and the cells were washed and infected with each virus individually. When the antimicrobial mixture was used to pre-treat the viruses, incubation took place for 1 h at room temperature followed by infection of cells. The adsorption period was analysed by mixing each virus individually with 0.09% AuraShield L followed by infection of cells. To assess replication after infection, the inoculum was removed and replaced with medium containing 0.5% methylcellulose. Each assay was performed in three replicates on three different occasions. Plaque reduction assays were carried out as described above, and the number of plaques from AuraShield L-treated cells and viruses was compared with untreated controls. Data were subjected to statistical analyses using the GraphPad Prism software package.
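As a small worked example of the percent-reduction calculation applied throughout the in vitro assays (treated plaque counts compared with untreated controls), the sketch below uses illustrative plaque counts rather than data from this study.

```python
# Percent reduction in viral replication relative to the untreated control,
# averaged over triplicate plaque counts. Counts here are illustrative.
def percent_reduction(control_plaques, treated_plaques):
    return 100.0 * (control_plaques - treated_plaques) / control_plaques

triplicate_control = [52, 48, 50]
triplicate_treated = [9, 11, 10]

mean = lambda xs: sum(xs) / len(xs)
print(round(percent_reduction(mean(triplicate_control), mean(triplicate_treated)), 1))  # ~80%
```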
In vivo infections with IBV B1648
Two-week-old ROSS 308 broilers were separated into three groups (n = 150 broilers/group): control (C), a first experimental group (E1) of broilers challenged with IBV B1648, and a second experimental group (E2) of broilers challenged with IBV B1648 and supplemented with AuraShield L, as described in Fig. 3. All groups were vaccinated against IBV with Biovac H120 (V1) and Nobilis IB491 (V2). Each broiler in groups E1 and E2 was inoculated intratracheally (200 μl) with 10³ EID50/400 μl B1648 on day 14. AuraShield L was administered to the experimental group E2 through the drinking water following the protocol described in Fig. 3. Birds were housed in separate negative-pressure isolation rooms, and drinking water and feed were provided ad libitum for the duration of the experiment. The aim of our study was to assess the effect of AuraShield L on infection rate and symptoms, and the use of a low IBV challenge dose keeps the mortality rate low, as previously described39. Control broilers were inoculated with 200 µl PBS (phosphate-buffered saline) and served as a negative control group. At 28 days, 30 chickens from each group were humanely euthanized, and tracheal and lung samples were stored at −70 °C for viral RNA quantification by RT-qPCR. Pro-inflammatory cytokines (TGF-β3, TNF-α, INFα, INFβ, INFγ) in the tracheal and lung tissue of IBV B1648-infected chickens were determined using commercially available enzyme-linked immunosorbent assay systems (Quantikine, R&D Systems). Cytokine concentrations were expressed as pg (picograms) per millilitre of culture medium. Each study run was performed in triplicate. The experimental design was evaluated and approved by the Ethical and Animal Welfare Committee of the Banat University of Agricultural Sciences and Veterinary Medicine, King Michael I of Romania, Timisoara, and we confirm that all methods were performed in accordance with the relevant guidelines and regulations.
Schematic outline of the in vivo experimental design. Broilers were vaccinated at day 5 (V1) and day 15 (V2). Groups E1 and E2 were challenged with IBV B1648 day 14. Group E2 received AuraShield L in the drinking water for 5 days. At day 28 performance was recorded and 30 broilers euthanized for analysis.
Scoring and clinical signs
Clinical signs, such as ruffled feathers, sneezing, coughing, tracheal and pulmonary rales and dyspnoea, together with gross lesions, were recorded daily during the 5 days of AuraShield L administration once infection was established. Clinical signs were quantified and expressed as a percentage of the total number of birds. The percentage difference (Eq. 1) was used to indicate the efficiency of AuraShield L in reducing the clinical signs of disease.
$$\%\,\text{difference} = \frac{V_1 - V_2}{\left(\frac{V_1 + V_2}{2}\right)} \times 100$$
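A worked example of Eq. (1), using the day-1 tracheal rale percentages reported in the Results (33% of broilers in E1 vs 24% in E2), is given below; the function simply restates the formula.

```python
# Eq. (1): percentage difference between two group values, expressed relative
# to their average.
def percent_difference(v1, v2):
    return (v1 - v2) / ((v1 + v2) / 2.0) * 100.0

# Day-1 tracheal rales, % of birds affected in E1 vs E2 (values from the Results text).
print(round(percent_difference(33.0, 24.0), 1))   # ~31.6%
```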
RT‑qPCR assay for ORF 1a gene
As previously described39, RT-qPCR was used to detect the B1648 strain in the tracheal samples with the following primers: cDNA_FW ggtgttaggcttatagttcctcag, cDNA_RV taaacattagggttgacaccagt, T7_FW taatacgactcactatagggggtgttaggcttatagttcctcag, qPCR_FW gctattgtagaggtagtgtatgtgag and qPCR_RV agggttgacaccagtaaagaat. Viral RNA was extracted from tracheal secretions using the RNeasy Plus Mini Kit (Qiagen, United Kingdom). The RNA was reverse transcribed using the Transcriptor First Strand cDNA Synthesis Kit (Roche) according to the manufacturer's protocol. The mRNA levels were determined by quantitative RT-PCR using the QuantiNova SYBR Green PCR Kit (Qiagen, United Kingdom) on a LightCycler 96 (Roche). The cycling conditions consisted of incubation at 95 °C for 8 min, followed by 40 cycles of 10 s at 95 °C and 60 s at 58 °C. A total of 5 μl of SYBR Green master mixture was used in each reaction, along with 0.5 μl of 10 μM primer mixture, 3 μl of molecular-grade water, and 1 μl of DNA sample. Viral RNA extracts from tracheal secretions were analysed in triplicate RT-qPCR reactions as described above. Quantification of the number of viral RNA copies in tissues (RNA copies/g) was performed in the same way as for the other samples.
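The paper reports viral loads as log10 RNA copies/g. A standard-curve conversion from quantification cycle (Cq) to copy number is one common way such values are obtained; the sketch below is a hedged illustration of that approach, with the slope, intercept, Cq and scaling factor all being assumed values rather than parameters from this study.

```python
# Hedged sketch of absolute quantification from a standard curve: Cq is converted
# to copies per reaction and then scaled to copies per gram of tissue.
import math

slope = -3.32          # Cq change per 10-fold dilution (about 100% PCR efficiency), assumed
intercept = 38.0       # Cq corresponding to a single copy on the assumed standard curve

def copies_from_cq(cq):
    return 10 ** ((cq - intercept) / slope)

cq_sample = 20.5
copies_per_reaction = copies_from_cq(cq_sample)
copies_per_gram = copies_per_reaction * 200     # illustrative extraction/dilution factor
print(round(math.log10(copies_per_gram), 2))    # log10 copies/g, as plotted in Fig. 1B
```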
Determination of IgA levels in the serum
The total levels of IgA were measured using an IgA enzyme-linked immunosorbent assay (ELISA) kit (ab157691, Abcam) according to the manufacturer's instructions.
SCFA determinations
The SCFAs were analysed by gas chromatography as previously described42. In brief, 1 g of cecal content was mixed with 1 ml of H2O and 1 ml of 20 mmol/l pivalic acid solution as an internal standard. The solution was mixed, and 1 ml of HClO4 (perchloric acid) was added in order to extract the SCFAs by vortexing for 5 min. The HClO4 was then precipitated by adding 50 ml of 4 mol KOH into 500 ml of supernatant, followed by the addition of saturated oxalic acid, incubation at 4 °C for 60 min, and centrifugation at 18,000g for 10 min. Samples were analysed by gas chromatography using a SCION 456-GC instrument with a flame ionization detector.
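A minimal sketch of internal-standard quantification from the resulting GC peak areas is shown below; the peak areas and response factors are illustrative assumptions, and the authors' calibration details are not specified here.

```python
# Internal-standard quantification: each SCFA peak area is scaled by the pivalic
# acid (internal standard) peak and a calibration response factor. All numbers
# below are illustrative assumptions.
def scfa_concentration(peak_area, istd_area, istd_conc_mmol_l, response_factor):
    return (peak_area / istd_area) * istd_conc_mmol_l * response_factor

istd_conc = 20.0        # mmol/l pivalic acid added as internal standard
istd_area = 9.0e5       # illustrative internal-standard peak area
peaks = {"acetate": (1.8e6, 1.05), "propionate": (6.1e5, 1.10), "butyrate": (4.3e5, 1.15)}

for name, (area, rf) in peaks.items():
    print(name, round(scfa_concentration(area, istd_area, istd_conc, rf), 1), "mmol/l")
```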
Superoxide manganese dismutase (SOD2/MnSOD) determination in the lung tissue
Manganese superoxide dismutase was quantified in the lung and trachea using a chicken SOD2 ELISA kit (Assay Solution) according to the manufacturer's instructions. Briefly, the tissues were rinsed in ice-cold PBS (0.02 mol/l, pH 7.0–7.2) to remove excess blood, minced into small pieces, homogenized in PBS, and stored at −80 °C until further use. A standard curve was prepared for each experiment. Volumes of 100 μl of sample and standard were added to each well of a 96-well plate, except the control well, to which 0.1 ml of the sample diluent was added. Each well was washed with wash buffer two times for a total of three washes. After the last wash, the wash buffer was removed by aspiration or by inverting the plate and blotting it against clean paper towels. In the next step, 100 μl of the detection antibody was added to each well, followed by incubation at room temperature for 2 h. Next, streptavidin-HRP (100 μl) was added to each well and incubated at room temperature for 30 min. Each measurement was performed in triplicate.
Statistical analyses were performed using GraphPad software. Data are presented as mean ± SD. p values < 0.05 were considered statistically significant (*p < 0.05; **p < 0.01; ***p < 0.001). ANOVA and Student's t test were used. The percentage decrease compared with the control was analysed using both two-way ANOVA (p < 0.0001) and three-way ANOVA (p < 0.0001).
Farhadi, F., Khameneh, B., Iranshahi, M. & Iranshahy, M. Antibacterial activity of flavonoids and their structure–activity relationship: an update review. Phytother. Res. 33, 13–40. https://doi.org/10.1002/ptr.6208 (2019).
Fahmy, N. M. et al. Breaking down the barriers to a natural antiviral agent: antiviral activity and molecular docking of erythrina speciosa extract, fractions, and the major compound. Chem. Biodivers. 17, e1900511. https://doi.org/10.1002/cbdv.201900511 (2020).
Chen, C. et al. Sambucus nigra extracts inhibit infectious bronchitis virus at an early point during replication. BMC Vet. Res. 10, 24. https://doi.org/10.1186/1746-6148-10-24 (2014).
Musarra-Pizzo, M. et al. The antimicrobial and antiviral activity of polyphenols from almond (Prunus dulcis L.) skin. Nutrients https://doi.org/10.3390/nu11102355 (2019).
Tang, K. et al. Tangeretin, an extract from Citrus peels, blocks cellular entry of arenaviruses that cause viral hemorrhagic fever. Antiviral Res. 160, 87–93. https://doi.org/10.1016/j.antiviral.2018.10.011 (2018).
Komura, M. et al. Inhibitory effect of grapefruit seed extract (GSE) on avian pathogens. J. Vet. Med. Sci. 81, 466–472. https://doi.org/10.1292/jvms.18-0754 (2019).
Pennisi, M., Russo, G., Ravalli, S. & Pappalardo, F. Combining agent based-models and virtual screening techniques to predict the best citrus-derived vaccine adjuvants against human papilloma virus. BMC Bioinform. 18, 544. https://doi.org/10.1186/s12859-017-1961-9 (2017).
Miyazaki, T. Protective effects of lactic acid bacteria on influenza A virus infection. AIMS Allergy Immunol. 1, 138–142 (2017).
Hong, J. K. et al. Inactivation of foot-and-mouth disease virus by citric acid and sodium carbonate with deicers. Appl Environ Microbiol 81, 7610–7614. https://doi.org/10.1128/AEM.01673-15 (2015).
Shim, J. W., Chang, Y. S. & Park, W. S. Intratracheal administration of endotoxin attenuates hyperoxia-induced lung injury in neonatal rats. Yonsei Med. J. 49, 144–150. https://doi.org/10.3349/ymj.2008.49.1.144 (2008).
Fang, F. C. Antimicrobial actions of reactive oxygen species. mBio. https://doi.org/10.1128/mBio.00141-11 (2011).
Hwang, K. A., Hwang, Y. J. & Song, J. Antioxidant activities and oxidative stress inhibitory effects of ethanol extracts from Cornus officinalis on raw 264.7 cells. BMC Complement. Altern. Med. 16, 196. https://doi.org/10.1186/s12906-016-1172-3 (2016).
Sima, F. et al. A novel natural antimicrobial can reduce the in vitro and in vivo pathogenicity of T6SS positive campylobacter jejuni and campylobacter coli chicken isolates. Front Microbiol. 9, 2139. https://doi.org/10.3389/fmicb.2018.02139 (2018).
Stratakos, A. C. & Grant, I. R. Evaluation of the efficacy of multiple physical, biological and natural antimicrobial interventions for control of pathogenic Escherichia coli on beef. Food Microbiol. 76, 209–218. https://doi.org/10.1016/j.fm.2018.05.011 (2018).
Stratakos, A. C. et al. In vitro and in vivo characterisation of Listeria monocytogenes outbreak isolates. Food Control. https://doi.org/10.1016/j.foodcont.2019.106784 (2020).
Stratakos, A. C. et al. The antimicrobial effect of a commercial mixture of natural antimicrobials against Escherichia coli O157:H7. Foodborne Pathog. Dis. 16, 119–129. https://doi.org/10.1089/fpd.2018.2465 (2019).
Li, D., Baert, L. & Uyttendaele, M. Inactivation of food-borne viruses using natural biochemical substances. Food Microbiol. 35, 1–9. https://doi.org/10.1016/j.fm.2013.02.009 (2013).
Hacioglu, M., Dosler, S., Birteksoz Tan, A. S. & Otuk, G. Antimicrobial activities of widely consumed herbal teas, alone or in combination with antibiotics: an in vitro study. PeerJ 5, e3467. https://doi.org/10.7717/peerj.3467 (2017).
Tomley, F. M. & Shirley, M. W. Livestock infectious diseases and zoonoses. Philos. Trans. R. Soc Lond. B Biol. Sci. 364, 2637–2642. https://doi.org/10.1098/rstb.2009.0133 (2009).
Cowan, M. M. Plant products as antimicrobial agents. Clin. Microbiol. Rev. 12, 564–582 (1999).
Mehdi, Y. et al. Use of antibiotics in broiler production: Global impacts and alternatives. Anim. Nutr. 4, 170–178. https://doi.org/10.1016/j.aninu.2018.03.002 (2018).
Hogle, J. M. Poliovirus cell entry: common structural themes in viral cell entry pathways. Annu. Rev. Microbiol. 56, 677–702. https://doi.org/10.1146/annurev.micro.56.012302.160757 (2002).
Cavanagh, D. Nidovirales: a new order comprising Coronaviridae and Arteriviridae. Arch. Virol. 142, 629–633 (1997).
Khataby, K., Kichou, F., Loutfi, C. & Ennaji, M. M. Assessment of pathogenicity and tissue distribution of infectious bronchitis virus strains (Italy 02 genotype) isolated from moroccan broiler chickens. BMC Vet. Res. 12, 94. https://doi.org/10.1186/s12917-016-0711-y (2016).
Jakowlew, S. B., Mathias, A. & Lillehoj, H. S. Transforming growth factor-beta isoforms in the developing chicken intestine and spleen: increase in transforming growth factor-beta 4 with coccidia infection. Vet. Immunol. Immunopathol. 55, 321–339. https://doi.org/10.1016/s0165-2427(96)05628-0 (1997).
Miyazato, S., Kishimoto, Y., Takahashi, K., Kaminogawa, S. & Hosono, A. Continuous intake of resistant maltodextrin enhanced intestinal immune response through changes in the intestinal environment in mice. Biosci. Microbiota Food Health 35, 1–7. https://doi.org/10.12938/bmfh.2015-009 (2016).
Maynard, C. L., Elson, C. O., Hatton, R. D. & Weaver, C. T. Reciprocal interactions of the intestinal microbiota and immune system. Nature 489, 231–241. https://doi.org/10.1038/nature11551 (2012).
Davis, P. J., Parry, S. H. & Porter, P. The role of secretory IgA in anti-coccidial immunity in the chicken. Immunology 34, 879–888 (1978).
McAleer, J. P. & Kolls, J. K. Contributions of the intestinal microbiome in lung immunity. Eur. J. Immunol. 48, 39–49. https://doi.org/10.1002/eji.201646721 (2018).
Maslowski, K. M. et al. Regulation of inflammatory responses by gut microbiota and chemoattractant receptor GPR43. Nature 461, 1282–1286. https://doi.org/10.1038/nature08530 (2009).
Trompette, A. et al. Gut microbiota metabolism of dietary fiber influences allergic airway disease and hematopoiesis. Nat. Med. 20, 159–166. https://doi.org/10.1038/nm.3444 (2014).
Poelaert, K. C. K. et al. Beyond gut instinct: metabolic short-chain fatty acids moderate the pathogenesis of alphaherpesviruses. Front. Microbiol. 10, 723. https://doi.org/10.3389/fmicb.2019.00723 (2019).
Cao, Z. et al. Proteomics analysis of differentially expressed proteins in chicken trachea and kidney after infection with the highly virulent and attenuated coronavirus infectious bronchitis virus in vivo. Proteome Sci. 10, 24. https://doi.org/10.1186/1477-5956-10-24 (2012).
Chi, X. et al. Oral administration of tea saponins to relive oxidative stress and immune suppression in chickens. Poult Sci. 96, 3058–3067. https://doi.org/10.3382/ps/pex127 (2017).
Li, X. et al. The effect of black raspberry extracts on MnSOD activity in protection against concanavalin A induced liver injury. Nutr. Cancer 66, 930–937. https://doi.org/10.1080/01635581.2014.922201 (2014).
Zhao, Y. et al. Safety and efficacy of an attenuated Chinese QX-like infectious bronchitis virus strain as a candidate vaccine. Vet. Microbiol. 180, 49–58. https://doi.org/10.1016/j.vetmic.2015.07.036 (2015).
Raj, G. D. & Jones, R. C. Infectious bronchitis virus: Immunopathogenesis of infection in the chicken. Avian Pathol. 26, 677–706. https://doi.org/10.1080/03079459708419246 (1997).
Ignjatovic, J. & Sapats, S. Avian infectious bronchitis virus. Rev. Sci. Tech. 19, 493–508. https://doi.org/10.20506/rst.19.2.1228 (2000).
Reddy, V. R. et al. Productive replication of nephropathogenic infectious bronchitis virus in peripheral blood monocytic cells, a strategy for viral dissemination and kidney infection in chickens. Vet. Res. 47, 70. https://doi.org/10.1186/s13567-016-0354-9 (2016).
Li, Y. G. et al. Characterization of H5N1 influenza viruses isolated from humans in vitro. Virol. J. 7, 112. https://doi.org/10.1186/1743-422X-7-112 (2010).
Schuhmacher, A., Reichling, J. & Schnitzler, P. Virucidal effect of peppermint oil on the enveloped viruses herpes simplex virus type 1 and type 2 in vitro. Phytomedicine 10, 504–510. https://doi.org/10.1078/094471103322331467 (2003).
Apajalahti, J., Vienola, K., Raatikainen, K., Holder, V. & Moran, C. A. Conversion of branched-chain amino acids to corresponding isoacids—an in vitro tool for estimating ruminal protein degradability. Front. Vet. Sci. 6, 311. https://doi.org/10.3389/fvets.2019.00311 (2019).
We would like to thank Dr. Phittawat Sopharat of AP Nutrotech Co. Ltd who worked in collaboration with Auranta to develop the in vivo protocol for use in broilers experiencing symptoms of Infectious Bronchitis.
This study was supported by a grant awarded to Environtech, Dublin, Ireland.
Bacteriology Branch, Veterinary Sciences Division, Agri-Food and Biosciences Institute, 18a Newforge Lane, Belfast, BT9 5PX, Northern Ireland, UK
Igori Balta & Nicolae Corcionivoschi
Faculty of Animal Science and Biotechnologies, University of Agricultural Sciences and Veterinary Medicine, 400372, Cluj-Napoca, Romania
Faculty of Bioengineering of Animal Resources, Banat University of Animal Sciences and Veterinary Medicine - King Michael I of Romania, Timisoara, Romania
Lavinia Stef, Ioan Pet & Nicolae Corcionivoschi
Auranta, Nova UCD, Belfield, Dublin 4, Ireland
Patrick Ward
Department of Animal and Dairy Science, University of Georgia, Athens, GA, USA
Todd Callaway
Center for Food Safety, Department of Food Science, University of Arkansas, Fayetteville, AR, USA
Faculty of Infectious and Tropical Diseases, London School of Hygiene and Tropical Medicine, 13 Keppel Street, London, WC1E 7HT, UK
Ozan Gundogdu
Conceptualization, N.C., O.G., S.C.R.; Data curation, P.W., T.C., N.C.; Formal analysis, I.B., I.P., L.S.; Funding acquisition, N.C., O.G.; Investigation, O.G.; Methodology, I.B.; Project administration, N.C.; Resources, L.S.; Writing—original draft, I.B., O.G. and N.C.; Writing—review & editing, S.C.R., T.C., I.B., O.G. and N.C.
Correspondence to Ozan Gundogdu or Nicolae Corcionivoschi.
Supplementary Tables.
Balta, I., Stef, L., Pet, I. et al. Antiviral activity of a novel mixture of natural antimicrobials, in vitro, and in a chicken infection model in vivo. Sci Rep 10, 16631 (2020). https://doi.org/10.1038/s41598-020-73916-1
Electrostatics : Electric Charges and Fields Class 12 Important Questions
On this page we have Electrostatics Important Questions for Class 12 Physics. Answers to most of the questions are given. Try to solve them first without looking at the answers; this is good practice for your board exams. This page of electric charges and fields questions and answers contains the different types of questions that typically come up in exams.
Important Questions for Class 12 Physics Chapter 1: Electric Charges and Fields
Derive an expression for the electric field at a point
a. on the axial position of an electric dipole.
b. on the equatorial position of an electric dipole.
Derive an expression for the torque on an electric dipole in a uniform electric field.
State Gauss theorem and apply it to find the electric field due to a uniformly charged spherical conducting shell at a point
a. Outside the shell
b. Inside the shell
c. on the shell
Also Draw a graph showing a variation of electric field E with distance r from center of the uniformly charged spherical conducting shell
State Gauss's theorem and apply it to find the electric field intensity due to an infinitely long straight wire of linear charge density $\lambda$ C/m
State Coulomb's law and express it in vector form. Derive it using Gauss theorem.
A charge q is uniformly distributed over a ring of radius r. Derive an expression for the electric field at a point on the axis of the ring. Also show that, for a point at a large distance from the centre of the ring, the ring behaves like a point charge.
Objective Questions on electric charges and fields (MCQs)
Five charges $q_1$ , $q_2$ , $q_3$ , $q_4$, and $q_5$ are fixed at their positions as shown in below figure. S is a Gaussian surface. The Gauss's law is given by
$\oint \mathbf{E}.d\mathbf{s}=\frac {q}{\epsilon_0}$
Which of the following statements is correct?
(a) E on the LHS of the above equation will have a contribution from $q_1$, $q_5$ and $q_3$ while q on the RHS will have a contribution from $q_2$ and $q_4$ only.
(b) E on the LHS of the above equation will have a contribution from all charges while q on the RHS will have a contribution from $q_2$ and $q_4$ only.
(c) E on the LHS of the above equation will have a contribution from all charges while q on the RHS will have a contribution from $q_1$, $q_3$ and $q_5$ only.
(d) Both E on the LHS and q on the RHS will have contributions from $q_2$ and $q_4$ only
The electric field intensity at a point situated 4 meters from a point charge is 100 N/C. If the distance is reduced to 1 meter, the field intensity will be
(a) 400 N/C
(b) 600 N/C
(c) 800 N/C
(d) 1600 N/C
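A short worked step for reference, using only the inverse-square dependence of a point charge's field:
$E' = E \left(\frac{r}{r'}\right)^2 = 100 \times \left(\frac{4}{1}\right)^2 = 1600$ N/C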
Three positive charges of equal value q are placed at the vertices of an equilateral triangle. The resulting lines of force should be sketched as in
The force between two charges is 120 N. If the distance between the charges is doubled, the force will be
(a) 60 N
(b) 30 N
(c) 40 N
(d) 15 N
The lines of force due to charged particles are
(a) always straight
(b) always curved
(c) sometimes curved
An arbitrary surface encloses a dipole. What is the electric flux through this surface?
a. Zero
b. $\frac {2q}{\epsilon_0}$
c. $\frac {q}{\epsilon_0}$
d. Cannot say with the given data
SI unit of electrical permittivity of Space
(a) $C^2/Nm^2$
(b) $C/Nm^2$
(c) $C^2/Nm$
(d) $C^2/N^2m$
Dimensional Formula for electrical permittivity of Space
(a) $[M^{-1}L^{-3}T^{4}A^2]$
(b)$[M^{-1}L^{-3}T^{4}A^1]$
(c)$[M^{-1}L^{-2}T^{2}A^2]$
(d) $[M^{-1}L^{-3}T^{2}A^2]$
Which of these is not a vector quantity?
a. Electric Field
b. Electric Dipole Moment
c. Electric Flux
d. Force
Dimensional Formula for electric Field
(d) $[M^{1}L^{1}T^{-3}A^{-1}]$
Two point charges placed at a distance R in air exert a force F on each other. At what distance will these experience the same force F in a medium of dielectric constant K?
(a) $\frac {R}{K}$
(b) $\frac {R}{K^2}$
(c) $\frac {R}{\sqrt {K}}$
(d) $R \sqrt {K}$
Let F is the force between two equal point charges at some distance. If the distance between them is doubled and individual charges are also doubled, what will be force acting between the charges?
(a) F
(b) 2F
(c) 4F
(d) F/2
If $\oint \mathbf{E}.d\mathbf{s}=0$ over a surface, then
(a) the electric field inside the surface and on it is zero.
(b) the electric field inside the surface is necessarily uniform.
(c) the number of flux lines entering the surface must be equal to the number of flux lines leaving it.
(d) all charges must necessarily be outside the surface.
The electric field due to a uniformly charged sphere of radius R as a function of the distance from its centre is represented graphically by
Which of the following graphs shows the variation of electric field E due to a hollow spherical conductor of radius R as a function of distance from the centre of the sphere
6.(b)
7.(d)
10.(b)
11. (a)
14. (c)
15. (d)
18. (c) and (d)
A charge q is placed at the centre of the line joining two equal charges Q. Show that the system of three charges will be in equilibrium if $4q + Q = 0$
It is clear that the net force on the charge q is zero. So, for the system of three charges to be in equilibrium, the net force on the charge Q at either end should be zero.
Let 2a be the distance between the two charges Q; then, setting the net force on one of them to zero,
$ \frac {1}{4\pi \epsilon_0} \cdot \frac {Q^2}{4a^2} + \frac {1}{4\pi \epsilon_0} \cdot \frac {Qq}{a^2} =0$
or $4q + Q=0$
A sphere $A_1$ of radius $R_1$ encloses a charge 2Q. There is another concentric sphere $A_2$ of radius $R_2$ ($R_2 > R_1$), and there are no additional charges between $A_1$ and $A_2$. Find the ratio of the electric flux through $A_1$ and $A_2$.
From Gauss's Theorem
Electric Flux through $A_1$
$\phi_1 = \frac {2Q}{\epsilon_0}$
Electric flux through $A_2$ is the same, $\phi_2 = \frac {2Q}{\epsilon_0}$, since no additional charge lies between $A_1$ and $A_2$.
Ratio of Flux
$\frac {\phi_1}{\phi_2}$ =1 :1
Given a uniformly charged plane sheet of surface charge density $\sigma = 2 \times 10^{17}$ C/m$^2$
(i) Find the electric field intensity at a point A, 5mm away from the sheet on the left side.
(ii) Given a straight line with three points X, Y & Z placed 50 cm away from the charged sheet on the right side. At which of these points, the field due to the sheet remain the same as that of point A and why?
(i) at A, $E = \frac {\sigma}{2\epsilon_0}$
Substituting these values
$E = 1.1 \times 10^{28}$ N/C
Directed away from the sheet
(ii) Point Y
Because point Y lies opposite the middle region of the sheet, where the sheet can still be treated as an infinite plane; the field of such a sheet is independent of distance, so the magnitude at 50 cm remains the same as at point A.
A point charge +2Q is placed at the centre O of an uncharged hollow spherical conductor of inner radius $p$ and outer radius $q$. Find the following:
(a) The magnitude and sign of the charge induced on the inner and outer surface of the conducting shell.
(b) The magnitude of electric field vector at a distance (i) r =p/2 and (ii) r = 2q, from the centre of the shell.
(a) As the electrostatic field inside a conductor is zero, using Gauss's law,
charge on the inner surface of the shell = -2Q
Charge on the outer surface of the shell = +2Q
(i) From Gauss's law, the electric field at radius $r = p/2$ (Gaussian surface area $4\pi (p/2)^2 = \pi p^2$) satisfies
$ {E} \times {\pi p^2}= \frac {2Q}{\epsilon_0}$
or $E=\frac {2Q}{\pi \epsilon_0 p^2}$
From Gauss's law expression
Expression for electric field for radius, r=2q
$ {E} \times {16\pi q^2}= \frac {2Q}{\epsilon_0}$
or $E=\frac {Q}{8\pi \epsilon_0 q^2}$
Two charges $\pm 10 \mu C$ are placed 5.00 mm apart. Determine the electric field at
(a) A point P on this axis of the dipole 15 cm away from its center O on the side of the positive charge
(b) A point Q , 15 cm away from O on a line passing through O and normal to the axis of the dipole
Here $q=10\mu C=10^{-5}$ C, $2a= 5mm= 5 \times 10^{-3}$m
a. Field at a axial point P of the dipole is given by
$E_p= \frac {2p}{4\pi \epsilon_0 r^3}= \frac {2 \times q \times 2a}{4\pi \epsilon_0 r^3}$
Substituting the values,we get
$E_p= 2.6 \times 10^5$ N/C
The direction of the electric field from -q to +q as shown below
b. Field at a equatorial point Q of the dipole is
$E_Q= \frac {p}{4\pi \epsilon_0 r^3}= \frac { q \times 2a}{4\pi \epsilon_0 r^3}$
$E_Q= 1.33 \times 10^5$ N/C
The direction of the electric field from +q to -q as shown below
A spherical liquid drop of radius r has charge q. If n such drops coalesce to form a single bigger drop, then on the surface of the bigger drop what is
(1) charge
(2) charge density
(3) electric field
(4) potential
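A brief solution sketch for reference (assuming the drops are spheres and that total charge and total volume are conserved when they coalesce): the radius of the bigger drop is $R = n^{1/3} r$, so
(1) Charge: $Q = nq$
(2) Surface charge density: $\sigma' = \frac {nq}{4\pi (n^{1/3} r)^2} = n^{1/3} \sigma$
(3) Electric field at the surface: $E' = \frac {1}{4\pi \epsilon_0} \frac {nq}{(n^{1/3} r)^2} = n^{1/3} E$
(4) Potential at the surface: $V' = \frac {1}{4\pi \epsilon_0} \frac {nq}{n^{1/3} r} = n^{2/3} V$
where $\sigma$, $E$ and $V$ are the corresponding quantities for a single small drop.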
Five charges of equal amount (q) are placed at five corners of a regular hexagon of side 20 cm. What should be the value of the sixth charge placed at the sixth corner of the hexagon so that the electric field at the centre of the hexagon is zero?
An electric charge q is placed at one of the corners of a cube of side $a$. What will be the electric flux through one of its faces?
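A short reasoning sketch for reference (the standard symmetry argument): place the charge at the common corner of eight identical cubes, so that each cube intercepts $\frac {q}{8\epsilon_0}$ of the total flux. The three faces of the given cube that touch the charge receive zero flux (the field is parallel to them), so the remaining flux is shared equally by the other three faces:
$\phi_{face} = \frac {1}{3} \times \frac {q}{8\epsilon_0} = \frac {q}{24\epsilon_0}$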
A charge Q is divided in two parts q and Q - q separated by a distance R. If force between the two charges is maximum, find the relationship between q & Q.
$F=\frac {K q(Q-q)}{r^2}$
For maximum force, $dF/dq = 0$, which gives $q = Q/2$.
A thin stationary ring of radius 1 m has a positive charge of $1 \times 10^{-5}$ C uniformly distributed over it. A particle of mass 0.9 g and having a negative charge of $1 \times 10^{-6}$ C is placed on the axis at a distance of 1 cm from the centre of the ring. Show that the motion of the negatively charged particle is approximately simple harmonic. Calculate the time period of oscillation.
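A solution sketch for reference (valid while the displacement $x$ stays small compared with the ring radius $R = 1$ m): on the axis, the field of the ring is $E = \frac {1}{4\pi \epsilon_0} \frac {Qx}{(R^2 + x^2)^{3/2}} \approx \frac {1}{4\pi \epsilon_0} \frac {Qx}{R^3}$ for $x \ll R$, so the force on the negative charge $q$ is restoring and proportional to $x$, i.e. the motion is approximately simple harmonic with
$\omega^2 = \frac {1}{4\pi \epsilon_0} \frac {Qq}{m R^3} = \frac {9 \times 10^9 \times 10^{-5} \times 10^{-6}}{0.9 \times 10^{-3} \times 1^3} = 100$
Hence $\omega = 10$ rad/s and $T = \frac {2\pi}{\omega} \approx 0.63$ s.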
Short Answer type questions
A certain region has spherical symmetry of electric field. Name the charge distribution producing such a field.
Represent graphically the variation of electric field with distance, for a uniformly charged sphere.
How will the radius of a flexible ring change if it is given positive charge?
Electrostatics Important Questions for Class 12
Electric Charge , Basic properties of electric charge and Frictional Electricity
Electrical and electrostatic force
Coulomb's Law (with vector form)
Principle Of Superposition
Electric Field & Calculation of Electric Field
Electric Field Due to Line Charge
Electric Field Lines
Electric Dipole and Dipole Moment
Electric Dipole Field
Electrostatics Multiple Choice Questions
Electrostatics Problems
Electrostatics Questions and answers
Danchev, Peter
Isomorphism of commutative group algebras of $p$-mixed splitting groups over rings of characteristic zero. (English). Mathematica Bohemica, vol. 131 (2006), issue 1, pp. 85-93
MSC: 16S34, 16U60, 20C07, 20K10, 20K21 | MR 2211005 | Zbl 1111.20007 | DOI: 10.21136/MB.2006.134084
group algebras; isomorphisms; $p$-mixed splitting groups; rings with zero characteristic
Suppose $G$ is a $p$-mixed splitting abelian group and $R$ is a commutative unitary ring of zero characteristic such that the prime number $p$ satisfies $p\notin \mathop {\text{inv}}(R) \cup \mathop {\text{zd}}(R)$. Then $R(H)$ and $R(G)$ are canonically isomorphic $R$-group algebras for any group $H$ precisely when $H$ and $G$ are isomorphic groups. This statement strengthens results due to W. May published in J. Algebra (1976) and to W. Ullery published in Commun. Algebra (1986), Rocky Mt. J. Math. (1992) and Comment. Math. Univ. Carol. (1995).
[1] D. Beers, F. Richman, E. A. Walker: Group algebras of abelian groups. Rend. Sem. Mat. Univ. Padova 69 (1983), 41–50. MR 0716984
[2] P. V. Danchev: Isomorphic semisimple group algebras. C. R. Acad. Bulg. Sci. 53 (2000), 13–14. MR 1779521 | Zbl 0964.20001
[3] P. V. Danchev: A new simple proof of the W. May's claim: $FG$ determines $G/G_0$. Riv. Mat. Univ. Parma 1 (2002), 69–71. MR 1951976 | Zbl 1019.20002
[4] P. V. Danchev: A note on isomorphic commutative group algebras over certain rings. An. St. Univ. Ovidius Constanta 13 (2005), 69–74. MR 2230861 | Zbl 1113.20005
[5] J. M. Irwin, S. A. Khabbaz, G. Rayna: The role of the tensor product in the splitting of abelian groups. J. Algebra 14 (1970), 423–442. DOI 10.1016/0021-8693(70)90093-1 | MR 0255675
[6] G. Karpilovsky: On some properties of group rings. J. Austral. Math. Soc., Ser. A 29 (1980), 385–392. DOI 10.1017/S1446788700021534 | MR 0578697 | Zbl 0432.16007
[7] W. L. May: Commutative group algebras. Trans. Amer. Math. Soc. 136 (1969), 139–149. DOI 10.1090/S0002-9947-1969-0233903-9 | MR 0233903 | Zbl 0182.04401
[8] W. L. May: Invariants for commutative group algebras, Ill. J. Math. 15 (1971), 525–531. DOI 10.1215/ijm/1256052619 | MR 0286903
[9] W. L. May: Group algebras over finitely generated rings. J. Algebra 39 (1976), 483–511. DOI 10.1016/0021-8693(76)90049-1 | MR 0399232 | Zbl 0328.16012
[10] W. L. May: Isomorphism of group algebras. J. Algebra 40 (1976), 10–18. DOI 10.1016/0021-8693(76)90083-1 | MR 0414618 | Zbl 0329.20002
[11] W. D. Ullery: Isomorphism of group algebras. Commun. Algebra 14 (1986), 767–785. DOI 10.1080/00927878608823334 | MR 0834462 | Zbl 0587.16011
[12] W. D. Ullery: On isomorphism of group algebras of torsion abelian groups. Rocky Mt. J. Math. 22 (1992), 1111–1122. DOI 10.1216/rmjm/1181072715 | MR 1183707 | Zbl 0773.16008
[13] W. D. Ullery: A note on group algebras of $p$-primary abelian groups. Comment. Math. Univ. Carol. 36 (1995), 11–14. MR 1334408 | Zbl 0828.20005
The expansion entropy
By Gianluigi Filippelli on Wednesday, September 09, 2015
Put simply: the expansion entropy is a new way to calculate the entropy of a given system.
Expansion entropy uses the linearization of the dynamical system and a notion of a volume on its state space
From a mathematical point of view, we can describe the evolution of a given system $M$ using a map (a function) that acts on the system itself: $f: M \rightarrow M$. Every map $f$ depends on time, which can be discrete or continuous.
Using these maps we can construct the so-called derivative matrix $Df$, which consists of the partial derivatives of $f$ with respect to the coordinates of the $n$-dimensional space $M$.
At this point, using $Df$, we can calculate the function $G(Df)$, which is
a local volume growth ratio for the (typically nonlinear) $f$.
or, in other words, a way to measure how volumes in $M$ grow in time.
Now $G(Df)$ is integrated over the region of interest $S$ and renormalized by its volume, and the resulting quantity $E_{t',t}(f, S)$ is used to define the expansion entropy: \[H_0 (f, S) = \lim_{t' \rightarrow \infty} \frac{\ln E_{t', t} (f, S)}{t'-t}\] where $t'$ is the final time and $t$ is the initial time.
In this way the expansion entropy measures the disorder of the system, like the topological entropy does, but with the expansion entropy we can define chaos by the condition $H_0 > 0$.
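To make this concrete, here is a rough Monte Carlo sketch in Python (mine, not from the paper) that estimates $H_0$ for a one-dimensional map; the choice of the logistic map, the restraining region $S = [0, 1]$ and the numerical parameters are all illustrative assumptions. In one dimension $G(Df)$ reduces to $\max(|(f^{n})'(x)|, 1)$.

import numpy as np

def f(x):
    return 4.0 * x * (1.0 - x)   # logistic map at r = 4 (topological entropy ln 2)

def fprime(x):
    return 4.0 - 8.0 * x

rng = np.random.default_rng(0)
n_samples, n_steps = 200_000, 20
x = rng.uniform(0.0, 1.0, n_samples)   # uniform sampling of the restraining region S
growth = np.ones(n_samples)
for _ in range(n_steps):
    growth *= np.abs(fprime(x))        # chain rule: derivative of the n-fold composition
    x = f(x)
G = np.maximum(growth, 1.0)            # only expansion contributes to G(Df)
E = G.mean()                           # volume-renormalized integral of G over S
H0 = np.log(E) / n_steps               # finite-time estimate of the expansion entropy
print(H0, np.log(2))                   # the estimate should come out close to ln 2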
Hunt, B., & Ott, E. (2015). Defining chaos. Chaos: An Interdisciplinary Journal of Nonlinear Science, 25 (9). DOI: 10.1063/1.4922973 (arXiv)
Labels: entropy, mathematics, physics
CommonCrawl
|
Games with the computable-play paradox
Posted on March 28, 2017 by Joel David Hamkins
Let me tell you about a fascinating paradox arising in certain infinitary two-player games of perfect information. The paradox, namely, is that there are games for which our judgement of who has a winning strategy or not depends on whether we insist that the players play according to a deterministic computable procedure. In the space of computable play for these games, one player has a winning strategy, but in the full space of all legal play, the other player can ensure a win.
The fundamental theorem of finite games, proved in 1913 by Zermelo, is the assertion that in every finite two-player game of perfect information — finite in the sense that every play of the game ends in finitely many moves — one of the players has a winning strategy. This is generalized to the case of open games, games where every win for one of the players occurs at a finite stage, by the Gale-Stewart theorem 1953, which asserts that in every open game, one of the players has a winning strategy. Both of these theorems are easily adapted to the case of games with draws, where the conclusion is that one of the players has a winning strategy or both players have draw-or-better strategies.
Let us consider games with a computable game tree, so that we can compute whether or not a move is legal. Let us say that such a game is computably paradoxical, if our judgement of who has a winning strategy depends on whether we restrict to computable play or not. So for example, perhaps one player has a winning strategy in the space of all legal play, but the other player has a computable strategy defeating all computable strategies of the opponent. Or perhaps one player has a draw-or-better strategy in the space of all play, but the other player has a computable strategy defeating computable play.
Examples of paradoxical games occur in infinite chess. We described such a paradoxical position in my paper Transfinite games values in infinite chess by giving a computable infinite chess position with the property that both players had drawing strategies in the space of all possible legal play, but in the space of computable play, then white had a computable strategy defeating any particular computable strategy for black.
For a related non-chess example, let $T$ be a computable subtree of $2^{<\omega}$ having no computable infinite branch, and consider the game in which black simply climbs in this tree as white watches, with black losing whenever he is trapped in a terminal node, but winning if he should climb infinitely. This game is open for white, since if white wins, this is known at a finite stage of play. In the space of all possible play, Black has a winning strategy, which is simply to climb the tree along an infinite branch, which exists by König's lemma. But there is no computable strategy to find such a branch, by the assumption on the tree, and so when black plays computably, white will inevitably win.
For another example, suppose that we have a computable linear order $\lhd$ on the natural numbers $\newcommand\N{\mathbb{N}}\N$, which is not a well order, but which has no computable infinite descending sequence. It is a nice exercise in computable model theory to show that such an order exists. Consider the count-down game in this order, with white trying to build a descending sequence and black watching. In the space of all play, white can succeed and therefore has a winning strategy, but since there is no computable descending sequence, white can have no computable winning strategy, and so black will win every computable play.
There are several proofs of open determinacy (and see my MathOverflow post outlining four different proofs of the fundamental theorem of finite games), but one of my favorite proofs of open determinacy uses the concept of transfinite game values, assigning an ordinal to some of the positions in the game tree. Suppose we have an open game between Alice and Bob, where the game is open for Alice. The ordinal values we define for positions in the game tree will measure in a sense the distance Alice is away from winning. Namely, her already-won positions have value $0$, and if it is Alice's turn to play from a position $p$, then the value of $p$ is $\alpha+1$, if $\alpha$ is minimal such that she can play to a position of value $\alpha$; if it is Bob's turn to play from $p$, and all the positions to which he can play have value, then the value of $p$ is the supremum of these values. Some positions may be left without value, and we can think of those positions as having value $\infty$, larger than any ordinal. The thing to notice is that if a position has a value, then Alice can always make it go down, and Bob cannot make it go up. So the value-reducing strategy is a winning strategy for Alice, from any position with value, while the value-maintaining strategy is winning for Bob, from any position without a value (maintaining value $\infty$). So the game is determined, depending on whether the initial position has value or not.
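To see the backward-induction flavour of this in miniature, here is a small Python sketch (mine, not from the post) that computes these values on a finite game tree, where the values are just natural numbers; the position encoding and the convention that a dead end with Alice to move counts as a win for her are illustrative assumptions.

def value(pos):
    # pos = (player_to_move, list_of_child_positions); "A" is Alice, the open player.
    player, children = pos
    if not children:
        # Toy convention: a terminal position with Alice to move counts as already won
        # by Alice (value 0); with Bob to move it has no value (Bob wins that play).
        return 0 if player == "A" else None
    vals = [value(c) for c in children]
    if player == "A":
        finite = [v for v in vals if v is not None]
        # Alice's value is one more than the least value she can move to.
        return min(finite) + 1 if finite else None
    # Bob's position has a value only if every move keeps a value; then take the supremum.
    return None if any(v is None for v in vals) else max(vals)

leaf_won = ("A", [])   # Alice has already won here
leaf_trap = ("B", [])  # no value: Bob wins any play reaching this position
root = ("A", [("B", [leaf_won]), ("B", [leaf_trap])])
print(value(root))     # 1: Alice plays to the value-0 position on the left and wins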
What is the computable analogue of the ordinal-game-value analysis in the computably paradoxical games? If a game is open for Alice and she has a computable strategy defeating all computable opposing strategies for Bob, but Bob has a non-computable winning strategy, then it cannot be that we can somehow assign computable ordinals to the positions for Alice and have her play the value-reducing strategy, since if those values were actual ordinals, then this would be a full honest winning strategy, even against non-computable play.
Nevertheless, I claim that the ordinal-game-value analysis does admit a computable analogue, in the following theorem. This came out of a discussion I had recently with Noah Schweber during his recent visit to the CUNY Graduate Center and Russell Miller. Let us define that a computable open game is an open game whose game tree is computable, so that we can tell whether a given move is legal from a given position (this is a bit weaker than being able to compute the entire set of possible moves from a position, even when this is finite). And let us define that an effective ordinal is a computable relation $\lhd$ on $\N$, for which there is no computable infinite descending sequence. Every computable ordinal is also an effective ordinal, but as we mentioned earlier, there are non-well-ordered effective ordinals. Let us call them computable pseudo-ordinals.
Theorem. The following are equivalent for any computable game, open for White.
White has a computable strategy defeating any computable play by Black.
There is an effective game-value assignment for white into an effective ordinal $\lhd$, giving the initial position a value. That is, there is a computable assignment of some positions of the game, including the first position, to values in the field of $\lhd$, such that from any valued position with White to play, she can play so as to reduce value, and with Black to play, he cannot increase the value.
Proof. ($2\to 1$) Given the computable values into an effective ordinal, then the value-reducing strategy for White is a computable strategy. If Black plays computably, then together they compute a descending sequence in the $\lhd$ order. Since there is no computable infinite descending sequence, it must be that the values hit zero and the game ends with a win for White. So White has a computable strategy defeating any computable play by Black.
($1\to 2$) Conversely, suppose that White has a computable strategy $\sigma$ defeating any computable play by Black. Let $\tau$ be the subtree of the game tree arising by insisting that White follow the strategy $\sigma$, and view this as a tree on $\N$, a subtree of $\N^{<\omega}$. Imagine the tree growing downwards, and let $\lhd$ be the Kleene-Brouwer order on this tree, which is the lexical order on incompatible positions, and otherwise longer positions are lower. This is a computable linear order on the tree. Since $\sigma$ is computably winning for White, the open player, it follows that every computable descending sequence in $\tau$ eventually reaches a terminal node. From this, it follows that there is no computable infinite descending sequence with respect to $\lhd$, and so this is an effective ordinal. We may now map every node in $\tau$, which includes the initial node, to itself in the $\lhd$ order. This is a game-value assignment, since on White's turn, the value goes down, and it doesn't go up on Black's turn. QED
Corollary. A computable open game is computably paradoxical if and only if it admits an effective game value assignment for the open player, but only with computable pseudo-ordinals and not with computable ordinals.
Proof. If there is an effective game value assignment for the open player, then the value-reducing strategy arising from that assignment is a computable strategy defeating any computable strategy for the opponent. Conversely, if the game is paradoxical, there can be no such ordinal-assignment where the values are actually well-ordered, or else that strategy would work against all play by the opponent. QED
Let me make a few additional observations about these paradoxical games.
Theorem. In any open game, if the closed player has a strategy defeating all computable opposing strategies, then in fact this is a winning strategy also against non-computable play.
Proof. If the closed player has a strategy $\sigma$ defeating all computable strategies of the opponent, then in fact it defeats all strategies of the opponent, since any winning play by the open player against $\sigma$ wins in finitely many moves, and therefore there is a computable strategy giving rise to the same play. QED
Corollary. If an open game is computably paradoxical, it must be the open player who wins in the space of computable play and the closed player who wins in the space of all play.
Proof. The theorem shows that if the closed player wins in the space of computable play, then that player in fact wins in the space of all play. QED
Corollary. There are no computably paradoxical clopen games.
Proof. If the game is clopen, then both players are closed, but we just argued that any computable strategy for a closed player winning against all computable play is also winning against all play. QED
This entry was posted in Exposition and tagged axiom of determinacy, computability, game values, infinite games, Noah Schweber, open games by Joel David Hamkins. Bookmark the permalink.
8 thoughts on "Games with the computable-play paradox"
Joel David Hamkins on March 28, 2017 at 4:44 pm said:
The painting is by Sofonisba Anguissola, 'The first great woman artist of the renaissance,' of her sisters playing chess (painted 1555). I noticed that the color of the squares on the board is the opposite from what is usually used in modern games (light on the right), and I wonder whether this convention was less used at that time or whether this was an imaginary scene or what. I posted a question about this on chess.stackexchange: http://chess.stackexchange.com/q/17104/3454.
Erin Carmody on March 29, 2017 at 6:02 pm said:
Very interesting post! And, I really love that painting!
Asaf Karagila on March 30, 2017 at 5:59 am said:
This is very interesting, no doubt. But I wouldn't go as far as calling this a paradox. If you change the rules, you can change who's winning. I do agree this is somewhat surprising. But not "Banach–Tarski level of surprising".
Joel David Hamkins on March 30, 2017 at 9:57 am said:
I wouldn't say the rules of the game have changed, since it is the same game tree in both cases. What has changed is the space of allowed strategies. It would be similar to having a game where one player had a winning strategy in one model of set theory and the other player in another model of set theory. There are limitations on that phenomenon, in light of Shoenfield absoluteness (since existence of strategies is essentially $\Sigma^1_2$, unless the payoff set is complicated), but I would still call this situation paradoxical.
Asaf Karagila on April 1, 2017 at 4:39 pm said:
How is this more paradoxical than anything else which we can change between models of set theory?
Joel David Hamkins on April 4, 2017 at 9:50 am said:
It seems that you want to use the word paradox only for extreme cases. That is fine, but then I wonder why you would call the Banach-Tarski paradox a paradox or the Russell paradox.
My perspective is that we use the word paradox only when a result goes against what might be naive intuitions on the topic. Most of the so-called paradoxes in set theory (Banach-Tarski, Russell, Burali-Forti) are not paradoxical or even confusing to someone with a more sophisticated understanding of the topic. The paradoxes evaporate with a more informed understanding. (One exception might be the liar paradox, whose confounding nature seems to be more enduring.) For this reason, I prefer a more relaxed use of the word 'paradox'.
In the instance of my post, I find the result paradoxical in that someone with a naive understanding of game theory might not expect that it would matter for the purpose of having a winning strategy, whether one insisted on computable play or not, especially since the game tree is computable. Indeed, I find the fact that there is a computable tree with no computable branch to itself be similarly paradoxical.
Similarly, I do find it paradoxical that CH can be true in some models of set theory and false in others, especially when those models have the same real numbers.
Asaf Karagila on April 4, 2017 at 10:52 am said:
Well, that's just taking my words to the extreme. I certainly agree that a paradox can be any result which defies naive intuition.
But the problem is that naive intuition is not something concrete. The Banach–Tarski Paradox is indeed paradoxical to the naive intuition, since the naive intuition certainly does not expect it. And only a handful of people might disagree with that naively.
Russell's paradox is also a bit paradoxical, but mostly because we usually present comprehension as "the naively correct intuition" (which may have been true back in the 19th century, but I argue that 120 years later, our intuitive sensitivities in logic may have already got to the place where this naive intuition is no longer that common).
But in this case? I just don't know if I agree. Maybe because I'm a set theorist and I'm more used to the whole "move to a different universe" thing. Maybe it's just my philosophical views that give me that point of view. But I am certainly someone not versed in game theory at all.
As far as CH goes, though, here I think that you're just missing the point. CH is not about reals, it's about sets of reals. Different models with the same reals which disagree about CH, will invariably disagree about the sets of reals as well.
Warren D Smith on July 21, 2017 at 11:09 am said:
How about this game: each turn, each player names an integer. If black always names a busy beaver number he never named before, then black wins. Otherwise white wins. Obviously, black has a forced win but not if he has to use computable strategies.
Also note: This game can be made to have finite length, in the sense that Yedidia and Aaronson recently proved that a Turing machine with some exact number of states (I forget their number, but it was about 8000; they constructed the machine fully explicitly) cannot be proved in ZFC not to halt; and it halts if and only if ZF is inconsistent. Which tells us that BusyBeaver(n) is uncomputable for every n>8000. Which means black can force a win in a finite length version of this game, e.g. with 10000 turns — but only using uncomputable strategies.
Health Economics Review
December 2019, 9:38
Buying efficiency: optimal hospital payment in the presence of double upcoding
Simon B. Spika
Peter Zweifel
First Online: 28 December 2019
With DRG payments, hospitals can game the system by 'upcoding' the patient's true severity of illness. This paper takes into account that upcoding can be performed by the chief physician and by hospital management, with the extent of the distortion depending on the hospital's internal decision-making process. The internal decision making can be of the principal-agent type, with the management as the principal and the chief physician as the agent, but the chief physician may be able to engage in negotiations with management, resulting in a bargaining solution.
In case of the principal-agent mechanism, the distortion due to upcoding is shown to accumulate, whereas in the bargaining case it is avoided at the level of the chief physician.
In the presence of upcoding it may be appropriate for the sponsor to design a payment system that fosters bargaining to avoid additional distortions even if this requires extra funding.
Keywords: Hospital organization; Upcoding; Hierarchical principal-agent model; Nash bargaining model; Distribution of power
List of symbols
$B$: Unweighted patient's benefit
Subscript denoting the case of efficient hospital's reporting
$h(\theta)$: Inverse hazard rate
$P$: Objective function of M
PA: Principal-agent
$q$: Service quantity / treatment cost
$q^{E}$: External cost target
$q^{I}$: Internal cost target
CP's information rent
Joint surplus of M and CP
$t^{I}$: Internal transfer
$t^{E}$: External transfer
$\theta$: Case severity
$\theta^{E}$: Hospital's external report of case severity
$\theta^{I}$: CP's internal report of case severity
$U$: Objective function of CP
$V$: CP's unweighted valuation of treatment
$W$: Objective function of the sponsor
$w$: CP's relative bargaining power
$\bar{w}$: Minimum of CP's bargaining power necessary for the CP to engage in bargaining
$\bar{w}^{*}$: Minimum of CP's bargaining power necessary for the sponsor to buy efficiency
Ever since the introduction of DRG payment of hospitals, there have been concerns about the truthfulness of their reporting. Because hospitals establish severity of illness, they are suspected by their sponsors of gaming the system by exaggerating true severity in an attempt to optimize revenue, by so-called 'upcoding'. Several empirical findings substantiate this suspicion ([14], [3], [2], [6]).
Upcoding strategies result in reimbursement that is higher than required for efficiency, and the sponsor of hospital services therefore needs information on whether and to what extent upcoding occurs in order to take appropriate countermeasures. Indeed, DRG payment is frequently supplemented by monitoring and sanctions that apply when false or biased reporting is detected. But since monitoring and imposing fines are not without their own cost, the optimal combination of payment, monitoring, and fining becomes an issue ([7]). To address it, however, an analysis of hospitals' reporting strategy is called for.
An important fact is that upcoding can occur at two points along the flow of information in a typical hospital with a central management and several clinical departments ([6]). True severity is only observed by the clinical department which conducts diagnostics and treatment. This information is forwarded to the coding division of management as a medical record. There, diagnoses and treatments are encoded according to standardized classification systems such as ICD-10. The encoded information is fed into a special software that uses an algorithm to assign a DRG to the case. This DRG is reported to the sponsor (the health insurer or the government) who effects payment accordingly. Clearly, a first opportunity for upcoding exists in the clinical departments by overstating the severity of illness as documented in the medical record. One example is understating birth weight because low birth weight indicates high severity (for empirical evidence, see [6]). Another example is prolonging stays in the intensive-care unit without medical necessity. The second upcoding opportunity exists in the management's coding division, which has some leeway in determining the main diagnosis or in the interpretation of medical reports, enabling the encoding of additional diagnoses or treatments. As a consequence, reporting to the sponsor may be the result of an accumulation of distortions at both levels of the hospital's hierarchy.
For upcoding to occur, there must be incentives inducing the individual in charge to misrepresent the severity of illness. Yet with DRG payment in place, such incentives clearly exist for management since a DRG with a higher case-weight increases hospital revenue. As to the clinical department, it may benefit from overstating severity as well, provided net revenue generated drives internal resource allocation (e.g. because a budget target must be achieved). This type of incentive is typical for DRG payment, and it is often intensified by benchmarking mechanisms which use the cost/casemix ratio as a performance indicator.
However, the internal incentives depend on other organizational features of the hospital as well. In particular, departmental involvement in the setting of the formal and informal rules coordinating management and clinical departments is of importance. Typically, there is a formal separation between the allocation and use of resources. The authority to determine the internal resource allocation through budgeting is vested with central management, while chief physicians decide on the use of these resources for treating patients subject to a budget constraint. This mechanism is of the principal-agent type, with management acting as the principal and the head of a clinical department acting as the agent. An important feature of such a mechanism is that both players maximize their own utility, without taking the effect of their actions on the joint surplus into account. Theory predicts that in the presence of asymmetry of information and divergent objectives of the two players, this leads to distortions due to information rents and a solution that usually is not Pareto-efficient ([9] ch. 1).
Commonly, however, a hospital's budgeting process in fact involves its departments. In a first round, central management communicates a target to the department. The department, having detailed information on demand and medical technology, then suggests adjustments of the target. While central management has the final say, this process provides the chief physician directing the department with a measure of influence. After all, chief physicians combine medical and management skills often with a high degree of assertiveness and perseverance in the pursuit of their objectives. The result is a negotiation with management allowing them to reach a higher utility than attainable by simply accepting targets imposed top-down. Hence, this budgeting process in fact suggests a bargaining solution which reflects the chief physician's bargaining power relative to that of the management. In contrast to the principal-agent mechanism, with bargaining the players are more likely take into account how joint surplus is affected by their actions. Thus, with a bargaining mechanism, the internal distortion due to asymmetry of information may be internalized, resulting in a Pareto-optimal outcome.
For the sponsor, the difference between the two types of internal decision-making is relevant. Since in the presence of a principal-agent mechanism the information advantage of the chief physician is likely to result in a distortion away from a Pareto-efficient solution, the incentive on the side of hospital management to exploit its informational advantage over the sponsor by biasing its reports gives rise to an accumulation of distortions. This accumulation is well known in contract theory, where the implications of information processing have been discussed in connection with the delegation of authority in internal hierarchies ([11]). Specifically, [10] show that information passing through multiple levels of a hierarchy may result in information rents accruing at each level of the hierarchy, causing a cumulative loss of efficiency. A bargaining mechanism that avoids this accumulation therefore would be preferable from the sponsor's perspective.
These considerations suggest that a theoretical analysis of hospital payment should not only reflect the internal asymmetry of information regarding the severity of cases but also the decision-making mechanism involving management and chief physician in the specification of the hospital's objective function which governs a hospital's response to the incentives provided by the payment system. [1] do examine the effect of an internal asymmetry of information, but with management and chief physician interacting in a principal-agent relationship only. Management decides on physician payment and high-tech treatment capacity, while the chief physician decides on the number of patients treated using this capacity. True case-mix is only observed by the chief physician, while hospital management can only associate case-mix with a high or low DRG value. At the top of the hierarchy, all the sponsor knows is the probability distribution of the case-mix. The utility-maximizing chief physician converts his/her informational advantage into information rent, causing an additional distortion away from the optimal allocation as seen by the sponsor, compared with a situation of no internal asymmetry of information.
In turn, different decision-making mechanisms are considered by [4]. The authors distinguish between a bargaining and a principal-agent mechanism to find that the internal decision-making mechanism matters for hospital behavior. In particular, management and chief physicians maximize joint surplus in the negotiation alternative regardless of the distribution of bargaining power. In the principal-agent setting, the two players make their decisions simultaneously in a Cournot game, failing to take the implications of their decisions on joint surplus into account. While this seems to speak in favor of the bargaining alternative, Gallizzi and Miraldo show that if case-mix is private information of the hospital and capital cost of high-tech treatment is not excessive, the sponsor fares better with the principal-agent alternative. The reason is that with the regulatory instruments assumed to be at its disposal, the sponsor is able to suppress the hospital's information rent in this case.
The novelty of this paper is to combine features of the contributions by [1] and [4]. Its core objective is to relate a hospital's reporting strategy to the presence of internal asymmetry of information between management and chief physician as well as to the balance of power between these two players and to demonstrate the relevance of both elements for the design of the optimal payment scheme by the sponsor. It adds to the existing literature in three ways. First, it combines internal asymmetry of information and decision making-mechanism in analyzing hospital behavior. Second, it highlights the relevance of the balance of power within a hospital. Third, it provides guidance in the design of a payment system that not only provides optimal incentives for a given internal structure of the hospital but also fosters an internal decision-making mechanism that benefits the sponsor.
The structure of this paper is as follows. In "Methods" section, the model and its assumptions are presented. "Results" section contains the analysis of the model. In "Full internal information" section, the outcomes without internal asymmetry of information are outlined. Next, in "Internal asymmetry of information" section, the two decision-making mechanisms are analyzed for the case of internal asymmetry of information, permitting the chief physician to report truthfully only if this is in her/his interest. "Discussion" section contains a discussion of the model and its implications, "Conclusions" section concludes the paper.
This section is devoted to the specification of the model in terms of patients types, objectives and participation constraints, the flow of information, contracts between the players, the two internal decision making mechanisms and the timeline.
Patients and treatment
Let a sponsor (a government agency or social health insurer) delegate the treatment of a patient with a certain illness to a hospital that has a monopoly in its catchment area. Case severity of the patient is represented by a one-dimensional parameter θ, which is distributed with cumulative distribution function F(θ) on the interval \(\Theta = [ \underline {\theta },\overline {\theta } ]\). This distribution is common knowledge and satisfies the monotone hazard property, i.e. \(h(\theta) = \frac {1-F(\theta)}{f(\theta)}\) is strictly decreasing. Furthermore, convexity is assumed, h′(θ)<0,h″(θ)≥0, which is valid for common distributions like the (symmetrically truncated) normal or the uniform distribution. Medical treatment consists of a single service of quantity q. For simplicity, fixed costs are set to zero and the price for one unit of q is one. Thus, treatment cost equals q. The sponsor observes and reimburses realized treatment cost q.
Objectives and participation constraints
The relevant decision makers in the hospital are the management (M) and a chief physician (CP). M is responsible for financial solvency, while treatment is planned and conducted by the CP according to her or his professional autonomy. CP's total utility per patient treated is given by
$$ U = \theta V(q) + t^{I}. $$
The term θV(q) in CP's objective function captures CP's intrinsic motivation and may be interpreted as CP's valuation of the treatment as such, expressed in monetary units. It is assumed continuous, strictly increasing and concave, reflecting a beneficial effect of treatment with decreasing marginal utility. The severity of illness acts as a multiplier. This can be justified by noting that patients benefit more from treatment when their illness is severe. Alternatively, one may argue that the CP derives utility from treating a patient of high severity, demonstrating her or his skills. The CP further derives utility from an internal transfer per patient treated $t^{I}$, effected by M. This transfer does not affect CP's personal income, which is assumed to be determined exogenously. The transfer $t^{I}$ could be interpreted as an additional budget the CP can use for activities in the department (e.g. to finance the participation in congresses). As to the valuation of $t^{I}$, risk neutrality is assumed, reflecting the fact that the transfer is mainly used for financing fringe benefits accruing to co-workers in the department (who therefore are affected by its variation). Finally, to accept a patient for treatment, the CP must attain a minimum reservation utility. To simplify the analysis, it is assumed that denial of treatment is without further consequences and the CP can do some other work yielding an exogenously given utility equal to zero. Thus, the CP accepts the patient for treatment only if
$$ U\geq 0. $$
Management is assumed to focus exclusively on financial matters. Since the cost q is covered by the sponsor, M's objective is to maximize
$$ P = t^{E} - t^{I}, $$
with P symbolizing profit per patient, which amounts to the difference between an (external) transfer $t^{E}$ received from the sponsor per patient treated and the payment to the CP. In the present context, for M to agree to the treatment of a patient in the hospital, the outcome must result in a non-negative profit, calling for the ex-post participation constraint
$$ P \geq 0. $$
At the top of the hierarchy, the sponsor aims at maximizing patient utility net of expenditure,
$$ W = \theta B(q) - t^{E} - q. $$
Here, W symbolizes welfare per patient and θB(q) the patient's gross benefit scaled according to severity θ, reflecting the assumption that the sponsor prefers a severely ill patient to be treated over a moderately ill one. B(q) is continuous, strictly increasing, and concave.
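As a simple illustration (with an assumed functional form that is not part of the model's general specification): if $B(q) = 2\sqrt{q}$, maximizing $\theta B(q) - q$ over $q$ yields the first-best condition $\theta B'(q) = 1$, hence $q^{FB}(\theta) = \theta^{2}$, so that more severe cases efficiently receive more treatment.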
Flow of information
At the time a patient presents herself at the hospital, the CP conducts a costless first examination to determine severity θ. Hence, actual realizations of θ can only be observed by the CP. Next, the chief physician reports \(\hat {\theta }^{I} \in \Theta \) to M (internal report). On the top of the hierarchy, the sponsor communicates with M only and cannot observe CP's report \(\hat {\theta }^{I}\). Rather, the sponsor only observes a report \(\hat {\theta }^{E} \in \Theta \) made by M (external report), who acts as the intermediary between the sponsor and CP.
The sponsor designs and offers to M a payment scheme which consists of the external transfer to the hospital and a cost target per patient treated, both depending on severity of illness as reported by M, \(\left \{t^{E}(\hat {\theta }^{E}), q^{E}(\hat {\theta }^{E})\right \}\; \forall \hat {\theta }^{E} \in \Theta \).
Within the hospital, a contract defines the internal transfer to the CP and an internal cost target the CP must abide by when treating a patient. The internal transfer and the internal cost target both depend on the CP's report of case severity, \(\{t^{I}(\hat {\theta }^{I}), q^{I}(\hat {\theta }^{I})\}\; \forall \hat {\theta }^{I} \in \Theta \). However, since the external cost target \(q^{E}(\hat {\theta }^{E})\) must be met by the hospital, the external report \(\hat {\theta }^{E}\) implicitly defines the internal cost target. Therefore, the internal contract is equivalent to a contract defining the internal transfer and M's reporting strategy depending on the CP's internal report, i.e. \(\{t^{I}(\hat {\theta }^{I}), \hat {\theta }^{E}(\hat {\theta }^{I})\}\; \forall \hat {\theta }^{I} \in \Theta \).
Internal decision-making mechanisms
The hospital's formal statutes define internal decision making with M having the authority to set the initial offer \(\{t^{I}(\hat {\theta }^{I}), \hat {\theta }^{E}(\hat {\theta }^{I})\}\). This mechanism is of the principal-agent (PA) type, with M acting as the principal and the CP as the agent. However, by rejecting the initial offer, the CP can initiate a bargaining process involving M and CP that results in the contract \(\{t^{I}(\hat {\theta }^{I}), \hat {\theta }^{E}(\hat {\theta }^{I})\}\). In keeping with the standard approach of economic theory, the outcome of this bargaining process is the solution of a Nash bargaining game, in which the CP's share of the surplus increases with her or his bargaining power relative to M. In the following, CP's relative bargaining power is symbolized by w∈(0,1), while that of M is (1−w). In contrast to CP's reservation utility stated in Eq. (2), w captures personal characteristics which determine her/his bargaining behavior, i.e. self-confidence, aggressiveness, and stamina. M's relative bargaining power (1−w) in turn reflects the position of the administration in the power structure of the hospital.
Timeline of the model
The timeline of the model comprises four stages:
1. The sponsor offers the contract {tE(θ),qE(θ)} ∀θ∈Θ to M.
2. M determines the reporting strategy \(\hat{\theta}^{E}(\hat{\theta}^{I})\) and payment \(t^{I}(\hat{\theta}^{I})\) for every possible report of severity by the CP at stage No. 4 and offers this contract to the CP.
3. CP decides whether or not to engage in negotiations. If she/he engages in negotiations, the reporting strategy and payment determined by M at stage No. 2 are replaced by a bargaining solution for \(\hat{\theta}^{E}(\hat{\theta}^{I})\) and \(t^{I}(\hat{\theta}^{I})\).
4. A patient seeks treatment at the hospital. The CP observes case severity θ and decides on treatment. If treatment is denied, the game ends. If the patient is admitted, the CP reports \(\hat{\theta}^{I}\) to M and provides the quantity \(q^{I}(\hat{\theta}^{I})\) of service. M reports \(\hat{\theta}^{E}(\hat{\theta}^{I})\) according to the strategy determined at stage No. 2 (stage No. 3 respectively) and the payments are made.
Full internal information
As a benchmark, assume the CP always reports truthfully (Footnote 7). With the bargaining solution, the CP and M aim at maximizing their respective shares of the expected joint surplus when negotiating the internal transfer and the external report for every case severity in a Nash bargaining process. Note that the joint surplus S(θ)=P(θ)+U(θ) is independent of the internal payment tI(θ) and that the unrestricted Nash bargaining solution is Pareto-efficient (see e.g. [12] ch. 2). Because the bargaining solution is not impaired by distortions due to asymmetric information when the CP always reports truthfully, it entails an external report \(\hat{\theta}^{E}(\theta)\) that maximizes joint surplus for every case severity, independently of CP's bargaining power w,
$$ \max_{\hat{\theta}^{E}}S(\theta) = t^{E}(\hat{\theta}^{E}) + \theta V(q^{E}(\hat{\theta}^{E}))\;\; \forall \theta \in \Theta, $$
while the internal payment tI(θ) is used to split surplus according to the relative bargaining power. In the following, denote the solution to (6) as efficient reporting and the maximized surplus as Seff(θ).
With the PA mechanism, an identical solution results: M seeks to maximize the external transfer \(t^{E}(\hat {\theta }^{E})\) received from the sponsor while keeping the internal transfer tI(θ) as low as possible. This is equivalent to the maximization of joint surplus while keeping the CP's utility at the lowest possible level. As in the case with bargaining, reporting is used to maximize joint surplus, i.e. in the PA setting, external reporting is efficient as well and maximizes (6). Further, since the PA solution must yield at least the same utility for the CP as the bargaining solution (otherwise the PA mechanism is replaced by bargaining), the internal transfer has the same value in the two settings.
For the design of the optimal contract by the sponsor, \(\{t^{E}(\hat {\theta }^{E}), q^{E}(\hat {\theta }^{E})\}\), all relevant information is contained in Eq. (6). However, the sponsor must account for the fact that severity of illness is only observed within the hospital. As noted in the Introduction, payment systems are often combined with monitoring and sanctioning by the sponsor in order to mitigate the effect of information asymmetry. But if these options are not available (as is assumed in this analysis), the revelation principle implies that no contract performs better than a direct mechanism {tE(θ),qE(θ)} which induces truthful reporting, i.e. \(\hat {\theta }^{E}(\theta)= \theta \). To achieve truthful reporting, the hospital's incentive compatibility constraint must be satisfied, which requires surplus to increase with case severity, taking account of Eq. (6). In addition, to ensure that the contract is accepted for every case severity, M's and CP's joint surplus must be non-negative at the lowest level of case severity. With these two constraints, the optimal contract yields a surplus of zero for the hospital in the case of lowest severity, while the optimal cost ceiling qE(θ) achievable for the sponsor is given by
$$ \theta B_{q} (q^{E}(\theta)) + (\theta -h(\theta)) V_{q}(q^{E}(\theta)) = 1. $$
Note that (7) implies that the optimal cost ceiling is strictly increasing with case severity, i.e. \(\frac {dq^{E}(\theta)}{d\theta }>0\). The optimal allocation is characterized by the standard rent extraction-efficiency trade-off (see e.g. [8] ch. 1). Since \(h(\overline {\theta }) = 0\), the cost target and the treatment quantity exhausting the cost ceiling are efficient only for a patient with the highest severity level, for whom marginal benefit equals marginal expenditure. But at the same time, the hospital's undesirable information rent is maximal for such a patient, since incentive compatibility requires the surplus to increase with severity. Conversely, the information rent decreases with severity until it reaches zero at the minimum severity level \(\underline {\theta }\), where surplus equals zero. Finally, the downward distortion of qE(θ) away from its efficient value (driven by the term h(θ)Vq) is most pronounced at low severity, where the inverse hazard rate is high (by assumption, h′(θ)<0), and it vanishes at \(\overline{\theta}\). In the following, the optimal contract without internal asymmetry of information is referred to as \(\left \{t_{eff}^{E}(\theta), q_{eff}^{E}(\theta)\right \}\).
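To make condition (7) concrete, consider an illustrative parametrization (the functional forms are chosen here for convenience and are not part of the original model): \(B(q) = V(q) = 2\sqrt{q}\) and θ uniform on [1,2], so that \(h(\theta) = 2 - \theta\). Condition (7) then reads
$$ \frac{\theta}{\sqrt{q}} + \frac{\theta - (2-\theta)}{\sqrt{q}} = 1 \quad\Longrightarrow\quad q_{eff}^{E}(\theta) = (3\theta - 2)^{2}, $$
which is strictly increasing in θ. In the same parametrization, the surplus-maximizing quantity solves \(\theta B_{q} + \theta V_{q} = 1\) and equals \(4\theta^{2}\); the two coincide only at \(\overline{\theta} = 2\), where \(h(\overline{\theta}) = 0\), in line with the rent extraction-efficiency trade-off described above.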
Internal asymmetry of information
Now, the assumption that the CP always issues a truthful report is relaxed. The game is solved using backward induction starting at stage No. 4.
CP's information rent
At stage No. 4, the CP observes severity of illness θ, decides on whether to treat the patient or not and issues the report \(\hat {\theta }^{I}(\theta)\) to M. Given the internal contract \(\{t^{I}(\theta), \hat {\theta }^{E}(\theta)\}\), the CP's maximization problem at this stage reads
$$ \max_{\hat{\theta}^{I}} U(\theta) = t^{I}(\hat{\theta}^{I}) + \theta V(q^{E}(\hat{\theta}^{E}(\hat{\theta}^{I}))). $$
An exaggerated report by the CP would correspond to the second variant of upcoding (see the Introduction again). [6] point out that upcoding by the clinical department can hardly be detected, justifying the assumption that monitoring is not worthwhile for M. Instead, the focus is on an internal contract that results in the best allocation achievable without monitoring. By the revelation principle, it is sufficient to analyze a direct mechanism that induces the CP to report truthfully. Incentive compatibility can be established by applying the envelope theorem to (8) at \(\hat {\theta }^{I}(\theta) = \theta \),
$$ \dot{U}(\theta) = V(q^{E}(\hat{\theta}^{E}(\theta))) > 0, $$
implying that for a truthful report at stage No. 4, CP's utility needs to increase with case severity. In addition the CP's participation constraint U(θ)≥0 ∀θ∈Θ must be satisfied, which in combination with (9) guarantees the CP a utility amounting at least to
$$ R(\theta) = \int^{\theta}_{\underline{\theta}}V(q^{E}(\hat{\theta}^{E}(\tilde{\theta}))) d\tilde{\theta}, $$
where R(θ) denotes CP's information rent. Since \(R(\underline {\theta })=0\) and \(R(\theta)>0\;\forall \; \theta > \underline {\theta }\), the expected information rent E[R(θ)] is strictly positive. Note that this information rent accrues to the CP independently of her/his bargaining power w.
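Continuing the illustrative parametrization introduced after Eq. (7) (\(B(q) = V(q) = 2\sqrt{q}\), θ uniform on [1,2]) and assuming efficient reporting together with the cost ceiling \(q^{E}(\theta) = (3\theta-2)^{2}\), the rent in (10) evaluates to
$$ R(\theta) = \int_{1}^{\theta} 2\sqrt{q^{E}(\tilde{\theta})}\, d\tilde{\theta} = \int_{1}^{\theta} 2(3\tilde{\theta} - 2)\, d\tilde{\theta} = 3\theta^{2} - 4\theta + 1, $$
which equals zero at \(\underline{\theta} = 1\) and rises to \(R(\overline{\theta}) = 5\) at \(\overline{\theta} = 2\).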
Conclusion 1
In the case of internal asymmetry of information, the CP attains an expected utility at least equal to the expected information rent E[R(θ)]>0.
The bargaining solution
At stage No. 3 the CP decides on whether to accept M's offer or to engage in bargaining. Assume that he/she opts for bargaining. In this case, tI(θ) and \(\hat {\theta }^{E}(\theta)\) are negotiated between the two players for every severity θ, given the contract {tE(θ),qE(θ)} offered by the sponsor to M at stage No. 1. As without internal asymmetry of information, the players aim to maximize their share of expected surplus. The Nash bargaining outcome in terms of optimal payment and reporting is the solution to the maximization problem
$$ \pi(\theta) = \left(E\left[t^{E}(\hat{\theta}^{E}(\theta)) - t^{I}(\theta)\right] \right)^{1-w} \left(E\left[t^{I}(\theta) + \theta V(q^{E}(\hat{\theta}^{E}(\theta)))\right] \right)^{w}, $$
but the solution now must not only satisfy M's and CP's participation constraints, but also induce CP to report true severity in stage No. 4. Thus, by the revelation principle the best attainable solution not only satisfies M's and CP's participation constraints but also CP's incentive compatibility constraint (9) (Footnote 8).
As shown in "CP's information rent" section, the CP's participation and incentive compatibility constraints together require that the solution yield at least a non-negative utility in the guise of an information rent. However, given that the joint surplus exceeds CP's information rent, there is a minimum bargaining power for which the CP's bargained share of surplus exceeds her/his information rent. If this is the case, CP's and M's participation constraints are not binding. This in turn creates leeway for structuring the internal payment tI(θ) such that CP's incentive compatibility constraint is satisfied without impairing the maximization of joint surplus. In that event, efficient reporting \(\hat {\theta }^{E}(\theta)\) is also the solution to the bargaining problem despite internal asymmetry of information. For a proof, see Appendix A, where it is also shown that CP's bargaining power must be equal ore above the threshold value
$$ \bar{w} := \frac{E[R_{eff}]}{E[S_{eff}]}, $$
where E[Reff] denotes the CP's expected information rent with efficient reporting.
Conclusion 2
If CP's bargaining power w is equal to or above \(\bar {w}:= \frac {E[R_{eff}]}{E[S_{eff}]}\), the bargaining solution yields efficient reporting and maximizes joint surplus even in the case of asymmetry of information.
The solution to the PA mechanism
For the analysis of the internal contract designed by M at stage No. 2, assume for the moment that the CP cannot engage in bargaining at the next stage No. 3. Through its choice of the internal payment and the reporting strategy, M seeks to maximize expected profit given the payment system selected by the sponsor,
$$ \max_{t^{I}(\theta), \hat{\theta}^{E}(\theta)} E[P(\theta)] = E[ t^{E}(\hat{\theta}^{E}(\theta))-t^{I}(\theta)], $$
subject to CP's participation and incentive compatibility constraints. Since M dislikes leaving any surplus to the CP, the internal transfer tI(θ) is optimally structured such that the CP just attains the information rent defined in (10), implying that CP's participation constraint binds, hence \(U(\underline {\theta }) =R(\underline {\theta })= 0.\) With these constraints, the optimal reporting strategy is the solution to (see Appendix A)
$$ \begin{aligned}\max_{\hat{\theta}^{E}(\theta)} \tilde{P}(\theta)= t^{E}(\hat{\theta}^{E}) + \theta V(q^{E}(\hat{\theta}^{E}(\theta))) - h(\theta) V(q^{E}(\hat{\theta}^{E}(\theta))) \;\forall \theta \in \Theta. \end{aligned} $$
In contrast to the bargaining case analyzed in the previous section, the binding participation constraint in the case of the PA mechanism causes distortions away from the optimal reporting strategy since M needs to trade off efficiency against the elicitation of CP's information rent. In Eq. (14) the distortions are reflected by \(h(\theta) V(q^{E}(\hat {\theta }^{E}(\theta)))>0\) which can be interpreted as cost incurred by M for 'buying' a truthful report from the CP ([10]). Note that because the reporting strategy is not efficient anymore, CP's information rent now is below the information rent with efficient reporting. In the following, CP's information rent given the PA mechanism is denoted by RPA(θ).
Conclusion 3
In the case of internal asymmetry of information, the PA mechanism yields a distorted hospital reporting strategy.
Equilibrium mechanism with internal asymmetry of information
Given the solutions of the bargaining and the PA mechanisms analyzed in "The bargaining solution" and "The solution to the PA mechanism" sections respectively, which one constitutes the equilibrium mechanism and what does this imply for the hospital's behavior?
It turns out that for \(w \geq \bar {w} = \frac {E[R_{eff}]}{E[S_{eff}]}\), the CP and M both prefer bargaining while for \(w < \bar {w}\), the PA setting is optimal for both players. The intuition is that for \(w \geq \bar {w}\), CP's bargained share of efficient surplus exceeds her/his information rent associated with the PA setting, while for \(w < \bar {w}\), it is not worthwhile for the CP to engage in bargaining, since her/his bargained share is too low. For M on the other hand, the PA contract defined in "The solution to the PA mechanism" section implies the optimal trade-off between efficiency and rent elicitation if \(w < \bar {w}\). If however \(w \geq \bar {w}\) holds, M could possibly dissuade the CP from bargaining by paying a sufficiently high lump sum in addition to the PA contract. But this is not optimal because the payment needed to compensate the CP exceeds M's loss due to bargaining. Therefore, M optimally accepts bargaining if \(w \geq \bar {w}\). The detailed argumentation is provided in Appendix A.
Conclusion 4
In the case of internal asymmetry of information, bargaining occurs if \(w \geq \bar {w}\). The hospital's reporting strategy is efficient and maximizes joint surplus. If on the other hand \(w < \bar {w}\), the PA mechanism prevails, causing the reporting strategy to be distorted and maximizing (14).
Optimal payment system with internal asymmetry of information - Buying efficiency
In view of Conclusion 4, it may be advantageous for the sponsor to make an extra payment to induce bargaining within the hospital; in this way, the sponsor would 'buy efficiency'. This idea is pursued below.
Assume first that the CP accepts the PA contract, while the hospital adopts the reporting strategy defined by (14). If M is to pass on CP's report without manipulation, the incentive compatibility constraint (derived from applying the envelope theorem to (14)),
$$ \dot{\tilde{P}}(\theta) = (1 - h'(\theta)) V(q^{E}(\theta)) > 0, $$
must be satisfied to induce \(\hat {\theta }^{E}(\theta)= \theta \). As derived in Appendix A, the optimal allocation the sponsor can achieve when the PA mechanism is relevant is characterized by a joint surplus for CP and M which increases with severity but is zero at the lowest severity, and a first-order condition for the cost ceiling that reads
$$ \theta B_{q}(q^{E}(\theta)) + \left(\theta - h(\theta)\left(2 - h'(\theta)\right)\right)V_{q}(q^{E}(\theta)) = 1. $$
Denote the optimal payment system to achieve the optimal allocation as \(\{t^{E}_{PA}(\theta), q^{E}_{PA}(\theta)\}\). By juxtaposing (16) with (7), an additional distortion becomes evident. The reason is that in the case of the PA setting combined with internal asymmetry of information, M's report is already distorted, causing the distortions across the two levels of the hierarchy to accumulate. This is to the detriment of the sponsor, who now must pay twice to obtain truthful reports (see [11]). As in the case with no internal asymmetric information, the cost ceiling qE(θ) is increasing and efficient only at the highest severity level. With internal asymmetric information added, the downward distortion becomes more pronounced for all \(\theta <\overline {\theta }\). In sum, the sponsor's trade-off between efficiency and rent extraction deteriorates under the PA setting combined with internal asymmetric information compared with the case without internal information asymmetry.
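The accumulation of distortions can be illustrated numerically. The following Python sketch (with functional forms and parameters assumed purely for illustration, not taken from the paper) solves the first-order conditions (7) and (16) for the cost ceilings and shows that the ceiling under the PA setting lies below the one without internal asymmetry of information, with both coinciding at the highest severity level where \(h(\overline{\theta}) = 0\).

import numpy as np
from scipy.optimize import brentq

# Illustrative parametrization (an assumption for this sketch, not from the paper):
# B(q) = V(q) = 2*sqrt(q), theta uniform on [1, 2], hence h(theta) = 2 - theta and h'(theta) = -1.
def Bq(q):
    return 1.0 / np.sqrt(q)      # marginal patient benefit B'(q)

def Vq(q):
    return 1.0 / np.sqrt(q)      # CP's marginal valuation V'(q)

def h(theta):
    return 2.0 - theta           # inverse hazard rate of U[1, 2]

H_PRIME = -1.0                   # h'(theta) for the uniform distribution

def q_eff(theta):
    # Cost ceiling from condition (7): no internal asymmetry of information.
    foc = lambda q: theta * Bq(q) + (theta - h(theta)) * Vq(q) - 1.0
    return brentq(foc, 1e-9, 1e4)

def q_pa(theta):
    # Cost ceiling from condition (16): PA mechanism with internal asymmetry.
    foc = lambda q: theta * Bq(q) + (theta - h(theta) * (2.0 - H_PRIME)) * Vq(q) - 1.0
    return brentq(foc, 1e-9, 1e4)

# Evaluate only where the wedge in (16) still leaves an interior solution
# (under this parametrization that requires theta > 1.2).
for theta in (1.3, 1.5, 1.8, 2.0):
    print(f"theta = {theta:.1f}:  q_eff = {q_eff(theta):7.3f}   q_PA = {q_pa(theta):7.3f}")

Under this parametrization, q_PA stays below q_eff for all interior severities and the two coincide at \(\overline{\theta} = 2\), mirroring the additional downward distortion discussed above.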
Recall that, in contrast, in the case of the bargaining alternative the hospital's reporting strategy \(\hat{\theta}^{E}(\theta)\) is efficient and the additional distortion is avoided. Therefore, the sponsor essentially prefers bargaining. However, efficiency comes with a price when internal asymmetry of information is relevant. Even though the reporting strategy under bargaining is efficient regardless of internal asymmetry of information, the sponsor cannot simply employ the payment system \(\left \{t_{eff}^{E}(\theta), q_{eff}^{E}(\theta)\right \}\) defined in the "Full internal information" section. This system is designed to leave the hospital just the information rent it can obtain with efficient reporting as surplus, i.e. E[Seff]=E[Reff]. Yet in view of (12), bargaining then does not occur as the equilibrium mechanism, since the required bargaining power w ≥ 1 is impossible.
But the sponsor can 'buy' efficient reporting. Since the threshold \(\bar {w}\) decreases with expected surplus, the sponsor could augment the payment \(t_{eff}^{E}(\theta)\) by a lump-sum payment ε>0 in order to increase joint surplus E[Seff] without affecting CP's information rent E[Reff]. Specifically, the sponsor can increase joint surplus up to a level where the threshold \(\bar {w}\) equals CP's actual bargaining power w. By Proposition 1 in Appendix A, the lump sum needed to achieve \(\bar {w}=w\) must satisfy the condition
$$ \epsilon(w) = \frac{1-w}{w}E[R_{eff}]. $$
The lump sum ε(w) is decreasing and strictly convex in w; it approaches zero as w→1 (CP has all bargaining power) and tends to infinity as w→0 (CP has no bargaining power). On the other hand, because the distortions associated with the PA mechanism are to the detriment of the sponsor, the sponsor's willingness to pay for replacing it is positive. Therefore there exists an ε(w)>0 which makes the sponsor indifferent between the internal PA and the bargaining setting. This value in turn determines a unique threshold for CP's bargaining power \(\bar {w}^{*}\). These considerations lead to the following rule for the choice of the optimal payment system depending on CP's bargaining power,
$$ \begin{aligned} \{t^{E^{*}}(\theta), q^{E^{*}}(\theta)\} = \left\{\begin{array}{ll} \left\{t^{E}_{PA}(\theta), q^{E}_{PA}(\theta)\right\} &\text{if} \qquad w < \bar{w}^{*}\\ \left\{t^{E}_{eff}(\theta)+\epsilon(w), q^{E}_{eff}(\theta)\right\}&\text{if} \qquad w \geq \bar{w}^{*}. \end{array}\right. \end{aligned} $$
If \(w < \bar {w}^{*}\), the sponsor optimally employs the PA alternative \(\left \{t^{E}_{PA}(\theta), q^{E}_{PA}(\theta)\right \}\) to achieve the allocation associated with (16). For \(w \geq \bar {w}^{*}\), however, it is optimal for the sponsor to pay the lump sum ε(w) in addition to the external contract \(\left \{t^{E}_{eff}(\theta), q^{E}_{eff}(\theta) \right \}\) to foster internal bargaining.
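A back-of-the-envelope illustration of (17) (the numbers are assumptions, not estimates):
$$ \epsilon(0.8) = \tfrac{0.2}{0.8}\, E[R_{eff}] = 0.25\, E[R_{eff}], \qquad \epsilon(0.5) = E[R_{eff}], \qquad \epsilon(0.2) = 4\, E[R_{eff}], $$
so the lump sum needed to trigger bargaining grows steeply as the CP's bargaining power declines, consistent with the strict convexity of ε(w).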
Conclusion 5
The additional distortions associated with the PA mechanism are to the detriment of the sponsor. To avoid these distortions, the sponsor can foster bargaining by augmenting the payment \(t_{eff}^{E}(\theta)\) by a lump-sum payment ε(w). Specifically, it is worthwhile for the sponsor to 'buy' efficiency if CP's bargaining power w is equal to or above the threshold \(\bar {w}^{*}\).
Discussion
Upcoding is an issue affecting all hospital payment systems that offer a higher reimbursement for more severe cases. Upcoding can occur at two points along the flow of information in a typical hospital. True case severity is only observed by the clinical department establishing the diagnosis and rendering treatment, giving rise also to internal asymmetry of information to the detriment of hospital management. The clinical department may overstate the severity of illness in its report to management e.g. aiming to benefit from a more generous resource allocation. The second opportunity for upcoding obtains for management vis-à-vis the sponsor, by overstating the case severity reported by the clinical department in an attempt to increase payment by the sponsor. These distortions may accumulate along the flow of information, thwarting the sponsor's quest for efficiency and cost containment.
This paper analyzes a hierarchical model, with the chief physician (CP) at the bottom and the sponsor at the top of the hierarchy and management (M) acting as the intermediary between sponsor and CP. It combines the internal information asymmetry with the decision-making mechanism coordinating CP and M to derive the impact on the hospital's reporting strategy and hence the attainment of the sponsor's objective. Two alternatives are considered: a principal-agent setting where M confronts the CP with a take-it-or-leave-it offer, and Nash bargaining, provided the CP is willing to engage in a negotiation with M. Without internal asymmetry of information, both mechanisms are shown to result in a Pareto-efficient allocation between M and CP and identical hospital reporting strategies. If internal asymmetry of information is present, however, the principal-agent mechanism leads to an accumulation of information rents reaped by CP and M, which causes the sponsor to incur an additional efficiency loss. In contrast, the bargaining alternative has the potential to re-establish Pareto efficiency between CP and M by avoiding these distortions. The condition is that the CP's bargaining power is equal to or above a minimum threshold level, allowing him or her to appropriate a sufficiently high share of the joint surplus to absorb the CP's information rent and implying that his or her participation constraint is not binding.
The model thus demonstrates that in the presence of internal asymmetry of information, the CP's relative bargaining power is an important determinant of the hospital's reported case severity and hence the sponsor's optimal choice of hospital payment. In addition, since the threshold level for CP's bargaining power decreases with CP's and M's joint surplus, it may be appropriate for the sponsor not only to design the optimal payment system in response to the prevailing internal decision-making mechanism, but to pay an extra lump sum designed to encourage the CP to engage in bargaining, thus avoiding distortions induced by the principal-agent mechanism.
This work is subject to several limitations. Most importantly, risk neutrality of both CP and M concerning the internal transfer between them is assumed for simplicity. Given risk neutrality, available surplus can be reallocated between M and CP without affecting its total value, resulting in efficient reporting. Notably, risk aversion on the side of the CP would cause the marginal utility of the internal transfer to decrease, thus affecting the joint surplus in utility terms. However, this consideration does not affect the crucial insights of this analysis. Even with a risk-averse CP, it is still the case that a CP with bargaining power equal to or above a certain threshold obtains a utility from negotiation which exceeds his or her information rent, a sufficient condition to prevent the accumulation of rents.
Usually, the sponsor cannot observe the CPs' relative bargaining power within the hospital. However, ownership could be a pertinent indicator. Privately owned, for-profit hospitals are typically part of a group with clear business objectives to be pursued in the interest of private stakeholders. Here M is likely to be the dominating player endowed with bargaining power exceeding that of the CP, suggesting a principal-agent mechanism is in place. By way of contrast, in public hospitals the balance of power tends to be more in favor of the CP. One reason is diffuse objectives (e.g. provision of sufficient health care, excellence in medical research). Another reason may be that the traditional position of the CP as the 'fixed star' in a hospital's universe is still prominent in a public hospital. Consequently, the bargaining solution is the more plausible assumption for public hospitals than for privately owned ones. Ceteris paribus, the model thus predicts more intense upcoding behavior in private than in public hospitals.
This prediction is supported to some extent by empirical studies. [14] analyzed U.S. Medicare claims data for hospital discharges with DRGs related to respiratory infections. They measured upcoding using the ratio of discharges with the DRG triggering the highest expected reimbursement relative to those with a DRG from the set of all DRGs related to respiratory diseases. The upcoding rates in for-profit hospitals were up to 70% higher than those in public hospitals. Summarizing their findings, the authors state, '...we view upcoding as symptomatic of how for-profit and not-for-profit hospitals differ in managerial behavior and the organizational balance of power inside the hospital'. The model presented in this paper provides theoretical support for their view.
More recently, [5] tested for upcoding in the market for neonatal intensive care. Using German data, they compared the distribution of reported birth weights of newborns with the distribution that was to be expected in the absence of financial incentives. Financial incentives to misreport birth weight arise since the German DRG-based reimbursement system defines eight thresholds in birthweight below which expected payment increases substantially. The authors found that upcoding rates were higher in counties with only for-profit perinatal centers than in counties with only public centers. The difference, however, was not statistically significant.
The hospital's internal power structure is likely to be a determinant of its internal decision making, which in turn needs to be considered for explaining and predicting hospital behavior in response to financial incentives. Thus, the sponsor's optimal choice of payment scheme is found to depend on the hospital's internal power structure. In addition, the sponsor may be well advised to spend extra money to foster Nash bargaining between management and chief physicians rather than the principal-agent alternative where management just makes a take-it-or-leave-it offer. The reason is that negotiation promises to avoid the accumulation of information rents, thus yielding a preferable outcome.
Appendix A
Results in the case of full internal information
Hospital's reporting with full internal information
The game is solved using backward induction. If the CP reports case severity always truthfully, i.e. \(\hat {\theta }^{I} = \theta \;\forall \; \theta \), all she/he does at stage No. 4 is to decide on whether or not to treat the patient.
At stage No. 3 however, the CP decides on whether to accept M's offer or to engage in bargaining. Assume that the CP opts for bargaining. In this case the paths of tI(θ) and \(\hat {\theta }^{E}(\theta)\) are negotiated between the two players, given the contract {tE(θ),qE(θ)} offered by the sponsor to M at stage No. 1. Since both parameters must be determined ex ante, the players aim to maximize their share of expected surplus. The Nash bargaining outcome in terms of optimal payment and reporting is the solution to the maximization problem
$$\begin{array}{*{20}l} \max_{t^{I}(\theta), \hat{\theta}^{E}(\theta)} \pi(\theta)& =\left(E\left[t^{E}\left(\hat{\theta}^{E}(\theta)\right) - t^{I}(\theta)\right] \right)^{1-w} \\&\quad \left(E \left[t^{I}(\theta) + \theta V\left(q^{E}(\hat{\theta}^{E}(\theta))\right)\right] \right)^{w} \end{array} $$
subject to CP's and M's (ex-post) participation constraints U≥0 and P≥0. To solve (19), note that surplus S(θ)=P(θ)+U(θ) is independent of payment tI(θ). Thus, tI(θ) can be used to achieve any arbitrary distribution of the surplus without affecting its total value. Since the Nash bargaining solution is Pareto-efficient (see e.g. [12] ch. 2), the optimal negotiated reporting strategy without internal asymmetry of information \(\hat {\theta }^{E}(\theta)\;\forall \theta \in \Theta \) maximizes expected surplus,
$$ \max_{\hat{\theta}^{E}(\theta)} E[S(\theta)]= \int^{\overline{\theta}}_{\underline{\theta}}[ t^{E}(\hat{\theta}^{E}(\theta)) + \theta V(q^{E}(\hat{\theta}^{E}(\theta)))] f(\theta) d\theta, $$
provided CP's and M's participation constraints do not bind. Assume this to be the case. Since (20) is additive in θ and f(θ), it suffices to maximize ex-post surplus S(θ) at a given value of θ, i.e.
$$ \max_{\hat{\theta}^{E}}S(\theta) = t^{E}(\hat{\theta}^{E}) + \theta V(q^{E}(\hat{\theta}^{E}))\;\; \forall \theta \in \Theta. $$
The solution to (21) is denoted efficient reporting and the maximized surplus, Seff(θ).
While the reporting strategy \(\hat {\theta }^{E}\) is used to maximize surplus, the optimal path of internal payment tI(θ) is used to distribute the surplus according to the CP's relative bargaining power. Since both players are risk-neutral and assign the same weight to tI(θ), the choice of the path does not matter to them; rather they negotiate over the expected value E[tI(θ)]. Still assuming that both participation constraints do not bind, the first-order condition for the optimum of (19) w.r.t. E[tI(θ)] reads
$$ E[U(\theta)] = \frac{w}{1-w} E[P(\theta)]\;\; \text{or} \;\; E[U(\theta)] = w E[S(\theta)]. $$
Expression (22) states that through bargaining, the CP obtains an expected utility that equals expected joint surplus weighted by her/his relative bargaining power, leaving (1−w)E[S(θ)] as the expected profit for M.
As to the participation constraints, note that S(θ)≥0 ∀θ and S(θ)>0 for at least one θ is necessary for both constraints not to bind in the optimum, since S(θ)<0 would imply P(θ)<0 or U(θ)<0 or both. This condition is also sufficient, since it implies E[S(θ)]>0 and therefore ensures P(θ)≥0 and U(θ)≥0 for all θ. It follows that, given the sponsor's payment system enables S(θ)≥0 ∀θ and S(θ)>0 for at least one θ, the reporting strategy is efficient if the CP engages in bargaining.
At stage No. 2, M determines the initial offer \(\{t^{I}(\theta), \hat {\theta }^{E}(\theta)\}\) for all θ, given the contract {tE(θ),qE(θ)} offered by the sponsor to M at stage No. 1. Again tI(θ) and \(\hat {\theta }^{E}(\theta)\) must be determined ex ante, i.e. before the patient's severity is established and reported. Thus M aims at maximizing expected profit
$$ E[P(\theta)] = E[t^{E}(\hat{\theta}^{E}(\theta))] - E[t^{I}(\theta)]. $$
or, with \(U(\theta)=t^{I}(\theta) + \theta V(q^{E}(\hat {\theta }^{E}))\)
$$ E[P(\theta)] = E[S(\theta)] - E[U(\theta)]. $$
subject to CP's participation constraint. Further, M anticipates that the CP engages in bargaining at the next stage if this is to her/his advantage. The contract must therefore also ensure that E[U(θ)]≥wE[Seff(θ)], in order to dissuade the CP from bargaining at the next stage. Since M wants to keep CP utility as low as possible, this inequality constraint binds in the optimum and M in fact aims at maximizing
$$ E[P(\theta)] = E[S(\theta)] - wE[S_{eff}(\theta)]. $$
Because wE[Seff(θ)] is constant, maximizing (25) is equivalent to maximizing expected joint surplus. Therefore, the optimal reporting strategy is efficient as in the case of bargaining. It follows that M's expected profit equals the share of efficient surplus E[P]=(1−w)E[Seff] that M would achieve with bargaining, causing tI(θ) to be the same as with bargaining and M to be indifferent between the two mechanisms.
Optimal payment system with full internal information
At stage No. 1, the sponsor designs the contract taking into account hospital behavior. By the revelation principle, the optimal allocation is achieved by a direct mechanism {tE(θ),qE(θ)} that induces truthful reporting, i.e. \(\hat {\theta }^{E}(\theta)= \theta \). This means that the hospital's incentive compatibility constraint
$$ \dot{S} (\theta) \equiv \frac{d S (\theta)}{d \theta}= V(q^{E}(\theta)) > 0 $$
must be satisfied. This condition is derived from applying the envelope theorem to the objective function (21) (see e.g. [9] ch. 1). Condition (26) states that for truthful reporting, the payment system {tE(θ),qE(θ)} must ensure that the joint surplus available to CP and M is increasing with case severity.
In designing the contract, the sponsor aims to maximize expected patient utility net of expenditure,
$$ E[W(\theta)] = \int^{\overline{\theta}}_{\underline{\theta}} \left[ \theta B(q^{E}(\theta)) - t^{E}(\theta) - q^{E}(\theta) \right] f(\theta)\, d\theta, $$
while considering hospital's incentive compatibility constraint (26) as well as the participation constraint
$$ S(\theta)\geq 0 \; \forall \theta \in \Theta \; \text{and} \; S(\theta) > 0 \; \text{for at least one} \; \theta. $$
In addition, the monotonicity condition
$$ \frac{d q^{E} (\theta)}{d \theta} \geq 0 \;\; \forall \; \theta $$
must be satisfied to ensure that truthful reporting is globally optimal (for a proof, see again e.g. [9] ch. 1). For the moment, assume that condition (29) is satisfied. It will be verified below whether it holds in equilibrium (see expression (38)).
Since S(θ)=tE(θ)+θV(qE(θ)),
$$ \int^{\overline{\theta}}_{\underline{\theta}} \left[ t^{E}(\theta) \right] f(\theta)\, d\theta = \int^{\overline{\theta}}_{\underline{\theta}} \left[ \theta V(q^{E}(\theta))-S(\theta) \right] f(\theta)\, d\theta. $$
Substituting (30) into (27) one obtains as the sponsor's maximization problem,
$$ \begin{aligned} &\max_{q^{E}(\theta), S(\theta)} E[W(\theta)] = \int^{\overline{\theta}}_{\underline{\theta}} \left[ \theta B(q^{E}(\theta)) + \theta V(q^{E}(\theta)) - q^{E}(\theta) - S(\theta) \right] f(\theta)\, d\theta \\ & s.t. \\ & \dot{S}(\theta) = V(q^{E}(\theta))>0 \\ &S(\theta)\geq 0 \; \forall \theta \in \Theta \; \text{and} \; S(\theta) > 0 \; \text{for at least one} \; \theta. \end{aligned} $$
To solve this, note that the incentive compatibility constraint requires surplus S(θ) to be strictly increasing in θ. Since the sponsor seeks to keep the surplus as small as possible to reduce expenditure, the participation constraint binds at the lowest value of severity,
$$ S(\underline{\theta}) = 0. $$
The incentive condition (26) together with (32) determines surplus,
$$ S(\theta) = \int^{\theta}_{\underline{\theta}} V(q^{E}(\tilde{\theta})) d\tilde{\theta}. $$
Thus expected surplus can be written as
$$ E[S(\theta)] = \int^{\overline{\theta}}_{\underline{\theta}} \int^{\theta}_{\underline{\theta}} V(q^{E}(\tilde{\theta}))\, d\tilde{\theta}\, f(\theta)\, d\theta. $$
Integration by parts results in
$$ E[S(\theta)] = \int^{\overline{\theta}}_{\underline{\theta}} h(\theta) V(q^{E}(\theta)) f(\theta)\, d\theta, $$
with \(h(\theta) \equiv \frac {1- F(\theta)}{f(\theta)}\) denoting the inverse hazard rate. Inserting (35) into the sponsor's objective function, one can rewrite (31) as the unrestricted maximization problem
$$\begin{array}{*{20}l} \max_{q^{E}(\theta)}E[W(\theta)] &= \int^{\overline{\theta}}_{\underline{\theta}}[\theta B(q^{E} (\theta)) + \theta V(q^{E} (\theta))\\&\quad - q^{E} (\theta) - h(\theta)V(q^{E} (\theta)) ]f(\theta)\, d\theta \end{array} $$
Assuming the integrand in (36) to be concave and continuously differentiable, point-wise maximization can be applied, resulting in the first-order condition with respect to qE(θ),
$$ \theta B_{q} (q^{E}(\theta)) + (\theta -h(\theta)) V_{q}(q^{E}(\theta)) = 1, $$
where the subscript denotes the first derivative w.r.t. q. This establishes Eq. (7) in the text.
The optimal path qE(θ) satisfies the necessary condition (29). Differentiating (37) with respect to θ yields
$$ \frac{dq^{E}(\theta)}{d\theta}=-\frac{B_{q} (q^{E}(\theta))+(1-h'(\theta))V_{q} (q^{E}(\theta))}{\theta B_{qq} (q^{E}(\theta)) + (\theta -h(\theta)) V_{qq}(q^{E}(\theta))}>0; $$
the sign follows from h′(θ)<0 together with the concavity of B and V.
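For reference, the integration-by-parts step used to obtain (35) (and invoked again for (51) and (64)) is the standard manipulation of interchanging the order of integration:
$$ \int^{\overline{\theta}}_{\underline{\theta}} \int^{\theta}_{\underline{\theta}} V(q^{E}(\tilde{\theta}))\, d\tilde{\theta}\, f(\theta)\, d\theta = \int^{\overline{\theta}}_{\underline{\theta}} \left[1 - F(\tilde{\theta})\right] V(q^{E}(\tilde{\theta}))\, d\tilde{\theta} = \int^{\overline{\theta}}_{\underline{\theta}} h(\theta) V(q^{E}(\theta)) f(\theta)\, d\theta. $$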
Efficient reporting with bargaining
The maximization problem with bargaining and internal asymmetry of information reads
$$ \begin{aligned} &\pi(\theta) = \left(E\left[t^{E}(\hat{\theta}^{E}(\theta)) - t^{I}(\theta)\right] \right)^{1-w} \left(E\left[t^{I}(\theta) + \theta V(q^{E}[\hat{\theta}^{E}(\theta)])\right] \right)^{w}\\ \end{aligned} $$
subject to
$$\begin{array}{*{20}l} &&\dot{U}(\theta) = V(q^{E}(\hat{\theta}^{E}(\theta))) >0 \;\forall \theta \in \Theta, \end{array} $$
$$\begin{array}{*{20}l} && U(\underline{\theta}) \geq 0, \end{array} $$
$$\begin{array}{*{20}l} && P(\theta) \geq 0 \; \forall \; \theta \in \Theta. \end{array} $$
Notice that if neither participation constraint (41) nor (42) is binding, the incentive compatibility constraint (40) does not constrain the maximization either, since it can be satisfied by structuring the path of the internal transfer tI(θ) ∀θ∈Θ accordingly. This path may be determined such that condition (40) is satisfied while leaving the joint surplus unaffected. This allows the optimal reporting strategy \(\hat {\theta }^{E}(\theta)\) to be chosen such that it maximizes joint surplus.
In fact, neither participation constraint binds, provided the payment system designed by the sponsor ensures that the joint surplus exceeds CP's information rent for every level of severity, i.e. Seff(θ)>Reff(θ) ∀θ:
Proposition 1
If the sponsor employs a payment system that yields Seff(θ)=Reff(θ)+ε ∀ θ with ε>0, there uniquely exists a \(\bar {w} = \frac {E[R_{eff}]}{E[R_{eff}]+ \epsilon }\) such that the unrestricted bargaining solution implies U(θ)≥0 ∀ θ if \(w \geq \bar {w}\) while P(θ)>0 ∀ θ.
Proof. Writing the CP's ex-post utility with the bargaining solution as the sum of her/his information rent and a constant k, one has
$$ U(\theta) = R(\theta) + k. $$
with \(R(\theta) = \int ^{\theta }_{\underline {\theta }} V(q^{E}(\hat {\theta }^{E}(\tilde {\theta }))) d\tilde {\theta }\). Since k is the level of utility obtained by CP for the lowest severity level \(\theta =\underline {\theta }\), it follows that CP's participation constraint is not binding for all θ if the solution yields k≥0.
Recall that the unrestricted bargaining solution is characterized by an efficient reporting strategy that maximizes joint surplus and an expected utility for the CP that equals wE[Seff] (see Eq. (22)). Using (43), one therefore obtains
$$ wE[S_{eff}] = E[R_{eff}] + k. $$
It follows that
$$ k = wE[S_{eff}] - E[R_{eff}] \geq 0 \; \leftrightarrow \; w \geq \bar{w} := \frac{E[R_{eff}]}{E[S_{eff}]}, $$
that is, with efficient reporting k≥0 holds if \(w \geq \bar {w}\), which implies that the bargaining solution in fact is unrestricted if \(w \geq \bar {w}\). Further, provided the sponsor employs a payment system {tE(θ),qE(θ)} that yields Seff(θ)=Reff(θ)+ε with \(\epsilon > 0, \bar {w}\) lies between 0 and 1 because \(\frac {E[R_{eff}]}{E[S_{eff}]}=\frac {E[R_{eff}]}{E[R_{eff}]+ \epsilon }\; \in \; (0,1)\). Also, since \(\frac {d\bar {w}}{d \epsilon } < 0, \bar {w}\) is unique.
On the other hand, M's participation constraint also is not binding, since Seff(θ)=Reff(θ)+ε ∀ θ implies E[Seff]=E[Reff]+ε. Because E[Seff]>wE[Seff], it follows from (44) that ε>k. Since P(θ)=S(θ)−R(θ)−k=ε−k, it follows that P(θ)>0 ∀ θ.
From Proposition 1 it follows that if CP's bargaining power exceeds \(\bar {w}\), the bargaining solution indeed maximizes surplus even if internal asymmetry of information is present.
Optimal reporting with internal asymmetric information in the case of the PA mechanism
M seeks to maximize expected profit
$$ \max_{\hat{\theta}^{E}(\theta), t^{I}(\theta)}E[P(\theta)] = E[ t^{E}(\hat{\theta}^{E}(\theta))-t^{I}(\theta)], $$
subject to CP's participation constraint and CP's incentive compatibility constraint. With U(θ)=θV(q)+tI(θ), M's maximization problem thus can be written as
$$\begin{array}{*{20}l} \max_{\hat{\theta}^{E}(\theta), U(\theta)}E[P(\theta)] &\,=\, \int^{\overline{\theta}}_{\underline{\theta}}\![ \!t^{E}(\hat{\theta}^{E}(\theta))\! +\! \theta V(q^{E}(\hat{\theta}^{E}(\theta)))] f(\theta) d\theta \\&\quad- \int^{\overline{\theta}}_{\underline{\theta}} U(\theta) f(\theta) d\theta \\ & \\ &s.t. \\ &\dot{U}(\theta) \!= V(q^{E}(\hat{\theta}^{E}(\theta))), \; U(\theta) \geq 0 \;\forall \theta \in \Theta. \end{array} $$
Further, for truthful reporting to be globally optimal, M's reporting strategy must ensure that
$$ \frac{d q^{E}(\hat{\theta}^{E}(\theta))}{d \theta} \geq 0 \;\; \forall \; \theta $$
For the moment, assume that condition (48) is satisfied. This will be verified below in expression (67).
CP's incentive compatibility constraint implies
$$ U(\theta) = \int^{\theta}_{\underline{\theta}}V(q^{E}(\hat{\theta}^{E}(\tilde{\theta}))) d\tilde{\theta}+ a, $$
where a is a constant. Hence, CP's expected utility can be written as
$$ E[U(\theta)] = \int^{\overline{\theta}}_{\underline{\theta}} \int^{\theta}_{\underline{\theta}} V(q^{E}(\hat{\theta}^{E}(\tilde{\theta})))\, d\tilde{\theta}\, f(\theta)\, d\theta + a. $$
Integration by parts results in
$$ E[U(\theta)] = \int^{\overline{\theta}}_{\underline{\theta}} h(\theta)V(q^{E}(\hat{\theta}^{E}(\theta))) f(\theta) d\theta + a, $$
with \(h(\theta) \equiv \frac {1- F(\theta)}{f(\theta)}\) again denoting the inverse hazard rate.
M dislikes leaving any surplus to the CP, whose incentive compatibility constraint however demands that utility increase with case severity. M therefore sets \(U(\underline {\theta })=0\), implying a=0. Hence, using (51) and a=0, the maximization problem of M reduces to
$$\begin{array}{*{20}l} \max_{\hat{\theta}^{E}(\theta)}E[P(\theta)] &= \int^{\overline{\theta}}_{\underline{\theta}}[ t^{E}(\hat{\theta}^{E}(\theta)) + \theta V(q^{E}(\hat{\theta}^{E}(\theta))) \\&\quad- h(\theta) V(q^{E}(\hat{\theta}^{E}(\theta)))] f(\theta) d\theta. \end{array} $$
Equivalently, once again relying on point wise maximization, one has
$$ \begin{aligned} \max_{\hat{\theta}^{E}(\theta)}\tilde{P}(\theta) = t^{E}(\hat{\theta}^{E}) + \theta V(q^{E}(\hat{\theta}^{E}(\theta))) - h(\theta) V(q^{E}(\hat{\theta}^{E}(\theta))) \;\forall \theta \in \Theta. \end{aligned} $$
This is expression (14) in the text.
At stage No. 2, assume that M offers the contract defined in "The solution to the PA mechanism" section. Since this contract guarantees the CP an expected information rent E[RPA]>0, the bargaining alternative comes about only if CP's bargaining power w is high enough such that her/his expected share of (efficient) surplus exceeds E[RPA].
Clearly this is the case if \(w \geq \bar {w} = \frac {E[R_{eff}]}{E[S_{eff}]}\), since E[Reff]>E[RPA], i.e. CP's information rent with efficient reporting exceeds the information rent given inefficient reporting in the PA setting. On the other hand, the CP's bargaining power may be so low that even the bargained share of the efficient surplus falls short of the expected information rent in the inefficient PA setting. Denote this level with \(\underline {w} := \frac {E[R_{PA}]}{E[S_{eff}]}\); therefore if \(w < \underline {w}\), the PA mechanism would prevail.
Further, given \(w < \bar {w}=\frac {E[R_{eff}]}{E[S_{eff}]}\), the CP's participation constraint becomes relevant in the maximization problem of the bargaining solution of (11), causing the bargained reporting strategy to be distorted away from the efficient strategy. Obviously, this distortion increases and surplus decreases as w decreases, making the constraint more binding. Consequently, CP's share of surplus strictly decreases as w decreases. Therefore there is a threshold level of bargaining power \(w \in [\underline {w}, \bar {w}]\) for which the CP is indifferent between bargaining and accepting the PA alternative. If CP's bargaining power is above this threshold, the CP engages in bargaining, if it is below, she/he accepts the PA mechanism.
The crucial insight, however, is that any level of bargaining power \(w < \bar {w}\) precludes efficient reporting. Therefore, a formal derivation of this threshold and hospital behavior associated with it does not seem worthwhile. Instead the CP is assumed to engage in bargaining iff \(w \geq \bar {w}\). Therefore M offers the contract defined in "The solution to the PA mechanism" section at stage No. 2 if \(w < \bar {w}\) since this contract implies the optimal trade-off between efficiency and rent elicitation. If however \(w \geq \bar {w}\) holds, M could dissuade the CP from bargaining by paying a sufficiently high lump sum in addition to the PA contract. However, this would not be optimal for M. If \(w \geq \bar {w}\), the bargaining solution yields an expected utility of wE[Seff] for the CP, while with the PA solution defined in ("The solution to the PA mechanism") section her/his expected utility equals the expected information rent E[RPA]. Therefore, to dissuade the CP from bargaining and to make her/him accept the PA solution, M would have to pay a lump sum in addition to the PA contract amounting to
$$ b = wE[S_{eff}] - E[R_{PA}]. $$
With (54), the net profit for M from avoiding a bargaining solution amounts to
$$ \begin{aligned} \Delta E[P] &= E[P_{PA}]- b -E[P_{eff}]\\ & =\left(E[S_{PA}] - E[R_{PA}] \right) \,-\, \left(wE[S_{eff}] - E[R_{PA}] \right) \,-\, \left((1-w)E[S_{eff}] \right) \\ & = E[S_{PA}] -E[S_{eff}] < 0. \end{aligned} $$
The negative sign follows from the fact that the surplus with the inefficient PA solution is strictly below the efficient surplus. Therefore it is better for M to accept the bargaining solution.
Optimal allocation with internal asymmetry of information and PA setting
At stage No. 1, the sponsor aims to maximize expected patient utility net of expenditure,
$$ E[W(\theta)] = \int^{\overline{\theta}}_{\underline{\theta}} \left[ \theta B(q^{E}(\theta)) - t^{E}(\theta) - q^{E}(\theta) \right] f(\theta)\, d\theta $$
in determining the optimal payment system. Since S(θ)=tE(θ)+θV(qE(θ)),
$$ \int^{\overline{\theta}}_{\underline{\theta}} \left[ t^{E}(\theta) \right] f(\theta) d\theta = \int^{\overline{\theta}}_{\underline{\theta}} \left[ \theta V(q^{E}(\theta))-S(\theta) \right] f(\theta) d\theta. $$
After substitution of (57) into (56), the sponsor's objective function can be written as
$$ \begin{aligned} E[W(\theta)] = \int^{\overline{\theta}}_{\underline{\theta}} \left [ \theta B(q^{E}(\theta)) + \theta V(q^{E}(\theta)) - q^{E}(\theta) - S(\theta) \right ] f(\theta) d\theta.\\ \end{aligned} $$
The sponsor now needs to consider the incentive compatibility constraint
$$ \dot{\tilde{P}}(\theta) = (1 - h'(\theta)) V(q^{E}(\theta)) > 0. $$
In addition, the monotonicity condition
$$ \dot{q}^{E}(\theta)\geq 0 \;\; \forall \; \theta $$
must be satisfied to ensure that truthful reporting is globally optimal. For the moment, assume that condition (60) is satisfied. This will be verified below in expression (67).
Integrating (59) yields
$$ \tilde{P}(\theta) = \int^{\theta}_{\underline{\theta}} (1 - h'(\tilde{\theta})) V(q^{E}(\tilde{\theta})) d\tilde{\theta} + g $$
with g as a constant. With (61), expected profit can be written as
$$ E[P(\theta)]= \int^{\overline{\theta}}_{\underline{\theta}} \int^{\theta}_{\underline{\theta}} (1 - h'(\tilde{\theta})) V(q^{E}(\tilde{\theta})) d\tilde{\theta} f(\theta) d\theta + g. $$
Adding CP's expected utility defined by expression (50) (where a=0), the expected surplus can be written as
$$ E[S(\theta)] = \int^{\overline{\theta}}_{\underline{\theta}} \int^{\theta}_{\underline{\theta}} (2 - h'(\tilde{\theta})) V(q^{E}(\tilde{\theta})) d\tilde{\theta} f(\theta) d\theta + g. $$
Applying integration by parts, one obtains
$$ E[S(\theta)] = \int^{\overline{\theta}}_{\underline{\theta}} h(\theta)(2 - h'(\theta)) V(q^{E}(\theta))f(\theta) d\theta + g $$
Inserting (64) into (58), the sponsor's maximization problem can be formulated as
$$\begin{array}{*{20}l} \max_{q^{E}(\theta),g} E[W(\theta)] &= \int^{\overline{\theta}}_{\underline{\theta}} \left[ \theta B(q^{E}(\theta)) + \theta V(q^{E}(\theta)) - q^{E}(\theta)\right.\\&\quad \left.- h(\theta)\left(2 - h'(\theta)\right) V(q^{E}(\theta)) \right] f(\theta)\, d\theta - g \end{array} $$
subject to M's participation constraint.
Point wise maximization of (65) yields the first-order condition
$$ \theta B_{q}(q^{E}(\theta)) + \left(\theta - h(\theta)\left(2 - h'(\theta)\right)\right)V_{q}(q^{E}(\theta)) = 1. $$
This is equation (16) in the text. Further,
$$ \frac{dq^{E}(\theta)}{d\theta} = -\frac{B_{q}(q^{E}(\theta)) + \left(1 - 2h'(\theta) + h'(\theta)^{2} + h(\theta)h''(\theta)\right) V_{q}(q^{E}(\theta))}{\theta B_{qq}(q^{E}(\theta)) + \left(\theta - h(\theta)\left(2 - h'(\theta)\right)\right)V_{qq}(q^{E}(\theta))} > 0. $$
This is positive because h(θ)>0, h′(θ)<0, and h″(θ)≥0, together with the concavity of B and V.
Since M's profit as well as CP's utility must be increasing in θ, it follows that
$$ \dot{S}(\theta)>0. $$
With the PA mechanism, CP attains a reservation utility of zero for treating a patient with minimum severity \(\underline {\theta }\). Thus the participation constraint of M can be written in terms of ex-post surplus as
$$ S(\underline{\theta}) \geq 0. $$
From (64) follows \(S(\underline {\theta })=g\). The sponsor optimally sets g=0, implying \(S(\underline{\theta})=0\).
Footnotes
1. This implies that treatment cost is independent of case severity. This is in contrast to the common assumption that treatment cost rises with the severity of illness. In fact, treatment cost rises with the resources used for treatment. A positive correlation between cost and severity therefore results only if the resources used for treatment increase with case severity, which is not given but depends on the decision of the physician.
2. There may be a saturation quantity beyond which the negative effect of extra treatment outweighs the positive one. However, any equilibrium is assumed to be associated with a quantity below this level.
3. This participation constraint is ex-post, since it prevents the CP from dumping individual cases after observing severity. An ex-ante participation constraint would state that the CP needs to accept the complete contract, in which case the next-best alternative would be to leave the hospital in favor of another healthcare facility. This ex-ante participation constraint is satisfied along with the ex-post constraint.
4. This assumption is made to simplify the analysis and not to deny the importance of social and altruistic preferences on the part of hospital managers.
5. As in the case of the CP, M's ex-ante participation constraint is satisfied along with the ex-post constraint.
6. For a discussion see [13] ch. 4.
7. Given space constraints, the analysis of the case with full internal information is only sketched in the main text. The complete analysis is provided in Appendix A.
8. Alternatively, the CP might honor the agreement even if a false report would yield a higher utility. In this case, the bargaining solution need not satisfy the incentive compatibility constraint (9). However, here it is assumed that the CP behaves non-cooperatively, exploiting her/his informational advantage at stage No. 4 despite her/his prior involvement in decision making at stage No. 3.
The authors would like to thank four referees for their comments and criticisms. The usual disclaimer applies.
SBS designed and analyzed the theoretical model. SBS and PZ jointly discussed the results. PZ revised the manuscript prepared by SBS. Both authors read and approved the final manuscript.
No funding was received.
References
1. Boadway R, Marchand M, Sato M. An optimal contract approach to hospital financing. J Health Econ. 2004; 23(1):85–110.
2. Bowblis JR, Brunt CS. Medicare skilled nursing facility reimbursement and upcoding. Health Econ. 2014; 23(7):821–40.
3. Dafny LS. How do hospitals respond to price changes? Am Econ Rev. 2005; 95(5):1525–47.
4. Galizzi MM, Miraldo M. The effects of hospitals' governance on optimal contracts: Bargaining vs. contracting. J Health Econ. 2011; 30(2):408–24.
5. Jürges H, Köberlein J. First do no harm. Then do not cheat: DRG upcoding in German neonatology. SSRN Electron J. 2013. https://doi.org/10.2139/ssrn.2307495.
6. Jürges H, Köberlein J. What explains DRG upcoding in neonatology? The roles of financial incentives and infant health. J Health Econ. 2015; 43:13–26.
7. Kuhn M, Siciliani L. Manipulation and auditing of public sector contracts. Eur J Polit Econ. 2013; 32:251–67.
8. Laffont J-J, Martimort D. The Theory of Incentives. Princeton: Princeton Univ. Press; 2002.
9. Laffont J-J, Tirole J. A Theory of Incentives in Procurement and Regulation, 3rd printing. Cambridge, Mass.: MIT Press; 1998.
10. McAfee RP, McMillan J. Organizational diseconomies of scale. J Econ Manag Strateg. 1995; 4(3):399–426.
11. Melumad ND, Mookherjee D, Reichelstein S. Hierarchical decentralization of incentive contracts. RAND J Econ. 1995; 26(4):654–72.
12. Muthoo A. Bargaining Theory with Applications. Cambridge: Cambridge Univ Press; 1999.
13. Nord E. Cost-value Analysis in Health Care: Making Sense Out of QALYs. Cambridge: Cambridge Univ Press; 1999.
14. Silverman E, Skinner J. Medicare upcoding and hospital ownership. J Health Econ. 2004; 23(2):369–89.
© The Author(s) 2019
1. Department of Economics, University of Konstanz, Konstanz, Germany
2. Department of Economics (Emeritus), University of Zürich, Zürich, Switzerland
Spika, S.B. & Zweifel, P. Health Econ Rev (2019) 9: 38. https://doi.org/10.1186/s13561-019-0256-4
Received 14 November 2018
First Online 28 December 2019
Publisher Name Springer Berlin Heidelberg
This Short Course took place January 14-15, 2019, before the Joint Mathematics Meetings in Baltimore.
Interest in the theory and application of sums of squares (SOS) polynomials has exploded in the last two decades, spanning a wide spectrum of mathematical disciplines from real algebraic geometry to convex geometry, combinatorics, real analysis, theoretical computer science, quantum information and engineering. This two-day short course, organized by Pablo Parrilo, Massachusetts Institute of Technology, and Rekha Thomas, University of Washington, will offer six introductory lectures on the theory of SOS polynomials and their applications. The six speakers represent a broad and diverse cross-section of the many aspects of sum of squares techniques and their interconnections.
The origins of SOS polynomials are anchored in the 19th century by Hilbert's famous characterization of nonnegative polynomials that are SOS. In 1924 Artin gave an affirmative answer to Hilbert's 17th problem on whether all nonnegative polynomials were SOS of rational functions. From this, the field of real algebraic geometry was born: the study of real solutions to polynomial systems. While real solutions of polynomial equations are considerably more complicated than their complex counterparts, their role in applications cannot be overstated. SOS polynomials have experienced a renaissance in the last few years following the work of Shor, Nesterov, Lasserre, and Parrilo that connected them to modern optimization via semidefinite programming. An active interdisciplinary community now exists around SOS polynomials with a variety of conferences and research programs at many of the top institutions worldwide. While the theory is rich and fascinating with many open questions, the wide array of applications is equally enticing, allowing for many different angles and access points to the field.
Each of the six lectures in the course will be broken into two 45-minute sections. Problem sessions are planned for both days, and speakers and organizers will be available to assist participants with these exercises. Some exercises will be facilitated by numerical software such as Macaulay2, Maple™, MATLAB, or Julia. Participants with a desire to experiment with computations and an interest in applications should find this experience especially stimulating. The speakers will also provide participants with a list of research questions to guide their explorations after the course.
Please bring your laptop to the course and, if possible, download Macaulay2 or Julia before the course starts.
This short course is aimed at students with minimal to no background in the theory of SOS polynomials. However, familiarity with properties of polyhedra, convex sets, and polynomials will provide useful background. Appendix A in the book Semidefinite Optimization and Convex Algebraic Geometry, G. Blekherman, P.A. Parrilo and R.R. Thomas (eds.), MOS-SIAM Series on Optimization, SIAM 2012, is recommended as useful preparatory reading. A free pdf of the book can be found at www.mit.edu/~parrilo/sdocag/.
Speakers and Lecture Topics
Overview of SOS polynomials
Grigoriy Blekherman, Georgia Institute of Technology.
Nonnegativity of polynomials and its relations with sums of squares are classical topics in real algebraic geometry. This area has a rich and distinguished mathematical history beginning with Hilbert's fundamental work, his 17th problem, and its solution by Artin. Work in this area involves a blend of ideas and techniques from algebraic geometry, convex geometry and optimization. Recently SOS algorithms and relaxations have found numerous applications in engineering and theoretical computer science. There is also an emergent understanding that the study of nonnegative and SOS polynomials on a variety is inextricably linked to classical topics in algebraic geometry and commutative algebra, such as minimal free resolutions.
This exciting blend of ideas can be demonstrated with a single prominent example: the cone of positive semidefinite (PSD) matrices. On the one hand this cone can be viewed as the set of all homogeneous quadratic polynomials (forms) nonnegative on all of $\mathbb{R}^n$, while on the other hand it is also a convex cone in the vector space of real symmetric matrices. It is also known that quadratic forms nonnegative on $\mathbb{R}^n$ are always SOS. An affine linear slice of the PSD cone is called a spectrahedron. The object of semidefinite programming is optimization of a linear function over a spectrahedron, and it is the engine that makes SOS algorithms work. SOS algorithms naturally lead to spectrahedra via convex duality, and understanding algebraic and convexity properties of these spectrahedra is the key to understanding the quality of these algorithms.
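To make the link between SOS certificates and semidefinite programming concrete, the following minimal sketch (my own illustration, not part of the course materials; it assumes Python with the cvxpy package and any bundled SDP solver) searches for a positive semidefinite Gram matrix certifying that the standard example quartic $p(x,y) = 2x^4 + 2x^3y - x^2y^2 + 5y^4$ is a sum of squares in the monomial basis $z = (x^2, y^2, xy)$:

import cvxpy as cp
import numpy as np

# Find a PSD Gram matrix Q with p = z^T Q z for z = [x^2, y^2, xy].
Q = cp.Variable((3, 3), PSD=True)
constraints = [
    Q[0, 0] == 2,                 # coefficient of x^4
    Q[1, 1] == 5,                 # coefficient of y^4
    Q[2, 2] + 2 * Q[0, 1] == -1,  # coefficient of x^2 y^2
    2 * Q[0, 2] == 2,             # coefficient of x^3 y
    2 * Q[1, 2] == 0,             # coefficient of x y^3
]
prob = cp.Problem(cp.Minimize(0), constraints)  # pure semidefinite feasibility problem
prob.solve()

if prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE):
    w, V = np.linalg.eigh(Q.value)
    w = np.clip(w, 0.0, None)     # clip tiny negative eigenvalues from solver noise
    L = (np.sqrt(w) * V).T        # row k holds the coefficients of the k-th square
    print("p is SOS; the squares are the entries of L @ z:\n", L)
else:
    print("No PSD Gram matrix exists in this monomial basis.")

One feasible Gram matrix is $Q = \begin{pmatrix} 2 & -3 & 1 \\ -3 & 5 & 0 \\ 1 & 0 & 5 \end{pmatrix}$, which corresponds to the decomposition $p = \tfrac{1}{2}(2x^2 - 3y^2 + xy)^2 + \tfrac{1}{2}(y^2 + 3xy)^2$.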
Lifts of convex sets
Hamza Fawzi, University of Cambridge
A central question in optimization is to maximize (or minimize) a linear function on a given convex set. Such a problem may be easy or hard depending on the geometry of the convex set. Motivated by this problem, this lecture considers the following question: given a convex set, is it possible to express it as the projection of a simpler convex set in a higher-dimensional space? Such a lift of the convex set allows us to reformulate the original optimization problem as an easier one over the higher-dimensional convex set. In order to make this question precise we need a way to measure the complexity of convex sets. We will focus in this lecture on two classes of lifts, namely polyhedral and spectrahedral lifts, where a natural notion of complexity can be defined. For spectrahedral lifts, we will see that the existence of lifts is characterized by the existence of SOS certificates for a certain class of nonnegative functions. We will give some examples of convex sets that admit small lifts, and others that do not, and will discuss applications in optimization.
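As a concrete illustration of a lift (my own example, not taken from the lecture abstract, again sketched in Python with cvxpy): the $\ell_1$-ball in $\mathbb{R}^n$ has $2^n$ facets, but it is the projection onto the $x$-variables of a polyhedron in $\mathbb{R}^{2n}$ described by only $2n+1$ inequalities, so a linear function can be optimized over it via the small lifted description.

import cvxpy as cp
import numpy as np

n = 5
c = np.random.default_rng(0).standard_normal(n)

# Original formulation: maximize c^T x over the l1-ball (2^n facets if written out).
x = cp.Variable(n)
orig = cp.Problem(cp.Maximize(c @ x), [cp.norm1(x) <= 1])
orig.solve()

# Lifted formulation: project {(x, y) : -y <= x <= y, sum(y) <= 1} onto the x-variables.
x2, y = cp.Variable(n), cp.Variable(n)
lifted = cp.Problem(cp.Maximize(c @ x2), [x2 <= y, -y <= x2, cp.sum(y) <= 1])
lifted.solve()

print(orig.value, lifted.value)   # both equal max_i |c_i|

Both problems return $\max_i |c_i|$; the lifted description trades an exponential facet count for a few extra variables, which is exactly the phenomenon the lecture formalizes for polyhedral and spectrahedral lifts.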
Engineering applications of SOS polynomials
Georgina Hall, INSEAD
The problem of optimizing over the cone of nonnegative polynomials is a fundamental problem that appears in many different areas of engineering and computational mathematics. Long thought to be intractable, several breakthrough papers in the early 2000s showed that this problem could be tackled by using SOS programming, a class of optimization problems which is intimately connected to semidefinite programming.
In this lecture, we present a number of problems where the need to optimize over the cone of nonnegative polynomials arises and discuss how they can be reformulated using SOS programming. Problems of this type include, but are not limited to, problems in power engineering (e.g., the optimal power flow problem) control and robotics (e.g., formal safety verification), machine learning and statistics (e.g., shape-constrained regression), and game theory (e.g., Nash equilibria computation). We conclude by highlighting some directions that could be pursued to further disseminate these techniques within more applied fields. Among other things, we address the current challenge that scalability represents for SOS programming and discuss recent trends in software development.
Connections to theoretical computer science
Ankur Moitra, Massachusetts Institute of Technology
Many hard optimization problems, like finding large cliques in a graph, can be cast as maximizing a linear function over a convex but highly complicated domain. For example, if the feasible region is a polytope, it often has an exponential number of vertices and facets, with no obvious way to decide whether a given point is contained inside. The SOS hierarchy gives a sequence of tighter and tighter relaxations. Each of them can be efficiently optimized over, and so it yields a sequence of algorithms that trade off complexity with accuracy. However, in many cases, understanding exactly how powerful these algorithms are turns out to be quite challenging.
In this survey, we will discuss two central problems in theoretical computer science. (1) In the planted clique problem, the goal is to find a large clique that has been added to a random graph; this problem has recently found important applications in establishing computational versus statistical tradeoffs in machine learning. (2) The unique games conjecture asserts that it is hard to find an assignment that satisfies many clauses in certain two-variable constraint satisfaction problems. If true, it would characterize the best possible approximation ratio achievable by efficient algorithms for a broad class of problems. We will explore how a deeper understanding of the power of SOS proofs has been intertwined with progress on these questions.
Algebraic geometry through the lens of sums of squares
Mauricio Velasco, Universidad de los Andes
SOS optimization has a wealth of connections to pure mathematics allowing us to define new invariants of real projective varieties and to connect algebraic geometry, convexity and optimization. This lecture will illustrate these connections by focusing on a basic example: (*) the classification of those (real, projective) varieties on which nonnegative quadratic forms and sums-of-squares of linear forms coincide.
In the first part of the lecture, we will give a self-contained introduction to projective algebraic geometry and describe the classification of "varieties of minimal degree", one of the great achievements of the Italian school of algebraic geometry in the 19th century. In the second part of the lecture, we will explain why varieties of minimal degree play a central "extremal" role in the theory of SOS on varieties and are the building blocks for solving problem (*) above. The solution of (*) allows us to generalize and synthesize all known results of equality between nonnegative polynomials and sums of squares (by Choi-Lam-Reznick, Grone, Hilbert, Yakubovich, among others).
The geometry of spectrahedra
Cynthia Vinzant, North Carolina State University
A spectrahedron is an affine slice of the convex cone of positive semidefinite real symmetric matrices. Spectrahedra form a rich class of convex bodies that are computationally tractable and appear in many areas of mathematics. Examples include polytopes, ellipsoids, and more exotic convex sets, like the convex hull of some curves. Techniques for maximizing a linear function over a spectrahedron are called semidefinite programs. These numerical polynomial-time algorithms generalize linear programs and form a powerful tool in optimization.
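To make the definition concrete (a standard example, not taken from the lecture abstract): the closed unit disk is a spectrahedron, since
$$\{(x,y)\in\mathbb{R}^2 : x^2+y^2\le 1\} = \left\{(x,y) : \begin{pmatrix} 1+x & y \\ y & 1-x \end{pmatrix} \succeq 0\right\},$$
because the $2\times 2$ matrix has trace $2$ and determinant $1-x^2-y^2$, so it is positive semidefinite exactly when $x^2+y^2\le 1$.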
The geometry of spectrahedra is fundamental to the theory of semidefinite programming, just as the geometry of polyhedra is to that of linear programming. This lecture will introduce the theory of spectrahedra and describe how convex geometry, real algebraic geometry, and topology all contribute to our understanding of their intricate geometry. We will use this to explore examples and applications coming from distance geometry, moment problems, and combinatorial optimization.
Divergent global-scale temperature effects from identical aerosols emitted in different regions
Geeta G. Persad & Ken Caldeira
Subjects: Atmospheric dynamics, Climate and Earth system modelling, Climate-change mitigation, Projection and prediction
The distribution of anthropogenic aerosols' climate effects depends on the geographic distribution of the aerosols themselves. Yet many scientific and policy discussions ignore the role of emission location when evaluating aerosols' climate impacts. Here, we present new climate model results demonstrating divergent climate responses to a fixed amount and composition of aerosol—emulating China's present-day emissions—emitted from 8 key geopolitical regions. The aerosols' global-mean cooling effect is fourteen times greater when emitted from the highest impact emitting region (Western Europe) than from the lowest (India). Further, radiative forcing, a widely used climate response proxy, fails as an effective predictor of global-mean cooling for national-scale aerosol emissions in our simulations; global-mean forcing-to-cooling efficacy differs fivefold depending on emitting region. This suggests that climate accounting should differentiate between aerosols emitted from different countries and that aerosol emissions' evolving geographic distribution will impact the global-scale magnitude and spatial distribution of climate change.
The global distribution of anthropogenic aerosol emissions has evolved substantially over the industrial era. In the mid-20th century, North America and Western Europe were the primary anthropogenic aerosol source regions, whereas today South and East Asian sources dominate anthropogenic aerosol emissions1. Aerosol particles have direct negative impacts on air quality, human health, and agricultural productivity in the regions in which they are concentrated2,3,4, but also change regional and global climate through their interactions with solar radiation and clouds. In the global total, anthropogenic aerosols are estimated to have offset approximately a third of the global-mean greenhouse gas driven warming since the 1950s5. However, because of aerosols' relatively short atmospheric lifetime, their atmospheric distribution and temperature effects are heterogeneous and dependent on the distribution of emissions6. With this geographic redistribution of aerosol emissions, therefore, comes the potential for redistribution of their climate effects—a characteristic not present with long-lived greenhouse gases.
Aerosols' heterogeneous spatial distribution is recognized to influence their overall climate impact relative to more homogeneous climate forcers, like carbon dioxide7,8. Although carbon dioxide radiative forcing has some spatial structure associated with the climatological radiative environment9, the historical spatial distribution of aerosol forcing has been shown to generate a larger transient and equilibrium global-mean climate response than equivalent amounts of long-lived greenhouse gas forcing, as a result of historical aerosol forcing's greater spatial coincidence with Northern Hemisphere land and polar regions10,11,12. This difference in the rate at which aerosols' top-of-atmosphere radiative forcing produces global-mean temperature effects relative to greenhouse gases' is characterized by the efficacy of its radiative forcing7 (i.e., global-mean temperature response divided by global-mean top-of-atmosphere radiative forcing), and is a major factor in determining the degree to which anthropogenic aerosols have offset greenhouse-gas driven warming over the historical period10,11,13. Localized aerosol concentrations also strongly influence regional temperature, rainfall, and circulation in emitting regions14.
Yet, despite the acknowledged sensitivity of aerosols' total climate impacts and the spatial distribution thereof to the spatial distribution of the aerosol emissions themselves, many scientific and policy discussions treat the climate effect of aerosol emissions from all regions as equal. Policy analysis continues to use single global-mean metric values, such as Global Warming Potential, to trade off the impacts of aerosol emissions from any region against greenhouse gas emissions15, despite studies demonstrating that, for certain aerosol species, this value may differ substantially depending on the emitting region16,17. The reduced-form integrated assessment models used to analyze costs and benefits of climate policy similarly treat aerosol emissions from all regions as having the same capacity to affect global climate18. Global-mean aerosol radiative forcing and its offsetting effects on greenhouse gas-driven warming, meanwhile, have not been widely recognized to be dependent on the evolution of global aerosol emissions' spatial distribution11,13.
The scientific community has made great progress to date in building a theoretical framework for understanding the importance of the spatial heterogeneity in anthropogenic aerosol forcing. However, limitations remain in leveraging the existing literature to address key scientific and decision-making issues surrounding the differential role of aerosols emitted from different regions. Many studies assessing the temperature response to regionally distinct aerosol perturbations use radiative forcing or atmospheric aerosol concentrations as their unit of regional perturbation19,20,21. However, these units are not easily attributable to individual regions, as radiative forcing or atmospheric concentrations occurring in a given region may be attributable to aerosol emissions from well outside that region's boundaries. This framing is additionally difficult to apply in decision-making contexts, as mitigation targets are generally set in terms of emissions rather than concentrations or radiative forcing. Several studies have looked at the effects of all aerosols within a given latitude band19,20,21,22,23, but this is not a geographic delineation along which mitigation decisions, emissions accounting, or conjunct trends in aerosol emissions occur. Other studies have assessed the relative effects of historical emissions or scalings thereon in different countries17,23,24,25,26. However, these emissions are unequal and therefore cannot be used straightforwardly to create a quantitative framework for assessing the relative importance of aerosol emissions from different regions in the context of the climate metrics used in many scientific and policy frameworks.
Here, we address a fundamental outstanding question underpinning both the consideration of aerosol emissions in climate policy and the analysis of their evolving role in global and regional climate change and climate sensitivity: can the temperature effects of a unit of aerosols emitted from any major emitting economy be considered equivalent? We analyze the relative climate effects of identical combined sulfate, black carbon, and organic carbon aerosol emissions—equivalent to China's total annual year 2000 anthropogenic emissions1—sourced from eight major past, present, or projected future emitting regions (Fig. 1a) in a global atmospheric general circulation model coupled to a mixed-layer ocean model (see Methods). These three aerosol species drive the vast majority of anthropogenic aerosol forcing over the historical period27. They have historically varied in tandem at the national-scale, as economy-wide transitions have driven aerosol emissions changes over the historical period1, and are projected to continue to do so in future28,29. This novel design provides a crucial advancement by allowing for a unique one-to-one comparison (cleanly interpretable in scientific and decision-making contexts) between aerosol emissions (the perturbation unit that most directly corresponds to mitigation decision-making) in several past, present, or projected future major emitting geopolitical regions (approaching the geographic delineations along which mitigation decisions and emissions accounting are conducted). Our results demonstrate substantial inequalities in both the magnitude and spatial distribution of temperature effects of aerosol emissions located in different major emitting economies. These divergent temperature effects are fundamentally driven by differences in the degree to which each region's emissions and the resulting distribution of radiative effects generate remote feedback processes, and reveal new challenges for understanding and addressing the global and regional climate influence of aerosols.
Global and regional-mean temperature responses to identical aerosol emissions in different regions. Identical total annual emissions of sulfate and black carbon aerosol from eight major past, present, and potential future emitting regions (a) result in global-mean cooling spanning a 14x range (b, y-axis), differing substantially in the degree to which that cooling is felt in the emitting region (b, x-axis). Diagonal lines (b) indicate the ratio of regional- to global-mean cooling. Error bars in b capture the standard error
Divergent magnitude and distribution of temperature effects
Our simulations reveal substantial inequality in the global-mean temperature response to identical total annual aerosol emissions in each region (Fig. 1b, y-axis). The largest global-mean temperature change (−0.29 ± 0.01 °C, induced in our simulations by emissions from Western Europe) is approximately 14 times larger than the smallest global-mean temperature change (−0.02 ± 0.01 °C, induced by emissions from India). Global-mean temperature effects correlate roughly with latitude, with higher-latitude emitting regions typically generating stronger temperature effects than lower-latitude regions, a behavior also seen in previous studies of the response to aerosol forcing in different latitude bands19. However, substantial differentiation occurs within latitude bands. For example, emissions sourced from East Africa generate approximately four times more cooling (−0.06 ± 0.01 °C) than emissions from India (−0.02 ± 0.02 °C) at similar latitudes, demonstrating the importance of our region-by-region focus.
There are also important regional distinctions in the degree to which the temperature changes due to aerosol emissions are likely to be concentrated in the emitting region versus being felt globally (Fig. 1b, placement relative to diagonals). Because aerosols generally remain concentrated near their source and most strongly influence surface temperature by attenuating incoming solar radiation to the regions they overlie, they cool the emitting region more strongly than the global mean in all cases (Figs. 1b, 2). However, although Indian emissions produce the smallest global-mean cooling effects, their impacts are much more strongly concentrated within the emitting region than for other regions. India experiences cooling from its own emissions at a level ~21 times greater than the global mean (−0.42 ± 0.03 K versus −0.02 ± 0.01 K), while Western European and Indonesian emissions generate localized cooling (−0.50 ± 0.06 and −0.14 ± 0.05 K, respectively) that is less than two times as great as the respective global-mean cooling (−0.29 ± 0.01 and −0.07 ± 0.01 K, respectively). As a result, even though Western European emissions produce ~14 times the global-mean cooling effect of Indian emissions, India and Western Europe experience comparable regional-mean cooling from their own emissions. In other words, regions like Western Europe, Indonesia, and (to a slightly lesser extent) the United States strongly export the climate impacts of their emissions, while regions like India more strongly experience the cooling effects of their own emissions.
Patterns of surface air temperature response to identical aerosol emissions from eight regions. Spatial patterns of surface air temperature change in response to identical aerosol emissions in each of the eight regions are shown. Global-mean temperature change (K) with standard error is shown at bottom left. Grid markings indicate regions that are not statistically significant at the 95% confidence level via t-test.
Role of aerosol burden and radiative forcing differences
We find that differences in global-mean temperature response resulting from identical emissions emerge at each of several steps between initial emission and eventual temperature effect. Aerosol emissions impact temperature when their resulting atmospheric concentrations interact with atmospheric radiation, both directly and through aerosols' rapid effects on cloud radiative properties and amount. These radiative interactions then modify the top-of-atmosphere energy balance, which creates a global-mean temperature change mediated by a series of feedback processes. Differences in the temperature change induced by each regional emission can emerge at each step in this process: from emission to atmospheric burden, from burden to top-of-atmosphere radiative forcing, and from top-of-atmosphere radiative forcing to temperature change.
The total atmospheric aerosol burdens generated by the identical regional emissions span 128–233 Gg of sulfate aerosol, 9.29–15.7 Gg of black carbon aerosol, and 23.1–54.2 Gg of organic carbon aerosol (Supplementary Table 1; spatial distributions shown in Supplementary Figs. 1, 2, and 3, respectively). However, the atmospheric burdens of the individual aerosol species generated by each regional emission are largely uncorrelated with the regional emissions' relative potency at changing global-mean temperature (Supplementary Fig. 4). Thus, the disparity in temperature effects does not arise solely from a disparity in the atmospheric lifetime and resulting atmospheric burden of aerosols emitted from each region, but also from the differential generation of climate forcing and feedbacks by those burdens.
How do the aerosol burdens from each region's emissions translate into radiative forcing, which in turn drives the global mean temperature change? The radiative effects of sulfate (a global-mean cooling agent), black carbon (a global-mean warming agent), and organic carbon (a global-mean cooling agent, though with minor shortwave absorbing properties) will counteract each other in driving the radiative forcing. This cancellation can be accommodated by using an aggregate measure of aerosol optical properties, such as aerosol optical depth. In order to capture the cancellation between the absorbing and scattering aerosol burdens, we calculate this as the ratio between the change in global-mean total aerosol optical depth and global-mean absorbing aerosol optical depth caused by each regional emission.
The ratio of total to absorbing aerosol optical depth explains approximately 60% of the variance (R = 0.60, 0.44–0.69) in global-mean top-of-atmosphere effective radiative forcing (ERF) from each regional emission (Fig. 3a). We calculate the ERF as the top-of-atmosphere radiative imbalance induced by the regional emission after the atmosphere and land surface has been allowed to respond (see Methods). The variance in ERF is thus largely explained by the rate of global-mean cancellation between the absorbing and scattering aerosol burdens resulting from each regional emission, arising from differences in aerosol mixing rates, relative altitudes, and other microphysical and radiative factors30,31. The remaining variance may be explained by (1) the radiative environment in which the aerosol population is interacting, created by regional differences in climatological cloud cover or background aerosol; or (2) the particular cloud/convective environment in which the aerosols are placed and the influence this has on the manifestation of the aerosols' indirect and semidirect effects on clouds32,33. We find that the ratio of total aerosol optical depth to absorbing aerosol optical depth is approximately as well correlated with clear-sky ERF (R = 0.58, 0.31–0.61) as it is with the total ERF, indicating that the presence or absence of climatological cloud does not substantially impact the translation of the atmospheric aerosol burdens into radiative forcing.
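A minimal sketch of this cross-region comparison (not the authors' analysis code; the inputs are hypothetical per-region global-mean scalars, one entry per emitting region):

import numpy as np

def aod_ratio_vs_erf_correlation(d_aod, d_aaod, erf):
    # Correlation across emitting regions between the ratio of the global-mean change in
    # total aerosol optical depth to that in absorbing aerosol optical depth and the
    # global-mean effective radiative forcing.
    ratio = np.asarray(d_aod) / np.asarray(d_aaod)
    return np.corrcoef(ratio, np.asarray(erf))[0, 1]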
Relationships between aerosol burden, radiative forcing, and temperature change. The ratio of aerosol optical depth produced by the atmospheric burden of sulfate, black carbon, and organic carbon to its absorbing component (AOD/AAOD) (a, y-axis) explains 60% of variance in effective radiative forcing (a, b, x-axis), which in turn has differing efficacy (b, diagonals) at producing global mean temperature changes (b, y-axis). Error bars capture the standard error
Drastic inequalities in forcing efficacy
Although the rate of global-mean cancellation between the absorbing and scattering aerosol burdens is somewhat correlated with the top-of-atmosphere radiative forcing generated by each emitting region, the relative top-of-atmosphere effective radiative forcings are not well correlated with the relative global-mean temperature change (Fig. 3b), indicating a substantial divergence in forcing efficacy depending on emitting region. The forcing efficacy of each regional emission, i.e., the global-mean temperature change per unit top-of-atmosphere effective radiative forcing (captured in Fig. 4a and by placement relative to the diagonal lines in Fig. 3b), ranges from 0.24 ± 0.14 K (W m−2)−1 to 1.3 ± 0.06 K (W m−2)−1. This constitutes a factor-of-five range in forcing efficacy between emitting regions, roughly twice the relative range in estimated forcing efficacy between global historical aerosols and greenhouse gases7,34. Emissions from the U.S. and Western Europe have the largest forcing efficacies (1.32 ± 0.06 and 1.09 ± 0.07 K (W m−2)−1, respectively), producing outsized temperature responses for the effective radiative forcing they generate. Emissions from regions like Brazil, meanwhile, produce a comparable effective radiative forcing to emissions from Western Europe or the U.S., but generate much smaller global-mean temperature change (Fig. 3b). These efficacy differences highlight the shortcomings of using global-mean radiative forcing to estimate the climate effects of highly regionalized forcings.
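As a minimal sketch (not the authors' code; the field names are hypothetical), the efficacy values quoted above can be reproduced from gridded output by forming area-weighted global means of the temperature change and the fixed-SST effective radiative forcing for each regional emission and taking their ratio:

import numpy as np

def global_mean(field, lat):
    # Area-weighted global mean of a (lat, lon) field using cos(latitude) weights.
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(field)
    return np.sum(field * w) / np.sum(w)

def forcing_efficacy(dT_map, erf_map, lat):
    # Global-mean temperature change [K] per unit global-mean ERF [W m^-2].
    # dT_map  : time-averaged surface air temperature change, perturbed minus control
    # erf_map : time-averaged fixed-SST top-of-atmosphere radiative imbalance
    return global_mean(dT_map, lat) / global_mean(erf_map, lat)

For Western European emissions, for example, this ratio corresponds to the 1.09 K (W m−2)−1 value reported above.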
Radiative feedbacks versus forcing efficacy. Differences in forcing efficacy (a) are largely explained by variance in the global-mean radiative gain from surface albedo and cloud feedbacks (b)—i.e., the additional top-of-atmosphere flux change from the feedback due to a unit of effective radiative forcing—across emissions regions. Error bars capture the standard error
Differential excitement of feedbacks
Differing forcing efficacy is fundamentally driven by differences in how the particular spatiotemporal distribution of global- and annual-mean top-of-atmosphere radiative forcing excites feedback processes that contribute to eventual temperature change. This forcing-feedback framework can be represented mathematically as:
$$\mathrm{d}F = \frac{\partial R}{\partial T}\,\mathrm{d}T + \sum_i \frac{\partial R}{\partial X_i}\frac{\partial X_i}{\partial T}\,\mathrm{d}T$$
The climate system balances an initial radiative forcing (\(\mathrm{d}F\)) through the radiative effects of a change in temperature (\(\frac{\partial R}{\partial T}\,\mathrm{d}T\)), which is either amplified or damped by the top-of-atmosphere radiative effects of temperature-sensitive changes in factors like surface albedo, clouds, and water vapor (\(\sum_i \frac{\partial R}{\partial X_i}\frac{\partial X_i}{\partial T}\,\mathrm{d}T\)). The degree to which a given initial radiative forcing excites these feedbacks will determine the extent to which the temperature must change to achieve re-equilibration. The differing spatial distributions of the radiative forcing (Supplementary Fig. 5) versus the surface temperature responses (Fig. 2) demonstrate a role for these remote feedback processes in setting the temperature responses to each regional emission.
We find that the differences in forcing efficacy across emitting regions can be largely explained by differences in the degree to which the top-of-atmosphere radiative forcing from each regional emission excites top-of-atmosphere surface albedo radiative feedbacks and cloud radiative feedbacks. For each feedback process, we characterize its relative rate of excitement by a given region's emissions as the radiative gain: the ratio of the radiative perturbation from the feedback (\(\frac{\partial R}{\partial X_i}\frac{\partial X_i}{\partial T}\,\mathrm{d}T\)) to the initial radiative forcing generated by the emission (\(\mathrm{d}F\)). Water vapor feedbacks show relatively small and uncorrelated differences in radiative gain across emitting regions (Supplementary Fig. 6). However, surface albedo feedbacks, driven primarily by sea ice changes, and cloud feedbacks vary in correspondence with the differences in efficacy (Fig. 4 and Supplementary Fig. 6). The regional differences in the combined radiative gain from the cloud and surface albedo feedbacks (Fig. 4b) explain 84% (R = 0.84, 0.71–0.91) of the variance in efficacy across regional emissions (Fig. 4a).
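Written out in the notation of the equation above, the radiative gain of feedback \(i\) for a given regional emission is its top-of-atmosphere flux contribution normalized by the initial forcing,
$$g_i = \frac{1}{\mathrm{d}F}\,\frac{\partial R}{\partial X_i}\frac{\partial X_i}{\partial T}\,\mathrm{d}T,$$
so the statement above amounts to noting that the combined \(g_{\mathrm{cloud}} + g_{\mathrm{albedo}}\), evaluated for each emitting region, explains most of the spread in forcing efficacy across the eight regions.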
The differences in radiative gain from surface albedo feedbacks and the associated forcing efficacies sort roughly by latitude of emissions. The surface albedo feedback is dominantly driven by sea ice changes (Supplementary Fig. 5) in both the Arctic and Antarctic, and manifests in increased spatial extent and temporal duration of the sea ice. Forcing from Western European and Chinese emissions strongly increases sea ice in both the Arctic and Antarctic, inducing a strong surface albedo feedback radiative gain, while forcing induced by Indian and Brazilian emissions has relatively little effect. The sea ice responses are not strongly spatially collocated with the respective radiative forcings (Supplementary Fig. 5) or atmospheric aerosol burdens (Supplementary Figs. 1, 2, and 3), indicating that the polar effects primarily result from changes in atmospheric circulations that control the energy balance of and/or sea ice dynamics in the polar regions, rather than in situ forcing or aerosol deposition onto the ice. This aligns well with previous studies that show strong dependence of sea-ice albedo feedbacks on the meridional placement of forcings34,35.
The inter-regional differences in radiative gain associated with cloud feedbacks also help to explain the inter-regional differences in forcing efficacy, and are partially driven by the climatological cloud environment with which each region's aerosol emissions interact. Aerosols emitted in India, Indonesia, and Brazil produce large localized cloud changes (Supplementary Fig. 7), associated with the dynamical and thermodynamical effects of the localized aerosol forcing acting on the strongly convective cloud environment in these regions36,37,38,39. The cloud changes in response to all regional emissions are primarily dynamical or thermodynamical—rather than microphysical—in nature, as aerosol indirect effects (captured by the change in cloud droplet number concentration, Supplementary Fig. 7) are locally confined and relatively uncorrelated with the maxima in cloud change. India's strong cloud feedback gain and weak surface albedo feedback gain counteract each other in setting the relative overall efficacy of the radiative forcing from Indian emissions.
In our simulations, the cloud feedbacks generated in several regions largely manifest through a north–south shift in tropical cloud cover (Supplementary Fig. 7) associated with the intertropical convergence zone (ITCZ). This meridional ITCZ shift results from the large-scale atmospheric circulation adjusting to compensate for the hemispheric radiative imbalance induced by the localized aerosol forcing and its climate effects40,41,42,43. This is likely amplified by the surface albedo feedback to each regional emission, which will generate its own hemispheric energy imbalance44. Even in the presence of Southern Hemispheric emissions, Arctic sea ice increases more strongly than Antarctic sea ice in all cases (Supplementary Fig. 8). This is likely attributable to the stronger overall regional climate sensitivity of the Arctic relative to the Antarctic45,46,47. This common hemispheric imbalance in sea ice response contributes to the comparable ITCZ shifts seen in response to many of the regional aerosol emissions.
These results demonstrate that geographic location substantially influences the cooling potential of a given aerosol emission. Crucially, countries that historically have or presently do account for the majority of anthropogenic aerosol emissions—Europe, the U.S., and China—are the regions whose emissions have the largest cooling potential. Meanwhile, regions with rapid current emissions growth and/or that are projected to be the dominant sources of anthropogenic emissions going forward—such as India and East Africa—have the smallest cooling potential. This suggests that the historical spatial distribution of anthropogenic emissions may have had a larger cooling potential than will the emissions distribution of the future. Aerosol emissions are projected to decline over the 21st century, as countries increasingly value the air quality benefits of aerosol mitigation28,48. However, these findings suggest that the rate at which anthropogenic aerosol emissions offset global-mean greenhouse gas-driven warming may decline more rapidly than changes in global total emissions alone would indicate, due to the additional effects of the simultaneous spatial redistribution of emissions.
While fully coupled climate models will capture the effects of these changes in overall rate of offsetting, the reduced form integrated assessment models (IAMs) and earth system models of intermediate complexity (EMICs) used in many impact assessment settings would fail to capture this signal. The widely used dynamic integrated climate-economy (DICE) integrated assessment model, for example, assumes a time-constant translation of aerosol emissions into temperature effects in calculating future global-mean climate change18. It would therefore overestimate the future offsetting effect of anthropogenic aerosol emissions, if their global distribution evolves toward lower potency regions as our results suggest.
These findings have additional implications for estimates of climate sensitivity from historical observations. Estimates of the transient climate response (TCR) and equilibrium climate sensitivity (ECS) to a doubling of CO2 have been attempted from global-mean historical temperature and forcing estimates11,13. Recent studies have argued that the different efficacies of individual forcings operating over the historical period must be factored into these observationally-based estimates of TCR and ECS, but do so using time-invariant estimates of efficacy11,13. The TCR and ECS are estimated via a statistical fit through the distribution of temperature change versus forcing over historical observations, using the same efficacy across the time series. Results from this study suggest that the efficacy of global aerosol forcing may have evolved substantially over the historical period as different regional emissions dominated the global total emissions. This would suggest the need for a time-evolving value for overall aerosol forcing efficacy over the observational record, potentially resulting in different estimates of TCR and ECS from these methods.
Finally, the inequality in the climate impacts of aerosols from different emitting regions has substantial ramifications for climate accounting and aerosol mitigation policy. These findings highlight the importance of recognizing differences in relative climate influence at the scale of regional aerosol emissions, as aerosols are increasingly factored into international climate policy discussions. Studies to date on the climate influence of aerosols' heterogeneity have primarily focused on the total global distribution or on atmospheric concentrations or radiative forcings within certain latitude bands6,7,19. The decisions influencing aerosols' climate effects, however, manifest at the emissions level along political boundaries; increases are generally driven by national or multi-national economic policies that encourage emissions-intensive economic activity, while decreases have been largely driven by regional air quality concerns48. A mismatch, therefore, has existed until now between the regional scale and surface emissions framing of aerosol-relevant decision-making and the hemispheric scales and atmospheric focus of previous scientific analysis.
Although aerosol emissions mitigation (or its opposite) has historically occurred outside the purview of climate policy, efforts are emerging to factor the climate damage from warming revealed by aerosol mitigation into cost-benefit analyses that have historically considered only air quality benefits. Our findings highlight that calculation of these climate damages must be regionally-specific, both in determining global-mean penalties from different emitting regions and in evaluating the regional distribution of those penalties. The pronounced divergence in forcing efficacies seen here for national-scale emissions changes—the scale at which climate accounting schemes and mitigation cost-benefit analyses will be undertaken—highlights the danger associated with using global-mean radiative forcing and metrics based on it, such as global warming potentials, as a universal conversion factor for aerosols' global-mean climate impacts across different emitting regions. These results also suggest that for certain regions, such as India, climate impacts may be dramatically stronger in the mitigating region than outside it, while for others it may not. Such inter-regional differences will influence the rate at which the localized climate penalty from revealed warming counteracts the localized air quality benefits of the aerosol mitigation.
These factors demonstrate that evaluation of the social cost of climate impacts from aerosol mitigation will need to be more regionalized than for long-lived greenhouse gases. The particulars of the magnitude and spatial distribution of each regional emission's temperature effects may differ depending on the climate model used, given the substantial spread that exists in model treatment of aerosol processes49,50,51. However, our results demonstrate that major inter-regional differences do emerge, with substantial scientific and policy implications, providing an important first step in motivating future multi-model assessment using the regional emissions framework laid out here.
Simulations for this study were conducted in the National Center for Atmospheric Research Community Atmosphere Model 5 (NCAR CAM5), the atmospheric component of the Community Earth System Model 152, coupled to a mixed-layer ocean. Mixed-layer coupling reduces computational cost compared with full-ocean coupling, and has been found to lead to responses similar to those with full ocean coupling in the CESM model suite using an earlier version of the CESM atmospheric model53. This similarity is expected to hold with use of CAM554. Simulations using CAM5 coupled to a slab ocean have been widely used in the peer-reviewed literature54,55,56, including to assess the climate response to aerosols57,58. NCAR CAM5 contains a fully interactive aerosol scheme, which transports and removes the emitted aerosols according to the model's meteorology. We use the CAM5 model with its three-mode modal aerosol module (MAM3)59, containing internal mixing of black carbon, sulfate, and organic carbon with other aerosol species using the volume mixing rule. Refractive indices for sulfate and organic carbon are taken from Hess et al.60 and for black carbon from Bond et al.61. CAM5 also includes aerosol indirect effects on clouds and the radiative effects of black carbon deposition on ice.
Simulations and analysis
Nine 100-year, repeating annual cycle simulations were conducted in CAM5 coupled to the mixed-layer ocean: one control simulation, and eight regionally perturbed simulations. The control simulation is a year 2000 climate with non-biomass burning anthropogenic black carbon, organic carbon, sulfur dioxide (SO2), and sulfate (SO4) emissions fields set to 1850 values. In each of the eight regionally perturbed simulations, the relevant region is populated with that region's year 2000 values, scaled at every regional grid point and time step to achieve additional total annual emissions equivalent to China's total year 2000 values: 22.4 Tg sulfate precursor, 1.61 Tg of black carbon emissions, and 4.03 Tg of organic carbon emissions. The 1850 and 2000 baseline emissions fields on which these are based are CAM5's standard historical emissions fields1, and the resulting emissions fields used to drive simulations are publicly accessible to allow for replication in other model suites (see Data availability).
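A minimal sketch of this scaling step (my own illustration, not the actual pre-processing code; variable names, shapes, and units are hypothetical): the year-2000 fluxes inside the region mask are multiplied by a single factor chosen so that the region's annual total matches the target total.

import numpy as np

def scale_regional_emissions(emis, region_mask, cell_area, seconds_per_step, target_total):
    # emis             : emission flux field (time, lat, lon), e.g. kg m^-2 s^-1
    # region_mask      : boolean (lat, lon) mask of the emitting region
    # cell_area        : grid-cell areas [m^2], shape (lat, lon)
    # seconds_per_step : length of each time step [s]
    # target_total     : desired annual regional total, in the mass unit of emis * area * time
    regional = emis * region_mask[None, :, :]
    current_total = np.sum(regional * cell_area[None, :, :]) * seconds_per_step
    scale = target_total / current_total
    return np.where(region_mask[None, :, :], emis * scale, emis)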
The eight regions were defined according to the Intergovernmental Panel on Climate Change's regional definitions, and are shown in Fig. 1a. These eight geopolitical regions were selected to sample major past, present, or projected future emitting economies located in a range of climate regimes. Western Europe and the United States, major emitting regions over the historical period, were chosen to sample the response to aerosols emitted in Northern Hemisphere mid-latitude climate regions with different longitudinal locations and associated storm track regimes. India and China, current major emitting regions, were chosen to capture the response to two different monsoonal paradigms. The projected potential future emitting regions62, Indonesia, Brazil, East Africa, and South Africa, were chosen to assess the impact of aerosols emitted in the following respective climate regimes: the deep convective western Pacific warm pool region, the Pacific and Atlantic basin branches of the Intertropical Convergence Zone, and the Southern Hemisphere midlatitudes. The regional signals cited throughout the manuscript are calculated as the difference between the corresponding regionally perturbed simulation and the control simulation over the final 60 years of each simulation.
Although black carbon, sulfate, and organic carbon aerosol have somewhat opposing optical properties and global-mean top-of-atmosphere radiative effects, which can complicate analysis when they are co-emitted, we choose to include all of these species in each simulation. The species are co-emitted by many industrial processes, and their mitigation and growth often occur in tandem—a characteristic seen in their Representative Concentration Pathway and Shared Socioeconomic Pathway trajectories28,29. Analyzing the climate effects of their simultaneous mitigation or growth, therefore, is likely to provide a better proxy for the climate effects of economy-wide transitions in aerosol emissions, as has been the historical norm and as is projected across the current suite of emissions scenarios used by the climate modeling and policy communities. Simulations that include only one aerosol species may not be additive, particularly in regional responses63, reducing their utility in assessing the climate response to changes in multiple aerosol species simultaneously, as has been the dominant mode of aerosol emission change in reality. Future work will aim to illuminate the degree of additivity in the multi-species response through comparison with a planned suite of complementary single species simulations.
Effective radiative forcing (ERF) values are calculated using a suite of 9 CAM5 simulations formulated in the same way as the mixed-layer ocean simulations described above, but with the mixed-layer ocean and sea ice module replaced with a repeating annual cycle of observed year 2000 sea surface temperatures (SSTs) and sea ice coverage. These simulations are run for 60 years, the model equilibrates within the first 20 years, and the final 40 years of the simulation are used in the ERF calculation. The ERF is calculated as the top-of-atmosphere radiative imbalance between the control and regional fixed-SST simulation, i.e., after atmospheric and land surface temperatures have been allowed to adjust to the regional emissions. This follows the Fixed SST radiative forcing definition described in Myhre et al.27.
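A sketch of this fixed-SST diagnostic (again assuming hypothetical, time-averaged net top-of-atmosphere flux maps from the perturbed and control fixed-SST runs):

import numpy as np

def effective_radiative_forcing(net_toa_perturbed, net_toa_control, lat):
    # Fixed-SST ERF [W m^-2]: area-weighted global mean of the TOA net-flux difference,
    # perturbed minus control, averaged over the final 40 years of each run.
    diff = net_toa_perturbed - net_toa_control          # (lat, lon)
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(diff)
    return np.sum(diff * w) / np.sum(w)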
Feedback calculations
Top-of-atmosphere radiative feedbacks are calculated via standard radiative kernel methods64,65,66, using radiative kernels generated in the NCAR CESM-CAM5 model67 and associated kernel calculation code provided by Pendergrass (https://github.com/apendergrass/cam5-kernels). Use of radiative kernels generated within the same climate model as the simulations to which they are applied, as is the case here, is generally considered to be best practice for reducing errors associated with differences in radiative transfer codes between climate models64,68.
The radiative kernel for a given feedback process consists of gridded monthly values of the radiative perturbation induced by a prescribed change in a given feedback process, such as surface albedo or atmospheric water vapor. These gridded kernels are then multiplied by the monthly change in the relevant feedback process for each of the regional emissions signals (see Simulations and analysis), normalized by the prescribed change used to derive the kernel. This process is used to calculate the feedbacks due to all process other than the cloud radiative feedback. The cloud radiative feedback is calculated using the adjusted cloud radiative effect method64. In this method, a cloud radiative effect is calculated as the difference between the all-sky and clear-sky (i.e., non-cloud-permitting) top-of-atmosphere radiative imbalance. This cloud radiative effect is then adjusted for potential contamination by non-cloud factors through subtraction of the kernel-calculated cloudy-sky (i.e., all-sky kernel minus clear-sky kernel) values of the other feedbacks. This method has been shown to provide a reliable estimate for cloud feedbacks relative to other, more computationally prohibitive, methods47.
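The kernel arithmetic described above can be summarized in the following sketch (a simplification, not the cited kernel code; it assumes the kernels and the monthly-mean response fields are already on a common grid, with hypothetical names):

import numpy as np

def kernel_feedback_flux(kernel, d_state, d_norm=1.0):
    # TOA flux change from one non-cloud feedback: the monthly kernel [W m^-2 per unit
    # state change] times the monthly-mean change in that state variable, normalized by
    # the perturbation used to construct the kernel.
    return kernel * d_state / d_norm

def adjusted_cloud_feedback_flux(d_cre, noncloud_allsky, noncloud_clearsky):
    # Adjusted cloud-radiative-effect method: the change in cloud radiative effect
    # (all-sky minus clear-sky TOA imbalance) corrected by subtracting the cloudy-sky
    # (all-sky kernel minus clear-sky kernel) contributions of the other feedbacks.
    cloud_masking = sum(noncloud_allsky) - sum(noncloud_clearsky)
    return d_cre - cloud_masking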
Statistical testing
Error ranges for the values in Figs. 1, 3, and 4, Supplementary Figs. 4 and 6, and Supplementary Table 1 are calculated as the standard error of the mean, using the final 60 years of each simulation's data, with the exception of the ERF values in Supplementary Table 1, which use the final 40 years of the Fixed SST simulations' data. In each case an effective sample size is calculated, accounting for any autocorrelation in the time series69. The statistical significance masking in Fig. 2 and Supplementary Figs. 1, 2, 3, 7, and 8 is calculated using a one-sample t-test, again using the final 60 years of model data with an effective sample size adjusted for autocorrelation69. Supplementary Fig. 5 is calculated using the final 40 years of the Fixed SST simulation data. Regions with gridlines in these figures indicate signals that are not significant at the 95% confidence level. Error ranges provided for R values are calculated using jackknife resampling70, in which the R value is recalculated seven times, dropping one of the regions each time.
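A sketch of these error estimates (my own implementation of the standard lag-1 autocorrelation adjustment and leave-one-out jackknife, not the authors' code):

import numpy as np

def effective_sample_size(x):
    # Reduce the nominal sample size for lag-1 autocorrelation: n_eff = n (1 - r1) / (1 + r1).
    x = np.asarray(x, dtype=float)
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    return len(x) * (1 - r1) / (1 + r1)

def standard_error(x):
    # Standard error of the mean of an annual time series, using the effective sample size.
    return np.std(x, ddof=1) / np.sqrt(effective_sample_size(x))

def jackknife_r_range(a, b):
    # Leave-one-region-out range of the correlation coefficient between two per-region arrays.
    rs = [np.corrcoef(np.delete(a, i), np.delete(b, i))[0, 1] for i in range(len(a))]
    return min(rs), max(rs)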
The code for the NCAR CESM model is publicly available at http://www.cesm.ucar.edu/models/. The code used to set up the simulations used in this study and to analyze data is available from the authors upon request.
Emissions input fields used to drive the simulations are available at https://drive.google.com/drive/folders/1ggEJ4J6vrwk6HDYAFI2lgYhmYjRkJ8kk?usp=sharing. Simulation output evaluated in this study are available from the authors upon request.
This Article was originally published without the accompanying Peer Review File. This file is now available in the HTML version of the Article; the PDF was correct from the time of publication.
Lamarque, J.-F. et al. Historical (1850–2000) gridded anthropogenic and biomass burning emissions of reactive gases and aerosols: methodology and application. Atmos. Chem. Phys. 10, 7017–7039 (2010).
Shindell, D. et al. Simultaneously mitigating near-term climate change and improving human health and food security. Science 335, 183–189 (2012).
Burney, J. & Ramanathan, V. Recent climate and air pollution impacts on Indian agriculture. PNAS 111, 16319–16324 (2014).
Heft-Neal, S., Burney, J., Bendavid, E. & Burke, M. Robust relationship between air quality and infant mortality in Africa. Nature 559, 254–258 (2018).
Bindoff, N. L. et al. in Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (eds Stocker, T. F. et al.) Ch. 10 (Cambridge University Press, Cambridge, 2013).
Shindell, D. et al. Spatial scales of climate response to inhomogeneous radiative forcing. J. Geophys. Res. 115, D19110 (2010).
Hansen, J. et al. Efficacy of climate forcings. J. Geophys. Res. 110, D18104 (2005).
Taylor, K. E. & Penner, J. E. Response of the climate system to atmospheric aerosols and greenhouse gases. Nature 369, 734–737 (1994).
Huang, Y., Tan, X. & Xia, Y. Inhomogeneous radiative forcing of homogeneous greenhouse gases. J. Geophys. Res. 121, 2780–2789 (2016).
Shindell, D. T. Inhomogeneous forcing and transient climate sensitivity. Nat. Clim. Change 4, 274–277 (2014).
Marvel, K., Schmidt, G. A., Miller, R. L. & Nazarenko, L. S. Implications for climate sensitivity from the response to individual forcings. Nat. Clim. Change 6, 386–389 (2016).
Shindell, D. T., Faluvegi, G., Rotstayn, L. & Milly, G. Spatial patterns of radiative forcing and surface temperature response. J. Geophys. Res. Atmos. 120, 5385–5403 (2015).
Kummer, J. R. & Dessler, A. E. The impact of forcing efficacy on the equilibrium climate sensitivity. Geophys. Res. Lett. 41, 3565–3568 (2014).
Ramanathan, V., Crutzen, P. J., Kiehl, J. T. & Rosenfeld, D. Aerosols, climate, and the hydrological cycle. Science 294, 2119–2124 (2001).
Allen, M. R. et al. New use of global warming potentials to compare cumulative and short-lived climate pollutants. Nat. Clim. Change 6, 773–776 (2016).
Bond, T. C., Zarzycki, C., Flanner, M. G. & Koch, D. M. Quantifying immediate radiative forcing by black carbon and organic matter with the specific forcing pulse. Atmos. Chem. Phys. 11, 1505–1525 (2011).
Collins, W. J. et al. Global and regional temperature-change potentials for near-term climate forcers. Atmos. Chem. Phys. 13, 2471–2485 (2013).
Nordhaus, W. Estimates of the social cost of carbon: concepts and results from the DICE-2013R model and alternative approaches. J. Assoc. Environ. Resour. Econ. 1, 273–312 (2014).
Shindell, D. & Faluvegi, G. Climate response to regional radiative forcing during the twentieth century. Nat. Geosci. 2, 294–300 (2009).
Forster, P. M. de F., Blackburn, M., Glover, R. & Shine, K. P. An examination of climate sensitivity for idealised climate change experiments in an intermediate general circulation model. Clim. Dyn. 16, 833–849 (2000).
Shindell, D. T., Voulgarakis, A., Faluvegi, G. & Milly, G. Precipitation response to regional radiative forcing. Atmos. Chem. Phys. 12, 6969–6982 (2012).
Shindell, D. T. Evaluation of the absolute regional temperature potential. Atmos. Chem. Phys. 12, 7955–7960 (2012).
Aamaas, B., Berntsen, T. K., Fuglestvedt, J. S., Shine, K. P. & Collins, W. J. Regional temperature change potentials for short-lived climate forcers based on radiative forcing from multiple models. Atmos. Chem. Phys. 17, 10795–10809 (2017).
Koch, D., Bond, T. C., Streets, D., Unger, N. & van der Werf, G. R. Global impacts of aerosols from particular source regions and sectors. J. Geophys. Res. 112, D02205 (2007).
Liu, L. et al. A PDRMIP Multimodel Study on the impacts of regional aerosol forcings on global and regional precipitation. J. Clim. 31, 4429–4447 (2018).
Conley, A. J. et al. Multimodel surface temperature responses to removal of U.S. sulfur dioxide emissions. J. Geophys. Res. 123, 2773–2796 (2018).
Myhre, G. et al. in Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (eds Stocker, T. F. et al.) Ch. 8 (Cambridge University Press, 2013).
Rao, S. et al. Future air pollution in the shared socio-economic pathways. Glob. Environ. Change 42, 346–358 (2017).
Rogelj, J. et al. Air-pollution emission ranges consistent with the representative concentration pathways. Nat. Clim. Change 4, 446–450 (2014).
Loeb, N. G. & Su, W. Direct aerosol radiative forcing uncertainty based on a radiative perturbation analysis. J. Clim. 23, 5288–5293 (2010).
Ma, X., Yu, F. & Luo, G. Aerosol direct radiative forcing based on GEOS-Chem-APM and uncertainties. Atmos. Chem. Phys. 12, 5563–5581 (2012).
Boucher, O. et al. in Climate change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (eds Stocker, T. F. et al.) 571–657 (Cambridge University Press, 2013).
Koch, D. & Del Genio, A. D. Black carbon semi-direct effects on cloud cover: review and synthesis. Atmos. Chem. Phys. 10, 7685–7696 (2010).
Stuber, N., Ponater, M. & Sausen, R. Why radiative forcing might fail as a predictor of climate change. Clim. Dyn. 24, 497–510 (2005).
Joshi, M. et al. A comparison of climate response to different radiative forcings in three general circulation models: towards an improved metric of climate change. Clim. Dyn. 20, 843–854 (2003).
Bollasina, M. A., Ming, Y., Ramaswamy, V., Schwarzkopf, M. D. & Naik, V. Contribution of local and remote anthropogenic aerosols to the twentieth century weakening of the South Asian Monsoon. Geophys. Res. Lett. 41, 680–687 (2014).
Persad, G. G., Ming, Y. & Ramaswamy, V. Tropical tropospheric-only responses to absorbing aerosols. J. Clim. 25, 2471–2480 (2012).
Salzmann, M., Weser, H. & Cherian, R. Robust response of Asian summer monsoon to anthropogenic aerosols in CMIP5 models. J. Geophys. Res. Atmos. 119, 11,321–11,337 (2014).
Tao, W.-K., Chen, J.-P., Li, Z., Wang, C. & Zhang, C. Impact of aerosols on convective clouds and precipitation. Rev. Geophys. 50, RG2001 (2012).
Broccoli, A. J., Dahl, K. A. & Stouffer, R. J. Response of the ITCZ to Northern Hemisphere cooling. Geophys. Res. Lett. 33, L01702 (2006).
Ming, Y. & Ramaswamy, V. A model investigation of aerosol-induced changes in tropical circulation. J. Clim. 24, 5125–5133 (2011).
Hwang, Y.-T., Frierson, D. M. W. & Kang, S. M. Anthropogenic sulfate aerosol and the southward shift of tropical precipitation in the late 20th century. Geophys. Res. Lett. 40, 2845–2850 (2013).
Allen, R. J., Evan, A. T. & Booth, B. B. B. Interhemispheric aerosol radiative forcing and tropical precipitation shifts during the late twentieth century. J. Clim. 28, 8219–8246 (2015).
Chiang, J. C. H. & Bitz, C. M. Influence of high latitude ice cover on the marine intertropical convergence zone. Clim. Dyn. 25, 477–496 (2005).
Salzmann, M. The polar amplification asymmetry: role of Antarctic surface height. Earth Syst. Dynam. 8, 323–336 (2017).
Goosse, H. et al. Quantifying climate feedbacks in polar regions. Nat. Commun. 9, 1919 (2018).
Article ADS PubMed PubMed Central CAS Google Scholar
Serreze, M. C. & Barry, R. G. Processes and impacts of Arctic amplification: a research synthesis. Glob. Planet. Change 77, 85–96 (2011).
Smith, S. J., West, J. J. & Kyle, P. Economically consistent long-term scenarios for air pollutant emissions. Clim. Change 108, 619–627 (2011).
Myhre, G. et al. PDRMIP: A Precipitation Driver and Response Model Intercomparison Project—protocol and preliminary results. Bull. Am. Meteor. Soc. 98, 1185–1198 (2016).
Stier, P. et al. Host model uncertainties in aerosol radiative forcing estimates: results from the AeroCom Prescribed intercomparison study. Atmos. Chem. Phys. 13, 3245–3270 (2013).
Kasoar, M. et al. Regional and global temperature response to anthropogenic SO2 emissions from China in three climate models. Atmos. Chem. Phys. 16, 9785–9804 (2016).
Hurrell, J. W. et al. The community earth system model: a framework for collaborative research. Bull. Am. Meteor. Soc. 94, 1339–1360 (2013).
Bitz, C. M. et al. Climate sensitivity of the community climate system model, version 4. J. Clim. 25, 3053–3070 (2011).
Gettelman, A., Kay, J. E. & Shell, K. M. The evolution of climate sensitivity and climate feedbacks in the community atmosphere model. J. Clim. 25, 1453–1469 (2011).
Modak, A., Bala, G., Cao, L. & Caldeira, K. Why must a solar forcing be larger than a CO2 forcing to cause the same global mean surface temperature change? Environ. Res. Lett. 11, 044013 (2016).
Pedersen, R. A., Cvijanovic, I., Langen, P. L. & Vinther, B. M. The impact of regional Arctic Sea ice loss on atmospheric circulation and the NAO. J. Clim. 29, 889–902 (2015).
Ganguly, D., Rasch, P. J., Wang, H. & Yoon, J. -H. Climate response of the South Asian monsoon system to anthropogenic aerosols. J. Geophys. Res. 117,D13209 (2012).
Clark, S. K., Ward, D. S. & Mahowald, N. M. The sensitivity of global climate to the episodicity of fire aerosol emissions. J. Geophys. Res. 120, 11589–11607
Liu, X. et al. Toward a minimal representation of aerosols in climate models: description and evaluation in the Community Atmosphere Model CAM5. Geosci. Model Dev. 5, 709–739 (2012).
Hess, M., Koepke, P. & Schult, I. Optical properties of aerosols and clouds: The Software Package OPAC. Bull. Am. Meteorol. Soc. 79, 831–844 (1998).
Bond, T. C., Habib, G. & Bergstrom, R. W. Limitations in the enhancement of visible light absorption due to mixing state. J. Geophys. Res. 111, D20211 (2006).
Takemura, T. Distributions and climate effects of atmospheric aerosols from the preindustrial era to 2100 along Representative Concentration Pathways (RCPs) simulated using the global aerosol model SPRINTARS. Atmos. Chem. Phys. 12, 11555–11572 (2012).
Persad, G. G., Paynter, D. J., Ming, Y. & Ramaswamy, V. Competing atmospheric and surface-driven impacts of absorbing aerosols on the East Asian Summertime Climate. J. Clim. 30, 8929–8949 (2017).
Soden, B. J. et al. Quantifying climate feedbacks using radiative kernels. J. Clim. 21, 3504–3520 (2008).
Shell, K. M., Kiehl, J. T. & Shields, C. A. Using the radiative kernel technique to calculate climate feedbacks in NCAR's Community Atmospheric Model. J. Clim. 21, 2269–2282 (2008).
Soden, B. J. & Held, I. M. An assessment of climate feedbacks in coupled ocean–atmosphere Models. J. Clim. 19, 3354–3360 (2006).
Pendergrass, A. G., Conley, A. & Vitt, F. M. Surface and top-of-atmosphere radiative feedback kernels for CESM-CAM5. Earth Syst. Sci. Data 10, 317–324 (2018).
DeAngelis, A. M., Qu, X., Zelinka, M. D. & Hall, A. An observational radiative constraint on hydrologic cycle intensification. Nature 528, 249–253 (2015).
Santer, B. D. et al. Statistical significance of trends and trend differences in layer-average atmospheric temperature time series. J. Geophys. Res. 105, 7337–7356 (2000).
Friedl, H. & Stampfer, E. in Wiley StatsRef: Statistics Reference Online. https://doi.org/10.1002/9781118445112.stat07185 (American Cancer Society, 2014).
K. N. Abhini1,
Akhila B. Rajan1,
K. Fathimathu Zuhara1 &
Denoj Sebastian ORCID: orcid.org/0000-0002-0778-0906
This study targets the enhanced production of l-asparaginase, an antitumor enzyme, by Acinetobacter baumannii ZAS1. This organism is an endophyte isolated from the medicinal plant Annona muricata. Plackett–Burman design (PBD) and central composite design (CCD) were used for statistical optimization of the media components.
The organism exhibited an enzyme activity of 18.85 ± 0.2 U/mL in unoptimized media. Eight variables: l-asparagine, peptone, glucose, lactose, yeast extract, NaCl, MgSO4, and Na2HPO4 were screened by PBD. Among them, only four factors—l-asparagine, peptone, glucose, and Na2HPO4—were found to affect enzyme production significantly (p < 0.05). Furthermore, CCD was applied to these selected variables to determine the concentrations and interactive effects that enhance enzyme output. The results revealed that the optimized medium produces a higher concentration of enzyme than the unoptimized medium. After optimization of the media components, the maximum l-asparaginase activity was 45.59 ± 0.36 U/mL, close to the predicted value of 45.04 ± 0.42 U/mL. Optimization of the process parameters thus gave a 2.41-fold increase in l-asparaginase production by the endophyte Acinetobacter baumannii ZAS1.
The findings of this study indicate that the l-asparaginase-producing endophyte Acinetobacter baumannii ZAS1 can be exploited for increased enzyme output, and that the statistical methods Plackett–Burman design and the central composite design of response surface methodology are handy tools for optimizing media components for increased l-asparaginase synthesis.
l-asparaginase (EC 3.5.1.1) is an enzyme that belongs to the amidase group, which catalyzes the conversion of l-asparagine into aspartic acid and ammonia [1]. The amino acid l-asparagine is essential for cell survival and is primarily involved in the synthesis of several essential proteins [2]. It is a non-essential amino acid that healthy cells can make with the help of the enzyme l-asparagine synthetase [3]. However, due to the insufficient expression of this particular enzyme in certain tumors, such as leukemic cells, they rely on the extracellular supply of this amino acid for their multiplication and survivability [3]. This suggests the importance of l-asparagine deprivation therapy for the selective elimination of tumor cells. l-asparaginase is one of the drugs used in the treatment of acute lymphoblastic leukemia (ALL).
The commercially available l-asparaginase is obtained from bacteria, mainly Escherichia coli and Erwinia chrysanthemi. The enzyme derived from Escherichia coli exhibits several side effects when used as a drug, including a high rate of hypersensitive reactions [4, 5]. Hypersensitivity reactions can range from moderate allergic reactions to potentially lethal anaphylaxis [6]. In addition, several side effects like edema, serum sickness, bronchospasm, urticaria and rash, itching and swelling of extremities, and erythema have also been reported [7]. The enzyme derived from Erwinia chrysanthemi has been used successfully to reduce the rate of hypersensitivity to 12–20%, but this drug has a shorter half-life and often fails to achieve complete remission [7, 8]. The essential prerequisites for developing a bio-better drug are obtaining an enzyme from a novel habitat and augmenting its production through process parameter optimization [9]. Many researchers are now using statistical methods to maximize l-asparaginase production by various microorganisms [10]. Increased l-asparaginase production (17.089 U/mL) from halotolerant Bacillus licheniformis PPD37 was achieved by optimizing the process parameters through RSM [11]. Another study reported improved l-asparaginase production (19.36 U/mL) from Pectobacterium carotovorum through RSM [12].
Recent research emphasizes the importance of this enzyme as an anticancer agent against various cancer cell lines [13]. This enzyme is also used in the food industry as an acrylamide-reducing agent in starchy food products. The diverse applications and growing demand have prompted scientists to search for efficient l-asparaginase producers from new environments. Recently, the search for metabolites with desirable properties has concentrated on organisms from novel biotopes [14]. Endophytes, microbes that grow symbiotically within plant tissue, are associated with such biotopes. Bioactive substances from endophytes are gaining attention due to their vast diversity, reduced toxicity, and ability to withstand different environmental conditions [15]. Although there have been several studies of l-asparaginase from various microbial sources, the production of this enzyme by medicinal plant endophytes has received less attention. In this context, we emphasize the need to study l-asparaginase from endophytes and improve its synthesis. Moreover, response surface methodology (RSM) is one of the statistical methods that have been successfully used to optimize the growth conditions of organisms to increase overall metabolite and biomass production [16, 17].
Acinetobacter baumannii ZAS1 is an endophyte obtained from the medicinal plant Annona muricata and thus expected to produce minimal side effects. This work aimed to optimize physical factors and media components for enhanced l-asparaginase synthesis from this organism. We employed the statistical method Plackett-Burman design and response surface methodology's central composite design to accomplish this. The main objective of this work was to optimize l-asparaginase production using appropriate media components to produce this enzyme at a low cost. The Acinetobacter baumannii ZAS1 strain utilized in this investigation produced merely 18.85 ± 0.2 U/mL of l-asparaginase in the unoptimized medium. As a result, statistical optimization of nutritional requirements is being investigated to boost l-asparaginase synthesis by this organism.
The process parameters of the endophyte Acinetobacter baumannii ZAS1 were optimized through the OFAT (one factor at a time) method as well as statistical methods like Plackett–Burman design (PBD) and central composite design (CCD) of RSM.
Microbial strain
The l-asparaginase producing bacterial endophyte Acinetobacter baumannii ZAS1 (Genbank Accession No. KX186685) [18], isolated from the medicinal plant Annona muricata (Accession No. CALI 7006), was used for the study. The organism was sub-cultured in nutrient agar (NA) medium every month and stored at 4 °C.
l-asparagine used for the study was purchased from Sisco Research Laboratories Pvt Ltd (SRL). Other chemicals (analytical grade) used were obtained from different commercial sources.
Acinetobacter baumannii ZAS1 cultures were grown in unoptimized M9 medium [19], which contained the following components: l-asparagine (10 g/l), KH2PO4 (3 g/l), Na2HPO4 (6 g/l), NaCl (0.5 g/l), CaCl2.2H2O (0.001 g/l), MgSO4.7H2O (0.12 g/l), and agar (20 g/l). The pH of the medium was 7.0, and the incubation temperature was set as 37 °C.
Extracellular l-asparaginase production and extraction
The bacterial culture used as primary inoculum was grown overnight in 10 ml of M9 medium at 37 °C in an incubator (shaking). In 50 ml of M9 broth medium, 1% of the overnight grown culture (adjusted to a McFarland standard of 0.5) was used as an inoculum to produce l-asparaginase. Following incubation, the culture was centrifuged at 10,000× g for 10 min at 4 °C, and the supernatant was used to determine the enzyme activity.
l-asparaginase assay
The Nesslerization reaction [20] is a commonly used method for the determination of l-asparaginase activity. The amount of ammonia liberated was determined in this reaction. At 37 °C, one unit of l-asparaginase activity corresponds to the amount of enzyme required to liberate 1 μmol of ammonia per minute.
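For readers unfamiliar with how the assay readout maps to the activity values quoted throughout (U/mL), the unit definition above can be turned into a simple calculation. The sketch below is illustrative only: the incubation time, enzyme volume, and ammonia amount are assumed placeholder values, not the actual conditions of the cited protocol [20].

```python
# Minimal sketch of converting a Nesslerization readout into l-asparaginase
# activity (U/mL). One unit liberates 1 umol of ammonia per minute at 37 C.
# The incubation time, enzyme volume, and ammonia amount below are assumed
# placeholder values, not the actual conditions of the cited protocol [20].

def asparaginase_activity(umol_ammonia, incubation_min=30.0, enzyme_ml=0.1):
    """Return activity in U/mL of crude enzyme added to the reaction."""
    return umol_ammonia / (incubation_min * enzyme_ml)

# Example: 56.6 umol NH3 released in 30 min by 0.1 mL of crude enzyme
# corresponds to ~18.9 U/mL, comparable to the unoptimized activity above.
print(round(asparaginase_activity(56.6), 2))
```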
Optimization of physical parameters through OFAT method
The OFAT method was used to optimize the temperature, pH, and agitation speed for l-asparaginase production. For optimizing temperature, the unoptimized medium was seeded with inoculum and incubated in a shaking incubator at varying temperatures (27 °C, 32 °C, 37 °C, 42 °C, and 47 °C). The culture was centrifuged after 24 h of incubation, and the cell-free supernatant was used as the crude enzyme for the l-asparaginase assay. For optimizing pH, the unoptimized medium with varying pH (6, 7, 8, 9, and 10) was seeded with inoculum and incubated in a shaking incubator at the optimized temperature. The appropriate agitation speed for the best production of the enzyme was determined by inoculating the medium (adjusted to the optimized pH) and incubating at various agitation speeds (50, 100, 150, and 200 rpm) at the optimized temperature. The l-asparaginase assay was performed in duplicate with crude enzyme produced from a 24-h culture.
Selection of carbon-nitrogen and mineral sources through OFAT method
For selecting the best sources of carbon, nitrogen, and ion, the unoptimized M9 medium was supplemented with one of the carbon (sucrose, lactose, maltose, glucose, fructose, and galactose), nitrogen (peptone, beef extract, yeast extract, malt extract, ammonium chloride, and sodium nitrate), or mineral (NaCl, KCl, Na2HPO4, KH2PO4, MgSO4) sources. It was inoculated with primary culture and incubated under optimized physical conditions.
Screening of important variables through PBD
Different carbon, nitrogen, and mineral sources were screened using a statistical method called PBD [21]. The statistical software package MINITAB (Release 16, PA, USA) was used to design the experiments under PBD. Eight variables were considered in this study. Among these variables, one is l-asparagine, which is an inducer for the production of the enzyme. The remaining seven variables were sources of carbon, nitrogen, and ions obtained by preliminary screening of different variables through the OFAT method. Table 1 shows the experimental model for the selection of different variables. Each variable was investigated at its low level (− 1) and high level (+ 1).
Table 1 Variables and their codes used in Plackett-Burman design
This PBD is based on the first-order model;
$$Y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i$$
where Y is l-asparaginase activity (the response), β0 is the model intercept, βi is the linear coefficient, and xi is the level of an independent variable.
The eight variables selected for the experiment were l-asparagine, peptone, glucose, lactose, yeast extract, NaCl, MgSO4, and Na2HPO4. The statistical tool designed 20 experimental runs to screen those variables. The tests were conducted in duplicate, and the calculations were based on the average enzyme activity (Table 2). Based on this, variables with confidence levels equal to or greater than 95% were thought to impact l-asparaginase production significantly.
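The screening logic of PBD can be illustrated with a short numerical sketch. The design below is the classical 12-run Plackett–Burman construction (cyclic shifts of a generating row plus an all-low run) with the first eight columns assigned to the eight factors; the response values are invented placeholders, not the measured activities of Table 2, and the actual study used a 20-run design generated in MINITAB.

```python
# Illustrative Plackett-Burman screening: build a 12-run design for up to 11
# factors, keep 8 columns, and fit the first-order model Y = b0 + sum(bi*xi).
# The activities y are invented placeholders, not the data of Table 2.
import numpy as np

gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])      # classical generator
rows = [np.roll(gen, k) for k in range(11)] + [np.full(11, -1)]
design = np.vstack(rows)[:, :8]            # coded +/-1 levels, 12 runs x 8 factors

# Placeholder mean activities (U/mL) for the 12 runs
y = np.array([24.1, 22.8, 15.3, 20.9, 12.7, 18.4,
              25.6, 14.2, 21.7, 17.9, 13.5, 11.8])

# Fit Y = b0 + sum_i b_i x_i by least squares
A = np.column_stack([np.ones(len(y)), design])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
effects = 2 * beta[1:]                     # effect of moving a factor from -1 to +1
for name, eff in zip(["Asn", "peptone", "glucose", "lactose",
                      "yeast extract", "NaCl", "MgSO4", "Na2HPO4"], effects):
    print(f"{name:14s} effect = {eff:+.2f} U/mL")
```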
Table 2 Experimental design and results of PBD
Optimization of critical medium components using CCD
Variables selected through PBD were further optimized using CCD, which includes replicated center points. A CCD with the four variables at five coded levels (−α, −1, 0, +1, and +α) was generated using the statistical software package Design Expert 7® (Stat-Ease Inc., USA) (Table 3). A total of 30 experimental runs were created (Table 4), in which 24 runs were combinations of the factorial and axial levels of the study parameters and the remaining six runs were replicates of the center point. The runs performed at the center point allow the curvature to be estimated and the lack of fit, which describes the adequacy of the model, to be assessed [22]. CCD-based designs are widely used for optimizing the production of many industrial enzymes [23, 24]. The experiments were conducted in duplicate, and the average l-asparaginase enzyme activity was used as the response.
Table 3 Experimental range and levels of independent variables used for CCD
Table 4 CCD of selected variables with the experimental and predicted response
Analysis of variance (ANOVA) of data was performed. The response surface regression procedure was used to fit the experimental results by the second-order polynomial equation:
$$\begin{aligned} Y = {} & \beta_0 + \beta_1 A + \beta_2 B + \beta_3 C + \beta_4 D + \beta_{11} A^2 + \beta_{22} B^2 + \beta_{33} C^2 + \beta_{44} D^2 \\ & + \beta_{12} AB + \beta_{13} AC + \beta_{14} AD + \beta_{23} BC + \beta_{24} BD + \beta_{34} CD \end{aligned}$$
where Y is the predicted l-asparaginase activity (response), A, B, C, and D are the independent variables studied, β0 is intercept, β1, β2, β3, and β4 are linear coefficients, β11, β22, β33, β44 are squared coefficients, and β12, β13, β14, β23, β24, and β34 are interaction coefficients.
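As a rough illustration of how such a model is fitted, the sketch below builds the standard four-factor CCD in coded units (16 factorial points, 8 axial points at α = 2, and 6 center replicates, i.e., 30 runs as in Table 4) and estimates the 15 coefficients of the second-order polynomial by ordinary least squares. The response here is synthetic, generated from an arbitrary quadratic purely to show the mechanics; the study itself fitted the measured activities in Design Expert.

```python
# Illustrative construction and fit of a 4-factor central composite design:
# 16 factorial points, 8 axial points at alpha = 2, and 6 centre replicates
# (30 runs, as in Table 4). The response y is synthetic, generated from an
# arbitrary quadratic purely to demonstrate the least-squares fit.
import itertools
import numpy as np

factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=4)))
axial = np.vstack([2.0 * np.eye(4), -2.0 * np.eye(4)])
center = np.zeros((6, 4))
X = np.vstack([factorial, axial, center])            # 30 x 4 coded design

def model_matrix(X):
    """Intercept, linear, squared, and pairwise interaction terms (15 columns)."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(4)]
    cols += [X[:, i] ** 2 for i in range(4)]
    cols += [X[:, i] * X[:, j] for i in range(4) for j in range(i + 1, 4)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
true_beta = rng.normal(size=15)
y = model_matrix(X) @ true_beta + rng.normal(scale=0.1, size=len(X))

beta_hat, *_ = np.linalg.lstsq(model_matrix(X), y, rcond=None)
print(np.round(beta_hat, 2))                         # recovers true_beta closely
```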
Validation of the model
Experiments were conducted to validate the statistical model at its optimal levels of the most significant variables under a predicted set of conditions.
Analysis of the growth curve
A growth curve analysis was performed to examine the correlation between bacterial growth and l-asparaginase production in the optimized medium. Bacterial growth was monitored using an optical density (OD) at 600 nm, and activity was recorded in U/mL every 2 h.
Extracellular l-asparaginase production
According to the quantitative analysis using the Nesslerization reaction, the organism Acinetobacter baumannii ZAS1 showed an enzyme activity of 18.85 ± 0.2 U/mL in unoptimized media. The value obtained reflects the quantity of ammonia released by the catalytic action of l-asparaginase on the substrate l-asparagine.
Optimization of physical parameters for l-asparaginase production
The physical factors like temperature, pH, and agitation speed that influence l-asparaginase production were optimized by the OFAT method. The results are depicted in Fig. 1. In the temperature study, the endophyte Acinetobacter baumannii ZAS1 showed excellent activity (18.854 ± 0.2 U/mL) at 37 °C; this activity was sustained up to 42 °C and declined with further increases in temperature. The optimum pH for augmented production (19.201 ± 0.2 U/mL) of l-asparaginase by this organism was pH 7; activity decreased at pH values above and below this point. Furthermore, the optimum agitation speed for the maximum production of the l-asparaginase enzyme (18.854 ± 0.6 U/mL) by Acinetobacter baumannii ZAS1 was 150 rpm, whereas production was lower at speeds above and below this, i.e., at 100 rpm and 200 rpm.
OFAT optimization of a temperature, b pH, and c agitation speed
Primary screening of carbon, nitrogen, and ion sources
The result of the primary screening experiment is shown in Fig. 2. According to the experimental results, glucose and lactose were the most promising carbon sources for l-asparaginase production among the different carbon sources evaluated (Fig. 2a). Among the various nitrogen sources tested, peptone and yeast extract gave increased enzyme activity (Fig. 2b). Among the diverse ion sources tested, NaCl, Na2HPO4, and MgSO4 considerably improved enzyme production (Fig. 2c). According to these findings, the production of l-asparaginase by Acinetobacter baumannii ZAS1 improves when additional carbon, nitrogen, and ion sources are included in the media.
OFAT optimization of a carbon, b nitrogen, and c ion sources supplementing M9 media
Screening of most significant variables through PBD
PBD was applied in this study to determine the main media components influencing l-asparaginase production. l-asparagine, peptone, glucose, and Na2HPO4 were found to have a major impact on l-asparaginase synthesis by Acinetobacter baumannii ZAS1. The complete experimental design, including response values, is provided in Tables 2 and 5. The selected variable l-asparagine (p-value 0) significantly affects the production of l-asparaginase. The second variable selected through the PBD experiment was peptone (p-value 0.025), an additional nitrogen source for the enhanced production of l-asparaginase. The third variable chosen by the PBD experiment was glucose (p-value 0.027), which acts as a carbon source for the better production of this enzyme. Through this experiment, the ion source Na2HPO4 (p-value 0.005) also showed a significant effect on l-asparaginase production. The results revealed that additional nitrogen, carbon, and ion sources contributed significantly to the highest production of this enzyme.
Table 5 Statistical analysis of PBD of eight variables
Optimization of important medium components using CCD
In this CCD experiment, the various substances chosen through PBD studies, namely l-asparagine, peptone, glucose, and Na2HPO4, were treated as four independent variables on which responses were calculated. A total of 30 experimental runs were performed using these four variables. The entire experimental plan, including response values, is described in Table 4.
Appropriate conditions for maximum l-asparaginase activity were established using regression analysis. A second-order polynomial equation representing the relationship between enzyme activity, l-asparagine, peptone, glucose, and Na2HPO4 was generated using multiple regression analyses.
The second-order polynomial equation used to calculate the predicted l-asparaginase activity is as follows:
$$\begin{aligned} \text{l-asparaginase activity (U/mL)} = {} & -88.4578 + 5.0611A + 11.0414B + 43.1387C + 13.0488D \\ & - 0.1312AB - 0.1171AC + 1.08507\times 10^{-3}AD + 0.4448BC \\ & - 0.0515BD - 0.9657CD - 0.1415A^2 - 0.8956B^2 \\ & - 16.4134C^2 - 0.8413D^2 \end{aligned}$$
where A, B, C, and D are the concentrations of l-asparagine, peptone, glucose, and Na2HPO4, respectively. The F test was used to assess the statistical significance of the second-order polynomial equation, and the results of the ANOVA are given in Table 6.
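Since the coefficients above are given in actual concentration units (g/l), the fitted surface can be evaluated directly; plugging in the optimal composition reported in the validation section below reproduces the predicted activity of roughly 45 U/mL. The short sketch below simply transcribes the equation and is not part of the original analysis workflow.

```python
# Evaluate the fitted second-order model at given concentrations (g/l).
# Coefficients are copied verbatim from the published equation above.

def predicted_activity(A, B, C, D):
    return (-88.4578 + 5.0611*A + 11.0414*B + 43.1387*C + 13.0488*D
            - 0.1312*A*B - 0.1171*A*C + 1.08507e-3*A*D
            + 0.4448*B*C - 0.0515*B*D - 0.9657*C*D
            - 0.1415*A**2 - 0.8956*B**2 - 16.4134*C**2 - 0.8413*D**2)

# Evaluated at the reported optimum (l-asparagine, peptone, glucose, Na2HPO4)
print(round(predicted_activity(11.50, 5.30, 1.32, 7.54), 2))  # close to 45 U/mL
```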
Table 6 Analysis of variance (ANOVA) of response surface quadratic model for the production of l-asparaginase
According to the ANOVA, the model p-value was 0.0001, which is less than 0.05, indicating that the model terms are significant. In this case, A, B, C, D, AB, CD, A², B², C², and D² are all significant model terms. The lack-of-fit p-value of 0.7239 indicates that the lack of fit is insignificant relative to the pure error; a non-significant lack of fit is desirable. The predicted R-squared value of 0.9748 is in reasonable agreement with the adjusted R-squared value of 0.9876. Adequate precision measures the signal-to-noise ratio, and a ratio greater than 4 is desirable; here, the value of 41.326 indicates an adequate signal. Thus, the model can be used to navigate the design space.
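For reference, the adequacy statistics quoted here (R², adjusted R², and predicted R²) can be computed from any least-squares fit as sketched below, with the predicted R² based on the leave-one-out PRESS statistic. The model matrix X and response y in the example are tiny placeholders, not the CCD data.

```python
# Sketch of the model-adequacy statistics: R^2, adjusted R^2, predicted R^2.
# X is the model matrix (n runs x p terms, including an intercept column)
# and y the responses; the small arrays at the bottom are placeholders only.
import numpy as np

def adequacy(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = np.sum((y - y.mean()) ** 2)
    n, p = X.shape
    r2 = 1 - ss_res / ss_tot
    r2_adj = 1 - (ss_res / (n - p)) / (ss_tot / (n - 1))
    h = np.diag(X @ np.linalg.pinv(X.T @ X) @ X.T)       # leverages
    press = np.sum((resid / (1 - h)) ** 2)               # leave-one-out PRESS
    r2_pred = 1 - press / ss_tot
    return r2, r2_adj, r2_pred

X = np.column_stack([np.ones(6), [1, 2, 3, 4, 5, 6], [1, 4, 9, 16, 25, 36]])
y = np.array([2.1, 4.3, 8.2, 13.8, 21.4, 30.9])
print([round(v, 4) for v in adequacy(X, y)])
```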
Interaction among variables
The effects of variable interactions on l-asparaginase activity were investigated. 3D response surface plots were used to depict the interactive effects of any two factors: in each plot, two variables were held constant while the other two were varied over the investigated range. The level of each variable yielding the maximum l-asparaginase output was analyzed. The 3D response plot shown in Fig. 3a depicts the interaction between l-asparagine and peptone. The l-asparaginase activity increases with increasing concentration of l-asparagine and starts to decline only toward its maximum concentration. In the case of peptone, the enzyme activity is highest at its intermediate value and declines thereafter. This plot indicated a strong interaction between the variables (p < 0.05). Figure 3b shows the interaction between l-asparagine and glucose. In this case, enzyme activity increases with higher concentrations of l-asparagine and starts declining at its maximal point. As the glucose concentration increases, the enzyme activity also increases up to its median value and declines beyond that. Because the p-value was high, there was no significant interaction between these two factors. Figure 3c shows the interaction between l-asparagine and Na2HPO4. Here, enzyme activity increases as the l-asparagine and Na2HPO4 concentrations increase and starts declining only at the maximum values. There was no evidence of an interaction effect between these variables. Figure 3d describes the interaction between peptone and glucose. According to the graph, the enzyme activity is highest when the concentrations of peptone and glucose are at their midpoints; activity declines beyond this point as the concentrations increase. There was no interactive effect between these variables.
Response surface plot showing the interaction between variables a peptone and l-asparagine, b glucose and l-asparagine, c Na2HPO4 and l-asparagine, d glucose and peptone, e peptone and Na2HPO4, and f glucose and Na2HPO4
Figure 3e shows the interaction between peptone and Na2HPO4. The enzyme activity becomes maximal when peptone reaches its middle value, whereas for Na2HPO4 the enzyme activity starts declining toward its end value. No significant interaction was observed between these variables in this case either. Figure 3f depicts the interaction between glucose and Na2HPO4. Enzyme activity increases as the glucose concentration reaches its middle value and starts declining beyond that. In the case of Na2HPO4, enzyme activity rises beyond the middle value and begins to decline only when it approaches its maximum value. This plot also indicated a strong interaction between the variables (p < 0.05).
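A plot such as those in Fig. 3 can be regenerated from the fitted equation by varying two factors over a grid while fixing the other two, as sketched below. The grid ranges and the fixed glucose and Na2HPO4 values are assumptions chosen for illustration (the fixed values happen to be the optimal ones reported in the validation below), so the figure produced is only an approximation of the published panels.

```python
# Regenerating a response-surface panel from the fitted equation: vary
# l-asparagine (A) and peptone (B) over assumed grid ranges while holding
# glucose (C) and Na2HPO4 (D) fixed, then plot the predicted activity.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers 3D projection)

def predicted_activity(A, B, C, D):
    return (-88.4578 + 5.0611*A + 11.0414*B + 43.1387*C + 13.0488*D
            - 0.1312*A*B - 0.1171*A*C + 1.08507e-3*A*D
            + 0.4448*B*C - 0.0515*B*D - 0.9657*C*D
            - 0.1415*A**2 - 0.8956*B**2 - 16.4134*C**2 - 0.8413*D**2)

A, B = np.meshgrid(np.linspace(6, 14, 60), np.linspace(2, 8, 60))
Z = predicted_activity(A, B, 1.32, 7.54)   # glucose and Na2HPO4 held constant

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(A, B, Z, cmap="viridis")
ax.set_xlabel("l-asparagine (g/l)")
ax.set_ylabel("peptone (g/l)")
ax.set_zlabel("predicted activity (U/mL)")
plt.show()
```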
To verify the adequacy of the statistical analysis, experiments suggested by the software were carried out in duplicate (Table 7). The maximum l-asparaginase activity predicted by the experimental design using the optimum concentrations of the selected components (11.50 g/l l-asparagine, 5.30 g/l peptone, 1.32 g/l glucose, and 7.54 g/l Na2HPO4) was 45.04 ± 0.42 U/mL, and this value was in agreement with the experimental yield of 45.59 ± 0.36 U/mL. This result verifies the validity of the model and the existence of optimal points.
Table 7 Validation of the model
Analysis of growth curve
The result of the growth curve analysis is depicted in Fig. 4. The organism showed maximum l-asparaginase activity (45.52 ± 0.23 U/mL) at 24 h of incubation. The organism also showed its maximum growth rate after 24 h of incubation.
l-asparaginase production profile and growth curve of Acinetobacter baumannii ZAS1
Recent developments have heightened the need to find better sources of the l-asparaginase enzyme. The endophyte Acinetobacter baumannii ZAS1 showed promising l-asparaginase activity. Many scholars have discussed the differences in l-asparaginase production between microorganisms from different habitats and the impact of culture conditions on the amount of l-asparaginase generated [11, 12]. The physical factors and the culture media composition considerably influence cell growth and metabolite production [25]. The present study was performed to improve the enzyme production by this endophyte by optimizing the physical conditions and media components using response surface methodology.
Temperature is an important factor in the success of fermentation reactions. It controls the growth and production of metabolites by microorganisms and varies from one organism to another [26]. The temperature modulates extracellular enzyme synthesis by altering the physical properties of the cell membrane [27]. In the case of Acinetobacter baumannii ZAS1, the maximum l-asparaginase (18.854 ± 0.2 U/mL) production was observed at 37 °C, and the enzyme production was found to be sustained up to 42 °C. A gradual decline of enzyme activity was observed beyond this temperature range. The ideal temperature for the synthesis of l-asparaginase by many organisms was between 37 and 42 °C, which falls within the range of 35 to 50 °C recorded by many other researchers in their investigations. E. coli exhibited good production of l-asparaginase at a temperature of 37 °C [28]. According to Erva et al. [22], the best temperature for the enhanced production of l-asparaginase by Bacillus subtilis VUVD001 was 49.9 °C.
Another physical factor that has a significant role in microbial growth and metabolite production is pH. Acinetobacter baumannii ZAS1 showed its maximum l-asparaginase activity of 19.201 ± 0.2 U/mL at pH 7 and lower enzyme activity below and above pH 7. Similarly, Ghosh et al. [10] obtained maximum l-asparaginase activity at a neutral pH of 7.4 for Serratia marcescens.
Agitation speed is a critical parameter that influences enzyme production by providing adequate mixing, increasing the oxygen transfer, and maintaining the homogeneous chemical and physical conditions in the medium [29]. The endophyte Acinetobacter baumannii ZAS1 showed maximum l-asparaginase production of 18.854 ± 0.6 U/mL with an agitation speed of 150 rpm. Similarly, the bacteria E. coli ATCC 11303 [30] exhibited maximum l-asparaginase activity when incubated under an agitation speed of 150 rpm. A slightly higher agitation speed of 200 rpm was found to be effective for producing l-asparaginase by Bacillus subtilis VUVD001 [22]. There was a report on a very high agitation speed of 500 rpm for better production of l-asparaginase by Erwinia aroideae [31]. It shows that optimum agitation speed varies among microbes.
The preliminary screening of different nitrogen, carbon, and ion sources through the OFAT method revealed that the supplementation of these ingredients considerably enhanced enzyme production. Carbon sources are essential components in constructing cellular materials and are also used as energy sources [32]. Several earlier studies revealed the importance of additional nitrogen, carbon, and ion sources in the augmented production of this enzyme [33]. The organism Enterobacter aerogenes showed maximum l-asparaginase production in the presence of diammonium hydrogen phosphate and sodium citrate as nitrogen and carbon sources, respectively [34]. Pseudomonas resinovorans IGS-131 exhibited maximum production of l-asparaginase when Na2HPO4, KH2PO4, and NaCl were used as ion sources [35].
In the present study, screening of different media ingredients for the maximal production of l-asparaginase by Acinetobacter baumannii ZAS1 was performed through PBD experiments. PBD was successfully used in previous studies to optimize media components for maximum l-asparaginase activity by Bacillus sp. GH5 [36] as well as for enhanced production of glucoamylase by Humicola grisea MTCC 352 [37]. This experiment identified four ingredients, namely l-asparagine, peptone, glucose, and Na2HPO4, as statistically significant using the Plackett-Burman design. The R2 value in this study was 0.9445, indicating a good model fit. Furthermore, the adjusted R2 of 0.9042 was relatively high, showing that the model was highly significant. The model also had a high predicted R2 (0.8166), indicating that it can predict l-asparaginase synthesis within the range of parameters utilized. Similar results were reported for a PBD experiment on improved l-asparaginase production by Streptomyces rochei [38].
In this study, l-asparagine showed a highly significant effect (p-value of 0) on the production of l-asparaginase. In previous investigations, the amino acid l-asparagine was identified as the natural substrate for the synthesis of l-asparaginase [39]. The second variable chosen via PBD, peptone, contains various amino acids and short peptides that may act as additional stimulatory components for the production of the l-asparaginase enzyme. Previous work has also reported the importance of peptone as an additional supplement for the best production of l-asparaginase by Streptomyces ginsengisoli [40]. In the present investigation, the carbon source glucose (a simple sugar) was the third variable selected through PBD, which significantly affects the production of l-asparaginase. Reports are available depicting the importance of glucose in the production of l-asparaginase by different microbes [40]. Ions also play a crucial role in the metabolic processes of all organisms, as they are essential for the formation of cell mass and act as cofactors for many biosynthetic enzymes to catalyze the necessary reactions [41]. Borkotaky and Bezbaruah [42] reported no inhibitory effect on the production of l-asparaginase by 10 mmol/L metal ions such as Na+, K+, Mg2+, Zn2+, Ca2+, Co2+, Ba2+, and Ni2+. In this study, Na2HPO4 was selected as the significant ion source for the improved production of the l-asparaginase enzyme. The optimization studies of Bacillus sp. GH5 through statistical analyses had also revealed the significance of Na2HPO4 in the production of l-asparaginase [36].
Further optimization of these selected factors for maximizing the l-asparaginase activity was performed through the CCD of RSM. In an analysis of variance (ANOVA), the closer the R2 value is to 1.0, the stronger the model and the better it predicts the response of the system [43]. In the current study, we obtained an R2 value near 1.0, indicating the strength of the model. The ANOVA results of this study underline the importance of a well-designed experimental model that accurately captures the relationships among the variables governing the enhanced l-asparaginase activity. The high F value (165.72) in the ANOVA findings implies that the derived second-order model equation was significant. The lack-of-fit F value can also be used to confirm the adequacy of the second-order model: the lack of fit has a much lower F value (0.67) than the model terms, and this non-significant lack of fit suggests that the model was significant [44]. A previous report on a high model F value (43.04) and a lower lack-of-fit F value (2.98) similarly supported model significance in the optimization of glucoamylase production by Humicola grisea MTCC 352 [45].
The interactive effect of the above described four variables on the l-asparaginase output by Acinetobacter baumannii ZAS1 was studied using this method. Response surface plots were used to illustrate the function of two variables at a time while keeping all other variables constant. As a result, these graphs were more helpful in comprehending the interaction effects of these two variables [46].
In the validation of the present study, a 2.41-fold increase in l-asparaginase activity (18.85 ± 0.2 U/mL to 45.59 ± 0.36 U/mL) by Acinetobacter baumannii ZAS1 has been obtained with optimized medium (11.50 g/l l-asparagine, 5.30 g/l peptone, 1.32 g/l glucose, and 7.54 g/l Na2HPO4). Similarly, Kenari et al. [30], in their study, revealed that the highest l-asparaginase activity by E. coli ATCC 11303 was observed by optimizing the media ingredients through RSM.
The growth curve study revealed that the organism's growth and the production of l-asparaginase were associated. The most activity was seen in the late log phase, and after that, it steadily declined. Similar results were reported by Shirazian et al. [47]. According to their study, the halophilic Bacillus strain gA5 revealed the correlation between growth and l-asparaginase production.
Based on the results of this experiment, we conclude that the endophyte Acinetobacter baumannii ZAS1 is an excellent source of the l-asparaginase enzyme. Enzyme production by this strain was enhanced by optimizing the process parameters through the statistical methods PBD and CCD of RSM. Using these methods, we were able to find the ideal parameters for obtaining the best l-asparaginase production (45.59 ± 0.36 U/mL). Statistical optimization gave a 2.41-fold increase in enzyme production compared with the unoptimized medium. Finally, this research demonstrates that experimental designs provide a quick and meaningful approach to improving the productivity of l-asparaginase. The findings show that Acinetobacter baumannii ZAS1 can produce large quantities of l-asparaginase with minimal medium components, implying that its use could result in significant cost reductions at industrial scale.
All data generated or analyzed during this study are included in this article.
CCD:
Central composite design
OFAT:
One factor at a time
PBD:
Plackett–Burman design
RSM:
Response surface methodology
UGC:
University Grants Commission
Anishkin A, Vanegas JM, Rogers DM, Lorenzi PL, Chan WK, Purwaha P et al (2015) Catalytic role of the substrate defines specificity of therapeutic l-asparaginase. J Mol Biol 427:2867–2885. https://doi.org/10.1016/j.jmb.2015.06.017
Cooney DA, Handschumacher RE (1970) L-asparaginase and L-asparagine metabolism. Annu Rev Pharmacol 10:421–440. https://doi.org/10.1146/annurev.pa.10.040170.002225
Ficai A, Grumezescu AM (2017) Nanostructures for cancer therapy. Elsevier, Amsterdam, Netherlands
Blake MK, Carr BJ, Mauldin GE (2016) Hypersensitivity reactions associated with L-asparaginase administration in 142 dogs and 68 cats with lymphoid malignancies: 2007-2012. Can Vet J 57:176–182
dos Santos AC, Land MGP, da Silva NP, Santos KO, Lima-Dellamora ED (2017) Reactions related to asparaginase infusion in a 10-year retrospective cohort. Rev Bras Hematol E Hemoter 39:337–342. https://doi.org/10.1016/j.bjhh.2017.08.002
Offman MN, Krol M, Patel N, Krishnan S, Liu J, Saha V et al (2011) Rational engineering of L-asparaginase reveals importance of dual activity for cancer cell toxicity. Blood 117:1614–1621. https://doi.org/10.1182/blood-2010-07-298422
Fonseca MHG, da Silva Fiúza T, de Morais SB, de Souzade ACBT, Trevizani R (2021) Circumventing the side effects of L-asparaginase. Biomed Pharmacother 139:111616. https://doi.org/10.1016/j.biopha.2021.111616
Pieters R, Hunger SP, Boos J, Rizzari C, Silverman L, Baruchel A et al (2011) L-asparaginase treatment in acute lymphoblastic leukemia: a focus on Erwinia asparaginase. Cancer 117:238–249. https://doi.org/10.1002/cncr.25489
Thenmozhi C, Sankar R, Karuppiah V, Sampathkumar P (2011) L-asparaginase production by mangrove derived Bacillus cereus MAB5: Optimization by response surface methodology. Asian Pac J Trop Med 4:486–491. https://doi.org/10.1016/S1995-7645(11)60132-6
Ghosh S, Murthy S, Govindasamy S, Chandrasekaran M (2013) Optimization of L-asparaginase production by Serratia marcescens (NCIM 2919) under solid state fermentation using coconut oil cake. Sustain Chem Process 1:9. https://doi.org/10.1186/2043-7129-1-9
Patel P, Gosai HB, Panseriya H, Dave B (2021) Development of process and data centric inference system for enhanced production of L-asparaginase from halotolerant Bacillus licheniformis PPD37, In Review. https://doi.org/10.21203/rs.3.rs-651645/v1
Singhal B, Swaroop K (2013) Optimization of culture variables for the production of L- asparaginase from Pectobacterium carotovorum. Afr J Biotechnol 12:6959–6967. https://doi.org/10.5897/AJB12.2624
Baskar G, Supria Sree N (2020) Synthesis, characterization and anticancer activity of β-cyclodextrin-Asparaginase nanobiocomposite on prostate and lymphoma cancer cells. J Drug Deliv Sci Technol 55:101417. https://doi.org/10.1016/j.jddst.2019.101417
Schulz B, Boyle C, Draeger S, Römmert A-K, Krohn K (2002) Endophytic fungi: a source of novel biologically active secondary metabolites. Mycol Res 106:996–1004. https://doi.org/10.1017/S0953756202006342
Strobel GA (2003) Endophytes as sources of bioactive products. Microbes Infect 5:535–544. https://doi.org/10.1016/S1286-4579(03)00073-X
Khan YM, Munir H, Anwar Z (2019) Optimization of process variables for enhanced production of urease by indigenous Aspergillus niger strains through response surface methodology. Biocatal Agric Biotechnol 20:101202. https://doi.org/10.1016/j.bcab.2019.101202
Kavuthodi B, Sebastian D (2018) Biotechnological valorization of pineapple stem for pectinase production by Bacillus subtilis BKDS1: Media formulation and statistical optimization for submerged fermentation. Biocatal Agric Biotechnol 16:715–722. https://doi.org/10.1016/j.bcab.2018.05.003
Abhini KN, Fathimathu Zuhara K (2018) Isolation screening and identification of bacterial endophytes from medicinal plants as a potential source of L-asparaginase enzyme. J Chem Pharm Sci 11:73–76. https://doi.org/10.30558/jchps.20181101014
Nucleo E, Steffanoni L, Fugazza G, Migliavacca R, Giacobone E, Navarra A et al (2009) Growth in glucose-based medium and exposure to subinhibitory concentrations of imipenem induce biofilm formation in a multidrug-resistant clinical isolate of Acinetobacter baumannii. BMC Microbiol 9:270. https://doi.org/10.1186/1471-2180-9-270
Imada A, Igarasi S, Nakahama K, Isono M (1973) Asparaginase and glutaminase activities of micro-organisms. J Gen Microbiol 76:85–99. https://doi.org/10.1099/00221287-76-1-85
Barrak N, Mannai R, Zaidi M, Kechida M, Helal AN (2016) Experimental design approach with response surface methodology for removal of indigo dye by electrocoagulation. J Geosci Environ Prot 04:50–61. https://doi.org/10.4236/gep.2016.411006
Erva RR, Goswami AN, Suman P, Vedanabhatla R, Rajulapati SB (2017) Optimization of L-asparaginase production from novel Enterobacter sp., by submerged fermentation using response surface methodology. Prep Biochem Biotechnol 47:219–228. https://doi.org/10.1080/10826068.2016.1201683
Saeed AM, El-Shatoury EH, Sayed HAE (2021) Statistical factorial designs for optimum production of thermostable α-amylase by the degradative bacterium Parageobacillus thermoglucosidasius Pharon1 isolated from Sinai. Egypt J Genet Eng Biotechnol 19:24. https://doi.org/10.1186/s43141-021-00123-4
Sreena CP, Sebastian D (2018) Augmented cellulase production by Bacillus subtilis strain MU S1 using different statistical experimental designs. J Genet Eng Biotechnol 16:9–16. https://doi.org/10.1016/j.jgeb.2017.12.005
Dayal MS, Goswami N, Sahai A, Jain V, Mathur G, Mathur A (2013) Effect of media components on cell growth and bacterial cellulose production from Acetobacter aceti MTCC 2623. Carbohydr Polym 94:12–16. https://doi.org/10.1016/j.carbpol.2013.01.018
Banerjee R, Bhattacharyya BC (1992) Extracellular alkaline protease of newly isolated Rhizopus oryzae. Biotechnol Lett 14:301–304. https://doi.org/10.1007/BF01022328
Rahman RNZA, Geok LP, Basri M, Salleh AB (2005) Physical factors affecting the production of organic solvent-tolerant protease by Pseudomonas aeruginosa strain K. Bioresour Technol 96:429–436. https://doi.org/10.1016/j.biortech.2004.06.012
Cachumba JJM, Antunes FAF, Peres GFD, Brumano LP, Santos JCD, Da Silva SS (2016) Current applications and different approaches for microbial l-asparaginase production. Braz J Microbiol 47:77–85. https://doi.org/10.1016/j.bjm.2016.10.004
Mostafa Y, Alrumman S, Alamri S, Hashem M, Al-izran K, Alfaifi M et al (2019) Enhanced production of glutaminase-free l-asparaginase by marine Bacillus velezensis and cytotoxic activity against breast cancer cell lines. Electron J Biotechnol 42:6–15. https://doi.org/10.1016/j.ejbt.2019.10.001
Kenari SLD, Alemzadeh I, Maghsodi V (2011) Production of l-asparaginase from Escherichia coli ATCC 11303: optimization by response surface methodology. Food Bioprod Process 89:315–321. https://doi.org/10.1016/j.fbp.2010.11.002
Liu FS, Zajic JE (1973) Effect of oxygen-transfer rate on production of L-asparaginase by Erwinia aroideae. Can J Microbiol 19:1153–1158. https://doi.org/10.1139/m73-183
Masurekar P (2009) Antibiotic production. In: Encycl Microbiol. Elsevier, pp 174–190. https://doi.org/10.1016/B978-012373944-5.00032-8
El-Naggar NE-A, Moawad H, El-Shweihy NM, El-Ewasy SM, Elsehemy IA, Abdelwahed NAM (2019) Process development for scale-up production of a therapeutic L-asparaginase by Streptomyces brollosae NEAE-115 from shake flasks to bioreactor. Sci Rep 9:13571. https://doi.org/10.1038/s41598-019-49709-6
Mukherjee J, Majumdar S, Scheper T (2000) Studies on nutritional and oxygen requirements for production of L- asparaginase by Enterobacter aerogenes. Appl Microbiol Biotechnol 53:180–184. https://doi.org/10.1007/s002530050006
Mihooliya KN, Nandal J, Kumari A, Nanda S, Verma H, Sahoo DK (2020) Studies on efficient production of a novel l-asparaginase by a newly isolated Pseudomonas resinovorans IGS-131 and its heterologous expression in Escherichia coli. 3 Biotech 10:148. https://doi.org/10.1007/s13205-020-2135-4
Gholamian S, Gholamian S, Nazemi A, Miri Nargesi M (2013) Optimization of culture media for L-asparaginase production by newly isolated bacteria, Bacillus sp. GH5. Microbiology 82:856–863. https://doi.org/10.1134/S0026261714010032
Ramesh V, Ramachandra Murty V (2014) Sequential statistical optimization of media components for the production of glucoamylase by thermophilic fungus Humicola grisea MTCC 352. Enzyme Res 2014:1–9. https://doi.org/10.1155/2014/317940
El-Naggar NE-A, El-Shweihy NM (2020) Bioprocess development for L-asparaginase production by Streptomyces rochei, purification and in-vitro efficacy against various human carcinoma cell lines. Sci Rep 10:7942. https://doi.org/10.1038/s41598-020-64052-x
Aghaiypour K, Wlodawer A, Lubkowski J (2001) Structural basis for the activity and substrate specificity of Erwinia chrysanthemi L-asparaginase. Biochemistry 40:5655–5664. https://doi.org/10.1021/bi0029595
Deshpande N, Choubey P, Agashe M (2014) Studies on optimization of growth parameters for L-asparaginase production by Streptomyces ginsengisoli. Sci World J 2014. https://doi.org/10.1155/2014/895167
Hutner SH, Provasoli L, Schatz A, Haskins CP (1950) Some approaches to the study of the role of metals in the metabolism of microorganisms. Proc Am Philos Soc 94:152–170
Borkotaky B, Bezbaruah RL (2002) Production and properties of asparaginase from a new Erwinia sp. Folia Microbiol (Praha) 47:473–476. https://doi.org/10.1007/BF02818783
Doddapaneni KK, Tatineni R, Potumarthi R, Mangamoori LN (2007) Optimization of media constituents through response surface methodology for improved production of alkaline proteases by Serratia rubidaea. J Chem Technol Biotechnol 82:721–729. https://doi.org/10.1002/jctb.1714
Montgomery DC, Myers RH, Carter WH, Vining GG (2005) The hierarchy principle in designed industrial experiments. Qual Reliab Eng Int 21:197–201. https://doi.org/10.1002/qre.615
Ramesh V, Murty VR (2019) Optimization of glucoamylase production by Humicola grisea MTCC 352 in solid state fermentation. Chiang Mai Univ J Nat Sci 18. https://doi.org/10.12982/CMUJNS.2019.0018
Bibi N, Ali S, Tabassum R (2016) Statistical optimization of pectinase biosynthesis from orange peel by Bacillus licheniformis using submerged fermentation. Waste Biomass Valorization 7:467–481. https://doi.org/10.1007/s12649-015-9470-4
Shirazian P, Asad S, Amoozegar MA (2016) The potential of halophilic and halotolerant bacteria for the production of antineoplastic enzymes: L-asparaginase and L-glutaminase. EXCLI J 15:268 ISSN 1611-2156. https://doi.org/10.17179/EXCLI2016-146
The project was funded by Junior Research Fellowship provided by UGC (321453), and the work was carried out at the Department of Life Sciences, University of Calicut. The authors, therefore, acknowledge the UGC for financial support and the Dept. of Life Sciences for providing the necessary facilities. We also thank our colleagues who have supported us in every way towards the completion of this work.
The project was funded by a UGC Junior Research Fellowship (321453).
Department of Life Sciences, University of Calicut, Malappuram, Kerala, 673635, India
K. N. Abhini, Akhila B. Rajan, K. Fathimathu Zuhara & Denoj Sebastian
K. N. Abhini
Akhila B. Rajan
K. Fathimathu Zuhara
Denoj Sebastian
The present work was carried out in collaboration with all four authors. Author KFZ and DS designed and supervised the study. Author KNA managed the literature search and performed the experiment, data analysis, and wrote the first draft of the manuscript. ABR helped in performing the data analysis and writing the manuscript. Authors KFZ and DS edited and proofread the final manuscript. All four authors read and approved the final manuscript.
Correspondence to Denoj Sebastian.
Additional file 1: Supplementary Figure 1.
Normal plot of residuals and other plots of CCD.
Abhini, K.N., Rajan, A.B., Fathimathu Zuhara, K. et al. Response surface methodological optimization of l-asparaginase production from the medicinal plant endophyte Acinetobacter baumannii ZAS1. J Genet Eng Biotechnol 20, 22 (2022). https://doi.org/10.1186/s43141-022-00309-4
Acinetobacter baumannii
Endophyte
l-asparaginase
Negative dielectric constant of water confined in nanosheets
Akira Sugahara ORCID: orcid.org/0000-0001-9411-2326,
Yasunobu Ando,
Satoshi Kajiyama ORCID: orcid.org/0000-0002-2200-7524,
Koji Yazawa,
Kazuma Gotoh,
Minoru Otani,
Masashi Okubo &
Atsuo Yamada ORCID: orcid.org/0000-0002-7880-5701
Nature Communications volume 10, Article number: 850 (2019) Cite this article
Two-dimensional materials
Electric double-layer capacitors are efficient energy storage devices that have the potential to account for uneven power demand in sustainable energy systems. Earlier attempts to improve an unsatisfactory capacitance of electric double-layer capacitors have focused on meso- or nanostructuring to increase the accessible surface area and minimize the distance between the adsorbed ions and the electrode. However, the dielectric constant of the electrolyte solvent embedded between adsorbed ions and the electrode surface, which also governs the capacitance, has not been previously exploited to manipulate the capacitance. Here we show that the capacitance of electric double-layer capacitor electrodes can be enlarged when the water molecules are strongly confined into the two-dimensional slits of titanium carbide MXene nanosheets. Using electrochemical methods and theoretical modeling, we find that dipolar polarization of strongly confined water resonantly overscreens an external electric field and enhances capacitance with a characteristically negative dielectric constant of a water molecule.
An electric double-layer capacitor (EDLC) is an important class of electrochemical capacitors, in which electrochemical double layers are formed on the electrode surface and polarized solvents between ions and the electrode act as a dielectric medium1,2. Due to the fact that the electrochemical double-layer formation is a fast and highly reversible surface process with minimal ion migration, EDLCs can operate intrinsically at high charge/discharge rates without degradation over millions of repeated cycles, which enables load-leveling of intermittent power from renewable energy sources. However, as capacitance of currently used EDLC electrodes is limited, energy density of EDLCs is not satisfactory to be widely distributed in power grids. Therefore, increasing the capacitance of EDLC electrodes has been an active area of research3,4,5,6,7,8.
From a theoretical point of view, a conventional EDLC electrode can be considered as a parallel-plate capacitor that delivers a capacitance (C) according to
$$C = \frac{\varepsilon A}{d}$$
where ε is the permittivity between the ions and the electrode, A is the electrode surface area, and d is the separation between the ions and the electrode (Supplementary Table 1)1. Consequently, increasing A of the EDLC electrodes using meso- or nanostructured materials is a classic approach toward realizing large specific capacitance and high energy density3,4,5,6,7. Alternatively, in 2006, Chmiola et al.8 experimentally proved the confinement effect on d in microporous carbon EDLCs (pore size < 1 nm), which achieves an anomalous increase in the specific capacitance from 95 to 140 F g−1 (refs. 8–10).
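To give a feel for the magnitudes involved, the parallel-plate relation can be evaluated with rough numbers; all values below (relative permittivity, ion-electrode separation, specific surface area) are illustrative assumptions rather than measured properties of any electrode discussed here.

```python
# Order-of-magnitude sketch of the parallel-plate relation C = eps * A / d.
# All numbers are illustrative assumptions, not measured electrode properties:
# relative permittivity 10, ion-electrode separation 0.5 nm, and a specific
# surface area of 1500 m^2/g.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def specific_capacitance(eps_r, area_m2_per_g, d_m):
    """Capacitance per gram (F/g) of a parallel-plate (slit) capacitor."""
    return eps_r * EPS0 * area_m2_per_g / d_m

print(specific_capacitance(eps_r=10, area_m2_per_g=1500, d_m=0.5e-9))  # ~266 F/g
# Halving d doubles C, while a lower effective permittivity lowers C.
```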
Two-dimensional materials, such as graphene sheets11, transition metal dichalcogenides12, and MXenes5, have recently been developed as electrode materials in EDLCs. Complete delamination of the two-dimensional materials increases the accessible surface area, and the nanosheets with a large interlayer separation give a large specific capacitance greater than 240 F g−1 (ref. 5). However, except for a few theoretical simulation results13, there has been very limited research carried out on the application of the confinement effect in two-dimensional materials as an additional strategy toward high-energy-density capacitors.
Our aim was to quantify the contribution of the confinement effect to capacitance in EDLC electrodes consisting of two-dimensional materials, theoretically modeled as a slit capacitor (Fig. 1a)9,13. This conceptually straightforward but experimentally difficult study was performed using MXene EDLC electrodes. As pioneered by research groups led by Gogotsi and colleagues5,14,15, MXene is a class of two-dimensional materials with the following chemical formula: Mn+1XnTx (where M is a transition metal, X is carbon or nitrogen, T is surface termination groups) and gives a large capacitance that is associated with ion intercalation16. Importantly, MXene maintains its stacked structure with a short interlayer distance during ion (de)intercalation, owing to strong interactions among the surface termination groups, intercalated ion species, and embedded solvents17. Such an anomalous interlayer interaction has led us to expect that MXene nanosheets strongly confine the intercalated ions and MXene is an ideal platform to study the guest confinement effect in two-dimensional materials.
Enlargement of the capacitance of a microslit capacitor. a Schematic illustration of a continuum model of a microslit capacitor consisting of MXene Ti2CTx nanosheets. Ti (dark cyan), C (brown), surface termination group T (gray) atoms are shown. b Schematic illustration of a bilayer-capacitor model. c Experimental specific capacitance of MXene Ti2CTx with aqueous Li+, Na+, K+, Rb+, TMA+ (tetramethylammonium), TEA+ (tetraethylammonium), and TBA+ (tetrabutylammonium) electrolytes. Each capacitance is calculated from the CV curve at the scan rate of 0.5 mV s−1. d Orders of bare-ion size, hydrated ion size, and observed capacitance. e Ion-MXene distance (b − a0) dependence of the experimental specific capacitance. The black dotted line shows a calculated capacitance based on the continuum model (\(C = \frac{{\varepsilon _{\mathrm{r}}\varepsilon _0A}}{{b\, -\, a_0}}\) with constant εrε0), whereas the blue line shows a calculated capacitance based on the bilayer-capacitor model with variable εrε0 from the 3D-RISM calculation
Herein, using the MXene EDLC electrode as experimental and theoretical models for a slit capacitor, we demonstrate the negative dielectric constant of water confined between MXene nanosheets. This specific dipolar polarization of the confined water strongly overscreens an external electric field within the MXene EDLC electrode, leading to an anomalously enhanced capacitance.
Intercalation capacitance of MXene
MXene Ti2CTx was synthesized by removal of Al from Ti2AlC (Supplementary Fig. 1) with LiF-HCl aqueous solution17. The chemical composition of the resulting compound after complete dehydration at 200 °C (anhydrous MXene) was determined as Ti2C(OH)0.3O0.7F0.6Cl0.4 using a standard microanalytical technique. As the water molecules can easily penetrate into the MXene interlayer, anhydrous MXene transforms to a hydrous form Ti2CTx·0.5H2O (hydrous MXene) in ambient atmosphere. A transmission electron micrograph confirms a stacked structure of the MXene nanosheets (Supplementary Fig. 2), which enables a capacitance from ion intercalation (intercalation capacitance; Supplementary Note 1). The EDLC electrode consisting of the hydrous MXene was fabricated without delaminating the stacked structure of the Ti2CTx layers. However, scanning electron micrographs (Supplementary Fig. 3) indicate the existence of partially exfoliated structures allowing some contribution to the capacitance from surface ion adsorption (surface capacitance)18. To focus on the two-dimensional confinement effect on the intercalation capacitance of the MXene nanosheets, we carefully separated the two contributions using various aqueous electrolytes (Fig. 1a–e). First of all, to isolate the contribution from the surface capacitance, we measured capacitances with electrolytes consisting of large alkylammonium cations [N(CnH2n+1)4]+ (n = 1, 2, and 4) (Supplementary Fig. 4), because these large cations cannot be intercalated into the nanoscale space between the MXene nanosheets. We believe this simple methodology provides better analysis than a widely used method using a rate-dependent capacitance (Supplementary Fig. 5)19. As a result (Fig. 1c), the specific capacitance of the MXene electrode is ~40 F g−1 for all tested alkylammonium cation species, which is attributed to the surface capacitance. This interpretation was validated by ex situ X-ray diffraction (XRD) patterns for the charged electrode, which do not indicate any change in the interlayer distance (Supplementary Fig. 6), excluding the possibility of alkylammonium cation intercalation.
Having determined the surface capacitance of the MXene electrode, we measured the intercalation capacitance using electrolytes consisting of small alkali cations (Li+, Na+, K+, and Rb+) at the scan rate of 0.5 mV s−1. The MXene electrode gave a much larger specific capacitance (90–160 F g−1) than the surface capacitance (Fig. 1c and Supplementary Fig. 4), and the ex situ XRD patterns indicate an increase in the interlayer distance upon charging (Supplementary Fig. 6). Furthermore, ex situ Ti K-edge spectroscopy suggests the reversible redox of Ti upon charge/discharge (Supplementary Fig. 7). All these experimental observations confirm the occurrence of intercalation capacitance. Cations are intercalated into nanoscale space between the MXene nanosheets, which gives a capacitance (C) of a slit capacitor with pore sizes < 2 nm expressed as follows13:
$$C = \frac{{{\varepsilon} _{\mathrm{r}}{\varepsilon} _{0}A}}{{b - a_0}}$$
where εr is the total dielectric constant between the electrode surface and the ion, ε0 is the vacuum permittivity, A is the total surface area of both walls, a0 is the ionic radius20,21, and 2b is the separation of the slit walls (Fig. 1a). It is important to note that, in contrast to a cylinder capacitor (Supplementary Table 1), b for the slit capacitor depends on the alkali ion species intercalated in the slit. This equation predicts that the capacitance of the slit capacitor increases as the effective distance (b − a0) between the electrode surface and the counterion decreases. However, contrary to the prediction of the above equation (black dotted line, Fig. 1e), the specific capacitance of the MXene electrode increased as b − a0 increases (Li+ > Na+ > K+ > Rb+). The enlarged capacitance, e.g., for a Li+ electrolyte is observed even at faster charge/discharge rates and during 300 charge/discharge cycles (Supplementary Fig. 8). The similar capacitance enlargement is generally observed for other MXenes (Ti3C2Tx and Mo2CTx) (Supplementary Figs. 9 and 10).
The anomalous increase of the MXene electrode capacitance in the order of Li+ > Na+ > K+ > Rb+ is further highlighted when considering that a conventional activated carbon EDLC electrode exhibits a roughly constant capacitance independent of the alkali ions species (Supplementary Figs. 11 and 12). XRD analysis shows that the intercalation of ions with larger "hydrated" ionic radii (Fig. 1d, Li+ > Na+ > K+ > Rb+) gave rise to a larger separation in the slit walls (Supplementary Fig. 6), whereas the 1H magic angle spinning (MAS) NMR indicates that the amount of confined water in MXene nanosheets increases after hydrated-ion intercalation (Supplementary Fig. 13). Therefore, water molecules are definitely co-intercalated (Fig. 1b) and we expect that the hydration shell has an important role in determining the structure inside the microslit and thus the anomalous capacitance.
3D-RISM calculations of MXene
To further understand and theoretically model the hydration structure in the MXene microslit capacitor for various alkali ions, we conducted a three-dimensional reference interaction site model (3D-RISM) calculation22,23,24,25. The 3D-RISM calculation allows the simulation of a 3D distribution of solvent molecules, as well as calculations of solvation energy and optimal solvent density (Fig. 2a–f). Before cation intercalation (Fig. 2d), the atomic density profile along the c axis (perpendicular to the MXene layers) suggests that both oxygen and hydrogen atoms primarily accumulate in the central plane of the microslit (away from the MXene interface). We presume that there is only a weak interaction between water and the MXene surface. Osti et al.26 also reported that water in pristine Ti3C2Tx has bulk-like characteristics, which indicates the weak interaction between water and MXene.
3D-RISM calculation results for the hydrated ions confined in the MXene microslit. a, b Oxygen and hydrogen distributions of ion intercalated Ti2CTx·nH2O. Ti (dark cyan), C (brown), O (red), F (gray), Cl (green), and H (pale pink) atoms are shown. c Radial distribution functions of ion–oxygen and ion–hydrogen distances. d, e, f Hydrogen and oxygen atomic density profiles along the c axis (perpendicular to the MXene layers) in hydrous, Rb+ intercalated, and Li+ intercalated Ti2CTx·nH2O. The optimized n values are 0.5, 0.8, and 1.35 for the hydrous, Rb+ intercalated, and Li+ intercalated MXenes, respectively, and these values are consistent with the thermogravimetric experimental results (Supplementary Fig. 14b)
3D-RISM calculations were then conducted for cation-intercalated Ti2CTx·nH2O. The amount of H2O between the MXene layers, n, was optimized energetically (Supplementary Fig. 14a) for each alkali ion with the fixed interlayer distance that was experimentally determined using the XRD patterns (Supplementary Fig. 6). The optimized hydration structures indicate the accommodation of larger amounts of water in MXene in the order of Li+ > Na+ > K+ > Rb+. This trend is explained by the increasing hydration energy of cations as their bare ionic radii decrease. The experimentally determined amount of water for each MXene using thermogravimetric (TG) analyses is in perfect agreement with the 3D-RISM calculation results (Supplementary Fig. 14b).
The oxygen distribution in the cation-intercalated MXenes (Fig. 2a) indicates that the oxygen atoms accumulate inside the hydration shells of the cations. Based on the radial distribution function of an ion–oxygen distance (Fig. 2c), the oxygen density immediately around Li+ is much higher than that around Rb+, suggesting the stronger hydration energy of Li+ compared with Rb+. Indeed, the oxygen density profile in the Li+-intercalated MXene along the c axis contains two peaks around Li+ and these peaks are more intense than those around Rb+ (Fig. 2e, f). Simultaneously, the hydrogen density profiles for the cation-intercalated MXene (Fig. 2b) indicate considerable hydrogen density close to the MXene surface. Considering a small hydrogen density near MXene before cation intercalation, the hydrogen atoms of water distributed along the surface of each MXene layer are predominantly due to the formation of the hydration shell around intercalated cations (Fig. 2e, f). The 3D-RISM calculations for a series of alkali ions show that the cations with smaller bare ionic radius tend to exhibit a larger hydrogen density near the MXene surface (Supplementary Fig. 15). The significant interference to water arrangement between Ti3C2Tx nanosheets by cation intercalation was also observed by Osti et al.26. It is most likely to be that the hard hydration shell of strong Lewis acid cations such as Li+ forces the hydrogen atoms to reside close to the surface of the MXene layer, whereas the soft hydration shell of weak Lewis acid cations such as Rb+ deforms easily when confined within the microslit.
After confirming that the hydrogen and oxygen distributions in the microslit depend on the intercalated ionic species, we calculated the electrostatic potential profiles, relative to the MXene electrode (Fig. 3a, b and Supplementary Fig. 16). In the Rb+-intercalated MXene (Fig. 3a), the electrostatic potential (Φ) monotonically decreases from the MXene layer until very close to the ion location in the center of the microslit, as expected for a conventional capacitor sandwiching a dielectric layer. The charge density (ρ), electric field (E), and Φ are depicted schematically in Fig. 3c. For Li+-intercalated MXene (Fig. 3b), in contrast, the profile change in Φ is not monotonic; Φ rises near the locus of Li+, leading to a reduced total potential difference (∆Φ). The 3D-RISM calculations for intercalation of a series of alkali ions show that the smaller cation-intercalated MXenes induces more reduced total ∆Φ (Supplementary Fig. 16). Considering the capacitance (C) is expressed as,
$$C = \frac{{{\mathrm{\Delta }}Q}}{{{\mathrm{\Delta }}{{\Phi }}}}$$
where ∆Q is the stored charge by applying ∆Φ (Supplementary Note 1), the reduced ∆Φ explains the larger capacitance of the MXene electrode in the order of Li+ > Na+ > K+ > Rb+ (Fig. 1d). Indeed, the calculated C based on the value of ∆Φ from 3D-RISM well reproduces the experimental C (blue line, Fig. 1e and Supplementary Fig. 17).
Negative dielectric constant of confined water. a, b Theoretical calculation for the electrostatic potential profile of Rb+- and Li+-intercalated Ti2CTx·nH2O. c, d Schematic illustrations of the charge density (ρ), the strength of the electric field (E), and the electrostatic potential (Φ) of weakly hydrated cations and strongly hydrated cations confined in a microslit supercapacitor as a function of the spatial coordinates. The arrows indicate the direction of the water dipole moments
Then, we consider the origin and implications of the reduced ∆Φ specifically observed in Li+-intercalated MXene. The electric-double layer formed in MXene system can be modeled as an equivalent circuit of a bilayer capacitor (Fig. 1b). The capacitance of a capacitor sandwiching two series of a contact layer and a water layer can be expressed as,
$$C = \frac{{A\varepsilon _0}}{\lambda }$$
$$\lambda = \frac{{l^{\mathrm{c}}}}{{{\varepsilon _{\mathrm{r}}}^{\mathrm{c}}}} + \frac{{l^{\mathrm{h}}}}{{{\varepsilon _{\mathrm{r}}}^{\mathrm{h}}}}$$
with thicknesses (lc, lh) and dielectric constants \(({\varepsilon _{\mathrm{r}}}^{\mathrm{c}},{\varepsilon _{\mathrm{r}}}^{\mathrm{h}})\) for a contact layer and a water layer, respectively27.
Based on density functional theory (DFT) calculations28, in the first term \(\frac{{l^{\mathrm{c}}}}{{{\varepsilon _{\mathrm{r}}}^{\mathrm{c}}}}\) for the contact layer between the water and an electrode, \({\varepsilon _{\mathrm{r}}}^{\mathrm{c}}\) is small to be around 100 and lc is 2–3 Å. In the second term \(\frac{{l^{\mathrm{h}}}}{{{\varepsilon _{\mathrm{r}}}^{\mathrm{h}}}}\) for the water layer confined in spaces ranging from macroslit (2b > 50 nm) to mesoslit (2 nm < 2b < 50 nm) (Supplementary Table 1), water molecules are weakly bounded to rotate freely and give a large positive \({\varepsilon _{\mathrm{r}}}^{\mathrm{h}}\) of ~80, where the situation becomes as such \(\frac{{l^{\mathrm{c}}}}{{{\varepsilon _{\mathrm{r}}}^{\mathrm{c}}}} \gg \frac{{l^{\mathrm{h}}}}{{{\varepsilon _{\mathrm{r}}}^{\mathrm{h}}}}\), and hence \(\lambda \sim \frac{{l^{\mathrm{c}}}}{{{\varepsilon _{\mathrm{r}}}^{\mathrm{c}}}}\)28. Therefore, in macroslit and mesoslit capacitors, the dielectric contact layer (the first term) dominates the overall capacitance, where the water molecules do not have essential roles.
In striking contrast, in a microslit capacitor, the water layer (the second term) largely influences the overall capacitance. The increase in Φ near the locus of Li+ indicates excess polarization (called overscreening)29,30 and inversion of E at the confined water layer (i.e., hydration shell) relative to the external electric field (Eext; Fig. 3d). This inversion of E \(\left( {{\mathrm{where}}\,E = \frac{{E_{{\mathrm{ext}}}}}{{\varepsilon _{\mathrm{r}}}}} \right)\) indicates that the dielectric constant of the hydration shell is negative. As theoretically shown by Bopp et al.29, a negative dielectric constant of water is possible under the condition of microscopic overscreening, which is realized by a resonant effect between dipolar polarization of water and an external electric field. Resonant conditions were suggested as follows: (i) an external electric field has several Å modulation and (ii) water is confined between a slit wall and an ion30.
For a microslit (2b < 2 nm) capacitor, such as the MXene system described here, we presume that the resonant condition (i) on double-layer thickness is satisfied for all of Li+, Na+, K+, and Rb+ ions based on calculation results (Supplementary Fig. 16), whereas the resonant condition (ii) on firm water confinement is satisfied only by strong Lewis acid cations such as Li+. The soft hydration shell of the weakly hydrated cations such as Rb+ largely deforms even in the microslit as demonstrated in Fig. 2e, whereby water molecules are not confined effectively, and remain to have a positive dielectric constant. In contrast, the hard hydration shell of Li+ is strongly confined between the MXene wall and Li+ as evidenced in Fig. 2f, and water dipolar polarization resonantly overscreens the external electric field to induce an inverse E. It is this situation that can lead to a negative dielectric constant for the hydration layer and hence an increase in capacitance. Importantly, the overscreening of a hydration shell confined in a microslit capacitor is a general phenomenon: e.g., the 3D-RISM calculation for Li+-intercalated Mo2CTx also indicates the negative dielectric constant of confined water (Supplementary Fig. 18), explaining the capacitance enhancement experimentally observed for Mo2CTx (Supplementary Fig. 10). As the simplest combination, the water confined in the microslit consisting of graphene is also predicted to exhibit the negative dielectric constant and the overscreening behavior (Supplementary Fig. 19). Furthermore, Geng et al.31 reported that the capacitance of metallic 1T MoS2 with an aqueous Li+ electrolyte is larger than that with an aqueous Na+ electrolyte. These verifications strongly suggest that exploiting the water confinement effect is a versatile strategy to enhance the capacitance of microslit capacitors.
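To make the bilayer-capacitor bookkeeping above concrete, the short Python sketch below evaluates \(C = A\varepsilon_0/\lambda\) with \(\lambda = l^{\mathrm{c}}/{\varepsilon_{\mathrm{r}}}^{\mathrm{c}} + l^{\mathrm{h}}/{\varepsilon_{\mathrm{r}}}^{\mathrm{h}}\) for a positive and for a hypothetical negative water-layer dielectric constant. All numerical values (layer thicknesses, dielectric constants, area) are illustrative assumptions and are not parameters fitted to the MXene data.

```python
# Illustrative sketch of the bilayer-capacitor model C = A*eps0/lambda,
# lambda = l_c/eps_c + l_h/eps_h. All parameter values below are assumptions
# chosen only to show the trend, not values taken from this study.
EPS0 = 8.854e-12          # vacuum permittivity, F m^-1

def bilayer_capacitance(area_m2, l_c, eps_c, l_h, eps_h):
    """Capacitance of a contact layer and a water layer in series."""
    lam = l_c / eps_c + l_h / eps_h   # effective dielectric thickness, m
    return area_m2 * EPS0 / lam

area = 1.0                 # m^2 of electrode surface (arbitrary)
l_c, eps_c = 2.5e-10, 5.0  # contact layer: ~2.5 Angstrom, small dielectric constant (assumed)
l_h = 3.0e-10              # confined water layer thickness (assumed)

for eps_h in (80.0, -20.0):  # free-like water vs. a hypothetical negative dielectric constant
    c = bilayer_capacitance(area, l_c, eps_c, l_h, eps_h)
    print(f"eps_h = {eps_h:6.1f}  ->  C = {c:.3e} F per m^2")
# A negative eps_h makes lambda smaller than l_c/eps_c alone,
# so the computed capacitance is larger than in the positive-eps_h case.
```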
In general, the strategy of using larger surface area materials to increase the gravimetric capacitance severely suffers from smaller electrode density and smaller volumetric capacitance thereof32. Our discovery of a negative dielectric constant of confined water and its contribution to larger capacitance of the microslit capacitor not only solve this long-standing dilemma but also offer an important prospect that stacked two-dimensional materials might have considerable potential as EDLC electrodes. The capacitance enlargement by confined negative dielectric water could be extended to other systems. For example, H+ intercalation, which was not considered in this work, has been reported to exhibit a large capacitance through the protonation of MXene33; therefore, the influence of water to the H+ capacitance would be of particular interest to further increase the capacitance. The influence of non-aqueous electrolyte solvents to the MXene capacitance is also an important issue that needs to be clarified (Supplementary Fig. 20), as it includes more complicated phenomena such as interfacial desolvation, solid-electrolyte interphase formation, and co-intercalation34,35. Moreover, existence of water molecules with a negative dielectric constant casts doubt on a conventional capacitor model postulated by Stern36 with low-dielectric surface water layer and potentially leads to redefinition of how to construct theoretical models for EDLC electrodes. Further studies on similar confinement effects in other microporous materials are expected to stimulate a range of striking discoveries of EDLCs with higher gravimetric and volumetric energy densities.
Synthesis of MXene
Ti2AlC was prepared by high-frequency induction heating of a precursor mixture consisting of TiC, Ti, and Al at 1300 °C for 1 h under an Ar flow. Ti2CTx was synthesized by reacting 0.5 g of Ti2AlC powder with an aqueous mixture of 2 M LiF and 6 M HCl for 12 h at room temperature. The treated powder was dried under vacuum at 200 °C for 24 h (anhydrous MXene). The chemical composition of anhydrous MXene Ti2CTx was determined by the standard microanalytical method for C, H, F, and Cl, and by X-ray photoelectron spectroscopy for an O/OH ratio. Calc. (Found) for Ti2C(OH)0.3O0.7F0.6Cl0.4: C: 8.03% (7.50%), H: 0.20% (0.25%), F: 7.62% (7.84%), Cl: 9.48% (9.22%). Ti3C2Tx and Mo2CTx were synthesized according to the procedures reported previously5,37.
The powder XRD patterns were recorded on a Rigaku SMART-LAB powder diffractometer with Cu Kα radiation with a step of 0.02° over a 2θ range of 3°–80°. Samples for ex situ XRD patterns were prepared electrochemically and were used for the measurements without drying. The number of intercalated water molecules in MXene was estimated by TG analysis. The TG data were collected on a Seiko Extar 6000 TG/DTA instrument over a 30–400 °C temperature range using an Ar gas atmosphere. The heating rate was fixed at 5 K min−1.
For the electrochemical measurements, the working electrode was fabricated by mixing Ti2CTx, acetylene black, and polytetrafluoroethylene in 80:10:10 weight ratio. The resulting paste was pressed onto a nickel mesh. A three-electrode glass cell was assembled with a Pt mesh as the counter electrode and Ag/AgCl in saturated aqueous solution of KCl for a reference electrode. Aqueous solutions (0.5 M) of Li2SO4, Na2SO4, K2SO4, Rb2SO4, tetramethylammonium chloride, tetraethylammonium chloride, and tetrabutylammonium chloride were used as the electrolytes. The sweep rate of the cyclic voltammetry (CV) measurements was set to 0.5 and 2.0 mV s−1, and the cutoff voltages were −0.7 V and −0.2 V (vs. Ag/AgCl). The specific capacitance from the CV curve was calculated as \({\textstyle{1 \over {{\mathrm{\Delta }}V}}}{\int} {{\textstyle{{j(V)} \over s}}{\mathrm{d}}V}\), where V is the potential, ΔV is the potential window, j(V) is the specific current, and s is the scan rate. The activated carbon for the reference experiment (Supplementary Figs. 6 and 7) was purchased from Kansai Coke and Chemicals (MSC-30 with a specific surface area of 3000 m2 g−1). The X-ray absorption spectra were measured in the transmission mode at room temperature at BL-9C of Photon Factory. The X-ray energy for each edge was calibrated by using a corresponding metal foil. The obtained experimental data were analyzed using Rigaku REX2000 software. The 1H MAS NMR spectra were recorded at frequency of 800 MHz (18.79 T) using a JEOL JNM-ECA800 system equipped with a JEOL 1.0 mm HXMAS probe. To reduce the 1H background signal from the probe material, the DEPTH2 pulse sequence was used. The experimental conditions were set up with 90° pulse length of 1.2 µs, recycle delay of 5 s, and the MAS rate of 60 kHz. The 1H chemical shift was referenced to the peak of silicon rubber and set to 0.12 p.p.m. from tetramethylsilane.
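The specific capacitance defined above, \({\textstyle{1 \over {{\mathrm{\Delta }}V}}}{\int} {{\textstyle{{j(V)} \over s}}{\mathrm{d}}V}\), can be evaluated numerically from a recorded CV trace. The sketch below is not the analysis code used in this work; it simply applies the trapezoidal rule to a hypothetical voltammogram with the scan rate quoted above.

```python
import numpy as np

# Hypothetical CV data: potential (V vs. Ag/AgCl) and specific current (A g^-1).
# In practice these arrays would come from the recorded cyclic voltammogram.
potential = np.linspace(-0.7, -0.2, 251)          # V
current   = 0.05 + 0.01 * np.sin(10 * potential)  # A g^-1 (made-up shape)

scan_rate = 0.5e-3                                # V s^-1 (0.5 mV s^-1)
delta_v   = potential[-1] - potential[0]          # potential window, V

# Specific capacitance: (1/dV) * integral of j(V)/s dV, trapezoidal rule.
specific_capacitance = np.trapz(current / scan_rate, potential) / delta_v
print(f"specific capacitance = {specific_capacitance:.1f} F g^-1")
```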
The hydration energy and atomic distributions are calculated by using a 3D-RISM theory combined with DFT. The 3D-RISM code is implemented into a DFT simulation package named "Quantum Espresso"24,38. The exchange–correlation functional was used within the generalized gradient approximation proposed by Perdew et al.25. The ultrasoft pseudopotential scheme combined with plane-wave basis sets imposing cutoff energies of 30 and 300 Ry was used to describe the Kohn–Sham orbitals and electron density, respectively. The Brillouin-zone summation was evaluated using a 3 × 3 × 1 k-point grid for structure optimization and total energy calculation. The convergence criteria for structure optimization included 10 × 10−3 Ry per Bohr for forces and 10 × 10−4 Ry for the energy.
The structural model of MXene Ti2CTx consisted of \(2 \times 2\sqrt 3\) rectangular supercell with four different surface functions (representing F, Cl, O, and OH). The chemical composition of hydrous MXene was assumed to be Ti2C(F0.5Cl0.5O0.5OH0.5)·0.5H2O. The lattice constants of the supercell were theoretically optimized as 12.326 × 21.349 Å2. The interlayer distance between the central carbon layers of the adjacent MXene sheets was set to experimentally determined values (i.e., 13.2, 13.1, 12.8, 12.7 Å for Li+-, Na+-, K+-, and Rb+-intercalated models, respectively). Each intercalated model has two ions located at (1/4, 1/4, 1/2) and (3/4, 3/4, 1/2) in the fractional coordinates of each supercell. The validity of our structural models was confirmed by the perfect agreement between the experimentally determined and the theoretically optimized water contents (n) in various cation-intercalated MXenes (Supplementary Fig. 14). The calculated capacitance (C) for each cation-intercalated MXene (inset in Fig. 1b) was obtained by the equation as,
$$C = C_{{\mathrm{surface}}} + \frac{{\Delta Q_{{\mathrm{intercalation}}}}}{{{\mathrm{\Delta }}{{\Phi }}_{{\mathrm{calc}}}}}$$
where Csurface = 40 F g−1, ∆Qintercalation = 80.6 C g−1 (0.125 cation intercalation per the formula unit), ∆Φ(Rb)calc = 1.46 eV, ∆Φ(K)calc = 1.17 eV, ∆Φ(Na)calc = 1.0 eV, and ∆Φ(Li)calc = 0.91 eV, respectively.
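As a plain arithmetic check of this expression, the model capacitances can be recomputed from the quoted parameter values alone. The following sketch is illustrative bookkeeping, not the original analysis script; it treats each ∆Φcalc value in eV as a numerically equal potential difference in volts, which holds for singly charged cations.

```python
# C = C_surface + dQ_intercalation / dPhi_calc, using only the values quoted above.
C_SURFACE = 40.0      # F g^-1
DQ        = 80.6      # C g^-1 (0.125 cation per formula unit)

dphi_calc = {"Li": 0.91, "Na": 1.00, "K": 1.17, "Rb": 1.46}  # V (from the 3D-RISM values)

for ion, dphi in dphi_calc.items():
    c_total = C_SURFACE + DQ / dphi
    print(f"{ion:>2}: C = {c_total:6.1f} F g^-1")
# The computed ordering Li > Na > K > Rb follows the experimentally observed trend.
```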
All datasets are available from the corresponding author on request.
Béguin, F. & Frąckowiak, E. Supercapacitors (Wiley-VCH, Weinheim, 2013).
Simon, P., Gogotsi, Y. & Dunn, B. Where do batteries end and supercapacitors begin? Science 343, 1210–1211 (2014).
Chmiola, J., Largeot, C., Taberna, P. L., Simon, P. & Gogotsi, Y. Monolithic carbide-derived carbon films for micro-supercapacitors. Science 328, 480–483 (2010).
Merlet, C. et al. Highly confined ions store charge more efficiently in supercapacitors. Nat. Commun. 4, 2701 (2013).
Ghidiu, M., Lukatskaya, M. R., Zhao, M. Q., Gogotsi, Y. & Barsoum, M. W. Conductive two-dimensional titanium carbide 'clay' with high volumetric capacitance. Nature 516, 78–81 (2014).
Lin, T. et al. Nitrogen-doped mesoporous carbon of extraordinary capacitance for electrochemical energy storage. Science 350, 1508–1513 (2015).
Sheberla, D. et al. Conductive MOF electrodes for stable supercapacitors with high areal capacitance. Nat. Mater. 16, 220–224 (2017).
Chmiola, J. et al. Anomalous increase in carbon capacitance at pore sizes less than 1 nanometer. Science 313, 1760–1763 (2006).
Huang, J., Sumpter, B. G. & Meunier, V. Theoretical model for nanoporous carbon supercapacitors. Angew. Chem. Int. Ed. 47, 520–524 (2008).
Luo, Z. X. et al. Dehydration of ions in voltage-gated carbon nanopores observed by in situ NMR. J. Phys. Chem. Lett. 6, 5022–5026 (2015).
Yang, X., Cheng, C., Wang, Y., Qiu, L. & Li, D. Liquid-mediated dense integration of graphene materials for compact capacitive energy storage. Science 341, 534–537 (2013).
Acerce, M., Voiry, D. & Chhowalla, M. Metallic 1T phase MoS2 nanosheets as supercapacitor electrode materials. Nat. Nanotechnol. 10, 313–318 (2015).
Feng, G., Qiao, R., Huang, J., Sumpter, B. G. & Meunier, V. Ion distribution in electrified micropores and its role in the anomalous enhancement of capacitance. ACS Nano 4, 2382–2390 (2010).
Naguib, M. et al. Two-dimensional nanocrystals produced by exfoliation of Ti3AlC2. Adv. Mater. 23, 4248–4253 (2011).
Lukatskaya, M. R. et al. Cation intercalation and high volumetric capacitance of two-dimensional titanium carbide. Science 341, 1502–1505 (2013).
Mashtalir, O. et al. Intercalation and delamination of layered carbides and carbonitrides. Nat. Commun. 4, 1716 (2013).
Kajiyama, S. et al. Enhanced Li-ion accessibility in MXene titanium carbide by steric chloride termination. Adv. Energy Mater. 7, 1601873 (2017).
Levi, M. D. et al. Solving the capacitive paradox of 2D MXene using electrochemical quartz-crystal admittance and in situ electronic conductance measurements. Adv. Energy Mater. 5, 1400815 (2014).
Lukatskaya, M. R., Dunn, B. & Gogotsi, Y. Multidimensional materials and device architectures for future hybrid energy storage. Nat. Commun. 7, 12647 (2016).
Shannon, R. D. Revised effective ionic radii and systematic studies of interatomic distances in halides and chalcogenides. Acta Cryst. A32, 751–767 (1976).
Robinson, R. A. & Stokes, R. H. Electrolyte solutions (Butterworth, London, 1959).
Kovalenko, A. & Hirata, F. Three-dimensional density profiles of water in contact with a solute of arbitrary shape: a RISM approach. Chem. Phys. Lett. 290, 237–244 (1998).
Sato, H., Kovalenko, A. & Hirata, F. Self-consistent field, ab initio molecular orbital and three-dimensional reference interaction site model study for solvation effect on carbon monoxide in aqueous solution. J. Chem. Phys. 112, 9463–9468 (2000).
Giannozzi, P. et al. QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials. J. Phys. Condens. Matter 21, 395502 (2009).
Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865–3868 (1996).
Osti, N. C. et al. Influence of metal ions intercalation on the vibrational dynamics of water confined between MXene layers. Phys. Rev. Mater. 1, 065406 (2017).
Kim, Y. J. et al. Frustration of negative capacitance in Al2O3/BaTiO3 bilayer structure. Sci. Rep. 6, 19039 (2016).
Ando, Y., Gohda, Y. & Tsuneyuki, S. Ab initio molecular dynamics study of the Helmholtz layer formed on solid–liquid interfaces and its capacitance. Chem. Phys. Lett. 556, 9–12 (2012).
Bopp, P. A., Kornyshev, A. A. & Sutmann, G. Frequency and wave-vector dependent dielectric function of water: collective modes and relaxation spectra. J. Chem. Phys. 109, 1939–1958 (1998).
Bonthuis, D. J., Gekle, S. & Netz, R. R. Dielectric profile of interfacial water and its effect on double-layer capacitance. Phys. Rev. Lett. 107, 166102 (2011).
Geng, X. et al. Two-dimensional water-coupled metallic MoS2 with nanochannels for ultrafast supercapacitors. Nano Lett. 17, 1825–1832 (2017).
Zhang, C., Lv, W., Tao, Y. & Yang, Q. H. Towards superior volumetric performance: design and preparation of novel carbon materials for energy storage. Energy Environ. Sci. 8, 1390–1403 (2015).
Hu, M. et al. High-capacitance mechanism for Ti3C2Tx MXene by in situ electrochemical Raman spectroscopy investigation. ACS Nano 10, 11344–11350 (2016).
Dall'Agnese, Y., Rozier, P., Taberna, P. L., Gogotsi, Y. & Simon, P. Capacitance of two-dimensional titanium carbide (MXene) and MXene/carbon nanotube composites in organic electrolytes. J. Power Sources 306, 510–515 (2016).
Okubo, M., Sugahara, A., Kajiyama, S. & Yamada, A. MXene as a charge storage host. Acc. Chem. Res. 51, 591–599 (2018).
Stern, O. The theory of the electrolytic double layer. Z. Elektrochem. 30, 508–516 (1924).
Halim, J. et al. Synthesis and characterization of 2D molybdenum carbide (MXene). Adv. Funct. Mater. 26, 3118–3127 (2016).
Nishihara, S. & Otani, M. Hybrid solvation models for bulk, interface, and membrane: Reference interaction site methods coupled with density functional theory. Phys. Rev. B 96, 115429 (2017).
This work was financially supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan under the "Elemental Strategy Initiative for Catalysts and Batteries (ESICB)." This work was also supported by MEXT, Japan, and Grant-in-Aid for Specially Promoted Research Number 15H05701. M. Okubo was financially supported by JSPS KAKENHI Grant Numbers JP15H03873, JP16H00901, and 18H03924. We are grateful to Satomichi Nishihara for the implementation of the 3D-RISM into the Quantum Espresso package. X-ray absorption spectroscopy was conducted under the approval of the Photon Factory Program Advisory Committee (Proposal 2016G031 and 2018G082).
These authors contributed equally: Akira Sugahara, Yasunobu Ando.
Department of Chemical System Engineering, School of Engineering, The University of Tokyo, Hongo 7-3-1, Bunkyo-ku, Tokyo, 113-8656, Japan
Akira Sugahara, Satoshi Kajiyama, Masashi Okubo & Atsuo Yamada
CD-FMat, National Institute of Advanced Industrial Science and Technology (AIST), Umezono 1-1-1, Tsukuba, Ibaraki, 305-8568, Japan
Yasunobu Ando & Minoru Otani
Elements Strategy Initiative for Catalysts and Batteries (ESICB), Kyoto University, Nishikyo-ku, Kyoto, 615-8245, Japan
Yasunobu Ando, Kazuma Gotoh, Minoru Otani, Masashi Okubo & Atsuo Yamada
JEOL Resonance, 3-1-2 Musashino, Akishima, Tokyo, 196-8558, Japan
Koji Yazawa
Graduate School of Natural Science and Technology, Okayama University, 3-1-1 Tsushima-naka, Okayama, 700-8530, Japan
Kazuma Gotoh
M. Okubo and A.Y. conceived and directed the project. A.S. and S.K. synthesized and characterized MXenes. A.S. measured and analyzed the electrochemical properties. A.S., K.Y., and K.G. conducted and analyzed the 1H MAS NMR spectra. Y.A. and M. Otani conducted the 3D-RISM calculations. All authors wrote the manuscript.
Correspondence to Atsuo Yamada.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Sugahara, A., Ando, Y., Kajiyama, S. et al. Negative dielectric constant of water confined in nanosheets. Nat Commun 10, 850 (2019). https://doi.org/10.1038/s41467-019-08789-8
Is the Kraus representation of a quantum channel equivalent to a unitary evolution in an enlarged space?
I understand that there are two ways to think about 'general quantum operators'.
Way 1
We can think of them as trace-preserving completely positive operators. These can be written in the form $$\rho'=\sum_k A_k \rho A_k^\dagger \tag{1}$$ where $A_k$ are called Kraus operators.
As given in (An Introduction to Quantum Computing by Kaye, Laflamme and Mosca, 2010; pg59) we have $$\rho'=\mathrm{Tr}_B\left\{ U(\rho \otimes \left| 00\ldots 0\right>\left<00\ldots 0 \right|) U^\dagger \right\} \tag{2}$$ where $U$ is a unitary matrix and the ancilla $\left|00 \ldots 0\right>$ has dimension at most $N^2$.
Exercise 3.5.7 (in Kaye, Laflamme and Mosca, 2010; pg60) gets you to prove that operators defined in (2) are completely positive and trace preserving (i.e. can be written as (1)). My question is the natural inverse of this; can we show that any completely positive, trace preserving map can be written as (2)? I.e. are (1) and (2) equivalent definitions of a 'general quantum operator'?
— Quantum spaghettification
This question is posed, and answered positively, in Nielsen & Chuang in a subsection of chapter 8 entitled "System-environment models for any operator-sum representation". In my version, it can be found on page 365.
Imagine $|\psi\rangle$ is an arbitrary pure state on the space upon which you wish to enact the operators. Let $|e_0\rangle$ be some fixed state on another quantum system (with dimension equal to at least the number of Kraus operators, and labelled 'B'). Then you can define a unitary by its action on states of the form $|\psi\rangle|e_0\rangle$: $$ U|\psi\rangle|e_0\rangle=\sum_k(A_k|\psi\rangle)|e_k\rangle, $$ where the $|e_k\rangle$ are an orthonormal basis. To check that this corresponds to a valid unitary, we just have to test it for different input states and ensure that the initial overlap is preserved: $$ \langle\psi|\phi\rangle\langle e_0|e_0\rangle=\langle\psi|\langle e_0|U^\dagger U|\phi\rangle|e_0\rangle=\langle\psi|\sum_kA_k^\dagger A_k|\phi\rangle, $$ which is true thanks to the completeness relation of the Kraus operators.
Finally, one just has to check that this unitary does indeed implement the claimed map: $$ \text{Tr}_B\left(U|\psi\rangle\langle \psi|\otimes|e_0\rangle\langle e_0|U^\dagger\right)=\sum_kA_k|\psi\rangle\langle\psi|A_k^\dagger. $$
— DaftWullie
Here is another way to prove the equivalence of the two expressions explicitly:
For the Kraus representation, $\Phi(\rho)=\sum_a A^a \rho A^{a\dagger}$, if we make the indices explicit, we get $$\Phi(\rho)_{ij}=\sum_{a,k,\ell}A^a_{ik}A^{a*}_{j\ell}\rho_{k\ell}.\tag1$$
On the other hand, unravelling the second expression we have for a generic $\sigma$ (let me here use numbers instead of latin letters for the indices, for better clarity, as well as Einstein's notation for repeated indices), $$[U(\rho \otimes \sigma) U^\dagger]_{1234}=U_{1256}U^{*}_{3478}\rho_{57}\sigma_{68}.$$ Note that here the first two indices, ($1$ and $2$) correspond to the "output space" of the operator $\Phi(\rho)$, while the other two ($3$ and $4$) correspond to its "input space". Similarly, $2$ and $4$ live in the second Hilbert space, while $1$ and $3$ live in the first one.
Tracing with respect to the second Hilbert space amounts to introducing a $\delta_{24}$ factor, and we thus get
$$\left\{\mathrm{Tr}_B\left[ U(\rho \otimes \sigma) U^\dagger \right]\right\}_{13} =U_{1256}U^{*}_{3478}\rho_{57}\sigma_{68} \color{red}{\delta_{24}} =U_{1256}U^{*}_{3278}\rho_{57}\sigma_{68}.$$
If we take $\sigma$ to be a pure state, for example $\sigma=\lvert0\rangle\!\langle0\rvert$, so that $\sigma_{68}=\delta_{60}\delta_{80}$, then we have
$$\left\{\mathrm{Tr}_B\left[ U(\rho \otimes \lvert0\rangle\!\langle0\rvert) U^\dagger \right]\right\}_{13} =U_{1250}U^*_{3270}\rho_{57}.$$ Going back to using the standard notation for the indices, and making explicit the sums, we have
$$\left\{\mathrm{Tr}_B\left[ U(\rho \otimes \lvert0\rangle\!\langle0\rvert) U^\dagger \right]\right\}_{ij} =\sum_{a,k,\ell}U_{iak0}U^*_{ja\ell0}\rho_{k\ell}.\tag2$$ This expression is essentially equivalent to (1). To see it more clearly, just write $A^a_{ik}\equiv U_{iak0}$ and $A_{j\ell}^{a*}\equiv U^*_{ja\ell0}$.
In summary, (1) and (2) are absolutely equivalent expressions, up to redefinition of the objects involved.
— glS
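As a small numerical supplement to the two answers above (not part of either original post), the equivalence can be checked directly in NumPy: draw random Kraus operators, build the isometry that forms the relevant block of $U$ acting on $|\psi\rangle|e_0\rangle$, and confirm that tracing out the ancilla reproduces $\sum_k A_k\rho A_k^\dagger$. The dimensions and the random-seed choice below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 2, 3                      # system dimension, number of Kraus operators

# Random Kraus operators A_k, normalized so that sum_k A_k^dagger A_k = identity.
B = rng.normal(size=(K, d, d)) + 1j * rng.normal(size=(K, d, d))
M = sum(b.conj().T @ b for b in B)            # positive-definite d x d matrix
w, V = np.linalg.eigh(M)
M_inv_sqrt = V @ np.diag(w**-0.5) @ V.conj().T
A = np.array([b @ M_inv_sqrt for b in B])     # Kraus operators

# Random density matrix rho.
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = G @ G.conj().T
rho /= np.trace(rho)

# Isometry V_iso = sum_k A_k (x) |e_k>, i.e. the block of U that acts on |psi>|e_0>.
V_iso = sum(np.kron(A[k], np.eye(K)[:, [k]]) for k in range(K))   # shape (d*K, d)
assert np.allclose(V_iso.conj().T @ V_iso, np.eye(d))             # isometry check

# Channel via the dilation: embed, evolve, trace out the ancilla (second factor).
rho_big = V_iso @ rho @ V_iso.conj().T
rho_dilation = np.einsum('ikjk->ij', rho_big.reshape(d, K, d, K))

# Channel via the Kraus form.
rho_kraus = sum(a @ rho @ a.conj().T for a in A)

print(np.allclose(rho_dilation, rho_kraus))   # True
```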
Gravity and Energy
Exploring the interplay of gravity and mass-energy
— P. Lutus — Message Page —
Copyright © 2017, P. Lutus
Most recent revision:
Introduction | Conservation of Mass-Energy | Escape Velocity | Cosmological Implications | Notes
Figure 1: Mass-Energy Venn diagram
In this article we'll explore the relationship between gravity and energy, and consider some consequences for matters both large and small. This article uses animations and graphics to clarify its points, and some key equations are included and explained. Finally, we'll discuss a new theory about the universe — how it might have come into being without either violating any laws of physics or requiring supernatural intervention.
The overview:
In modern physics, mass and energy are complementary aspects of a fundamental quantity that, for lack of a better word, we call mass-energy.
Mass-energy cannot be created or destroyed, only changed in form. This is called the Principle of Energy Conservation (some use the term "law").
Energy has two basic forms — kinetic and potential.
Kinetic energy is the energy of motion — examples might be a spinning wheel or an arrow in flight.
Potential energy is the energy of position or state — examples might be a book on a high shelf or a charged battery.
Many physical processes cause energy to be converted from potential to kinetic or the reverse, and from energy to mass or the reverse.
The unit of power is the Watt. One watt may be defined in several ways. Here are two:
A constant velocity of one meter per second against an opposing force of one Newton.
A current flow of one ampere through a potential difference of one volt.
The energy unit is the Joule.
Energy is the time integral of power. One joule is defined as the expenditure of one watt of power for one second.
Mass and energy are complementary aspects of mass-energy:
To convert mass to energy, use this equation:
\begin{equation} E = m c^2 \end{equation}
To convert energy to mass, use this equation:
\begin{equation} m = \frac{E}{c^2} \end{equation}
Mass has units of kilograms.
Energy has units of joules.
The constant $c$ in the above equations is the speed of light and is defined to be equal to 299,792,458 m/s.
These principles are not merely laboratory curiosities, they're part of everyday life:
If I lift a one-kilogram book from the floor and place it on a two-meter-high shelf, the book gains 19.6 joules of potential energy (enough to power a small flashlight for about one second) and $2.2 \times 10^{-16}$ kilograms of mass (about 1/3 that of a small bacterium).
If I take a fully discharged D-size flashlight battery and charge it fully, it gains 74,970 joules of potential energy and $8.3 \times 10^{-13}$ kilograms of mass, about that of a typical human cell (both figures are verified numerically in the sketch below).
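Both of these everyday figures can be verified with a few lines of arithmetic. The sketch below (Python) assumes $g = 9.8$ m/s² for the book example and otherwise uses only the equations and values quoted above.

```python
C = 299_792_458.0       # speed of light, m/s
G_ACCEL = 9.8           # gravitational acceleration near Earth's surface, m/s^2 (assumed)

# Book lifted onto a shelf: potential energy gained and its mass equivalent.
book_mass, shelf_height = 1.0, 2.0            # kg, m
e_book = book_mass * G_ACCEL * shelf_height   # joules
print(e_book, e_book / C**2)                  # ~19.6 J, ~2.2e-16 kg

# Fully charging a D-size battery: stored energy and its mass equivalent.
e_battery = 74_970.0                          # joules (value quoted above)
print(e_battery / C**2)                       # ~8.3e-13 kg
```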
Kinetic energy is relatively easy to quantify with physical measurements. It is equal to:
\begin{equation} E_k = \frac{1}{2}m v^2 \end{equation}
$E_k$ = kinetic energy, joules
$m$ = mass, kilograms
$v$ = velocity, m/s
Potential energy has much more variety and is a bit more difficult to pin down. One of its simpler forms comes up in a gravitational field, where it is equal to:
\begin{equation} E_P = -\frac{G m_1 m_2}{r} \end{equation}
$E_p$ = potential energy, joules
$G$ = universal gravitational constant, equal to $6.67428 \times 10^{-11}\ \mathrm{m^3\,kg^{-1}\,s^{-2}}$
$m_1$,$m_2$ = masses (kilograms) of two bodies in mutual gravitational attraction.
$r$ = distance between $m_1$ and $m_2$, meters.
Notice the minus sign in equation (4) above — it means that gravitational potential energy is negative. Because this is an important property with cosmological significance, I would like to explain how it comes about.
The reader may recall my earlier remark that energy is the time integral of power, but this is just one example — in mechanics, work ($W$) can be expressed as the integral of force ($f$) with respect to distance ($x$) rather than time:
\begin{equation} W = \int_a^b f\, dx = f\,(b-a) \end{equation}
Expressed in everyday language, work is equal to force times distance. Now we'll apply this to gravitation — here is the force equation ($f$) for gravitational attraction between two masses $m_1$ and $m_2$, separated by a distance $r$, and under the influence of the gravitational constant term $G$:
\begin{equation} f = G \frac{m_1 m_2}{r^2} \end{equation}
Equation (6) is the classic expression of Newton's Law of Universal Gravitation. To move from force to energy, we need to integrate equation (6) with respect to distance ($r$):
\begin{equation} E_p = \int G \frac{m_1 m_2}{r^2}\, dr = -G \frac{m_1 m_2}{r} \end{equation}
Equation (7) tells us that negative gravitational potential energy is the correct physical interpretation, and it arises from mathematics, not an arbitrary choice or convention.
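Readers who wish to check the integration step symbolically can do so with a few lines of SymPy; the symbol names below mirror equations (6) and (7).

```python
from sympy import symbols, integrate

G, m1, m2, r = symbols('G m_1 m_2 r', positive=True)

force = G * m1 * m2 / r**2          # Newton's law of universal gravitation, eq. (6)
potential = integrate(force, r)     # antiderivative with respect to r
print(potential)                    # -G*m_1*m_2/r  (negative, as in eq. (7))
```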
One more thing — under General Relativity, gravity is not a force, instead it arises as a result of spacetime curvature. But in ordinary circumstances the Newtonian conventions still apply, and energy is still a meaningful concept in orbital mechanics.
Conservation of Mass-Energy
NOTE: If the animations in this section distract the reader, one may click them to make them stop.
Remember that mass-energy cannot be created or destroyed, only changed in form. A more general way to say this is that the universe has a constant quantity $Q$ of mass-energy, fixed at the moment of the Big Bang and unchanged since. We'll be discussing the quantity $Q$ throughout this paper, and we'll eventually assign it a value.
Figure 2: Pendulum Energy Model (an interactive animation displaying the kinetic energy $E_k$, potential energy $E_p$, and total energy $E_t = E_p + E_k$ of a swinging pendulum; click the image to start or stop the animation).
As a mass moves in a gravitational field, it typically exchanges kinetic and potential energy. A swinging pendulum (Figure 2) has maximum kinetic energy at the lowest point in its swing, and zero kinetic energy at the highest. The pendulum's potential energy has the reverse relationship — it increases (i.e. becomes less negative) with distance from the center of the earth, and in exchange, the kinetic energy must decrease. The important thing to understand about freely moving objects in a gravitational field is that their energy, the sum of kinetic and potential energy, is constant.
There is a well-known principle in mechanics called Newton's First Law which says that, unless acted on by an external force, an object will maintain a constant state of motion. There is, or should be, a corollary for freely moving objects in space:
Unless acted on by an external force, an object moving in space will maintain a constant energy.
This doesn't mean the object's velocity will remain the same, nor does it mean the object's kinetic and potential energy values will remain the same. It means the total energy, the sum of kinetic and potential energy, will remain the same.
The swinging pendulum in Figure 2 shows this — even though there is a periodic exchange between kinetic and potential energy, the total energy ($E_p + E_k$) is constant. If our pendulum were located in a vacuum and had lossless bearings, it would continue to swing forever in the same way, perpetually conserving its energy.
For small-scale mechanical systems like the pendulum, it's convenient to establish an arbitrary zero point for potential energy. In this case, the zero point is set at the bottom of the swing, so potential energy is pictured as increasing from zero to positive values as the pendulum swings. This is a reasonable way to picture a physical system, but the true gravitational potential energy is typically far larger in magnitude, and is always negative.
Pendulums don't usually get to swing in a vacuum with frictionless bearings, but an orbiting satellite is a better example of a frictionless system. Like the pendulum, as the satellite orbits it carries both kinetic and potential energy:
Its (positive) kinetic energy results from its orbital velocity.
Its (negative) potential energy results from its altitude above the center of mass of the body it orbits.
Here again are the equations for kinetic and potential energy ($E_k$ and $E_p$), and a derived equation for total orbital energy ($E_t$):
\begin{equation} E_k = \frac{1}{2} m v^2 \end{equation} \begin{equation} E_p = -\frac{G m_1 m_2}{r} \end{equation} \begin{equation} E_t = \frac{m_1 r v^2 - 2 G m_1 m_2}{2 r} \end{equation}
To be consistent with the Principle of Energy Conservation, for a freely orbiting body with no external forces acting on it, over time equation (10), the sum of kinetic and potential energies, produces a constant.
Figure 3: Elliptical Orbit Energy Model
Figure 3 shows a satellite in an elliptical (oval-shaped) orbit around a central body. I chose this configuration to show that, even though there is an ongoing exchange between kinetic and potential energy, as with the pendulum the total energy remains constant. By the way, this orbital shape isn't hypothetical — comets often have highly elliptical orbits like this. Many comets dwell far beyond Pluto and only rarely descend into our neighborhood for a brief appearance. And sometimes an object approaches the solar system from afar, in a hyperbolic orbit (explained below), to then depart at high speed.
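The constancy of the total energy claimed above can be checked numerically. The sketch below integrates a small satellite around an Earth-like central mass with a leapfrog (velocity Verlet) scheme; the masses, initial position, and initial velocity are arbitrary values chosen only so that the resulting orbit is an ellipse.

```python
import numpy as np

G  = 6.674e-11          # m^3 kg^-1 s^-2
M  = 5.972e24           # central mass, kg (arbitrary Earth-like value)
m1 = 1.0                # satellite mass, kg

r  = np.array([7.0e6, 0.0])       # initial position, m
v  = np.array([0.0, 9.0e3])       # initial velocity, m/s (gives an ellipse)
dt = 1.0                          # time step, s

def total_energy(r, v):
    """Equation (10): kinetic plus gravitational potential energy."""
    return 0.5 * m1 * (v @ v) - G * m1 * M / np.linalg.norm(r)

def acceleration(r):
    return -G * M * r / np.linalg.norm(r)**3

e0 = total_energy(r, v)
a = acceleration(r)
for step in range(20_000):        # leapfrog (velocity Verlet) integration
    v_half = v + 0.5 * dt * a
    r = r + dt * v_half
    a = acceleration(r)
    v = v_half + 0.5 * dt * a

print(abs(total_energy(r, v) - e0) / abs(e0))   # tiny relative drift over ~1.5 orbits
```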
Interestingly, Johannes Kepler computed the properties of orbits and wrote what we know as Kepler's Laws of Planetary Motion, but without understanding the reason orbits behave as they do. The secret to understanding orbits is to recognize that their motion conserves energy, and if they behaved at all differently, nature would need different laws.
Now let's look at the relationship between an orbit's kinetic and potential energy. Here is an approximate equation for the velocity of a circular orbit $v_o$:
\begin{equation} v_o \approx \sqrt{\frac{m_2^2G}{(m_1 + m_2)r}} \end{equation}
Where $m_1$ is the satellite's mass, $m_2$ is the central body's mass, $r$ is the orbital radius and $G$ is the universal gravitational constant. If the satellite is much less massive than the central body, this simpler approximate equation may be used:
\begin{equation} v_o \approx \sqrt{\frac{G M}{r}} \end{equation}
Where $M$ is the central mass, $r$ is the orbital radius and $G$ is the universal gravitational constant. It turns out that, for circular orbits, the relationship between kinetic and potential energy is fixed, regardless of the orbit's other properties — the negative gravitational potential energy is always twice the magnitude of the positive kinetic energy. Another way to say this is that, for a circular orbit, 2/3 of the energy is negative potential and 1/3 is positive kinetic.
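That fixed 2:1 relationship follows directly from equations (8), (9), and (12); here is a short symbolic check, using the small-satellite approximation of equation (12).

```python
from sympy import symbols, sqrt, simplify

G, M, m, r = symbols('G M m r', positive=True)

v_o = sqrt(G * M / r)                  # circular orbital velocity, eq. (12)
E_k = m * v_o**2 / 2                   # kinetic energy, eq. (8)
E_p = -G * m * M / r                   # potential energy, eq. (9)
print(simplify(E_p / E_k))             # -2: |E_p| is twice E_k
```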
In each of the cases examined so far — the pendulum as well as the elliptical and circular orbits — the sum of energies has been negative, dominated by negative gravitational potential energy. Obviously we might select a very high velocity and produce a positive result for equation (10) above, the total orbital energy. But is there an orbital velocity that exactly balances the two kinds of energy and produces zero? Yes, there is — it's called escape velocity. Here is its value, using the terminology from the previous section:
\begin{equation} v_e = \sqrt{\frac{2 G M}{r}} \end{equation}
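As a concrete example, plugging textbook values for Earth's mass and radius (assumed here, since they appear nowhere else in this article) into equations (12) and (13) gives roughly 7.9 km/s for a grazing circular orbit and 11.2 km/s for escape from the surface.

```python
from math import sqrt

G = 6.674e-11        # m^3 kg^-1 s^-2
M = 5.972e24         # Earth's mass, kg (assumed textbook value)
R = 6.371e6          # Earth's mean radius, m (assumed textbook value)

v_o = sqrt(G * M / R)        # circular orbit velocity at the surface, eq. (12)
v_e = sqrt(2 * G * M / R)    # escape velocity, eq. (13)
print(v_o, v_e)              # ~7.9e3 m/s and ~1.12e4 m/s
```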
Escape velocity has some interesting properties. If an object is propelled away from an airless planet with an initial impulse of escape velocity (sort of like Alan Shepard's famous golf shot on the moon, but a much higher velocity), that object will continue to move away, at gradually decreasing speed, but it will never stop and return. In fact, at an infinite distance, an escape-velocity object will achieve zero velocity. Here are the properties of an object initially given escape velocity:
The object has zero net energy — positive kinetic energy $E_k$ and negative gravitational potential energy $E_p$ are equal.
At an infinite distance, the object will achieve zero velocity.
Therefore escape velocity is the only case where, at an infinite time and distance, an object possesses zero energy and achieves zero velocity.
There are two canonical orbital velocities — one is the circular velocity $v_o$ provided by equations (11) or (12), the other is escape velocity $v_e$ provided by equation (13). Velocities greater than escape velocity $v_e$ or less than circular velocity $v_o$ produce interesting effects, like the large family of elliptical orbits resulting from initial velocities in the range 0 < v < $v_o$ and shown in Figure 3 above. But because escape velocity has properties of cosmological significance, it merits a closer look.
Because of its importance to what follows, we should prove that escape velocity results in zero net energy (i.e. $E_k$ + $E_p$ = 0). First, let's simplify equation (10) — let's normalize the mass of the orbiting body $m_1$ to 1. Here is the result:
\begin{equation} E_t = \frac{r v^2 - 2 G M}{2 r} \end{equation}
Remember that equation (14) provides the total energy of an orbiting body, the sum of positive kinetic and negative potential energy. At this point, those sufficiently adept at mathematics will compare equations (13) (escape velocity) and (14) (total energy) and a light bulb will go off. For the rest of us, here's a step-by-step proof:
\begin{equation} E_t = \frac{r v_e^2 - 2 G M}{2 r} = \frac{r \sqrt{\frac{2 G M}{r}}^2 - 2 G M}{2 r} = \frac{r \frac{2 G M}{r} - 2 G M}{2 r} = \frac{2 G M - 2 G M}{2 r} = 0 \end{equation}
Q.E.D. An object given escape velocity will have zero orbital energy, and very important, this is only true at escape velocity, no other.
Orbits in which $E_p$ + $E_k$ < 0 are elliptical, including the circular special case. These orbits can be stable and repeating.
Orbits in which $E_p$ + $E_k$ = 0 are parabolic, and there is only one such orbit for given conditions. This is the escape-velocity orbit.
Orbits in which $E_p$ + $E_k$ > 0 are hyperbolic, and approach straight lines as velocity approaches $\infty$. (A short numerical sketch of this classification follows below.)
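Given a distance and a speed, the sign of equation (14) performs this classification automatically. In the sketch below, the central mass is the same Earth-like value used earlier, and the three test speeds are arbitrary fractions of escape velocity.

```python
from math import sqrt

G, M = 6.674e-11, 5.972e24     # central mass as above (illustrative)
r = 7.0e6                       # distance from the central body, m

def classify(v, tol=1e-6):
    e_t = 0.5 * v**2 - G * M / r          # eq. (14) with m1 = 1
    if abs(e_t) < tol * G * M / r:        # treat tiny values as zero energy
        return "parabolic (escape)"
    return "elliptical" if e_t < 0 else "hyperbolic"

v_e = sqrt(2 * G * M / r)
for v in (0.7 * v_e, v_e, 1.3 * v_e):
    print(f"v = {v:8.1f} m/s  ->  {classify(v)}")
```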
Cosmological Implications
When Georges Lemaître first proposed the Big Bang theory, there were a number of objections — at the time there was no evidence in support of the idea, it seemed counterintuitive, and it appeared to violate basic physical principles. How could the universe arise out of nothing?
But over the years, evidence has begun to accumulate that the Big Bang may be real:
Astronomer Edwin Hubble detected a systematic redshift in the spectra of distant galaxies (more distant galaxies have proportionally more redshift), and this was eventually taken to mean those galaxies were moving away from each other and us.
Physicist George Gamow conjectured that, if the Big Bang was real, there would be a residual radiation left over from the exceedingly high temperatures of the explosion, but very much redshifted, all the way into the microwave region of the spectrum and with a characteristic temperature of about 5 Kelvins.
Radio astronomers Arno Penzias and Robert Wilson inadvertently discovered this microwave signal, now known as the cosmic microwave background (CMB) radiation, and this signal has become the subject of intense study.
All this evidence has given the Big Bang the status of a scientific theory, that is to say, a theory supported by evidence and falsifiable in principle. But one objection to the Big Bang remains, and it is serious — there is no physical law so well-established as the conservation of mass-energy, and the Big Bang seems to violate it. By creating an entire universe of mass-energy out of nothing, the Big Bang seems to break the most basic rule of physics: no free lunch.
But this final objection is answered by the idea expressed in this article — if the universe began with an exact balance between positive mass-energy and negative gravitational potential energy, the law of mass-energy conservation is honored.
For this condition to be met, the Big Bang would have to create the universe with an exact balance between positive mass-energy and negative gravitational potential energy, so the total mass-energy is equal to zero. Therefore the Big Bang would have to give matter an initial velocity exactly equal to escape velocity. Is there any evidence for this? In a word, yes.
It turns out there is a relationship between the average velocity of matter in the expanding universe, and the overall curvature of spacetime. Since the beginning of Big Bang cosmology, the spacetime curvature issue has been much studied, with three likely outcomes:
$\Omega$ = Mass-energy density parameter
$Q$ = Sum of positive kinetic energy and negative gravitational energy.
$V$ = Expansion velocity, normalized to escape velocity $v_e$.
$\theta$ = Sum of a triangle's inner angles.
$\Omega \gt 1$, $Q \lt 0$, $V \lt v_e$, $\theta \gt 180^{\circ}$: Expansion velocity is less than escape velocity, negative gravitational energy predominates, space is positively curved, expansion will reverse and the universe will eventually collapse.
$\Omega = 1$, $Q = 0$, $V = v_e$, $\theta = 180^{\circ}$: Expansion velocity is equal to escape velocity, total energy is equal to zero, space is flat or classically Cartesian, expansion velocity will decrease asymptotically and reach zero at infinity.
$\Omega \lt 1$, $Q \gt 0$, $V \gt v_e$, $\theta \lt 180^{\circ}$: Expansion velocity is greater than escape velocity, positive mass-energy predominates, space is negatively curved, expansion will not approach zero at infinity.
I emphasize that the above table summarizes conditions near the time of the Big Bang. The recent discovery of Dark Energy as an acceleration term in universal expansion doesn't change the physics for that era because positive mass-energy and negative gravitational energy were both much larger factors than dark energy.
The above table suggests that, if space is classically flat or Cartesian, this supports the zero-energy condition required for the Big Bang to create the universe without violating energy conservation. And there is good evidence that space is flat. This doesn't mean there isn't strong local curvature near masses; it means the overall large-scale curvature of spacetime is flat.
Quantum Uncertainty
It has been recently suggested that, if the Big Bang could impart escape velocity to the universe's matter — thus balancing positive and negative energy — a random quantum fluctuation could have brought the universe into existence. To those unfamiliar with quantum ideas this may seem absurd — aren't quantum effects limited to extremely small scales?
Well, no — quantum effects are a matter of probability, not possibility. On a microscopic scale, quantum effects are routine and must be taken into account on a moment-to-moment basis. But there's no "quantum barrier" that separates large-scale reality from the microscopic scale. It is a simple matter of statistics — the probability of a macroscopic quantum effect is inversely proportional to the mass under consideration. Consider this expression:
\begin{equation} \Delta x \Delta p \geq \frac{\hbar}{2} \end{equation} Where:
$\Delta x$ = uncertainty in position
$\Delta p$ = uncertainty in momentum
$\hbar$ = the reduced Planck constant (Planck's constant divided by $2 \pi$)
The above relation, known as Heisenberg's Uncertainty Principle, describes the role of uncertainty in quantum theory. Instead of denying the possibility of large-scale quantum effects, this principle gives them a probability estimate. And the outcome is that, for large masses, one might have to wait a very long time to see a manifestation of quantum uncertainty at a macroscopic scale, maybe even a billion years. But a billion years seems like a reasonable time to wait for a universe.
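As a concrete illustration (the specific numbers below are my own back-of-the-envelope choices, not figures from the original article), consider confining a 1 kg object's position to within one millimeter. The principle then demands a velocity uncertainty of only about
\begin{equation} \Delta v \gtrsim \frac{\hbar}{2 m \Delta x} = \frac{1.05 \times 10^{-34}\ \mathrm{J\,s}}{2 \times 1\ \mathrm{kg} \times 10^{-3}\ \mathrm{m}} \approx 5 \times 10^{-32}\ \mathrm{m/s} \end{equation}
an immeasurably small figure, which is why spontaneous quantum behavior at macroscopic scales, while not forbidden, is so extraordinarily improbable.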
"Because there is a law such as gravity, the universe can and will create itself from nothing ... Spontaneous creation is the reason there is something rather than nothing, why the universe exists, why we exist." — Stephen Hawking in "The Grand Design".
Revision Note:
In October 2017 this article was extensively reworked (same content, new presentation). Because of the abandonment of Java in Web pages, the Java demonstration applets were replaced by JavaScript. All equation renderings, originally graphic images, were replaced by LaTeX content rendered by the MathJax engine. The result is a much better overall appearance, and the demonstration animations function once again.
June 2021, 41(6): 2947-2969. doi: 10.3934/dcds.2020392
Schrödinger equations with vanishing potentials involving Brezis-Kamin type problems
Jose Anderson Cardoso 1, Patricio Cerda 2, Denilson Pereira 3 and Pedro Ubilla 2
1. Departamento de Matemática, Universidade Federal de Sergipe, São Cristóvão-SE, 49100-000, Brazil
2. Departamento de Matematica y C. C., Universidad de Santiago de Chile, Casilla 307, Correo 2, Santiago, Chile
3. Unidade Acadêmica de Matemática, Universidade Federal de Campina Grande, Campina Grande 58429-900, Brazil
Received: December 2019. Revised: October 2020. Published: June 2021. Early access: December 2020.
Fund Project: The first author is partially supported by FAPITEC/CAPES and by CNPq - Universal.
The second author was partially supported by Proyecto código 042033CL, Dirección de Investigación, Científica y Tecnológica, DICYT.
The third author was partially supported by Proyecto código 041933UL POSTDOC, Dirección de Investigación, Científica y Tecnológica, DICYT.
The fourth author was partially supported by FONDECYT grants 1181125, 1161635 and 1171691.
We prove the existence of a bounded positive solution for the following stationary Schrödinger equation
$ \begin{equation*} -\Delta u+V(x)u = f(x,u),\,\,\, x\in\mathbb{R}^n,\,\, n\geq 3, \end{equation*} $
where $ V $ is a vanishing potential and $ f $ has a sublinear growth at the origin (for example, if $ f(x,u) $ is a concave function near the origin). For this purpose we use a Brezis-Kamin argument included in [6]. In addition, if $ f $ has a superlinear growth at infinity, then, besides the first solution, we obtain a second solution. For this we introduce an auxiliary equation which is variational; however, new difficulties appear when handling the compactness. For instance, our approach can be applied to nonlinearities of the type $ \rho(x)f(u) $, where $ f $ is a concave-convex function and $ \rho $ satisfies the $ \mathrm{(H)} $ property introduced in [6]. We also note that we do not impose any integrability assumptions on the function $ \rho $, which is imposed in most works.
Keywords: Concave-convex nonlinearities, upper and lower solutions, variational methods, Schrödinger equation, bounded solutions.
Mathematics Subject Classification: 35J20, 35J10, 35J91, 35J15, 35B09.
Citation: Jose Anderson Cardoso, Patricio Cerda, Denilson Pereira, Pedro Ubilla. Schrödinger equations with vanishing potentials involving Brezis-Kamin type problems. Discrete & Continuous Dynamical Systems, 2021, 41 (6) : 2947-2969. doi: 10.3934/dcds.2020392
A. Ambrosetti, H. Brezis and G. Cerami, Combined effects of concave and convex nonlinearities in some elliptic problems, J. Funct. Anal., 122 (1994), 519-543. doi: 10.1006/jfan.1994.1078.
A. Ambrosetti, V. Felli and A. Malchiodi, Ground states of nonlinear Schrödinger equations with potentials vanishing at infinity, J. Eur. Math. Soc., 7 (2005), 117-144. doi: 10.4171/JEMS/24.
A. Bahrouni, H. Ounaies and V. D. Rădulescu, Bound state solutions of sublinear Schrödinger equations with lack of compactness, RACSAM, 113 (2019), 1191-1210. doi: 10.1007/s13398-018-0541-9.
A. Bahrouni, H. Ounaies and V. D. Rădulescu, Infinitely many solutions for a class of sublinear Schrödinger equations with indefinite potentials, Proc. Roy. Soc. Edinburgh Sect. A, 145 (2015), 445-465. doi: 10.1017/S0308210513001169.
H. Brezis and L. Oswald, Remarks on sublinear elliptic equations, Nonlinear Analysis: Theory, Methods & Applications, 1 (1986), 55-64. doi: 10.1016/0362-546X(86)90011-8.
H. Brezis and S. Kamin, Sublinear elliptic equations in $\mathbb{R}^N$, Manuscripta Math., 74 (1992), 87-106. doi: 10.1007/BF02567660.
H. Brezis and L. Nirenberg, Positive solutions of nonlinear elliptic equations involving critical Sobolev exponents, Comm. Pure Appl. Math., 36 (1983), 437-477. doi: 10.1002/cpa.3160360405.
J. Chabrowski and J. M. B. do Ó, On semilinear elliptic equations involving concave and convex nonlinearities, Math. Nachr., 233/234 (2002), 55-76. doi: 10.1002/1522-2616(200201)233:1<55::AID-MANA55>3.0.CO;2-R.
D. G. de Figueiredo, J-P Gossez and P. Ubilla, Local superlinearity and sublinearity for indefinite semilinear elliptic problems, J. Funct. Anal., 199 (2003), 452-467. doi: 10.1016/S0022-1236(02)00060-5.
D. G. de Figueiredo, J-P Gossez and P. Ubilla, Multiplicity results for a family of semilinear elliptic problems under local superlinearity and sublinearity, J. Eur. Math. Soc., 8 (2006), 269-286. doi: 10.4171/JEMS/52.
F. Gazzola and A. Malchiodi, Some remarks on the equation $-\Delta u = \lambda(1+u)^p$ for varying $\lambda, p$ and varying domains, Comm. Partial Differential Equations, 27 (2002), 809-845. doi: 10.1081/PDE-120002875.
D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, Springer-Verlag, 1983. doi: 10.1007/978-3-642-61798-0.
Q. Han and F. Lin, Elliptic Partial Differential Equations, Courant Lect. Notes Math., vol. 1, AMS, Providence, RI, 1997.
T-S Hsu and H-L Lin, Four positive solutions of semilinear elliptic equations involving concave and convex nonlinearities in $\mathbb{R}^n$, J. Math. Anal. Appl., 365 (2010), 758-775. doi: 10.1016/j.jmaa.2009.12.004.
Z. Liu and Z-Q Wang, Schrödinger equations with concave and convex nonlinearities, Z. angew. Math. Phys., 56 (2005), 609-629. doi: 10.1007/s00033-005-3115-6.
M. H. Protter and H. F. Weinberger, Maximum Principles in Differential Equations, Prentice Hall, Englewood Cliffs, New Jersey, 1967.
T-F Wu, Multiple positive solutions for a class of concave-convex elliptic problems in $\mathbb{R}^n$ involving sign-changing weight, J. Funct. Anal., 258 (2010), 99-131. doi: 10.1016/j.jfa.2009.08.005.
Effects of seasonality and land use on the diversity, relative abundance, and distribution of mosquitoes on St. Kitts, West Indies
Matthew J. Valentine (ORCID: orcid.org/0000-0001-5584-7813) 1,
Brenda Ciraola 1,
Gregory R. Jacobs (ORCID: orcid.org/0000-0002-3659-1581) 2,3,
Charlie Arnot 4,
Patrick J. Kelly 5 &
Courtney C. Murdock (ORCID: orcid.org/0000-0001-5966-1514) 2,3,6,7,8,9,10
Mosquito surveys that collect local data on mosquito species' abundances provide baseline data to help understand potential host-pathogen-mosquito relationships, predict disease transmission, and target mosquito control efforts.
We conducted an adult mosquito survey from November 2017 to March 2019 on St. Kitts, using Biogents Sentinel 2 traps, set monthly and run for 48-h intervals. We collected mosquitoes from a total of 30 sites distributed across agricultural, mangrove, rainforest, scrub and urban land covers. We investigated spatial variation in mosquito species richness across the island using a hierarchical Bayesian multi-species occupancy model. We developed a mixed effects negative binomial regression model to predict the effects of spatial variation in land cover, and seasonal variation in precipitation on observed counts of the most abundant mosquito species observed.
There was high variation among sites in mosquito community structure, and variation in site-level richness that correlated with scrub forest, agricultural, and urban land covers. The four most abundant species were Aedes taeniorhynchus, Culex quinquefasciatus, Aedes aegypti and Deinocerites magnus, and their relative abundance varied with season and land cover. Aedes aegypti was the most commonly occurring mosquito on the island, with a 90% probability of occurring at between 24 and 30 (median = 26) sites. Mangroves yielded the most mosquitoes, with Ae. taeniorhynchus, Cx. quinquefasciatus and De. magnus predominating. Psorophora pygmaea and Toxorhynchites guadeloupensis were only captured in scrub habitat. Capture rates in rainforests were low. Our count models also suggested that the extent to which monthly average precipitation influenced counts varied according to species.
There is high seasonality in mosquito abundances, and land cover influences the diversity, distribution, and relative abundance of species on St. Kitts. Further, human-adapted mosquito species (e.g. Ae. aegypti and Cx. quinquefasciatus) that are known vectors for many human-relevant pathogens (e.g. chikungunya, dengue and Zika viruses in the case of Ae. aegypti; West Nile, Spondweni, Oropouche virus, and equine encephalitic viruses in the case of Cx. quinquefasciatus) are the most widespread (across land covers) and the least responsive to seasonal variation in precipitation.
Mosquitoes are responsible for considerable human and animal suffering and economic losses because of their nuisance value and the diseases of high morbidity and mortality they can transmit [1, 2]. Recent mosquito-borne arboviral pandemics have been able to emerge and spread through human populations in previously unaffected regions, like the Americas [3, 4], due to the widespread presence and abundance of human-adapted mosquito species. Furthermore, mosquito-borne pathogens can become established in new areas if there are suitable animal reservoir populations and mosquito species that can transmit the organisms between these animals (and potentially to humans), as has occurred with yellow fever virus in South America [5, 6].
High quality mosquito surveys are an essential tool for predicting mosquito-borne disease transmission and for mosquito control [7, 8]. Surveys that collect fine resolution local data on mosquito species abundances provide fundamental baseline data on the composition of mosquito communities in a given area, the relative abundances of mosquito species within the community, and how the abundance of species and the composition of mosquito communities change across space and time. The development of population abundance models that leverage count data generated from these surveys, in turn, can be used to predict how mosquito abundances change seasonally and across different land covers. Information of this nature is crucial for describing potential host-pathogen-mosquito relationships in novel transmission foci, accurately predicting disease transmission, and for targeting and assessing the efficacy of mosquito control efforts [8, 9].
St. Kitts is a small tropical island in the Caribbean where local experience shows mosquitoes are very common and their nuisance value high. Outbreaks of chikungunya, dengue, and Zika viruses have recently occurred on the island, which also has a large population of African green monkeys (Chlorocebus aethiops sabaeus) that may be involved in arbovirus sylvatic cycles as is the case in Africa [6]. Due to the large numbers of tourists visiting the region each year, islands in the Caribbean like St. Kitts could be a source of mosquitoes, and the pathogens they carry, for transfer into currently naïve areas of the world like the USA [10].
Historically, there has been long standing interest in the mosquito species inhabiting the Caribbean particularly since it was discovered that malarial parasites (Plasmodium spp.) and yellow fever virus are transmitted by mosquitoes. Detailed mosquito surveys from the 1970s included several Caribbean islands including St. Kitts and Nevis [11]. The most recent survey on St. Kitts was conducted in 2010 [12] and although this was the most comprehensive survey performed on the island to date, it did not provide data on the how the distribution and relative abundances of mosquito species changes seasonally and with land cover. Mosquitoes were only collected during a single week of the dry and wet seasons and sampling did not include all land covers. As part of an investigation into arboviral sylvatic cycles on St. Kitts, we carried out a comprehensive survey of the mosquito populations across the various land covers on the island on a monthly basis from September 2017 to March 2019. We related mosquito survey data to relevant biological and environmental covariates to assess the influence of land use and seasonal climate variation (e.g. precipitation) on the spatial and temporal biodiversity and relative abundance of mosquitoes on St. Kitts. Below are a description of our methods and our findings.
St. Kitts (Fig. 1) is a 168 km2, geographically isolated, volcanic, Caribbean island located in the Lesser Antilles (17.33°N, 62.75°W). It has a population of approximately 40,000 people mostly inhabiting Basseterre, the capital, and a string of small village communities distributed along the main coastal road which circles the island. The climate in St. Kitts is tropical, driven by constant sea breezes with little seasonal temperature variation (27–30 °C). The wet season runs from May to November with risk of hurricanes from June to November. Rainforest covers the uninhabited, steep volcanic slopes in the center of the island, surrounded by lower gentler slopes consisting mostly of abandoned sugar cane fields or arable farmlands. The south east of the island is primarily an arid peninsula covered mainly in scrub with beaches, mangroves, and salt-ponds.
Fig. 1 Numbers and species of mosquitoes trapped on St. Kitts at 30 sites comprising six replicates in each of five land covers
Mosquito sampling
To estimate the diversity and relative abundance of mosquito species across different land covers and seasons, we evaluated counts and species identity of adult mosquitoes captured in trap arrays set across the island. Trapping was carried out monthly from November 2017 to March 2019 in each of five representative land covers unless there was inclement weather. Due to high spatial heterogeneity in potential habitat, we used a randomized simplified stratified sampling design [13, 14] to increase the precision of generating a representative sample of the mosquito community on St. Kitts. To do this, we stratified St. Kitts by the five common land uses on the island (agricultural, mangrove, rainforest, scrub and urban). We then created a grid of St. Kitts and randomly, when possible, selected six replicate sites at least 1 km apart in each of these five distinct land cover categories, producing 30 sites in total (Fig. 1). Final site selection was ultimately dependent on accessibility and landowner consent.
We used Biogents Sentinel 2 traps (BGS) (Biogents AG, Regensburg, Germany) baited with the BG-sentinel lure (Biogents AG, Germany) and carbon dioxide CO2. Carbon dioxide was generated by mixing 35 g of dried bread making yeast (Fleischmann's Active Dry Yeast, USA), 0.7 kg of unbranded white sugar, and approximately 2.5 l water in a 5 l water bottle. Carbon dioxide was delivered to each trap via a 5 m length of 5 mm (internal diameter) PVC tubing [15,16,17]. Although yeast generated CO2 will collect significantly fewer mosquitoes than dry ice or compressed CO2 from cylinders, it provides a useful alternative that is cheaper and more easily obtained than dry ice in tropical areas [15, 17, 18]. Traps were run monthly during the study period when possible for 48 h, with yeast-sugar solution, batteries, and catch bags replaced every 24 h. Trapped mosquitoes were transported to the research laboratory of Ross University School of Veterinary Medicine (RUSVM) and stored at -80 °C for later identification. After being rehydrated on chilled damp tissue paper, mosquitoes were identified on a chill table using morphological keys under a stereomicroscope (Cole Palmer, USA) at 10–40× magnification [19,20,21]. Counts of each mosquito species were recorded for each sampling date, land use, and location.
Estimating mosquito diversity
We used a hierarchical Bayesian parameterization of the multi-species occupancy model (MSOM) of Royle & Dorazio [22] and Dorazio et al. [23] with data augmentation [24] to estimate true species diversity and its variation among our surveyed sites, while accounting for inter-species heterogeneity in detection:
$$R=n+{\sum }_{i=1}^{{n}_{aug}}{w}_{n+i}$$
$${w}_{i} \sim \mathrm{Bernoulli}(\Omega )$$
$${Z}_{j,i} \sim \mathrm{Bernoulli}({\psi }_{j,i}\times {w}_{i} )$$
$${X}_{j,k,i} \sim \mathrm{Bernoulli}({p}_{j,k,i}\times {Z}_{j,i}),$$
where \(R\) is the posterior distribution of simulated species richness, \(n\) is the number of observed species, \({n}_{aug}\) is the number of augmented (all-zero capture history) species added to the dataset, \(Z\) is the latent occupancy variable, and \(X\) is the data. Species occurrence (\({\psi }_{j,i}\)) and detection (\({p}_{j,k,i}\)) were modeled as hierarchical random effects,
$$\mathrm{logit}\left({p}_{j,k,i}\right)={v}_{i}$$
$${v}_{i}\sim \mathrm{Normal}({\mu }_{v},{\tau }_{v})$$
$$\mathrm{logit}({\psi }_{j,i})={u}_{i}$$
$${u}_{i}\sim \mathrm{Normal}({\mu }_{u},{\tau }_{u})$$
We used weakly informative Gaussian priors for \({\mu }_{u}\) and \({\mu }_{v}\) with a mean of 0 and a standard deviation of 2.25 [25, 26] and vague gamma (\(r\) = 0.1, \(\lambda\)= 0.1) priors for their precision, \({\tau }_{u}\) and \({\tau }_{v}\). Omega was given a flat uniform (0, 1) prior. We also monitored derived measures of diversity: alpha-diversity (α, mean site-level species richness) and beta-diversity (β, ratio between regional and site-level species richness) [27], zeta-diversity (ζ, number of species present at all sites) [28], and the number of sites each species occupied. We fit our model in JAGS 4.3.0 [29] implemented in the program R [30] using the R package runJAGS [31]. Posterior parameter estimates were drawn from three 20,000-iteration MCMC chains following a 1000-iteration adaptation period, and 10,000 iterations of burn-in. Convergence was assessed using the \(\widehat{R}\) statistic [32] and by visually inspecting and comparing each MCMC chain's sample traces and posterior sampling distributions. We illustrated our overall diversity results by plotting the posterior median and 90% credible interval of our diversity metrics of interest (R, α, β, and ζ). We then illustrated among-site variation in species diversity with respect to percent local land cover by plotting the posterior median and 90% credible intervals of site-level richness estimates against site-level proportion of local land covers: scrub, agriculture, mangrove, rainforest and urban.
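For readers who want to see how such a model is typically coded, the following is a minimal sketch of the data-augmented MSOM written for JAGS and run from R with the runjags package used in this study. The object names (X, n_obs, n_aug, n_site, n_rep), the layout of the detection-history array, and the number of augmented species (15) are illustrative assumptions, not the authors' actual script.

```r
library(runjags)

# X is assumed to be a site x replicate x species binary array that already
# includes the all-zero detection histories for the n_aug augmented species.
msom_string <- "
model {
  Omega ~ dunif(0, 1)              # inclusion probability for augmented species
  mu_u ~ dnorm(0, pow(2.25, -2))   # weakly informative prior (sd = 2.25) on mean occupancy
  mu_v ~ dnorm(0, pow(2.25, -2))   # ... and on mean detection
  tau_u ~ dgamma(0.1, 0.1)         # vague precision priors
  tau_v ~ dgamma(0.1, 0.1)

  for (i in 1:(n_obs + n_aug)) {
    w[i] ~ dbern(Omega)            # is species i part of the community?
    u[i] ~ dnorm(mu_u, tau_u)      # species-level occupancy effect
    v[i] ~ dnorm(mu_v, tau_v)      # species-level detection effect
    logit(psi[i]) <- u[i]
    logit(p[i]) <- v[i]
    for (j in 1:n_site) {
      Z[j, i] ~ dbern(psi[i] * w[i])          # latent occupancy at site j
      for (k in 1:n_rep) {
        X[j, k, i] ~ dbern(p[i] * Z[j, i])    # observed detection history
      }
    }
  }
  R <- n_obs + sum(w[(n_obs + 1):(n_obs + n_aug)])   # simulated total richness
}"

fit <- run.jags(msom_string,
                monitor = c("R", "Omega", "mu_u", "mu_v"),
                data = list(X = X, n_obs = 10, n_aug = 15,
                            n_site = 30, n_rep = n_rep),
                n.chains = 3, adapt = 1000, burnin = 10000, sample = 20000)
summary(fit)   # posterior summaries and the Rhat convergence statistic
```

Derived quantities such as alpha-, beta-, and zeta-diversity can be added to the model block as further deterministic nodes and monitored in the same way.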
Estimating relative mosquito abundance
We evaluated influences of different land covers (agricultural, mangrove, rainforest, scrub, urban) on the relative abundance of the four most common mosquito species found in our survey. The land covers in a 1 km2 area (565 m radius) around each sampling site were determined from local observation, a remote sensing vegetation classification [33], the St. Christopher (St. Kitts) and Nevis Biodiversity Strategy and Action Plan [34], and the most recent Google Imagery (2019). When discordance in ascribing land covers was found between the different methods, the Google images were used preferentially. The percentages of each land cover at each site (Additional file 1: Figure S1) were calculated and used as a continuous covariate in establishing the models.
We assessed the effects of land cover at each trap location and monthly precipitation on the numbers of the four most common mosquito species trapped using mixed effects generalized linear regression models [30]. Our response variable for this analysis was the number of mosquitoes of each species captured by BGS traps during each 48-h trapping interval. A list of variables and general expectations of their effects on the counts of mosquitoes of each species captured can be found in Table 1. We also included two categorical variables reflecting the high affinity of some mosquito species (e.g. Ae. taeniorhynchus and De. magnus) for mangrove habitat and crabhole habitat located in the vicinity of mangroves [11, 35,36,37]. These variables included "mangrove", which described the land cover of sites that fell within mangrove habitats regardless of surrounding land covers, and "m_trait", which described a mosquito species preference for mangrove habitat. Monthly precipitation measurements were obtained at the Robert L. Bradshaw International Airport and accessed as archived data downloaded from the Weather Underground website (www.wunderground.com: accessed August 2019).
Table 1 Variables and associated hypotheses evaluated in statistical models
We fitted models to predict observed counts of the four most abundant mosquitoes in our dataset using spatial variation in landscape variables and seasonal variation in precipitation using the R package glmmTMB, which allows the specification of generalized linear mixed-effects models for a variety of error distributions, including Poisson and negative binomial distributions [38]. Preliminary analyses revealed that a negative binomial distribution with a quadratic variance-to-mean relationship best explained our data [39] (Additional file 2: Table S1), and we used this error distribution for all subsequent analyses. We assumed a linear relationship between overall mosquito counts (on the log-link scale) and monthly average precipitation at the island scale to account for intra-annual seasonality (e.g. wet vs dry seasons). Species-specific random slope and intercept terms for precipitation allow its effect to vary by species, and random site intercepts account for repeat observations at each site. The effects of land cover were allowed to vary independently by species. We evaluated 12 model hypotheses of species-specific variation in relative abundance with land cover. Land cover variables incorporated in our 12 models generally include the percentage of local land cover. We excluded any hypotheses for which variance inflation factors were greater than five prior to model evaluation and used AIC to select our best model from the candidate set [40]. To assess model fit, we evaluated the distribution of re-scaled model residuals from the R package DHARMa [41] and calculated conditional and marginal R2 values following Nakagawa et al. [42]. We used site-level predictions from our best model to show model-estimated trends in mosquito abundance across land use and season for the duration of our study.
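As a rough sketch of this workflow (not the authors' actual script), candidate error distributions can be compared and the chosen model checked in R roughly as follows; the long-format data frame mosq and its column names (count, precip, site) are assumptions for illustration.

```r
library(glmmTMB)
library(DHARMa)

# Compare error distributions for a simple baseline model
m_pois <- glmmTMB(count ~ precip + (1 | site), family = poisson, data = mosq)
m_nb2  <- glmmTMB(count ~ precip + (1 | site), family = nbinom2, data = mosq)  # quadratic variance-mean relationship
AIC(m_pois, m_nb2)        # the lower AIC identifies the better-supported error distribution

# Simulation-based residual diagnostics for the chosen model
res <- simulateResiduals(m_nb2)
plot(res)                 # approximately uniform residuals indicate adequate fit
testUniformity(res)       # one-sample Kolmogorov-Smirnov test
```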
From November 2017 to March 2019 we captured 10 of the 14 species previously recorded on St. Kitts [11, 12, 19] (Fig. 1, Additional file 3: Table S2 and Additional file 4: Table S3). We were unable to trap during the months of April and December 2018 due to inclement weather events. Voucher specimens were deposited in the United States National Museum (USNM) under the following catalog numbers USNMENT01239050-74 and USNMENT01239079. The most abundant species of mosquito was Aedes taeniorhynchus (n = 3861, mean = 276, SD = 643), which was primarily found in mangroves (88.4%). Culex quinquefasciatus (n = 1663, mean=119, SD = 121) was the second most abundant species primarily captured in urban areas (48.8%). Deinocerites magnus (n = 1577, mean = 113, SD = 150) and Aedes aegypti (n = 443, mean = 32, SD = 40) were the third and fourth most abundant mosquito species captured, respectively. Aedes aegypti (n = 443, mean = 89, SD = 161), Ae. taeniorhynchus (n = 3861, mean = 772, SD = 1830), Cx. quinquefasciatus (n = 1663, mean = 333, SD = 628), and Deinocerites magnus (n = 1577, mean = 315, SD = 797) were species captured in all five land covers. All other species were much less abundant. Psorophora pygmaea and Toxorhynchites guadeloupensis were only captured in scrub habitat, with the remaining species being distributed across more than one land cover. The highest overall number of mosquitoes captured, mostly Ae. taeniorhynchus, were caught in November 2018 (n = 3786), and monthly mean average catches were higher in general during the wet season (n = 1080) than the dry season (n = 177). Aedes aegypti was the only species to be captured during every trapping month. Only four Anopheles albimanus, the main vector of malaria in the Caribbean [43], were caught across the entire survey period. Finally, due to specimen damage during capture and transport to the laboratory, a small proportion of Aedes spp. (n = 305) and a larger portion of Culex spp. (n = 1687) were only reported to the genus level (Additional file 3: Table S2 and Additional file 4: Table S3).
Mosquito community diversity
Our diversity analysis predicted that there are more mosquito species on St. Kitts than we directly observed, but by a reasonably low margin. We observed 10 species in our survey, while our model indicates that true species richness (R) on St. Kitts falls within 10–18 species (90% Bayesian credible interval), with a median of 13 species (Table 2). The logit-mean occupancy (µu) and detectability (µv) parameters indicated a mean occupancy probability of 31% and a mean detection probability of 9% for species present on the island (Table 2). We also predicted the number of species per site (alpha-diversity, α) to be 3–7 species (median: 4 species), the change in diversity of species among sites (beta-diversity, β) to be 1–5 species (median: 3 species), and the number of species present at all sites (zeta-diversity, ζ) to range from 0–1 species (median: 0 species) (Table 2, Fig. 2). Two highly abundant species (Ae. aegypti and Cx. quinquefasciatus), that are also important disease vectors, had relatively high predicted occupancy across our 30 study sites (Ae. aegypti: 26 sites; Cx. quinquefasciatus: 22 sites; Table 2). Finally, we observed potential effects of local land cover on overall species richness, with the overall richness increasing in response to the percentage of scrub and mangrove land cover and decreasing with the percentage of agricultural and urban land cover (Fig. 2).
Table 2 Median values and 90% credible interval for parameter estimates, mosquito community diversity metrics, and the number of sites predicted to have Aedes aegypti and Culex quinquefasciatus from our multiple species occupancy model
Fig. 2 Median values for species richness and 90% credible intervals for each site from our multi-species occupancy model are plotted against the percentage of scrub forest (a), agriculture (b), mangrove (c), rainforest (d), and urban (e) land covers. f Median values and 90% credible intervals of regional species richness (R), alpha-diversity (α), beta-diversity (β), and zeta-diversity (ζ)
Mosquito abundance is affected by land cover and seasonal precipitation
The best model of temporal variation in the relative abundance of the four most common mosquito species on the island of St. Kitts was model H12 from Table 3. This model included the effects of monthly precipitation, an interaction between the mangrove-breeding trait and mangrove sites, and species-specific effects of agriculture, urban, and rainforest land covers. Our best model can be expressed in pseudo-code as
Table 3 A list of main effects of all candidate models considered in analyses of land cover effects on mosquito relative abundance
$$\begin{aligned} y_{(i,j,s)} &\sim \mathrm{NegBin}(\mu_{(i,j,s)}, \theta) \\ \mathrm{log}({\mu_{(i,j,s)}}) &= \beta_{0} + \alpha_{0(s)} + (\beta_{1} + \alpha_{1(s)})Precip_{(j)} + \beta_{2}m_{trait(s)} + \beta_{3} Mangrove_{(i)} + \beta_{4(s)}LocalAgriculture_{(i)} + \beta_{5(s)}LocalUrban_{(i)} + \beta_{6(s)}LocalRainforest_{(i)} + \beta_{7}m_{trait(s)}Mangrove_{(i)} + \alpha_{2(i)} \end{aligned}$$
where \(i\), \(j\), and \(s\) denote indices for site, month, and species, respectively. \(\mathrm{NegBin}\) reflects the negative binomial distribution, and the final model includes the following: random intercepts for each species (\({\beta }_{0}+{\alpha }_{0(s)}\)), a main effect of precipitation with species-specific random slopes (\(({\beta }_{1}+{\alpha }_{1(s)})Preci{p}_{(j)}\)), an interaction between mangrove and the mangrove breeding trait (\({{{\beta }_{2}{m\_trait}_{(s)}+\beta }_{3}Mangrov{e}_{(i)}+\beta }_{7}{m\_trait}_{(s)}Mangrov{e}_{(i)}\)), species-specific main effects on proportional local land cover variables (\({{\beta }_{4(s)}LocalAgricultur{e}_{(i)}+\beta }_{5(s)}LocalUrba{n}_{(i)}+{\beta }_{6(s)}LocalRainfores{t}_{(i)}\)), and a site-level random intercept term (\({\alpha }_{2(i)}\)). The variance for our best negative binomial model scales quadratically with the mean (\(\mu\)): \(Var({y}_{(i,j,s)})={\mu }_{(i,j,s)}(1+\frac{{\mu }_{(i,j,s)}}{\phi })\) [39]. Our best model's predicted relative abundance for each mosquito species, averaged across each site's land cover, illustrated species-specific responses to surrounding land cover and seasonal precipitation effects across the island (Table 4, Fig. 3, Additional file 5: Figure S2). Overall, we observed a positive effect of precipitation on the relative abundance of mosquitoes (\({\beta }_{1}\)), with significant among-species variation in this relationship (\({\alpha }_{1(s)}\), Table 4). To further explore the effect of precipitation, we derived the best linear unbiased predictors (BLUPs) for each species' precipitation effect. The BLUPs suggested that the effect of precipitation was strongest for Ae. taeniorhynchus and to a lesser extent Ae. aegypti (Fig. 4), and was weakest for Cx. quinquefasciatus and De. magnus. The model also predicted significantly negative relationships between urban land cover and Ae. taeniorhynchus and De. magnus, but only slightly positive, non-significant relationships between urban land cover and the urban-associated mosquitoes, Cx. quinquefasciatus and Ae. aegypti. Rainforest and agricultural land cover were negatively associated with the relative abundance of all species we considered except Ae. aegypti, which exhibited no significant covariation with either rainforest or agricultural land cover (Table 4). As expected, the model predicts the relative abundance of mosquito species with a mangrove breeding preference (Ae. taeniorhynchus and De. magnus) to be lower than average except when trapping sites occur within mangrove habitat where the mangrove "trait" had a net positive effect (i.e. \(({\beta }_{2}+{\beta }_{7})>0\), Table 4). Inspection of re-scaled residuals generated by simulation from the fitted model [41] suggested uniformity in the distribution of residuals (one-sample Kolmogorov-Smirnov test: D = 0.018, P = 0.671), indicating good concurrence between the data and model predictions. Marginal and conditional R2 (0.601 and 0.696, respectively) also indicated a well-fit model (Table 4): the marginal R2 of 0.601 suggested that the fixed effects portion of the model explained over 60% of the variation in counts, and the conditional R2 of 0.695 revealed the additional variance explained by accounting for additional variation attributable to the random effects [42].
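A hedged translation of the best model (H12) above into glmmTMB syntax, together with extraction of the species-specific precipitation effects (BLUPs), might look like the sketch below; the column coding of the covariates (m_trait, mangrove, agriculture, urban, rainforest, species, site) is an assumption about the data frame, not the authors' code.

```r
best <- glmmTMB(
  count ~ precip                         # overall precipitation effect (beta_1)
        + m_trait * mangrove             # mangrove trait, mangrove site, and their interaction
        + agriculture:species            # species-specific land cover slopes (beta_4(s), beta_5(s), beta_6(s))
        + urban:species
        + rainforest:species
        + (1 + precip | species)         # species random intercepts and precipitation slopes
        + (1 | site),                    # random site intercept
  family = nbinom2, data = mosq)

ranef(best)$cond$species          # BLUPs: per-species deviations in intercept and precipitation slope
performance::r2_nakagawa(best)    # marginal and conditional R2 (requires the performance package)
```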
Table 4 Model parameters and diagnostics from our best model of mosquito counts. Means and confidence intervals are given on the log-scale, and symbols correspond to parameters in equation 1
Fig. 3 Time series plot of predicted relative abundance (conditional on random effects) from our best model for each site land cover (line color) and for the four major species we captured (panel). Solid lines denote the average predicted relative abundance of mosquito species across the six sites located within that land cover
Fig. 4 Best linear unbiased predictors (BLUPs) for the effect of precipitation on mosquito counts
Monthly mosquito surveillance on St. Kitts from November 2017 to March 2019 enabled us to capture a diversity of mosquito species that varied in abundance across seasons and land cover types. We captured 10 of the 14 species (5 genera) historically recorded on the island [11, 12, 19]. While our results largely confirm those of the 2010 survey, they provide higher spatial and temporal resolution of the mosquito community diversity, as well as the relative abundance and distribution of different mosquito species on the island.
Our multi-species occupancy model demonstrates that we were able to capture most mosquito species on St. Kitts during the survey period and that any detection failures in other mosquito species on the island are likely attributed to low rates of occurrence. Thus, the species detected in our survey likely are an accurate reflection of the true mosquito community on St. Kitts. While we were able to detect the majority of species predicted to be present on St. Kitts, our actual ability to detect a given mosquito species was low on average (9% average detection rate). That being said, multi-species occupancy models are robust to low detection probabilities as long as mean site-level occupancy is relatively high, which it was in this study (30%) [44]. Using the site-level species richness estimated from our multi-species occupancy model, we noted some potential correlation between percentage of local land cover and mosquito species richness. However, our values for beta- and zeta-diversity indicate species turnover across sites and few to no species found everywhere. The effects of land use on site-level species richness may be masked by species replacement driven by species-specific responses to landscape variables, which warrants further investigation into species-specific responses to landscape variation.
The four most abundant species captured were Ae. taeniorhynchus, Cx. quinquefasciatus, Ae. aegypti and De. magnus. These species were detected at least once in all land cover types over the study period. The three most abundant species captured are competent vectors of pathogens recorded on St. Kitts (Cx. quinquefasciatus: West Nile virus and Dirofilaria immitis; Ae. taeniorhynchus: D. immitis; and Ae. aegypti: dengue, chikungunya and Zika viruses) [45,46,47,48,49] and other pathogens that could potentially become established on the island in the future due to the abundance of their vectors. For example, Cx. quinquefasciatus, the southern house mosquito, is widely distributed across the subtropics and can also transmit Saint Louis encephalitis virus, Western equine encephalitis virus, Rift Valley fever virus, Wuchereria bancrofti and avian malaria [21]. Aedes taeniorhynchus, the black salt marsh mosquito, is widely distributed in all islands of the Caribbean and can transmit Venezuelan, Eastern and Western equine encephalitis virus [50, 51]. The eponymous yellow fever mosquito, Ae. aegypti, can also transmit yellow fever virus [21]. While An. albimanus is a known malaria vector in Central America, northern South America, and the Caribbean, the overall low abundance of this species on St. Kitts (only four individuals total were collected in our study and two individuals in 2010 [12]), suggests this species is unlikely to support malaria transmission on the island.
The presence and absence, as well as overall relative abundance, of particular mosquito species captured across the different land covers on St. Kitts broadly align with what is known for these species in the literature. The species count model for our four most abundant mosquitoes predicts both Cx. quinquefasciatus and Ae. aegypti to have the highest relative abundance in an urban habitat. Both species breed most successfully in fresh water-filled man-made containers and are therefore found primarily around houses in urban environments. Further, Ae. aegypti preferentially feeds on human hosts, particularly when indoors [21, 52] and rests inside domestic dwellings [21]. The fact that we observed these species, albeit at lower abundances, in other land covers is not entirely surprising. The model predicted Ae. aegypti to be similarly abundant across the survey period in urban as well as agricultural habitats. From local experience and surveys on other islands [53], agricultural land covers provide ample breeding habitats for container breeding mosquito species in the form of discarded tires, styrofoam containers, plastic water bottles and bags, and agricultural equipment in which water can collect.
Interestingly, the model also predicted moderate relative abundance across the survey period for both Cx. quinquefasciatus and Ae. aegypti in mangrove and scrub habitats, which may be due to the presence of artificial containers suitable for breeding or an increased tolerance to brackish water in high marsh areas of the mangrove [36, 53]. Culex quinquefasciatus has been recorded previously in brackish water on St. Kitts [11] and Ae. aegypti is tolerant of brackish water in other coastal regions [54,55,56,57]. Additionally, Cx. quinquefasciatus is an opportunistic forager that has the ability to fly several hundred meters [58]; thus, adults could be found in areas far removed from their larval breeding sites. Finally, Ae. aegypti was also predicted to be abundant, albeit at lower levels, in rainforest habitat across the trapping period. Elsewhere in the Caribbean, Ae. aegypti have been found breeding in more natural habitats in addition to artificial containers [37]. These include rock holes, calabashes, tree holes, leaf axils, bamboo joints, papaya stumps, coconut shells, bromeliads, ground pools, coral rock holes, crab holes, and conch shells which are all also present on St. Kitts.
Species with the highest capture rates and predicted by our model to have high relative abundance in mangrove habitats included both Ae. taeniorhynchus and De. magnus. These two species were predicted to have relatively low abundance in scrub surrounding mangrove sites on the island, and were not predicted to be abundant in other land cover types. Aedes taeniorhynchus, the black salt marsh mosquito, is widely distributed in all islands of the Caribbean where it also favors low lying marsh land as is the case on St. Kitts [36, 51]. Similarly, De. magnus, the crabhole mosquito, is found primarily in crabholes that are abundant in the soft sands in mangrove habitats around the Caribbean [11, 59]. We also captured Culex nigripalpus, An. albimanus, Ps. pygmaea, and Aedes tortilis in mangroves or the surrounding scrub land cover, likely due to their preference for breeding in temporary brackish and/or fresh water sources [11, 19]. We did not include these species in our relative abundance model due to low capture rates.
Generally, we had very low capture rates of mosquitoes in rainforest land cover which was consistent with our model's predictions of a negative effect of rainforest cover on the relative abundance of the four most abundant species we trapped. This is not necessarily reflective of the results of surveys from other regions of the Caribbean. For example, a study in forested areas of eastern Trinidad between July 2007 and March 2009 collected 185,397 mosquitoes across 46 species [59]. Although this study was of a similar duration, our low capture rates in rainforest land cover might reflect (i) less breeding habitat or fewer vertebrate hosts present in the rainforest, (ii) different sampling methods across the studies (e.g. CDC light traps deployed with CO2 lures used vs. BGS traps baited with the human lure or CDC light traps baited with sugar-yeast CO2 lures [15]), and (iii) frequency of trapping effort (weekly vs monthly). It seems unlikely that our trapping at ground level may have excluded mosquito species that thrive in tree-top habitats because several arboviral surveillance studies in forests in Brazil [60], New Mexico [61], and other sites across the USA [62] demonstrate that various trapping methods (e.g. entomological nets, aspirators, and CDC light traps) set in the canopy did not catch significantly more mosquitoes than those on the ground. We did capture one Aedes busckii in the rainforest during our survey. A previous survey [11] as well as some experience with larval surveys of tree holes in rainforest habitats on St. Kitts (data not shown) suggest Ae. busckii could be a rain forest habitat specialist, but more data are needed. In general, little is known about the ecology of this mosquito other than that it is confined to the Lesser Antilles (Dominica, Grenada, Guadeloupe, Martinique, Montserrat, St. Kitts and Nevis and Saint Lucia) [63]. We also captured Cx. quinquefasciatus, Ae. taeniorhynchus and De. magnus at very low rates in the rainforest (Additional file 3: Table S2). We speculate that these captures could be the result of mosquitoes either breeding in man-made containers in neighboring agricultural habitat (Cx. quinquefasciatus) or mosquitoes being blown into novel habitats during tropical storms (Ae. taeniorhynchus and De. magnus). In the case of De. magnus, land crabs (Gecarcinus ruricola) do inhabit the rainforest, which could provide breeding and resting sites in their crabhole burrows for this specialist mosquito species [59].
Mosquito capture rates were also strongly determined by time of season and precipitation throughout the survey period. In general, our species count model predicted a positive effect of precipitation on the relative abundance of our four most common mosquito species. The effects of precipitation we found could be due to several reasons. Although excess rain may flush larvae from their habitats and decrease adult mosquito populations [64, 65], a seasonal increase in precipitation increases the abundance and persistence of larval habitats resulting in higher densities and overall capture rates [65,66,67,68]. Additionally, increased precipitation is associated with increased relative humidity, which has been shown to have important positive effects on the abundance [68,69,70], lifespan [70, 71], and activity and questing behavior [72, 73] of adult mosquitoes. Interestingly, the effect of precipitation on relative abundance was species-specific among the four most common mosquito species we found. Our post-hoc assessment of our model suggests the effect of precipitation had a strong effect on the relative abundances of Ae. taeniorhynchus, a moderate effect on Ae. aegypti, and smaller effects on Cx. quinquefasciatus and De. magnus. The strong effect of precipitation on Ae. taeniorhynchus might occur because this mosquito species relies largely on natural habitats, which are often dependent on local rainfall. While Ae. aegypti utilizes artificial, and human watered containers heavily for ovipositing, it is also known to oviposit in natural habitats on other Caribbean islands [37], which could become more abundant with increased rainfall. Culex quinquefasciatus might be less dependent on rainfall, as most individuals were captured in urban habitat and were most likely emerging from persistent, human-watered, artificial habitats. Whereas increased rainfall above a certain threshold might expand water bodies in mangroves, flooding crabholes that in turn could locally reduce breeding sites for De. magnus [58].
While the overall capture rates of Ae. aegypti were significantly lower across non-urban habitats, their presence in other land covers on St. Kitts and other Caribbean islands [11, 37] could have several implications for our understanding of the general ecology of this species and transmission of arboviruses in the Caribbean. Mosquitoes living across these land covers likely experience variation in local microclimate [69, 74], quality and quantity of oviposition sites [11, 37], and access to vertebrate species available for blood-feeding [75]. This variation in turn could result in potential disease transmission among sylvatic reservoirs (e.g. non-human primates) in some habitats and differential exposure of human populations to infectious mosquitoes on the island. We are currently conducting studies to confirm the presence of reproducing Ae. aegypti adults across each land cover and using blood-meal analysis and bait trapping to identify novel mosquito-host associations.
Our study provides a more comprehensive spatial and temporal (within-year) picture of the distribution of mosquito species on St. Kitts relative to previous surveys [11, 12, 19]. However, it suffers from several minor limitations. Due to specimen damage during capture and transport to the laboratory, a proportion of Aedes spp. (n = 219) and Culex spp. (n = 1694) were reported only to genus level. These counts differed substantially from those identified to species level, which comprised 4334 Aedes spp. and 1697 Culex spp. in total. Based on the capture location of these specimens, the majority of these individuals are likely Ae. taeniorhynchus (mangrove) and Cx. quinquefasciatus (urban), respectively. By not incorporating the unidentified mosquitoes into our relative abundance and diversity analyses, we are inherently assuming that each mosquito species has an equal chance of being unidentified. This assumption could be violated if the ability to identify specimens varies by species (e.g. species that tend to be captured at higher numbers may sustain more damage during trapping and handling), which could have implications for both our diversity and relative abundance models. For example, capture rates for Ae. tortilis and Cx. nigripalpus are likely underestimated. However, we believe these effects to be minimal. For the relative abundance model, we selected the four most abundant species for whom violation of this assumption would have a small chance of affecting the relative proportion of captured individuals. In the diversity analysis, we used a model that estimates variation in detectability by species, which will help account for violations of this assumption because it allows rarer species to be detected less often.
Our island-wide mosquito survey has demonstrated that the species detected in our survey are a good representation of the mosquito community on St. Kitts. Further, the community of mosquitoes on the island is highly structured and likely shaped by local land cover. We also found substantial effects of land cover and seasonality (likely driven by variation in precipitation) on mosquito capture rates. Interesting insights gained from this study include the presence of Ae. aegypti in all the land covers we studied, which could have important implications for mosquito-borne disease transmission on the island. Further, human-adapted mosquito species (e.g. Ae. aegypti and Cx. quinquefasciatus) that are known vectors for many human-relevant pathogens (e.g. chikungunya, dengue and Zika viruses in the case of Ae. aegypti; West Nile, Spondweni, Oropouche virus, and equine encephalitic viruses in the case of Cx. quinquefasciatus) are the most widespread (across land covers) and the least responsive to seasonal variation in precipitation. This somewhat counters the current literature suggesting Ae. aegypti is primarily found in highly urban habitats and feeds almost exclusively on human hosts. Finally, although Aedes albopictus occurs on other Caribbean islands [76], we did not find this species in our survey. Ongoing surveillance will be important, as changes in land use and climate could lead to shifts in mosquito community composition, host contact rates, and mosquito-borne disease transmission in humans and animals.
The datasets supporting the conclusions of this article are included within the article and its additional files. Voucher specimens of some of the mosquito specimens associated with this study were deposited in the United States National Museum (USNM) under the following catalog numbers USNMENT01239050-74 and USNMENT01239079.
BGS: Biogents Sentinel 2 trap
CO2: Carbon dioxide
CDC: Centers for Disease Control and Prevention (USA)
RUSVM: Ross University School of Veterinary Medicine
USNM: United States National Museum
WHO. Global Burden of Major Vector-borne Diseases, as of March 2017. Geneva: World Health Organization; 2018. https://www.who.int/vector-control/burden_vector-borne_diseases.pdf Accessed 13 Jun 2019.
WHO. Vector Borne Diseases Factsheet. Geneva: World Health Organization. https://www.who.int/en/news-room/fact-sheets/detail/vector-borne-diseases. Accessed 13 Jun 2019.
WHO. Chikungunya. Geneva: World Health Organization. https://www.who.int/news-room/fact-sheets/detail/chikungunya. Accessed 13 Jun 2019.
WHO. Zika. Geneva: World Health Organization. https://www.who.int/news-room/fact-sheets/detail/zika-virus. Accessed 13 Jun 2019.
Figueiredo LTM. Human urban arboviruses can infect wild animals and jump to sylvatic maintenance cycles in South America. Front Cell Infect Microbiol. 2019;9:259.
Valentine MJ, Murdock CC, Kelly PJ. Sylvatic cycles of arboviruses in non-human primates. Parasit Vectors. 2019;12:463.
Diallo D, Diagne CT, Buenemann M, Ba Y, Dia I, Faye O, et al. Biodiversity pattern of mosquitoes in southeastern Senegal, epidemiological implication in arbovirus and malaria transmission. J Med Entomol. 2018;56:453–63.
Cornel AJ, Lee Y, Almeida APG, Johnson T, Mouatcho J, Venter M, et al. Mosquito community composition in South Africa and some neighboring countries. Parasit Vectors. 2018;11:331.
Shragai T, Tesla B, Murdock C, Harrington LC. Zika and chikungunya: mosquito-borne viruses in a changing world. Ann N Y Acad Sci. 2017;1399:61–77.
Mavian C, Dulcey M, Munoz O, Salemi M, Vittor AY, Capua I. Islands as hotspots for emerging mosquito-borne viruses: a One Health perspective. Viruses. 2019;11:11.
Belkin JN, Heinemann SJ. Collection records of the project of "Mosquitoes of middle America" 4. Leeward Islands Anguilla (ANG), Antigua (ANT), Barbuda (BAB), Montserrat (MNT), Nevis (NVS), St. Kitts (KIT). Mosq Syst. 1976;8:123–62.
Mohammed H, Evanson J, Revan F, Lee E, Krecek RC, Smith J. A mosquito survey of the twin-island Caribbean nation of Saint Kitts and Nevis, 2010. J Am Mosq Control Assoc. 2015;31:360–3.
Andrew NL, Mapstone BD. Sampling and the description of spatial pattern in marine ecology. Oceanogr Mar Biol Ann Rev. 1987;25:39–90.
Cochran W. Sampling Techniques. New York: Wiley; 1977.
Saitoh Y, Hattori J, Chinone S, Nihei N, Tsuda Y, Kurahashi H, et al. Yeast-generated CO2 as a convenient source of carbon dioxide for adult mosquito sampling. J Am Mosq Control Assoc. 2004;20:261–4.
Smallegange RC, Schmied WH, van Roey KJ, Verhulst NO, Spitzen J, Mukabana WR, et al. Sugar-fermenting yeast as an organic source of carbon dioxide to attract the malaria mosquito Anopheles gambiae. Malar J. 2010;9:292.
Hapairai LK, Joseph H, Sang MAC, Melrose W, Ritchie SA, Burkot TR, et al. Field evaluation of selected traps and lures for monitoring the filarial and arbovirus vector, Aedes polynesiensis (Diptera: Culicidae), in French Polynesia. J Med Entomol. 2013;50:731–9.
Van Roey KJ. Yeast-generated carbon dioxide as a mosquito attractant. Thesis: Wageningen University, Wageningen; 2009.
Belkin JN, Heinemann SJ, Page WA. The Culicidae of Jamaica (Insecta, Diptera). Contrib Am Entomol Inst. 1970;6:1–458.
Darsie RFJ, Ward RA. Identification and geographical distribution of the mosquitoes of North America, north of Mexico. Gainsville: University Press of Florida; 2005.
Burkett-Cadena ND. Mosquitoes of the southeastern United States. Tuscaloosa, Alabama: The University of Alabama Press; 2013.
Dorazio RM, Royle JA. Estimating size and composition of biological communities by modeling the occurrence of species. J Am Stat Assoc. 2005;100:389–98.
Dorazio RM, Royle JA, Söderström B, Glimskär A. Estimating species richness and accumulation by modeling species occurrence and detectability. Ecology. 2006;87:842–54.
Royle JA, Dorazio RM. Parameter-expanded data augmentation for Bayesian analysis of capture-recapture models. J Ornithol. 2012;152:521–37.
Hobbs NT, Hooten MB. Bayesian Models: A Statistical Primer for Ecologists. New Jersey: Princeton University Press; 2015.
Northrup JM, Gerber BD. A comment on priors for Bayesian occupancy models. PLoS ONE. 2018;13:e0192819.
Whittaker RH. Vegetation of the Siskiyou Mountains, Oregon and California. Ecol Monogr. 1960;30:279–338.
Hui C, McGeoch MA. Zeta diversity as a concept and metric that unifies incidence-based biodiversity patterns. Am Nat. 2014;184:684–94.
Plummer M. JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. Version 4.3.0. 2017.
R Development Core Team. R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2019. https://www.R-project.org/.
Denwood MJ. runjags: An R package providing interface utilities, model templates, parallel computing methods and additional distributions for MCMC Models in JAGS. J Stat Soft. 2016;71:1–25.
Gelman A, Rubin DB. Inference from iterative simulation using multiple sequences. Stat Sci. 1992;7:457–72.
Helmer EH, Kennaway TA, Pedreros DH, Clark ML, Marcano-Vega H, Tieszen LL, et al. Land cover and forest formation distributions for St. Kitts, Nevis, St. Eustatius, Grenada and Barbados from decision tree classification of cloud-cleared satellite imagery. Caribb J Sci. 2008;44:175–98.
Ministry of Sustainable Development. St. Christopher (St. Kitts) and Nevis Biodiversity Strategy and Action Plan., Basseterre. 2014. https://www.cbd.int/doc/world/kn/kn-nbsap-v2-en.pdf.
Lane J. Neotropical Culicidae. Sao Paulo, Brazil: University of Sao Paulo; 1953.
Ritchie SA. Mosquito Control Handbook: Salt Marshes and Mangrove Forests. Gainesville: University of Florida; 1992.
Chadee DD, Ward RA, Novak RJ. Natural habitats of Aedes aegypti in the Caribbean - a review. J Am Mosq Control Assoc. 1998;14:5–11.
Brooks ME, Kristensen K, van Benthem KJ, Magnusson A, Berg CW, Nielsen A, et al. glmmTMB balances speed and flexibility among packages for zero-inflated generalized linear mixed modeling. R journal. 2017;9:378–400.
Hardin JW, Hilbe JM. Generalized Linear Models and Extensions. 4th ed. College Station, Texas: Stata Press; 2007.
Burnham KP, Anderson DR, editors. Practical use of the information-theoretic approach. Model Selection and Inference. New York, New York: Springer; 1998.
Hartig F. Residual diagnostics for hierarchical (multi-level/mixed) regression models. R package version 0.2. 4. 2019.
Nakagawa S, Johnson PC, Schielzeth H. The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded. J R Soc Interface. 2017;14:20170213.
Briët OJT, Impoinvil DE, Chitnis N, Pothin E, Lemoine JF, Frederic J, et al. Models of effectiveness of interventions against malaria transmitted by Anopheles albimanus. Malar J. 2019;181:263.
Tingley MW, Nadeau CP, Sandor ME. Multi-species occupancy models as robust estimators of community richness. Methods Ecol Evol. 2020;11:633–42.
Conan A, Napier P, Shell L, Knobel DL, Dundas J, Scorpio D, et al. Heterogeneous distribution of Dirofilaria immitis in dogs in St. Kitts, West Indies, 2014–2015. Vet Parasitol: Regional Studies and Reports. 2017;10:139–42.
Bolfa P, Jeon I, Loftis A, Leslie T, Marchi S, Sithole F, et al. Detection of West Nile virus and other common equine viruses in three locations from the Leeward Islands. West Indies Acta Trop. 2017;174:24–8.
PAHO/WHO. Data - Dengue cases. Pan American Health Organization/World Health Organization; 2020. https://www.paho.org/data/index.php/en/mnu-topics/indicadores-dengue-en/dengue-nacional-en/252-dengue-pais-ano-en.html. Accessed 6 Aug 2019.
PAHO/WHO. Chikungunya Data, Maps and statistics. Pan American Health Organization/World Health Organization; 2014. https://www.paho.org/hq/index.php?option=com_topics&view=rdmore&cid=5927&item=chikungunya&type=statistics&Itemid=40931&lang=en. Accessed 6 Aug 2019.
PAHO/WHO. Zika Cumulative Cases. Pan American Health Organization/World Health Organization; 2016. https://www.paho.org/hq/index.php?option=com_content&view=article&id=12390:zika-cumulative-cases&Itemid=42090&lang=en. Accessed 6 Aug 2019.
Barrera R, MacKay A, Amador M, Vasquez J, Smith J, Díaz A, et al. Mosquito vectors of West Nile virus during an epizootic outbreak in Puerto Rico. J Med Entomol. 2014;47:1185–95.
Agramonte NM, Connelly CR. Black salt marsh mosquito Aedes taeniorhynchus (Wiedemann). In: EDIS.EENY591. University of Florida. 2014. Accessed 4 Apr 2020.
Harrington LC, Edman JD, Scott TW. Why do female Aedes aegypti (Diptera: Culicidae) feed preferentially and frequently on human blood? J Med Entomol. 2001;38:411–22.
Chadee DD, Huntley S, Focks DA, Chen AA. Aedes aegypti in Jamaica, West Indies: container productivity profiles to inform control strategies. Trop Med Int Health. 2009;14:220–7.
Roberts D. Mosquitoes (Diptera: Culicidae) breeding in brackish water: female ovipositional preferences or larval survival? J Med Entomol. 1996;33:525–30.
Ramasamy R, Surendran SN, Jude PJ, Dharshini S, Vinobaba M. Larval development of Aedes aegypti and Aedes albopictus in peri-urban brackish water and its implications for transmission of arboviral diseases. PLoS Negl Trop Dis. 2011;5:e1369.
Ramasamy R, Jude PJ, Veluppillai T, Eswaramohan T, Surendran SN. Biological differences between brackish and fresh water-derived Aedes aegypti from two locations in the Jaffna peninsula of Sri Lanka and the implications for arboviral disease transmission. PLoS ONE. 2014;9:e104977.
de Brito AM, Mucci LF, Serpa LLN, de Moura RM. Effect of salinity on the behavior of Aedes aegypti populations from the coast and plateau of southeastern Brazil. J Vector Borne Dis. 2015;52:79.
Verdonschot P, Besse-Lototskaya A. Flight distance of mosquitoes (Culicidae): a metadata analysis to support the management of barrier zones around rewetted and newly constructed wetlands. Limnologica. 2014;45:69–79.
Belkin JN, Hogue CL. A review of the crabhole mosquitoes of the genus Deinocerites (Diptera, Culicidae). Berkeley: University of California Press; 1959.
Lira-Vieira AR, Gurgel-Gonçalves R, Moreira IM, Yoshizawa MA, Coutinho ML, Prado PS, et al. Ecological aspects of mosquitoes (Diptera: Culicidae) in the gallery forest of Brasília National Park, Brazil, with an emphasis on potential vectors of yellow fever. Rev Soc Bras Med Trop. 2013;46:566–74.
DiMenna MA, Bueno R Jr, Parmenter RR, Norris DE, Sheyka JM, Molina JL, et al. Comparison of mosquito trapping method efficacy for West Nile virus surveillance in New Mexico. J Am Mosq Control Assoc. 2006;22:246–53.
Andreadis TG, Armstrong PM. A two-year evaluation of elevated canopy trapping for Culex mosquitoes and West Nile virus in an operational surveillance program in the northeastern United States. J Am Mosq Control Assoc. 2007;23:137–48.
WRBU. Systematic catalog of Culicidae. Walter Reed Biosystematics Unit, Smithsonian Institution, Washington D.C. 2015. https://www.mosquitocatalog.org/taxon_descr.aspx?ID=15654. Accessed 4 Apr 2020.
DeGaetano AT. Meteorological effects on adult mosquito (Culex) populations in metropolitan New Jersey. Int J Biometeorol. 2005;49:345–53.
Dieng H, Rahman GS, Hassan AA, Salmah MC, Satho T, Miake F, et al. The effects of simulated rainfall on immature population dynamics of Aedes albopictus and female oviposition. Int J Biometeorol. 2012;56:113–20.
Reisen WK, Cayan D, Tyree M, Barker CM, Eldridge B, Dettinger M. Impact of climate variation on mosquito abundance in California. J Vector Ecol. 2008;33:89–98.
De Little SC, Bowman DM, Whelan PI, Brook BW, Bradshaw CJ. Quantifying the drivers of larval density patterns in two tropical mosquito species to maximize control efficiency. Environ Entomol. 2009;38:1013–21.
Asigau S, Parker PG. The influence of ecological factors on mosquito abundance and occurrence in Galápagos. J Vector Ecol. 2018;43:125–37.
Evans MV, Hintz CW, Jones L, Shiau J, Solano N, Drake JM, et al. Microclimate and larval habitat density predict adult Aedes albopictus abundance in urban areas. Am J Trop Med. 2019;101:362–70.
Bayoh MN. Studies on the development and survival of Anopheles gambiae sensu stricto at various temperatures and relative humidities. PhD Thesis, University of Durham, Durham; 2001.
Hylton AR. Studies on longevity of adult Eretmapodites chrysogaster, Aedes togoi and Aedes (Stegomyia) albopictus females (Diptera: Culicidae). J Med Entomol. 1969;6:147–9.
Rowley WA, Graham CL. The effect of temperature and relative humidity on the flight performance of female Aedes aegypti. J Insect Physiol. 1968;14:1251–7.
Okech BA, Gouagna LC, Knols BG, Kabiru EW, Killeen GF, Beier JC, et al. Influence of indoor microclimate and diet on survival of Anopheles gambiae s.s. (Diptera: Culicidae) in village house conditions in western Kenya. Int J Trop Insect Sci. 2004;24:207–12.
Murdock CC, Evans MV, McClanahan TD, Miazgowicz KL, Tesla B. Fine-scale variation in microclimate across an urban landscape shapes variation in mosquito population dynamics and the potential of Aedes albopictus to transmit arboviral disease. PLoS Negl Trop Dis. 2017;11:e0005640.
Burkett-Cadena ND, McClure CJ, Estep LK, Eubanks MD. Hosts or habitats: What drives the spatial distribution of mosquitoes? Ecosphere. 2013;4:1–16.
Ali I, Mundle M, Anzinger JJ, Sandiford SL. Tiger in the sun: a report of Aedes albopictus. Acta Trop. 2019;199:105112.
We would like to thank Gilbert Gordon, Ross University School of Veterinary Medicine Students, as well as David Pecor, Erica McAlister, Michelle Evans, Mike Newberry, Bryan Giordano, Rob and Kathleen Gilbert, Anna Becker, Moses Humphrey, Michel Vandenplas, Jermaine Lake (Deputy Environmental Health Officer, St. Kitts and Nevis Government) and Leshan Evans, Lornette Browne, Hyacinth Richardson, Larry Greaux, Tremecia Rawlins (Vector Control Officers, St. Kitts and Nevis Government) for their contributions to this research.
The project was funded by National Institute of Allergy and Infectious Diseases R21 grant (1R21AI128407-01) and RUSVM.
One Health Centre for Zoonoses and Tropical Veterinary Medicine, Ross University School of Veterinary Medicine, Island Main Road, West Farm, Basseterre, Saint Kitts and Nevis
Matthew J. Valentine & Brenda Ciraola
Odum School of Ecology, University of Georgia, Athens, GA, 30602, USA
Gregory R. Jacobs & Courtney C. Murdock
River Basin Center, Odum School of Ecology, University of Georgia, Athens, GA, 30602, USA
CNWA Consulting, Basseterre, Saint Kitts and Nevis
Charlie Arnot
Department of Clinical Sciences, Ross University School of Veterinary Medicine, Island Main Road, West Farm, Basseterre, Saint Kitts and Nevis
Patrick J. Kelly
Department of Infectious Diseases, College of Veterinary Medicine, University of Georgia, Athens, GA, 30602, USA
Courtney C. Murdock
Center for Ecology of Infectious Diseases, Odum School of Ecology, University of Georgia, Athens, GA, 30602, USA
Center for Tropical Emerging and Global Diseases, University of Georgia, Athens, GA, 30602, USA
Center for Vaccines and Immunology, College of Veterinary Medicine, University of Georgia, Athens, GA, 30602, USA
Department of Entomology, College of Agriculture and Life Sciences, Cornell University, Ithaca, NY, 14853, USA
Matthew J. Valentine
Brenda Ciraola
Gregory R. Jacobs
MV, BC and CM conducted the survey. PK and CM designed the study and supervised the survey. GJ, CA and CM produced the models. CA, MV and GJ produced tables and figures for the manuscript text. MV, CM, GJ, PK and CA prepared the manuscript text. All authors read and approved the final manuscript.
Correspondence to Courtney C. Murdock.
Additional file 1: Figure S1.
The proportions of the different land covers found in a 1 km2 area (565 m radius) around each of the 30 trapping sites used in the study.
Statistical and model methods.
Counts of mosquito species caught across the five different land covers from November 2017 to March 2019 on St. Kitts.
Counts of mosquito species per month on St. Kitts from Nov 2017 to March 2019 with the wet season highlighted in grey (May-November).
Time series plot for each site land use category (column) and for each species (row) in their predicted relative abundance (conditional on random effects) from our best model. Solid lines denote the average predicted relative abundance of mosquito species across the six sites within that land cover and dotted lines denote the 95% confidence interval of that 6-site mean. Points denote the 6-site average relative abundance from the raw data. Note that the y-axis scale varies by species (row).
Valentine, M.J., Ciraola, B., Jacobs, G.R. et al. Effects of seasonality and land use on the diversity, relative abundance, and distribution of mosquitoes on St. Kitts, West Indies. Parasites Vectors 13, 543 (2020). https://doi.org/10.1186/s13071-020-04421-7
Land cover
Dipteran vectors and associated diseases
Lipid production by the oleaginous yeast Yarrowia lipolytica using industrial by-products under different culture conditions
Magdalena Rakicka1,2,3,5,
Zbigniew Lazar1,2,3,
Thierry Dulermo1,2,
Patrick Fickers4 &
Jean Marc Nicaud1,2,5
Microbial lipid production using renewable feedstock shows great promise for the biodiesel industry.
In this study, the ability of a lipid-engineered Yarrowia lipolytica strain JMY4086 to produce lipids using molasses and crude glycerol under different oxygenation conditions and at different inoculum densities was evaluated in fed-batch cultures. The greatest lipid content, 31% of CDW, was obtained using a low-density inoculum, a constant agitation rate of 800 rpm, and an oxygenation rate of 1.5 L/min. When the strain was cultured for 450 h in a chemostat containing a nitrogen-limited medium (dilution rate of 0.01 h−1; 250 g/L crude glycerol), volumetric lipid productivity was 0.43 g/L/h and biomass yield was 60 g CDW/L. The coefficient of lipid yield to glycerol consumption (Y L/gly) and the coefficient of lipid yield to biomass yield (Y L/X ) were equal to 0.1 and 0.4, respectively.
These results indicate that lipids may be produced using renewable feedstock, thus providing a means of decreasing the cost of biodiesel production. Furthermore, using molasses for biomass production and recycling glycerol from the biodiesel industry should allow biolipids to be sustainably produced.
The distinct possibility of fossil fuel depletion is currently forcing the fuel industry to develop alternative energy sources, such as biodiesel [1]. Because biodiesel is derived from vegetable oils, there is competition between biodiesel producers and food crop farmers for arable lands [2]. Consequently, one of the industry's goals is to find novel ways of producing biodiesel. One possible strategy involves the transformation of waste materials and/or co-products, such as whey, crop residues, crude glycerol, or crude fats, into triglycerides or fatty acids using microbial cell factories [3, 4]. These processes are advantageous compared to conventional methods, since they use waste materials generated by various industries as feedstock. Moreover, microbial lipids can be produced in close proximity to biodiesel industrial plants, and their production is easy to scale up [5].
Different bacteria, yeasts, algae, and fungi have the ability to convert carbohydrates and other substrates into intracellular lipid. When a microorganism's intracellular lipid accumulation levels are greater than 20% of cell dry weight (CDW), it is labeled an "oleaginous microorganism". Oleaginous microorganisms include yeast species, such as Rhodosporidium sp., Rhodotorula sp., Lipomyces sp., and Yarrowia lipolytica, whose intracellular lipid accumulation levels can reach 80% of CDW [6–8]. The main components of the accumulated lipid are triacylglycerols composed of long-chain fatty acids (16–18 carbon atoms in the chain) [6–8].
There are many ways of increasing intracellular lipid accumulation. Some involve metabolically engineering microbial strains to either improve their lipid storage capacities or synthesize lipids with specific fatty acid profiles [8–11]. Others focus on refining the production process by identifying optimal culture conditions and defining optimal medium composition [12–14]. For instance, fed-batch culturing is the most convenient system in pilot experiments seeking to establish optimal production conditions: it helps identify the best medium composition and any supplements needed. However, continuous cultures are also of great interest when the goal is to enhance lipid accumulation levels, especially those of yeast grown as well-dispersed, non-filamentous cells [15].
Due to Y. lipolytica's unique physiological characteristics (i.e., its ability to metabolize hydrophobic substrates such as alkanes, fatty acids, and lipids), its ability to accumulate high levels of lipids, and its suite of efficient genetic tools [16], this yeast is a model organism for biolipid production and it is thought to have great applied potential [6–8, 11], both in the production of typical biofuel lipids [9–11] and oils with unusual fatty acid profiles or polyunsaturated fatty acids [3, 4, 17]. In this study, Y. lipolytica JMY4086, a strain with an improved lipid accumulation capacity, was used to exploit unpurified, low-cost industrial by-products, such as sugar beet molasses and the crude glycerol produced by the biodiesel industry, and lipid production under different culture conditions was quantified. Molasses was used as a source of carbon, minerals, and vitamins, which are crucial for fermentation [18]. Moreover, molasses is used as the main substrate in the production of baker's yeast, organic acids, amino acids, and acetone/butanol [15]. In yeast, glycerol is converted into glycolytic intermediates either via the phosphorylation pathway [19, 20] or the oxidative pathway (dehydrogenation of glycerol and the subsequent phosphorylation of the reaction product) [21]. Dihydroxyacetone phosphate, the product of these reactions, can subsequently be converted into citric acid, storage lipids, or various other products [22, 23]. Additionally, glycerol may be readily incorporated in the core of triglycerides, which are stored in lipid bodies along with steryl esters [10].
The aim of this study was to produce valuable information that could be used in future research examining the biotransformation of crude glycerol into triglycerides (TAGs) with a view to producing biolipids, also known as single-cell oils (SCOs). This process may serve as an alternative means of decreasing biodiesel production costs while simultaneously recycling glycerol.
Previous work has found that Y. lipolytica JMY4086 can produce biolipids from substrates such as pure glucose, fructose, and sucrose in batch bioreactors [19]. The present study investigated whether low-cost raw materials such as molasses and crude glycerol could also serve as substrates for biolipid production and accumulation; the substrate concentration, oxygenation conditions, and inoculum densities were varied. Compared to other oleaginous microorganisms, Y. lipolytica has the unique ability to accumulate lipids when nitrogen is limited and to remobilize them when carbon is limited [24]. Therefore, all culturing was performed under low nitrogen conditions. Furthermore, TAG remobilization was avoided because the TGL4 gene, which encodes triglyceride lipase YlTgl4, was deleted from JMY4086 [17].
Fed-batch cultures subject to different oxygenation conditions and initiated with different inoculum densities
Studies examining lipid production by Y. lipolytica using fed-batch or repeated-batch cultures are scarce. Moreover, only glycerol has been used as a substrate for cell growth and lipid synthesis [25]. This study utilized a two-step process: biomass was produced using molasses for 48 h, and then lipids were produced using glycerol as the main carbon source. Biomass yield and lipid production were analyzed at two different inoculum densities (low density and high density) and under two sets of oxygenation conditions (unregulated and regulated). In the unregulated strategy "Oxy-const", dissolved oxygen (DO) was not regulated; in the regulated strategy "Oxy-regul", DO was regulated at 50% saturation (see "Methods").
When the Oxy-const strategy and a low-density inoculum were used, the biomass reached 50 g CDW/L and citric acid production was 36.8 g/L after 55 h of culture (Figure 1a, b). During the glycerol-feeding phase, cells converted the citric acid produced into lipids. Under those conditions, the total lipid concentration increased from 11 to 15.5 g/L (Figure 1c). Yeast lipid content reached 31% of CDW, which corresponds to a volumetric lipid productivity (Q L ) of 0.18 g/L/h and a coefficient of lipid yield to glycerol consumption (Y L/gly) of 0.083 g/g (Table 1). This condition also produced a small number of mycelial cells (Figure 2a). Indeed, low DO levels have been shown to induce the yeast-to-mycelium transition in Y. lipolytica. Bellou and colleagues demonstrated that mycelial and/or pseudomycelial forms predominated over the yeast form when DO was low, regardless of the carbon and nitrogen sources used [26].
Effect of oxygenation conditions and inoculation densities on growth, citric acid production, and lipid production by Y. lipolytica grown in molasses. Strain JMY4086 was grown in a molasses medium and fed with crude glycerol. Growth is expressed as a cell dry weight, b citric acid production, and c lipid production. X biomass, CA citric acid, L lipids, 1 low-density inoculum/unregulated oxygenation condition (filled circles), 2 low-density inoculum/regulated oxygenation condition (filled squares), 3 high-density inoculum/unregulated oxygenation condition (filled triangles), 4 high-density inoculum/regulated oxygenation condition (filled diamonds). The low-density and high-density inocula had optical densities of OD600 = 1 and OD600 = 6, respectively. For the unregulated oxygenation condition, stirring speed was a constant 800 rpm and the aeration rate was 1.5 L/min. For the regulated oxygenation condition, dissolved oxygen was maintained at 50% saturation and the aeration rate was 0–3.5 L/min. All the results presented are the mean values ± SD for two independent biological replicates.
Table 1 Lipid production by Y. lipolytica JMY4086 during the glycerol-feeding phase for different oxygenation conditions and inoculation densities
Visualization of JMY4086 cell morphology and lipid bodies at the end of the fed-batch culturing experiment. Images are of cultures from the a low-density inoculum/unregulated oxygenation condition, b low-density inoculum/regulated oxygenation condition, c high-density inoculum/unregulated oxygenation condition; and d high-density inoculum/regulated oxygenation condition. The lipid bodies were stained with Bodipy®.
When oxygenation was regulated and a low-density inoculum was used, cell growth was surprisingly very slow (r x = 0.24 gCDW/h) and the culture duration (the time required for complete glycerol consumption) was 190 h. Consequently, when crude glycerol was fed into the bioreactor at 48 h, the fructose concentration was still high (30 g/L). In this condition, the biomass yield was lower (40 g/L) because citric acid production was greater (50 g/L) (Figure 1a, b); there was an apparent trade-off between the two processes. The citric acid produced was never reconsumed. The total lipid content was very low, 7 g/L, which corresponds to a Q L of 0.04 g/L/h (Figure 1c; Table 1). However, Y L/gly and Y L/X were equal to 0.056 and 0.17 g/g, respectively (Table 1). In these conditions, JMY4086 formed short true mycelia and pseudomycelia (Figure 2b).
High-density inocula were also used under both regulated and unregulated oxygenation conditions. As shown in Figure 1, the lag phase became shorter and culture duration decreased significantly; the latter was 70 and 66 h under regulated and unregulated conditions, respectively. When oxygenation was unregulated, the biomass yield was 58 g/L; it reached 70 g/L when oxygenation was regulated (Figure 1a). Citric acid production was similar across the two conditions (19 and 23 g/L, respectively); however, it was only reconsumed when DO was regulated (Figure 1b). In both cases, compared to the unregulated/low-density condition, Q L was low, as were Y L/gly and Y L/X (Table 1). Furthermore, in both conditions, JMY4086 formed short true mycelia and pseudomycelia (Figure 2c, d).
Because the fed-batch culture initiated with a low-density inoculum and subject to unregulated oxygenation had the highest lipid production, these conditions were used in a second experiment, in which a higher airflow rate of 3.5 L/min (the high-oxygen condition, "Oxy-high") was utilized. As a consequence, the lag phase lengthened, sucrose hydrolysis began later—after 30 h (Figure 3)—and lipid accumulation was limited. Citric acid production exceeded 40 g/L, and the compound was not reconsumed (Figure 3). The biomass yield was 59 g/L, and final lipid content was 7.7 g/L, which corresponds to a Y L/gly of 0.077 g/g (Table 1). These results indicate that increasing the oxygenation rate did not improve yeast growth and lipid production.
Time course of carbon source concentrations, biomass yield, and lipid and citric acid production during culture of Y. lipolytica JMY4086 in the low-density inoculum/high-oxygen experimental conditions. Sucrose (SUC), glucose (GLU), fructose (FRU), glycerol (GLY), biomass yield (X), lipid (L), and citric acid (CA). For the high-oxygen condition, the stirring speed was a constant 800 rpm, and the aeration rate was maintained at 3.5 L/min.
The fed-batch experiments revealed that the highest Q L , Y L/gly, and Y L/X values were obtained using a low-density inoculum and unregulated oxygenation. Consequently, these conditions were used in the continuous culture pilot experiment.
Continuous culture experiment: effects of increasing concentrations of glycerol in the feeding medium
Little research has looked at the synthesis of biolipids from sugars or renewable feedstock by nitrogen-limited continuous cultures [27–29]. Papanikolaou and Aggelis conducted the only study to date to examine biolipid synthesis by Y. lipolytica under continuous culture conditions using glycerol as the sole substrate [15].
In this experiment, we used a stepwise continuous fed-batch (SCFB) approach to test the effect of glycerol concentration on lipid production. The cultures started as batch cultures that were grown with molasses to produce biomass; once the sugar supply was exhausted, continuous culturing was initiated. Glycerol was used as feed, and the dilution rate was 0.01 h−1. The glycerol concentration in the feeding medium was increased from 100 to 450 g/L in steps that took place every 100 h (Figure 4). It has been shown that the dilution rate and culture-medium C:N ratio strongly affect lipid accumulation [28, 30]. In general, dilution rates of less than 0.06 h−1 have been shown to maximize lipid production in continuous cultures across different yeasts [31]. A dilution rate of about 0.01 h−1 has also been shown to optimize Y. lipolytica's production of citric acid from glycerol [32]. Therefore, in this experiment, a dilution rate of 0.01 h−1 was used. In addition, for JMY4086, lipid accumulation levels were similar across a range of C:N ratios, from 60:1 to 120:1 [19]. Consequently, to maximize cell growth and prevent nitrogen starvation, SCFB culturing was performed using a C:N ratio of 60:1.
Biomass (X), lipid (L), and citric acid (CA) production during SCFB culture of Y. lipolytica JMY4086. All the results presented are the mean values ± SD for two independent replicates. The black line (GLY) without symbol represents the glycerol concentration in the feeding solution.
Biomass yield and lipid production depended on the glycerol concentration in the feed solution (Figure 4). For glycerol concentrations of 100 g/L, the biomass yield was 32.2 g CDW/L; it reached 67.4 g CDW/L at the highest glycerol concentrations (450 g/L; Table 2). Under such conditions, DO was not limiting, except after 600 h of culture (data not shown). Glycerol was never detected in the culture broth, except at higher feeding concentrations (450 g/L), where glycerol accumulated in the culture broth at a concentration of 0.5 g/L. By comparison, Meesters et al. [33] observed that, in Cryptococcus curvatus, cell growth was restricted during lipid accumulation when glycerol concentrations were higher than 64 g/L.
Table 2 Characteristics of lipid production in continuous cultures of Y. lipolytica JMY4086 grown in crude glycerol; SCFB and chemostat culturing were used
The highest Y L/gly value was obtained at a glycerol concentration of 100 g/L. However, since biomass yield was lowest at that concentration, Q L was also low (0.09 g/L/h). In contrast, when the glycerol concentration was 250 g/L, Q L and Y L/X were 0.31 g/L/h and 0.46, respectively. At higher glycerol concentrations (350 and 450 g/L), both Y L/gly and Y L/X were lower (Table 2). During SCFB culturing, very low concentrations of citric acid were present until 400 h. Then, as the glycerol concentration increased to 350 g/L, citric acid started to accumulate; it reached a concentration of 40 g/L (Table 2). This accumulation of citric acid may have resulted from nitrogen limitations or a transition in cell morphology. Indeed, cells occurred in yeast form up until 400 h, at which point they started to filament, forming true mycelia and pseudomycelia (data not shown).
Lipid production from glycerol in a chemostat culture
The SCFB culturing experiment showed that feed glycerol concentrations of 250 g/L yielded the highest Q L and Y L/X values. In addition, citric acid and glycerol did not accumulate under those conditions; DO levels exceeded 70% saturation and no mycelia were observed. To assess lipid production in continuous cultures, yeasts were grown in a chemostat for over 400 h using a dilution rate of 0.01 h−1. To start off, yeasts were batch cultured for 48 h using molasses as the primary carbon source. Then, chemostat culturing was used; yeasts were kept in a medium with a glycerol concentration of 100 g/L for 100 h. They were then given a medium with a glycerol concentration of 250 g/L (Figure 5). At a steady state, between 200 h and 500 h of culture, the biomass yield was 59.8 g/L. The yeast produced 24.2 g/L of lipids; Q L was 0.43 g/L/h; and Y L/gly and Y L/X were 0.1 and 0.4, respectively (Table 2). Under these conditions, citric acid production was 50 g/L (Figure 5; Table 2). In contrast to the SCFB experiments, citric acid was produced from the start in the chemostat culture. One hypothesis explaining this difference between the two culture methods is that, in the chemostat, DO was limited. During SCFB culture, DO was not a limiting factor until 600 h into the experiment, when citric acid began to be secreted into the culture broth. However, in the chemostat culture, when the glycerol concentration in the feeding medium was increased to 250 g/L, DO dramatically decreased. DO limitations resulted in citric acid secretion, but not in cell filamentation. The cell morphology was constant; indeed, during the whole culturing process, cells remained in yeast form (Figure 6). Fatty acid profiles were similar across the three types of cultures (fed-batch, SCFB, and chemostat; Table 3). The yeast produced mainly C16 and C18 long-chain fatty acids, as do other oleaginous yeasts [25, 32]. In general, differences in fatty acid profiles seem to result not from culture type, but from substrate type. When industrial fats have been used as carbon sources, yeast demonstrates a different total fatty acid composition, which is characterized by high levels of cellular stearic acid [3].
Biomass (X), lipid (L), and citric acid (CA) production during the chemostat culture of JMY4086 when 250 g/L of crude glycerol was present in the feeding medium. All the results presented are the mean values ± SD for two independent replicates.
Cell morphology of JMY4086 when the strain was continuously cultured in crude glycerol: a at 200 h and b at 400 h. The white squares show a representative cell that has been enlarged (×2).
Table 3 Fatty acid profiles for Y. lipolytica JMY4086 under different culture conditions
The results of this study represent a good starting point for research seeking to further optimize chemostat culturing conditions. We found that the concentration of dissolved oxygen is one of the most important factors affecting lipid production. Oxygen limitation can be rate limiting in carbon metabolism and results in citric acid secretion. Additionally, optimizing nitrogen levels in the medium is also an important means by which citric acid secretion can be restricted. However, increasing nitrogen concentration can also increase biomass production, which in turn can result in problems with the oxygenation and stirring of the medium. It is therefore important to balance the regulation of available nitrogen with the optimization of the dilution rate to avoid generating overly high biomass concentrations in the bioreactor. All of these parameters should be used to find the right equilibrium between biomass and lipid production, the goal being to maximize total lipid production using chemostat culturing.
In conclusion, the results obtained in this study clearly show that the continuous culture method is an interesting means of producing lipids. Overall lipid production in the continuous culture experiment was almost 2.3 times higher than that in the fed-batch culture experiment. Y. lipolytica JMY4086 produced 24.2 g/L of lipids; the coefficient of lipid yield to glycerol consumption (Y L/gly) was 0.1 g/g and volumetric lipid productivity (Q L ) was 0.43 g/L/h. In the fed-batch cultures, lipid concentrations never exceeded 15.5 g/L, which corresponded to a Y L/gly of 0.083 g/g and a Q L of 0.18 g/L/h.
Bioengineered Y. lipolytica strain JMY4086 shows promise in the development of industrial biodiesel production processes. Indeed, in this strain, the inhibition of the degradation and remobilization pathways (via the deletion of the six POX genes and the TGL4 gene, respectively) was combined with the boosting of lipid synthesis pathways (via overexpression of DGA2 and GPD1). Additionally, molasses is an excellent substrate for biomass production, because it is cheap and contains several other compounds that are crucial for the fermentation processes. However, its concentration must be controlled because it also contains compounds that inhibit Y. lipolytica growth. Moreover, the subsequent addition of glycerol did not delay cell growth. This study provided valuable foundational knowledge that can be used in future studies to further optimize lipid production in fed-batch and continuous cultures in which biomass production takes place in molasses and lipid production takes place in industrial glycerol-based medium.
The Y. lipolytica strain used in this study, JMY4086 [17], was obtained by deleting the POX1–6 genes (POX1–POX6) that encode acyl-coenzyme A oxidases and the TGL4 gene, which encodes an intracellular triglyceride lipase. The aim was to block the β-oxidation pathway and inhibit TAG remobilization, respectively. In addition, to push and pull TAG biosynthesis, YlDGA2 and YlGPD1, which encode the major acyl-CoA:diacylglycerol acyltransferase and glycerol-3-phosphate dehydrogenase, respectively, were constitutively overexpressed. Additionally, the S. cerevisiae invertase SUC2 and Y. lipolytica hexokinase HXK1 genes were overexpressed to allow the strain to grow in molasses.
Medium and culturing conditions
The YPD medium contained Bacto™ Peptone (20 g/L, Difco, Paris, France), yeast extract (10 g/L, Difco, Paris, France), and glucose (20 g/L, Merck, Fontenay-sous-Bois, France). The medium for the batch cultures contained molasses (245 g/L, sucrose content of 600 g/L, Lesaffre, Rangueil, France), NH4Cl (4.0 g/L), KH2PO4 (0.5 g/L), MgCl2 (1.0 g/L), and YNB (without amino acids and ammonium sulfate, 1.5 g/L, Difco). For fed-batch cultures, crude glycerol (96% w/v, Novance, Venette, France) was added after 48 h at a feeding rate of 8.8 g/h until a total of 100 g/L of glycerol had been delivered (C:N ratio of 100:1). In the stepwise continuous fed-batch (SCFB) cultures, a C:N ratio of 60:1 was maintained as glycerol concentrations increased (100, 200, 250, 350, and 450 g/L); NH4Cl ranged from 4 to 12.5 g/L. Chemostat cultures were grown in either crude glycerol (100 g/L)/NH4Cl (2.5 g/L), with a C:N ratio of 25, or glycerol (250 g/L)/NH4Cl (6.25 g/L), with a C:N ratio of 40. Toward the beginning of the culturing process (at 100 h), the concentration of glycerol in the feeding medium was 100 g/L; it was subsequently increased to 250 g/L. This approach was used because past observations had suggested that slowly increasing the concentration of the carbon source in the feeding medium results in greater oxygenation of the culture and higher lipid production. Additionally, when the carbon source was present in high concentrations from the beginning of the culturing process, strong cell filamentation and lower final lipid yields resulted (data not shown). For the stepwise continuous fed-batch and chemostat cultures, the dilution rate was 0.01 h−1 and the working volume was maintained at 1.5 L. All culturing took place in a 5-L stirred tank reactor (Biostat B-plus, Sartorius, Germany). The temperature was controlled at 28°C and the pH was kept at 3.5 by adding 40% (w/v) NaOH. We used three oxygenation conditions in our experiments: unregulated dissolved oxygen (DO) ("Oxy-const"), regulated DO ("Oxy-regul"), and high DO ("Oxy-high"). For the unregulated condition, the airflow rate was 1.5 L/min and the stirring speed was 800 rpm. In the regulated condition, DO was maintained at 50% saturation by a PID controller (the airflow rate ranged between 0 and 3.5 L/min, and stirring speed ranged between 200 and 1,000 rpm). In the Oxy-high condition, the airflow rate was 3.5 L/min and the stirring speed was 800 rpm. Bioreactors were inoculated using samples with an initial OD600 nm of 0.15 (low-density inoculum) or of 0.8 (high-density inoculum). Precultures were grown in YPD medium. The bioreactor containing a given medium (prepared with tap water) was sterilized in an autoclave at 121°C for 20 min. We conducted two biological replicates of all fed-batch cultures, for which means and standard deviations were calculated. A single replicate was performed for the SCFB and the chemostat culture.
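As an illustration of the feed-medium bookkeeping, the short sketch below estimates the elemental C:N mass ratio of a glycerol/NH4Cl feed. It is a rough aid only: it counts carbon from glycerol alone (ignoring molasses-derived sugars and any residual nitrogen), so it will not reproduce the exact NH4Cl concentrations used in this study.

```python
# Rough sketch (not the authors' exact recipe): estimating the elemental C:N mass
# ratio of a glycerol/NH4Cl feed, counting only carbon contributed by glycerol.
C_FRACTION_GLYCEROL = 3 * 12.011 / 92.094   # ~0.391 g C per g glycerol (C3H8O3)
N_FRACTION_NH4CL = 14.007 / 53.491          # ~0.262 g N per g NH4Cl

def cn_ratio(glycerol_g_per_l, nh4cl_g_per_l):
    carbon = glycerol_g_per_l * C_FRACTION_GLYCEROL
    nitrogen = nh4cl_g_per_l * N_FRACTION_NH4CL
    return carbon / nitrogen

def nh4cl_for_target(glycerol_g_per_l, target_cn):
    carbon = glycerol_g_per_l * C_FRACTION_GLYCEROL
    return carbon / target_cn / N_FRACTION_NH4CL

for gly in (100, 250, 450):
    print(gly, "g/L glycerol ->", round(nh4cl_for_target(gly, 60), 2),
          "g/L NH4Cl for a nominal C:N of 60 (glycerol carbon only)")
```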
Quantifying dry biomass
Ten milliliters of culture broth was centrifuged for 5 min at 13,000 rpm. The cell pellet was washed with distilled water and filtered on membranes with a pore size of 0.45 μm. The biomass yield was determined gravimetrically after samples were dried at 105°C. It was expressed in grams of cell dry weight per liter (gCDW/L).
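A minimal worked example of this gravimetric calculation is given below; the filter masses are hypothetical and serve only to show how gCDW/L is obtained from the dried-sample weights.

```python
# Minimal example of the gravimetric biomass calculation: grams of dried cells
# recovered from a known broth volume, expressed as gCDW/L. Values are illustrative.
sample_volume_l = 0.010            # 10 mL of culture broth
filter_tare_g = 0.0912             # dried membrane before filtration (hypothetical)
filter_with_cells_g = 0.6903       # membrane plus cells after drying at 105 °C (hypothetical)

biomass_g_per_l = (filter_with_cells_g - filter_tare_g) / sample_volume_l
print(f"Biomass yield: {biomass_g_per_l:.1f} gCDW/L")   # ~59.9 gCDW/L
```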
Measuring sugar and citric acid concentrations
The concentrations of glycerol (GLY), sucrose (SUC), glucose (GLU), fructose (FRU), and citric acid (CA) were measured in the culture supernatants by HPLC (Dionex-Thermo Fisher Scientific, UK) using an Aminex HPX-87H column (Bio-Rad, Hercules, CA, USA) coupled with a refractive index (RI) detector (Shodex, Ogimachi, Japan). The column was eluted with 0.1 N sulfuric acid at 65°C at a flow rate of 0.6 ml min−1.
Microscopy
Images were obtained using a Zeiss Axio Imager M2 microscope (Zeiss, Le Pecq, France) with a 100× objective lens and Zeiss filter sets 45 and 46 for fluorescence microscopy. Axiovision 4.8 software (Zeiss, Le Pecq, France) was used for image acquisition. To make the lipid bodies visible, BodiPy® Lipid Probe (2.5 mg/mL in ethanol; Invitrogen) was added to the cell suspension (OD600 = 5) and the samples were incubated for 10 min at room temperature.
Quantifying lipid levels
The fatty acids (FAs) in 15-mg aliquots of freeze-dried cells were converted into methyl esters using the method described in Browse et al. [34, 30]. FA methyl esters were analyzed by gas chromatography (GC) on a Varian 3900 equipped with a flame ionization detector and a Varian Factor Four vf-23 ms column, for which the bleed specification at 260°C was 3 pA (30 m, 0.25 mm, 0.25 μm). FAs were identified by comparing their GC patterns to those of commercial FA methyl ester standards (FAME32; Supelco) and quantified using the internal standard method, which involved the addition of 50 mg of commercial C17:0 (Sigma). Total lipid extractions were obtained from 100-mg samples (expressed in terms of CDW, as per Folch et al. [16]). Briefly, yeast cells were spun down, washed with water, freeze dried, and then resuspended in a 2:1 chloroform/methanol solution and vortexed with glass beads for 20 min. The organic phase was collected and washed with 0.4 mL of 0.9% NaCl solution before being dried at 60°C overnight and weighed to quantify lipid production.
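The internal-standard calculation can be illustrated with the short sketch below. The peak areas are hypothetical, equal detector response factors are assumed for all fatty acid methyl esters, and the C17:0 spike follows the amount stated above; the sketch shows only the arithmetic and does not reproduce measured profiles.

```python
# Hedged sketch of internal-standard quantification of fatty acid methyl esters by GC.
# Peak areas are hypothetical; response factors are assumed equal to 1 for all FAs.
internal_standard_mg = 50.0      # C17:0 spike added to the aliquot (as stated in the text)
biomass_mg = 15.0                # freeze-dried cells in the aliquot

peak_areas = {"C16:0": 2.2e5, "C16:1": 0.4e5, "C18:0": 0.9e5,
              "C18:1": 4.0e5, "C18:2": 1.5e5}
is_area = 1.0e7                  # C17:0 internal-standard peak area (hypothetical)

fa_mg = {fa: internal_standard_mg * area / is_area for fa, area in peak_areas.items()}
total_fa_mg = sum(fa_mg.values())

print(f"Total fatty acids: {total_fa_mg:.2f} mg "
      f"({100 * total_fa_mg / biomass_mg:.0f}% of CDW)")
print("FA profile (% of total):",
      {fa: round(100 * m / total_fa_mg, 1) for fa, m in fa_mg.items()})
```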
Volumetric lipid productivity (Q L ) was defined using Eqs. (1–3):
$$Q_\text{L} = \frac{\text{Lipid}_\text{acc} + \text{Lipid}_\text{out}}{V\,\Delta t},$$
$$\text{Lipid}_\text{acc} = [\text{Lipid}] \cdot X,$$
$$\text{Lipid}_\text{out} = \frac{\Delta(\text{Lipid}_\text{acc})}{F\,\Delta t}.$$
The coefficient of lipid yield to glycerol consumption (Y L/gly) was defined using Eqs. (4–7):
$$Y_\text{L/gly} = \frac{\text{Lipid}_\text{acc} + \text{Lipid}_\text{out}}{\text{Gly}_\text{in} - (\text{Gly}_\text{acc} + \text{Gly}_\text{out})},$$
$$\text{Gly}_\text{in} = [\text{Gly}] \cdot F \cdot \Delta t,$$
$$\text{Gly}_\text{acc} = \Delta[\text{Gly}_\text{med}] \cdot V,$$
$$\text{Gly}_\text{out} = \frac{\Delta(\text{Gly}_\text{acc})}{F\,\Delta t}.$$
The coefficient of lipid yield to biomass yield (Y L/X ) was defined using Eqs. (8–10):
$$Y_{L/X} = \frac{\text{Lipid}_\text{acc} + \text{Lipid}_\text{out}}{X_\text{in} + X_\text{out}},$$
$$X_\text{in} = \Delta[X] \cdot V,$$
$$X_\text{out} = \frac{\Delta(X_\text{in})}{F\,\Delta t}.$$
In the above equations, Lipidacc is the lipid accumulated in the cells in the bioreactor (g); Lipidout the lipid accumulated in the cells drawn off from the bioreactor (g); Δ(Lipidacc) the difference in Lipidacc for time period Δt; Glyin the glycerol fed to the bioreactor (g); Glyacc the glycerol accumulated in the bioreactor (g); Glyout the glycerol drawn off from the bioreactor (g); Δ(Glyacc) the difference in Glyacc for the time period Δt; V the volume of the culture (L); Δt the duration between two measurements (h); X the biomass yield (gCDW/L); [Gly] the glycerol concentration in the feeding medium (g/L); [Glymed] the glycerol concentration in the bioreactor (g/L); [Lipid] the lipid concentration (g/gCDW); [X in] the cell concentration in the bioreactor (gCDW/L); [X out] the cell concentration in the culture broth drawn off from the bioreactor (gCDW/L); F the flow rate of the feeding medium (L/h).
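To make the bookkeeping concrete, the following sketch computes Q_L, Y_L/gly, and Y_L/X for the simplest case of a constant-volume fed-batch in which the broth drawn off during the interval is negligible (Lipid_out, Gly_out, and X_out ≈ 0). The numbers are illustrative and the simplification is ours; the authors' calculations additionally account for the material leaving with the outflow F.

```python
# Minimal sketch of the yield bookkeeping for a constant-volume fed-batch in which
# the broth drawn off is negligible. Numbers are illustrative, not study data.
V = 1.5                # working volume (L)
dt = 100.0             # time between the two measurements (h)

# measurements at the start and end of the interval
X0, X1 = 20.0, 50.0            # biomass (gCDW/L)
lip0, lip1 = 0.10, 0.31        # lipid content (g lipid per gCDW)
gly_fed = 150.0                # glycerol fed over the interval (g)
gly_residual = 0.0             # glycerol left in the broth (g)

lipid_produced = (lip1 * X1 - lip0 * X0) * V        # g
biomass_produced = (X1 - X0) * V                    # gCDW
glycerol_consumed = gly_fed - gly_residual          # g

Q_L = lipid_produced / (V * dt)                     # g/L/h
Y_L_gly = lipid_produced / glycerol_consumed        # g/g
Y_L_X = lipid_produced / biomass_produced           # g/gCDW

print(f"Q_L = {Q_L:.3f} g/L/h, Y_L/gly = {Y_L_gly:.3f} g/g, Y_L/X = {Y_L_X:.2f} g/g")
```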
Y L/gly :
coefficient of lipid yield to glycerol consumption
Y L/X :
coefficient of lipid yield to biomass yield
Q L :
volumetric lipid productivity
X :
biomass yield
CDW:
cell dry weight
SCFB:
stepwise continuous fed batch
GLY:
glycerol
L :
intracellular lipid
D :
dilution rate
rpm:
revolutions per minute
Tai M, Stephanopoulos G (2013) Engineering the push and pull of lipid biosynthesis in oleaginous yeast Yarrowia lipolytica for biofuel production. Metab Eng 15:1–9
Hill J, Nelson E, Tilman D, Polasky S, Tiffany D (2006) Environmental, economic, and energetic costs and benefits of biodiesel and ethanol biofuels. Proc Natl Acad Sci USA 103:11206–11210
Papanikolaou S, Chevalot I, Komaitis M, Aggelis G, Marc I (2001) Kinetic profile of the cellular lipid composition in an oleaginous Yarrowia lipolytica capable of producing a cocoabutter substitute from industrial fats. Antonie Van Leeuwenhoek 80:215–224
Papanikolaou S, Chevalot I, Komaitis M, Marc I, Aggelis G (2002) Single cell oil production by Yarrowia lipolytica growing on an industrial derivative of animal fat in batch cultures. Appl Microbiol Biotechnol 58:308–312
Meher LC, Sagar DV, Naik SN (2006) Technical aspects of biodiesel production by transesterification—a review. Renew Sust Energy Rev 10:248–268
Beopoulos A, Cescut J, Haddouche R, Uribelarrea JL, Molina-Jouve C, Nicaud JM (2009) Yarrowia lipolytica as a model for bio-oil production. Prog Lipid Res 48:375–387
Beopoulos A, Nicaud JM (2012) Yeast: a new oil producer. Ol Corps Gras Lipides OCL 19:22–88. doi:10.1684/ocl.2012.0426
Thevenieau F, Nicaud J-M (2013) Microorganisms as sources of oils. Ol Corps Gras Lipides OCL 20(6):D603
Mliĉková K, Luo Y, Andrea S, Peĉ P, Chardot T, Nicaud JM (2004) Acyl-CoA oxidase, a key step for lipid accumulation in the yeast Yarrowia lipolytica. J Mol Catal B Enzym 28:81–85
Beopoulos A, Mrozova Z, Thevenieau F, Dall MT, Hapala I, Papanikolaou S et al (2008) Control of lipid accumulation in the yeast Yarrowia lipolytica. Appl Environ Microbiol 74:7779–7789
Blazeck J, Hill A, Liu L, Knight R, Miller J, Pan A et al (2014) Harnessing Yarrowia lipolytica lipogenesis to create a platform for lipid and biofuel production. Nat Commun 5:3131. doi:10.1038/ncomms4131
Li Y, Zhao ZK, Bai F (2007) High-density cultivation of oleaginous yeast Rhodosporidium toruloides Y4 in fed-batch culture. Enzyme Microb Tech 41:312–317
Zhao X, Kong X, Hua Y, Feng B, Zhao ZK (2008) Medium optimization for lipid production through co-fermentation of glucose and xylose by the oleaginous yeast Lipomyces starkeyi. Eur J Lipid Sci Technol 110:405–412
Meesters PAEP, Huijberts GNM, Eggink G (1996) High-cell-density cultivation of the lipid accumulating yeast Cryptococcus curvatus using glycerol as a carbon source. Appl Microbiol Biotechnol 45:575–579
Papanikolaou S, Aggelis G (2002) Lipid production by Yarrowia lipolytica growing on industrial glycerol in a single-stage continuous culture. Bioresour Technol 82:43–49
Barth G, Gaillardin C (1996) Yarrowia lipolytica. In: Wolf K (ed) Nonconventional yeasts in biotechnology. Springer-Verlag, Berlin, Heidelberg, New York, pp 313–388
Xie D, Jackson EN, Zhu Q (2015) Sustainable source of omega-3 eicosapentaenoic acid from metabolically engineered Yarrowia lipolytica: from fundamental research to commercial production. Appl Microbiol Biotechnol 99:1599–1610
Folch J, Lees M, Sloane-Stanley GH (1957) A simple method for the isolation and purification of total lipids from animal tissues. J Biol Chem 226:497–509
Lazar Z, Dulermo T, Neuvéglise C, Crutz-LeCoq A-M, Nicaud J-M (2014) Hexokinase—a limiting factor in lipid production from fructose in Yarrowia lipolytica. Metab Eng 26:89–99
Joshi S, Bharucha C, Jha S, Yadav S, Nerurkar A, Desa AJ (2008) Biosurfactant production using molasses and whey under thermophilic conditions. Bioresour Technol 99:195–199
Makkar RS, Swaranjit S, Cameotra SS (1997) Utilization of molasses for biosurfactant production by two Bacillus strains at thermophilic conditions. J Am Oil Chem Soc 74:887–889
Babel W, Hofmann KH (1981) The conversion of triosephosphate via methylglyoxal, a bypass to the glycolytic sequence in methylotrophic yeasts? FEMS Microbiol Lett 10:133–136
Ermakova IT, Morgunov IG (1987) Pathways of glycerol metabolism in Yarrowia (Candida) lipolytica yeasts. Mikrobiology 57:533–537
May JW, Sloan J (1981) Glycerol utilization by Schizosaccharomyces pombe: dehydrogenation as the initial step. J Gen Microbiol 123:183–185
Makri A, Fakas S, Aggelis G (2010) Metabolic activities of biotechnological interest in Yarrowia lipolytica grown on glycerol in repeated batch cultures. Bioresour Technol 101:2351–2358
Bellou S, Makri A, Triantaphyllidou I-E, Papanikolaou S, Aggelis G (2014) Morphological and metabolic shifts of Yarrowia lipolytica induced by alteration of the dissolved oxygen concentration in the growth environment. Microbiology 160:807–817
Ykema A, Verbree EC, Verseveld HW, Smit H (1986) Mathematical modeling of lipid production by oleaginous yeast in continuous cultures. Antonie Van Leeuwenhoek 52:491–506
Brown BD, Hsu KH, Hammond EG, Glatz BA (1989) A relationship between growth and lipid accumulation in Candida curvata D. J Ferment Bioeng 68:344–352
Ratledge C (1994) Yeasts, moulds, algae and bacteria as sources of lipids. In: Kamel BS, Kakuda Y (eds) Technological advances in improved and alternative sources of lipids. Blackie Academic and Professional, London, pp 235–291
Evans CT, Ratledge C (1983) A comparison of the oleaginous yeast Candida curvata grown on different carbon sources in continuous and batch culture. Lipids 18:623–629
Rywińska A, Juszczyk P, Wojtatowicz M, Rymowicz W (2011) Chemostat study of citric acid production from glycerol by Yarrowia lipolytica. J Biotechnol 152:54–57
Davies RJ (1992) Scale up of yeast oil technology. In: Ratledge C, Kyle DJ (eds) Industrial application of single cell oil. AOCS Press, Champaign, pp 196–218
Meesters PAEP, van de Wal H, Weusthuis R, Eggink G (1996) Cultivation of the oleaginous yeast Cryptococcus curvatus in a new reactor with improved mixing and mass transfer characteristics (Surer®). Biotechnol Tech 10:277–282
Browse J, Mc Court PJ, Somerville CR (1986) Fatty acid composition of leaf lipids determined after combined digestion and fatty acid methyl ester formation from fresh tissue. Anal Biochem 152:141–145
MR, ZL, TD, and J-MN conceived the study and participated in its design. MR, ZL, and TD carried out the experiments. MR wrote the first draft of the manuscript. MR, ZL, J-MN, and PF analyzed the results and assessed the culture data. MR, ZL, TD, J-MN, and PF revised the manuscript. All authors read and approved the final manuscript.
This work was funded by the French National Institute for Agricultural Research (INRA). M. Rakicka was funded by INRA. T. Dulermo and Z. Lazar were funded by the French National Research Agency (Investissements d'avenir program; reference ANR-11-BTBR-0003). Z. Lazar received financial support from the European Union in the form of an AgreenSkills Fellowship (grant agreement no. 267196; Marie-Curie FP7 COFUND People Program). We would also like to thank Jessica Pearce and Lindsay Higgins for their language editing services.
Magdalena Rakicka
Present address: Department of Biotechnology and Food Microbiology, Wrocław University of Environmental and Life Sciences, Chełmońskiego Str. 37/41, 51-630, Wrocław, Poland
INRA, UMR1319 Micalis, 78350, Jouy-en-Josas, France
, Zbigniew Lazar
, Thierry Dulermo
& Jean Marc Nicaud
AgroParisTech, UMR Micalis, Jouy-en-Josas, France
Microbial Processes and Interactions, Gembloux Agro Bio-Tech, Université de Liège, Passage des Déportés, 2, 5030, Gembloux, Belgium
Patrick Fickers
Institut Micalis, INRA-AgroParisTech, UMR1319, Team BIMLip: Biologie Intégrative du Métabolisme Lipidique, CBAI, 78850, Thiverval-Grignon, France
Correspondence to Magdalena Rakicka or Jean Marc Nicaud.
Yarrowia lipolytica
Oleaginous yeast
Biolipid production
Crude glycerol
Continuous culture
Volume 21 Supplement 2
Selected articles from the 6th International Work-Conference on Bioinformatics and Biomedical Engineering
Accurately estimating the length distributions of genomic micro-satellites by tumor purity deconvolution
Yixuan Wang1,2,
Xuanping Zhang1,2,
Xiao Xiao3,
Fei-Ran Zhang4,
Xinxing Yan1,2,
Xuan Feng1,2,
Zhongmeng Zhao1,2,
Yanfang Guan1,2,5 &
Jiayin Wang1,2
BMC Bioinformatics volume 21, Article number: 82 (2020)
Genomic micro-satellites are the genomic regions that consist of short and repetitive DNA motifs. Estimating the length distribution and state of a micro-satellite region is an important computational step in cancer sequencing pipelines and is suggested to facilitate downstream analysis and clinical decision support. Although several state-of-the-art approaches have been proposed to identify micro-satellite instability (MSI) events, they are limited in dealing with regions longer than one read length. Moreover, to the best of our knowledge, all of these approaches assume that the tumor purity of the sequenced samples is sufficiently high, which is inconsistent with reality; as a result, the inferred length distributions dilute the data signal and introduce false positive errors.
In this article, we propose a computational approach, named ELMSI, which detects MSI events from next-generation sequencing data. ELMSI can estimate the specific length distributions and states of micro-satellite regions from a mixed tumor sample paired with a control one. It first estimates the purity of the tumor sample based on the read counts of filtered SNV loci. The algorithm then identifies the length distributions and states of short micro-satellites by adding a Maximum Likelihood Estimation (MLE) step to the existing algorithm. After that, ELMSI infers the length distributions of long micro-satellites by combining a simplified Expectation Maximization (EM) algorithm with the central limit theorem, and then uses statistical tests to output the states of these micro-satellites. Based on our experimental results, ELMSI was able to handle micro-satellites with lengths ranging from shorter than one read length to 10 kbp.
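The purity-aware idea behind this pipeline can be sketched as follows. This is a simplified illustration rather than ELMSI's actual MLE/EM estimator: it assumes the purity alpha and the matched normal length distribution are known, and simply inverts the resulting mixture at one locus.

```python
import numpy as np

# Simplified illustration (not ELMSI's actual estimator): if a tumor sample with
# purity alpha is a mixture of tumor and normal cells, the observed length
# distribution at a micro-satellite locus is
#     P_obs = alpha * P_tumor + (1 - alpha) * P_normal,
# so given alpha and the matched normal distribution, a naive deconvolution is:
def deconvolve_lengths(p_obs, p_normal, alpha):
    p_tumor = (np.asarray(p_obs) - (1 - alpha) * np.asarray(p_normal)) / alpha
    p_tumor = np.clip(p_tumor, 0, None)          # remove small negative artifacts
    return p_tumor / p_tumor.sum()               # renormalize to a distribution

# hypothetical repeat-length histograms (e.g. lengths of 10..14 repeat units)
p_normal = np.array([0.05, 0.20, 0.50, 0.20, 0.05])
p_tumor_true = np.array([0.40, 0.30, 0.20, 0.08, 0.02])   # shifted toward shorter alleles
alpha = 0.4                                                # 40% tumor purity
p_obs = alpha * p_tumor_true + (1 - alpha) * p_normal

print(deconvolve_lengths(p_obs, p_normal, alpha))          # ~recovers p_tumor_true
```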
To verify the reliability of our algorithm, we first compared its ability to classify shorter micro-satellites from mixed samples against the existing algorithm MSIsensor. We also varied the number of micro-satellite regions, the read length, and the sequencing coverage to separately test the performance of ELMSI on estimating the longer ones from mixed samples. ELMSI performed well on mixed samples and is therefore of great value for improving the recognition of micro-satellite regions and supporting clinical decisions. The source code has been uploaded and is maintained at https://github.com/YixuanWang1120/ELMSI for academic use only.
Micro-satellites are repetitive DNA sequences that consist of specific oligonucleotide units [1, 2], exposing intrinsic polymorphisms in terms of length, which are often described as length distributions [3]. A distinct event known as micro-satellite instability (MSI) refers to a pattern of hypermutation caused by defects in the mismatch repair system [4], characterized by widespread length polymorphisms of micro-satellite repeats, as well as by an elevated frequency of single-nucleotide variants (SNVs) [3, 5]. MSI happens if the length distributions of the same micro-satellite region differ significantly between different tissue samples, such as a tumor sample and a normal sample; otherwise a micro-satellite stability (MSS) event exists. Up to 15% – 20% of sporadic cases of colorectal cancer exhibit MSI events [6, 7], while 12% of advanced prostate cancer cases have MSI events [8]. Some recent studies have surveyed the MSI landscape across a range of cancer types [9–11], implying that these regions have important clinical implications for cancer diagnostics and patient prognosis [12, 13]. For example, MSI-positive colorectal tumors respond well to PD-1 blockade [14]. Owing to this clinical utility, the detection of MSI events has become increasingly important.
Owing to the increasing prevalence of next generation sequencing (NGS) technologies, several computational tools for MSI diagnosis utilizing NGS data have been developed, replacing the traditional fluorescent multiplexed PCR-based methods, which are time-consuming and costly. These algorithms include MSIsensor [15], mSINGS [16], MANTIS [17], MSIseq [18], MSIpred [19], and MIRMMR [20]. To the best of our knowledge, these algorithms may be roughly divided into two categories: read-count distribution based ones and mutation burden based ones. MSIsensor is among the first algorithms for analyzing cancer sequencing data, calculating the length distributions of each micro-satellite in paired tumor-normal sequence data and implementing a statistical test to identify significantly altered events between these paired distributions. mSINGS works on target-gene capture sequencing data, comparing the numbers of signals that reflect repetitive micro-satellite tracts of differing lengths between tumor and control samples. mSINGS is computationally complex, and is thus only suitable for small panels. MANTIS analyzes MSI of a normal-tumor sample pair as an aggregate of loci instead of analyzing the differences of individual loci. By pooling the scores of all loci and focusing on the average score, the impact that sequencing errors or poorly performing loci may have on the results can be reduced. Meanwhile, MSIseq, MIRMMR and MSIpred utilize machine learning algorithms to predict MSI status. MSIseq compares the length distributions using four machine learning frameworks: logistic regression, decision tree, random forest and the naive Bayes approach. It is a classifier that only reports MSI-H vs. non-MSI-H, without a score or percentage, or information about the instability of particular loci. MIRMMR builds a logistic regression classifier that considers both the methylation and mutation information of the genes belonging to the MMR system. MSIpred adopts a support vector machine (SVM) to compute 22 features characterizing the tumor mutational load from mutation data in mutation annotation format (MAF) generated from paired tumor-normal exome sequencing data, and then uses these features to predict tumor MSI status with the SVM. The classifier was trained on the MAF data of 1074 samples belonging to four cancer types. However, none of these approaches is able to overcome the one-read-length limitation: since a detector can no longer span a micro-satellite with partially mapped reads, these algorithms cannot locally anchor the micro-satellite using paired-end reads. To this end, ELMSI has been proposed to break through this one-read-length limitation.
Of note, all of these existing algorithms generally assume that the tumor purity of the input sequenced samples is sufficiently high, where purity refers to the proportion of tumor cells in the mixed sample, which varies widely among samples and cancer types. In practice, however, the sample purity is not as high as expected. Due to the growth pattern of tumor tissues and the clinical sampling method, the sequenced tumor sample is actually a mixture that contains non-cancerous cells [21]. The presence of non-cancerous cells can influence the judgment of the micro-satellite state: if the tumor purity is ignored, the inferred micro-satellite length distributions and states may be inaccurate. For a micro-satellite region from a mixed tumor sample, different tissues may carry different length distributions, while the observed "distribution" from the sequencing data is actually a convolution of the distribution in tumor cells with that in normal cells. If we assume that the input sample is sufficiently pure, i.e., that only one distribution exists in the mathematical model, then we cannot fit the actual two distributions at all (see Fig. 1). Meanwhile, even if we use software to estimate the tumor purity p in advance, we cannot directly solve the deconvolution problem: to recognize the actual length distribution of the tumor micro-satellite from a given mixed sample, we must calculate the parameter values of the distributions accurately. Furthermore, since the existing algorithms mainly use statistical tests to detect MSI, a distribution inferred from mixed data containing normal-tissue micro-satellite lengths dilutes the data signal and may mislead the statistical tests into reporting an MSS event, ultimately introducing a type-I error. Existing tumor purity estimation algorithms, such as EMpurity [22], can accurately identify the proportions of normal cells and tumor cells in sequenced samples, which helps us further correct the length distributions according to the estimated purity.
Micro-satellite length distributions in the mixed sample
Motivated by this, in this article we propose a novel algorithm termed ELMSI that offers a new approach to identify the state and length distributions of a micro-satellite from a given mixed sample. First, we established a more realistic hypothesis that the sequenced sample is a normal-tumor mixture, where the micro-satellite lengths follow two different distributions. Secondly, we used purity estimation algorithms to accelerate the deconvolution process for calculating the respective distribution parameters. Finally, our algorithm is suitable for both short and long MSI detection. To test the performance of ELMSI, a series of simulation experiments were conducted. Because mSINGS is only used for small panels and MSIseq targets sequencing of smaller regions of interest, whereas ELMSI focuses on longer micro-satellites and larger panels, these algorithms were not selected for comparison. The experimental results herein were compared with MSIsensor. The results demonstrate that ELMSI can accurately identify the state of a micro-satellite and infer its length distributions from a given mixed normal-tumor sample. Our algorithm outperformed MSIsensor on multiple indicators, maintaining satisfactory accuracy even when coverage decreases.
Computational pipeline
Suppose that we are given a series of mapped files in Binary Alignment/Map (BAM) format generated from a normal-tumor mixed sample; the outputs of the proposed algorithm include both the length distributions and the state of each micro-satellite. The proposed approach, ELMSI, consists of three components. The first component estimates the tumor purity of the given sequenced sample by calculating the read counts of the filtered SNVs. Based on the estimated purity, the second component identifies the length distributions and the states of the shorter micro-satellites from the mixed sample by adding a Maximum Likelihood Estimation (MLE) step to the existing algorithm MSIsensor [15]. The third component infers the length distributions of the longer micro-satellites by combining a simplified Expectation Maximization (EM) algorithm with the central limit theorem, and then uses statistical tests to output their states. Here, a model of micro-satellite evolution that has become well recognized in recent years holds that the distribution of micro-satellite length is a balance between length mutations and point mutations [23, 24]. Length mutations, whose rate increases with increasing repeat counts, drive loci to attain arbitrarily high repeat counts, whereas point mutations break long repeat arrays into smaller units. Therefore, we make the same assumption [25] that the length distribution approximates a normal distribution. We make two assumptions in the established computational model:
The input sequenced sample is not pure, containing micro-satellites of two types (normal cells and tumor cells) represented by two kinds of length distributions.
The length distribution of a micro-satellite approximates a normal distribution.
Before building the model, we need to process the input data. The initial input data are Binary Alignment/Map (BAM) files of whole-exome sequencing (WES) data mapped to the reference genome by bwa [26]. Then, we define the following important terms on the aligned reads.
MS-pair: Two paired reads, one of which is perfectly mapped while the other spans a breakpoint.
SB-read: A read which is across the breakpoint in an MS-pair.
PSset: A collection of the binary group consisting of initial positions and sequences of the SB-reads, which is represented by (POS, SEQ).
Sk-mer: The sequence consisting of the first k bases.
We first find all the micro-satellite candidate regions by scanning the given reference genome, recording micro-satellites with a maximum repeat unit length of 6bp and saving the location and the corresponding sequence of each site. Then, we use a clustering algorithm to find the remaining micro-satellite candidate regions that may have been missed by the initial scanning. The clustering is based on the distances among the initial mapping positions of the reads spanning each breakpoint, and the number of clusters represents the number of micro-satellite regions. We set Lmax as the longest length of micro-satellites. Since the lengths of micro-satellites are generally less than 50kbps [27], Lmax is set to 50kbps. ELMSI estimates the number of micro-satellites using a clustering algorithm according to the distances of the initial positions of the SB-reads. The clustering strategy is as follows:
According to the mapping results from the PSset, two SB-reads will belong to the same cluster only if the distance between their initial positions is less than Lmax. Each cluster then represents a candidate micro-satellite region, providing the number of micro-satellites.
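A minimal sketch of this clustering step in Python, assuming the SB-read start positions have already been collected into a list; the function name and the greedy sorted-sweep (a single-linkage style grouping) are illustrative, not the authors' implementation:

def cluster_sb_reads(positions, l_max=50_000):
    """Group SB-read start positions into candidate micro-satellite regions.

    Two SB-reads fall in the same cluster when the gap between their
    initial mapping positions is below l_max (50 kbps by default, the
    longest micro-satellite length assumed in the text)."""
    clusters = []
    for pos in sorted(positions):
        if clusters and pos - clusters[-1][-1] < l_max:
            clusters[-1].append(pos)   # extend the current candidate region
        else:
            clusters.append([pos])     # open a new candidate region
    return clusters                    # len(clusters) = number of micro-satellites

# Example: three SB-reads close together and one far away -> two regions
print(len(cluster_sb_reads([1_200, 1_950, 2_400, 120_000])))  # 2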
Once the number of micro-satellites is determined, for each candidate micro-satellite region, ELMSI uses a k-mer based algorithm to split each read. As the repeat units that compose micro-satellites are usually less than 6 bps, we set k=6 as a default. Starting from the first base of the read sequence, the algorithm detects whether two k-mer sequences are identical replicates. This sequence is a candidate repeat unit, and the first base of the sequence is a candidate breakpoint of the micro-satellite. The same operation is conducted for all reads in the micro-satellite region and other candidate areas, taking the mode of the repeat units and breakpoints as the final results.
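The adjacent-k-mer check can be sketched as follows; the function name, the single fixed repeat-unit size k, and the toy read are hypothetical simplifications (the actual pipeline scans units of up to 6 bp and takes the mode over all reads in the region):

def find_repeat_breakpoint(read, k=6):
    """Scan a read for two adjacent identical k-mers.

    Returns (breakpoint_offset, repeat_unit) for the first position where
    read[i:i+k] == read[i+k:i+2k], or None when no candidate is found.
    The offset is relative to the read start; the caller maps it back to
    genome coordinates and takes the mode over all reads in the region."""
    for i in range(len(read) - 2 * k + 1):
        if read[i:i + k] == read[i + k:i + 2 * k]:
            return i, read[i:i + k]
    return None

print(find_repeat_breakpoint("ACGTTGCACACACACACACAGGT", k=2))  # (6, 'CA')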
Estimating the tumor purity of the sample
First, we introduce a tumor purity estimation algorithm. Due to the limitation of current sequencing technologies, the purity problem is almost inevitable during the actual sampling process, so many algorithms are proposed to solve this problem. Among them, EMpurity [22] has established a probability model to accurately estimate the tumor cell proportion in the mixed sample. The observed indicators are the numbers of reads supporting the reference allele and mutation at each site, respectively, while the unknown hidden states include the tumor purity and the joint genotype. EMpurity designs a probabilistic model to describe the emission probabilities from the hidden states to the observed indicators and the transition probabilities among the hidden states. This model is solved by an Expectation Maximization algorithm.
EMpurity uses the pair-sampled DNA sequencing data as the model input data, and only considers the heterozygous sites with somatic mutations. For one sample in the pair, the set of possible genotype values at each locus is G={AA,AB,BB}. Let N,T and TM represent the normal sample, virtual pure tumor sample and mixed tumor sample, respectively. Here, the virtual pure tumor sample T is actually part of TM. Then, for the paired samples, the set of possible combined genotype values is a Cartesian product, which is G×G={(GN,GT):GN,GT∈G}. For any site i, let \( n_{N\_ref}^{i} \) and \( n_{T_{M}\_ref}^{i} \) denote the number of reads supporting the reference allele in the normal sample and mixed tumor sample, respectively, each of which follows a binomial distribution with parameters μN and \( \mu _{T_{M}} \). There are only 9 possible joint genotypes, which follow a multinomial distribution with parameter μG. Considering the bias on read depth, we assume that tumor purity follows a normal distribution across all of the given sites, whose parameters are μp and λp. Let \( R^{i}=\left \{ n_{x\_ref}^{i}, n_{x\_\overline {ref}}^{i} \right \} \) and \( D^{i}=\left \{ n_{x\_d}^{i} \right \}, x \in \{ N, T, T_{M} \} \). Let \( n_{x\_\overline {ref}} \) be the number of reads supporting the mutation in x. Let \( n_{x\_d}^{i} \) represent the read depth in x. For x∈{N,TM}, these values are observed. The estimate of tumor purity is then \( \hat {p} = n_{T\_d}^{i} / n_{T_{M}\_d}^{i} \). Let \( \mathcal {G} \) denote the random variable representing the joint genotype \( \left \{G_{(G_{N}, G_{T})}^{i}\right \} \). Let 𝜗 represent the set of unknown parameters, which is \( \vartheta =\left \{ \mu _{N}, \mu _{T}, \mu _{G}, \mu _{p}, \lambda _{p}^{-1} \right \} \). Suppose that \( \mu _{G_{(G_{N}, G_{T})}} \) satisfies \( 0 \leq \mu _{G_{(G_{N}, G_{T})}} \leq 1 \) and \( \sum \nolimits _{G_{N} \in G} \sum \nolimits _{G_{T} \in G} \mu _{G_{(G_{N}, G_{T})}}=~1 \).
This model is solved by an Expectation Maximization algorithm, where the established likelihood function is:
$$ \begin{aligned} &L(R, D, \mathcal{G}; \vartheta) =\prod_{i=1}^{I}\prod_{G_{N} \in G}\prod_{G_{T} \in G}\\ &\left[\mu_{G_{(G_{N}, G_{T})}} Bin\left(n_{x\_ref}^{i}|n_{x\_d}^{i}, \mu_{x_{(G_{x})}}\right) N\left(p_{(G_{N}, G_{T})}^{i}|\mu_{p_{(G_{N},G_{T})}^{i}}, \lambda_{p_{(G_{N},G_{T})}^{i}}^{-1}\right)\right]^{G_{(G_{N}, G_{T})}^{i}}\\ &x \in \{N,T\} \end{aligned} $$
The specific EM iterative process can be referred to EMpurity [22].
Estimating the length distribution parameters of the short micro-satellite
For the shorter (shorter than one read length) micro-satellites, existing algorithms such as MSIsensor [15] can accurately calculate the specific length data and estimate their state. However, when the sequenced sample is a normal-tumor mixture, the calculated micro-satellite lengths actually contain both the normal and the tumor micro-satellite lengths, and a state estimated directly from them is inaccurate. Thus, given a mixed sample with known proportions (normal cells account for (1−p), tumor cells for p) and a micro-satellite region belonging to this sample, MSIsensor can detect this micro-satellite region, obtaining a set of lengths L={l1,l2,...,lN} as a result. L is actually a length data set sampled randomly from two populations that are independent of each other and follow two different normal distributions. According to the law of large numbers, each length in L has probability (1−p) of coming from normal cells and probability p of coming from tumor cells.
Given a micro-satellite region, we assume that its length follows a normal distribution \( N_{1}\left (\mu _{1}, \sigma _{1}^{2} \right) \) when it belongs to normal cells, and a normal distribution \( N_{2}(\mu _{2}, \sigma _{2}^{2}) \) when it belongs to tumor cells. Therefore, the length of this micro-satellite in the mixed sample follows a probability distribution with density function f=(1−p)f1+pf2, where f1 and f2 are the density functions of N1 and N2 respectively, and L={l1,l2,...,lN} is the set of lengths obtained independently from this mixed micro-satellite sample. We can obtain the values of μ1,σ1 by separately detecting normal samples (such as blood samples). Under these known conditions, we can use Maximum Likelihood Estimation (MLE) to estimate the values of μ2,σ2. From the above, the likelihood function is the joint probability density function of the lengths:
$$ \begin{aligned} L(\mu_{2},\sigma_{2})&=\prod_{i=1}^{N} f(x_{i}, \mu_{2}, \sigma_{2})\\ &=\prod_{i=1}^{N}\left[(1 - p) \frac{1}{\sqrt{2\pi}\sigma_{1}} exp \left(-\frac{(x_{i} - \mu_{1})^{2}}{2\sigma_{1}^{2}} \right)\right.\\ &\quad +\left. p\frac{1}{\sqrt{2\pi}\sigma_{2}}exp \left(-\frac{(x_{i} - \mu_{2})^{2}} {2\sigma_{2}^{2}} \right)\right] \end{aligned} $$
The likelihood function actually reflects the probability of generating these length values in L. The parameter values in the likelihood function which can maximize this probability are the estimated values we need to calculate:
$$ \left\{ \begin{aligned} \frac{\partial L(\mu_{2}, \sigma_{2})}{\partial \mu_{2}} = 0 \\ \frac{\partial L(\mu_{2}, \sigma_{2})}{\partial \sigma_{2}} = 0 \end{aligned} \right. $$
By this, the estimated values \( \hat {\mu _{2}}, \hat {\sigma _{2}} \) can be obtained. Thus, the length distributions of shorter micro-satellites from a given mixed sample can be recognized, and then we perform a z-test to assess the micro-satellite state.
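A minimal numerical sketch of this estimation in Python, assuming p, μ1 and σ1 are known and maximizing the mixture likelihood with a general-purpose optimizer; the function names, the Nelder-Mead choice and the z-test helper are illustrative rather than the authors' implementation:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def estimate_tumor_component(lengths, p, mu1, sigma1):
    """Fit (mu2, sigma2) of the tumor length distribution by maximising
    the mixture likelihood (1-p)*N(mu1, sigma1^2) + p*N(mu2, sigma2^2),
    with p, mu1, sigma1 assumed known from the normal sample."""
    lengths = np.asarray(lengths, dtype=float)

    def neg_log_lik(theta):
        mu2, log_sigma2 = theta
        sigma2 = np.exp(log_sigma2)            # keep sigma2 positive
        mix = (1 - p) * norm.pdf(lengths, mu1, sigma1) \
              + p * norm.pdf(lengths, mu2, sigma2)
        return -np.sum(np.log(mix + 1e-300))

    start = np.array([lengths.mean(), np.log(lengths.std() + 1e-6)])
    res = minimize(neg_log_lik, start, method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])

def is_msi(mu1, sigma1, n1, mu2, sigma2, n2, alpha=0.05):
    """Two-sample z-test on the means; report MSI when the p-value < alpha."""
    z = (mu2 - mu1) / np.sqrt(sigma1**2 / n1 + sigma2**2 / n2)
    return 2 * norm.sf(abs(z)) < alpha

# Toy usage: 70% tumor purity, normal lengths ~ N(20, 2^2), tumor ~ N(26, 3^2)
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(20, 2, 300), rng.normal(26, 3, 700)])
print(estimate_tumor_component(data, p=0.7, mu1=20, sigma1=2))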
Estimating the length distribution parameters of the long micro-satellite
On the other hand, for the longer micro-satellites, reads cannot locate them, so we cannot pinpoint their specific lengths. Thus, we use the length distribution to characterize them. Given a mixed sample of normal-tumor cells, we set the proportion of tumor cells as p to facilitate the computation. In this paper, we only consider the following two scenarios (See Fig. 2).
The patterns of sequencing reads from a micro-satellite region sampled from a mixed sample. a short MS region. b long MS region
Similarly, we know that the micro-satellite lengths in the (1−p) normal cells follow a normal distribution \( N_{1}\left (\mu _{1}, \sigma _{1}^{2}\right) \), while the micro-satellite lengths in the p pure tumor cells follow another normal distribution \( N_{2}\left (\mu _{2}, \sigma _{2}^{2}\right) \). The distribution parameters of N1 can be estimated by detecting normal tissue cells alone. According to the central limit theorem, the sample mean is approximately equal to the population mean: whatever the population distribution (mean μ, variance σ2), once the number of samplings is sufficient (>30), the sample means (sample size n) cluster around the population mean and are approximately normally distributed (mean μ, variance σ2/n). Because the specific lengths of longer micro-satellites cannot be assessed with existing technology, we use the distribution of their mean length to reflect the overall length distribution. Our approach supposes that the length of a micro-satellite is normally distributed. Therefore, ELMSI considers a continuous estimation strategy, whose basic goal is to estimate the micro-satellite average length based on the coverage of the specified area containing this micro-satellite, and then to use the updated average length to re-estimate the coverage of this area in turn. This loop is repeated until there are no longer significant changes in the micro-satellite average length. Accordingly, we can use at least 30 groups of sampled average lengths to assess the distribution of the overall long micro-satellite. The length of the hybrid longer micro-satellites belonging to this mixed sample is subject to a normal distribution with \( \mu = (1-p) \mu _{1} + p \mu _{2}, \sigma ^{2} = (1-p) \sigma _{1}^{2} + p \sigma _{2}^{2} \). According to the central limit theorem, the sampled average length distribution parameter μ can be obtained to reflect the overall length distribution. However, under the technical restrictions, we can only use the estimated σ2 to represent the overall variance because the sample size cannot be counted. By substituting these values into the above formula, the length distribution parameters μ2 and σ2 of micro-satellites in the pure tumor sample can be calculated. The specific EM process is as follows:
Let WIN−bk be the window on the reference, with the breakpoint of a micro-satellite as the midpoint of it. The default length of WIN−bk is set to be 5000bps. Then, the read pairs can be divided into the following categories. Let C-pair be the paired-reads perfectly mapped to WIN−bk,T-pair be the paired-reads perfectly mapped to the micro-satellite region, O-pair be the paired-reads with one read mapped to WIN−bk and the other mapped to the micro-satellite region, SO-pair be the paired-reads with one read mapped to the micro-satellite region and the other spanning across a breakpoint, S-pair be the paired-reads with one read mapped to WIN−bk while the other spans across a breakpoint, and S-read be the reads which span across the breakpoints in any SO-pair or S-pair. Figure 3 is a graphical representation of the relevant definitions.
The changes in coverage when a micro-satellite event occurs, and the definitions of different read pairs. C-pairs: The paired-reads in which mapped to WIN-bk; T-pairs: The paired-reads in which both reads mapped from micro-satellite areas; O-pairs: The paired-reads in which one read perfectly matched to WIN-bk and one read mapped from micro-satellites areas; SO-pairs: The paired-reads in which one read is mapped from an micro-satellite area and one read spans across the breakpoints; S-pairs: The paired-reads in which one read is perfectly matched to WIN-bk and one read spans across the breakpoint. S-reads: The reads which span across the breakpoints in SO-pairs and S-pairs
Since the breakpoints and the repeat units of these micro-satellites can be identified by the aforementioned data preprocessing, we set a WIN−bk with the breakpoint as the midpoint. The initial length of WIN−bk is set to 5000 bps. According to the aligned reads corresponding to WIN−bk, we can obtain the coverage of the reference in WIN−bk using the following formulas:
$$\begin{array}{*{20}l} {SUM}_{{bp}} &= {NUM}_{{read}}\times {L}_{{read}} \end{array} $$
$$\begin{array}{*{20}l} {C} &= \frac{SUM_{{bp}}}{L} \end{array} $$
where SUMbp represents the total number of bases in WIN−bk, NUMread represents the total number of reads in the target area, Lread represents the read length, C represents the coverage of the target area, and L represents the length of the target area. When the WIN−bk length is fixed, SUMbp is a constant. Thus, the lengths of micro-satellites do not affect SUMbp, but do influence the coverage C. We can therefore calculate the normal distribution parameters of the micro-satellite lengths through the following nine steps (a minimal sketch of the core iteration in Steps 2-4 is given after the procedure).
Variable initialization:
Let m be the total number of micro-satellites, i be the index of the ith micro-satellite, S be the number of samplings, WIN−bk be the sampled window with the micro-satellite's breakpoint as the midpoint, LWin be the length of WIN−bk, Laln be the total number of bases belonging to the micro-satellite region in all S-reads, and Lset be the set of micro-satellite lengths.
Step 1-1: Initialize the number of micro-satellites, the repeat units, and the breakpoints by the data preprocessing;
Step 1-2: Cluster the paired-reads within WIN−bk into 5 categories: C-pairs, T-pairs, O-pairs, S-pairs and SO-pairs;
Step 1-3: Calculate the number of paired-reads in these categories and let NUMC, NUMT, NUMO, NUMS, NUMSO represent the numbers of C-pairs, T-pairs, O-pairs, S-pairs and SO-pairs, respectively;
Step 1-4: Set m as the number of micro-satellites, i=1, S=1, LWin=5000bps, L′=0, Lset=∅.
According to the paired-reads clustering results, calculate the average coverage of WIN−bk. The formula is \( C = \frac {SUM_{{bp}}}{L}\), where SUMbp=2×(NUMC+NUMT+NUMO+NUMS+NUMSO)×Lread+Laln. And L=L′+LWin.
Suppose that the coverage follows a uniform distribution; then the coverage in Step 2 equals the coverage in the micro-satellite area. In this step, we use the formula \( L^{\prime \prime } = \frac {SUM_{{bp}}}{C}\) to update the micro-satellite length, where SUMbp=(2×NUMT+NUMO+NUMSO)×Lread+Laln.
If |L−L′′|>δ, where \( \delta = \frac {L^{\prime }}{100} + 1 \), let L′=L′′, and repeat Step 2.
The obtained micro-satellite length is incorporated into a set, \( L_{{set}} = L_{{set}} \bigcup \{L^{\prime \prime }\} \).
In order to assess the normal distribution parameters of a given micro-satellite sequence, we sample 30 times (at least) by changing the size of LWin. Set S=S+1; if S<30, let LWin=LWin+1000 and proceed to Step 1.
The statistical data regarding micro-satellite lengths obtained from these 30 groups of sampling experiments are tested for normality using a normality test and the Shapiro-Wilk test. Output the normal distribution parameters of the micro-satellite, N(μ,σ2), where μ and σ2 are the mean and variance of the lengths.
If i<m, set i=i+1, go to Step 1.
The independent z-test is used to compare the micro-satellite length distributions between tumor cells and normal cells. If the p-value < 0.05, the identified micro-satellite is an MSI event; otherwise it is an MSS event.
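A minimal sketch of the core iteration in Steps 2-4, with hypothetical parameter names for the read-pair counts; the convergence test is read here as comparing consecutive length estimates L′ and L′′:

def estimate_ms_length(num_c, num_t, num_o, num_s, num_so,
                       l_aln, l_read, l_win=5000, max_iter=100):
    """Alternate between window coverage (Step 2) and micro-satellite
    length (Step 3) until successive length estimates agree (Step 4)."""
    l_prev = 0.0
    sum_bp_win = 2 * (num_c + num_t + num_o + num_s + num_so) * l_read + l_aln
    sum_bp_ms = (2 * num_t + num_o + num_so) * l_read + l_aln
    l_new = l_prev
    for _ in range(max_iter):
        coverage = sum_bp_win / (l_prev + l_win)      # Step 2: C = SUM_bp / (L' + L_Win)
        l_new = sum_bp_ms / coverage                  # Step 3: L'' = SUM_bp / C
        if abs(l_new - l_prev) <= l_prev / 100 + 1:   # Step 4: delta = L'/100 + 1
            break
        l_prev = l_new
    return l_new

# Illustrative counts only; the estimate would then be appended to L_set (Step 5)
print(estimate_ms_length(num_c=400, num_t=60, num_o=40, num_s=20, num_so=10,
                         l_aln=3000, l_read=100))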
To test the performance of ELMSI, we first tested its ability to classify micro-satellite states, and compared two major indicators - precision and recall - with those yielded by MSIsensor [15]. We then conducted experiments on a series of simulated datasets with different configurations, altering the number of micro-satellites, the coverage, and the read length. In these simulation experiments, the following key counts were calculated to evaluate ELMSI: true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN). In addition, five popular indicators were further calculated: accuracy, recall, precision, MCC and Gain (defined below; a small helper computing them follows the definitions).
Accuracy=(TP+TN)/(TP+TN+FN+FP);
Recall=TP/(TP+FN);
Precision=TP/(TP+FP);
\( MCC = (TP \times TN-FP \times FN) / \sqrt {(TP+FP)(TP+FN)(TN+FP)(TN+FN)} \);
Gain=(TP−FP)/(TP+FN).
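A small Python helper computing these five indicators from the confusion-matrix counts (the example numbers are illustrative only, not results from the paper):

def classification_metrics(tp, fp, tn, fn):
    """Compute the five indicators used in the evaluation."""
    accuracy = (tp + tn) / (tp + tn + fn + fp)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    mcc = (tp * tn - fp * fn) / (
        ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5)
    gain = (tp - fp) / (tp + fn)
    return accuracy, recall, precision, mcc, gain

print(classification_metrics(tp=27, fp=2, tn=25, fn=3))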
Simulation dataset generation
To generate the simulation datasets, we first randomly selected a region of 10Mbps on human chromosome 19. To design a complex situation, we randomly chose the micro-satellite length, repeat unit, and breakpoint. As aforementioned, the micro-satellite length in a given individual is normally distributed. We divided the normal distribution N(μ,σ2) into seven parts, namely μ−3σ, μ−2σ, μ−σ, μ, μ+σ, μ+2σ, μ+3σ, and the number of micro-satellites planted into the reference for each part was obtained by multiplying the coverage by the corresponding probability of 1%, 6%, 24%, 38%, 24%, 6%, and 1%, respectively. Once each micro-satellite was planted, we merged these seven read files. All of the simulated reads were then mapped to the reference sequence. The alignment file was then provided to the variant calling tools.
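A sketch of this planting scheme, assuming lengths are placed exactly at the seven points μ±kσ with counts proportional to coverage; the function name and example values are hypothetical:

import numpy as np

def simulate_ms_lengths(mu, sigma, coverage):
    """Plant micro-satellite lengths at mu-3*sigma ... mu+3*sigma, with
    per-point counts given by coverage times the probabilities used in
    the text (1%, 6%, 24%, 38%, 24%, 6%, 1%)."""
    offsets = np.array([-3, -2, -1, 0, 1, 2, 3])
    probs = np.array([0.01, 0.06, 0.24, 0.38, 0.24, 0.06, 0.01])
    counts = np.round(coverage * probs).astype(int)
    return np.repeat(mu + offsets * sigma, counts)

lengths = simulate_ms_lengths(mu=2000, sigma=150, coverage=100)
print(len(lengths), lengths.min(), lengths.max())   # ~100 planted lengths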
Micro-satellites state classification and comparison experiment
In this part, we first tested the accuracy of ELMSI in classifying the micro-satellite state from the mixed samples. The z-test was used to determine whether a micro-satellite is an MSI event.
For the shorter micro-satellites, we compared our algorithm with the existing approach MSIsensor. Among the proposed micro-satellite state classification algorithms, mSINGS is suitable for small panels and has been reported to be used only for limited exome data, and MSIseq only targets sequencing of smaller regions; comparison with these algorithms is therefore not meaningful. MSIsensor can accurately identify the micro-satellite state and lengths when they are shorter than one read length, so we chose MSIsensor for the comparison experiment. The number of micro-satellites was set to 30, the coverage to 100 ×, and the read length to 200bps. The tumor purity was set to 0.9, 0.7, 0.5, 0.3, and 0.1, respectively. Micro-satellite states were subsequently identified by the two classification tools MSIsensor and ELMSI. The results are shown in Table 1.
Table 1 Comparison results of ELMSI and MSIsensor
As can be seen, ELMSI has better performance in hybrid micro-satellite state classification. When the tumor purity of the input sequenced sample is below a certain ratio, the MSS signal in the normal sample dilutes the MSI signal, causing MSIsensor to report an MSS event. Thus, when the input tumor sample is a mixture with high normal-cell contamination, MSIsensor cannot distinguish MSI accurately. However, ELMSI can still perform the classification even when the tumor purity is as low as 10%.
On the other hand, for the longer micro-satellites, the paired-reads used to locate the candidate micro-satellite region are invalid, and none of the existing approaches is able to overcome the one-read-length limitation. Thus, we proposed ELMSI, which can identify the longer hybrid micro-satellites and classify their states. Next, we tested its classification accuracy. The number of micro-satellites was set to 30, the coverage to 100 ×, and the read length to 200bps. The tumor purity was set to 0.9, 0.7, 0.5, 0.3, and 0.1, respectively. The detailed results are shown in Table 2.
Table 2 Performance of ELMSI for longer micro-satellites classification
As shown in Table 2, a decreasing tumor ratio can influence the accuracy of ELMSI. However, even with a purity as low as 10%, the results still indicate that ELMSI can provide a reliable MSI classification.
Estimating the distribution of micro-satellite lengths
To separately verify the validity of ELMSI in estimating the length distributions of the longer micro-satellites, we ignored the influence of tumor purity and tested the performance of ELMSI by changing the micro-satellite number, the coverage, and the read length. A correct call is defined as follows: a micro-satellite is identified with a correct repeat unit, the detected breakpoint lies within (b−10bps, b+10bps), where b is the planted breakpoint, and the actual micro-satellite length lies within (μ−3σ, μ+3σ), where μ and σ are the estimated parameter values.
We first changed the number of micro-satellites from 20 to 100. In order to better reflect the influence of the micro-satellite number on ELMSI, we also varied the coverage from 30 ×, 60 ×, 100 ×, to 120 ×. The read length was set to 100bp in this group of experiments. For each micro-satellite number, we repeated the test five times using the same setting and report the averaged results, which are summarized in Table 3.
Table 3 Key indicators of ELMSI for different numbers of micro-satellites
An increasing micro-satellite number can influence the robustness of ELMSI. In practice, since micro-satellites are very rare, few micro-satellites will exist in a given 10Mbps chromosomal region. Even so, to stress-test ELMSI, we intentionally increased this density. Based on Table 3, we can see that ELMSI identifies micro-satellites and excludes interference from non-micro-satellite regions accurately. The results also show that ELMSI offers high reliability.
Sequencing coverage affects somatic mutation calling, which in turn would presumably affect the performance of ELMSI. To assess the influence of coverage on ELMSI, we further varied the coverage from 10 × to 100 ×. As shown in Table 4, changes in coverage directly affect the key indicators. In this group of experiments, we set the number of micro-satellites to 20, 40, or 60, and the read length to 100 bps.
Table 4 Comparisons of the performance of ELMSI in different coverages
The lower the coverage, the greater the difficulty faced by this computational approach. Consistent with this, Table 4 indicates that the performance of ELMSI increases as coverage increases, with a maximal recall rate of more than 80%. Thus, the higher the coverage, the higher the accuracy of ELMSI for inferring micro-satellites.
ELMSI also remains valid when the read length is altered. The number of micro-satellites was set to 20 or 50, the coverage to 30 ×, 60 ×, 100 ×, or 120 ×, and the read length to 100bps, 150bps, 200bps, 250bps, or 300bps. The results are shown in Table 5.
Table 5 Key indicators of ELMSI corresponding to different read lengths
The main weakness of this method is the huge amount of splicing required. The longer the read length, the smaller the splicing workload, and the fewer errors are introduced by splicing. We thus predict that ELMSI performance will improve as read length increases. Table 5 validates this hypothesis and shows that the longer the read length, the more accurate the estimation results.
In this article, we focus on the computational problem of inferring the length distributions and states of all kinds of micro-satellites in tumors with normal cell contamination. Existing approaches, such as MSIsensor, mSINGS, MANTIS and MSIseq, perform well in handling genomic micro-satellite events whose lengths are shorter than one read length, but often suffer a significant loss of accuracy when the micro-satellite becomes longer. Meanwhile, all of these MSI detection algorithms make the general assumption, before establishing a mathematical model, that the input sample is a pure tumor sample, which is difficult to achieve with existing sequencing technology. We have therefore proposed an algorithm to break these limitations, handling micro-satellites with a wide range of lengths from a mixed normal-tumor sample based on NGS data. Our proposed algorithm, termed ELMSI, computes directly on the aligned reads. ELMSI can clearly recognize the length distributions and states of micro-satellites with a wide range of lengths from mixed sequenced samples. For short micro-satellites, it identifies the lengths accurately, while for long micro-satellites, it estimates the normal distribution parameters. ELMSI is among the first approaches to recognize and identify long micro-satellites. However, due to the nature of sequencing data and the limitation of computing capacity, the estimated mean μ is relatively accurate, while the estimated variance σ has a certain deviation. Thus, for longer MSI detection, our algorithm mainly uses an independent z-test. When the sample size can be calculated during the iteration process, we can estimate the variance of the longer micro-satellite more accurately, and in that case we recommend using an independent t-test to infer the MSI state. The performance of ELMSI was compared with MSIsensor, and ELMSI is superior for the classification of hybrid shorter micro-satellites. For the mixed longer samples, ELMSI also obtains satisfactory results. The simulation results demonstrate that ELMSI is robust, with good performance in response to variations in coverage, read length, and the number of micro-satellites. It will be useful for micro-satellite screening, and we anticipate wider use in cancer clinical sequencing.
Field D, Wills C. Long, polymorphic microsatellites in simple organisms. Proc Biol Sci. 1996; 263(1367):209.
Tóth G, Gáspári Z, Jurka J. Microsatellites in different eukaryotic genomes: survey and analysis. Genome Res. 2000; 10(7):967.
Ellegren H. Microsatellites: simple sequences with complex evolution. Nat Rev Genet. 2004; 5(6):435–45.
Hummerich H, Lehrach H. Trinucleotide repeat expansion and human disease. Electrophoresis. 1995; 16(9):1698–704.
Shia J. Evolving approach and clinical significance of detecting dna mismatch repair deficiency in colorectal carcinoma. Semin Diagn Pathol. 2015; 32(5):352–61.
Kim TM, Laird PW, Park PJ. The landscape of microsatellite instability in colorectal and endometrial cancer genomes. Cell. 2013; 155(4):858–68.
Woerner SM, Kloor M, Mueller A, Rueschoff J, Friedrichs N, Buettner R, Buzello M, Kienle P, Knaebel HP, Kunstmann E. Microsatellite instability of selective target genes in hnpcc-associated colon adenomas. Oncogene. 2005; 24(15):2525–35.
Pritchard CC, Morrissey C, Kumar A, Zhang X, Smith C, Coleman I, Salipante SJ, Milbank J, Yu M, Grady WM. Complex MSH2 and MSH6 mutations in hypermutated microsatellite unstable advanced prostate cancer. Nat Commun. 2014; 5:4988.
Vilar E, Tabernero J. Molecular dissection of microsatellite instable colorectal cancer. Cancer Discov. 2013; 3(5):502–11.
Li B, Liu HY, Guo SH, Sun P, Gong FM, Jia BQ. Microsatellite instability of gastric cancer and precancerous lesions. Int J Clin Exp Med. 2015; 8(11):21138–44.
Shannon C, Kirk J, Barnetson R, Evans J, Schnitzler M, Quinn M, Hacker N, Crandon A, Harnett P. Incidence of microsatellite instability in synchronous tumors of the ovary and endometrium. Clin Cancer Res. 2003; 9(4):1387–92.
Moertel CG. Tumor microsatellite-instability status as a predictor of benefit from fluorouracil-based adjuvant chemotherapy for colon cancer. N Engl J Med. 2003; 349(3):247–57.
Pawlik TM, Raut CP, Rodriguezbigas MA. Colorectal carcinogenesis: Msi-h versus msi-l. Dis Markers. 2013; 20(4-5):199–206.
Gong J, Wang C, Lee PP, Chu P, Fakih M. Response to pd-1 blockade in microsatellite stable metastatic colorectal cancer harboring a pole mutation. J Natl Compr Cancer Netw Jnccn. 2017; 15(2):142.
Niu B, Ye K, Zhang Q, Lu C, Xie M, Mclellan MD, Wendl MC, Ding L. Msisensor: microsatellite instability detection using paired tumor-normal sequence data. Bioinformatics. 2014; 30(7):1015–6.
Salipante SJ, Scroggins SM, Hampel HL, Turner EH, Pritchard CC. Microsatellite instability detection by next generation sequencing. Clin Chem. 2014; 60(9):1192–9.
Kautto EA, Bonneville R, Miya J, Yu L, Krook MA, Reeser JW, Roychowdhury S. Performance evaluation for rapid detection of pan-cancer microsatellite instability with mantis. Oncotarget. 2017; 8(5):7452.
Huang MN, Mcpherson JR, Cutcutache I, Teh BT, Tan P, Rozen SG. Msiseq: Software for assessing microsatellite instability from catalogs of somatic mutations. Sci Rep. 2015; 5(1):13321.
Wang C, Liang C. Msipred: a python package for tumor microsatellite instability classification from tumor mutation annotation data using a support vector machine. Sci Rep. 2018; 8(1). https://doi.org/10.1038/s41598-018-35682-z.
Foltz S, Liang WW, Xie M, Ding L. Mirmmr: binary classification of microsatellite instability using methylation and mutations. Bioinformatics. 2017; 33(23):3799–801.
Carter SL, Kristian C, Elena H, Aaron MK, Hui S, Travis Z, Laird PW, Onofrio RC, Wendy W, Weir BA. Absolute quantification of somatic dna alterations in human cancer. Nat Biotechnol. 2012; 30(5):413–21.
Yu G, Zhao Z, Liu R, Tian Z, Jing X, Yi H, Zhang X, Xiao X, Wang J. Accurately estimating tumor purity of samples with high degree of heterogeneity from cancer sequencing data. In: Intelligent Computing Theories and Application: 2017. p. 273–285. https://doi.org/10.1007/978-3-319-63312-1_25.
Kruglyak S, Durrett RT, Schug MD, Aquadro CF. Equilibrium distributions of microsatellite repeat length resulting from a balance between slippage events and point mutations. Proc Natl Acad Sci U S A. 1998; 95(18):10774–8.
Bell GI, Jurka J. The length distribution of perfect dimer repetitive dna is consistent with its evolution by an unbiased single-step mutation process. J Mol Evol. 1997; 44(4):414–21.
Wu CW, Chen GD, Jiang KC, Li AF, Chi CW, Lo SS, Chen JY. A genome-wide study of microsatellite instability in advanced gastric carcinoma. Cancer. 2015; 92(1):92–101.
Li H. Toward better understanding of artifacts in variant calling from high-coverage samples. Bioinformatics. 2014; 30(20):2843–51.
Srivastava S, Avvaru A, Sowpati DT, Mishra RK. Patterns of microsatellite distribution across eukaryotic genomes. BMC Genomics. 2019; 20(1):153.
The authors would like to thank the conference organizers of the International Work-Conference on Bioinformatics and Biomedical Engineering (IWBBIO 2018). We would also like to thank the reviewers for their valuable comments and suggestions, which guided us in improving the work and the manuscript.
About this supplement
This article has been published as part of BMC Bioinformatics Volume 21 Supplement 2, 2020: Selected articles from the 6th International Work-Conference on Bioinformatics and Biomedical Engineering. The full contents of the supplement are available online at URL.
This work is supported by the National Science Foundation of China (Grant No: 31701150) and the Fundamental Research Funds for the Central Universities (CXTD2017003), China Postdoctoral Science Foundation funded project 2018M643684.
Yixuan Wang, Xuanping Zhang, and Xiao Xiao contributed equally to this work.
School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, 710048, People's Republic of China
Yixuan Wang, Xuanping Zhang, Xinxing Yan, Xuan Feng, Zhongmeng Zhao, Yanfang Guan & Jiayin Wang
Shaanxi Engineering Research Center of Medical and Health Big Data, School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an, 710048, People's Republic of China
Institute of Health Administration and Policy, School of Public Policy and Administration, Xi'an Jiaotong University, Xi'an, 710048, People's Republic of China
Xiao Xiao
Department of General Surgery, The First Affiliated Hospital of Shantou University Medical College, Shantou, 515041, Guangdong, People's Republic of China
Fei-Ran Zhang
Geneplus Beijing Institute, Beijing, 100061, People's Republic of China
Yanfang Guan
Yixuan Wang
Xuanping Zhang
Xinxing Yan
Xuan Feng
Zhongmeng Zhao
Jiayin Wang
JYW and XPZ conducted this research. YXW, XX, XXY, FRZ, XF and ZMZ designed the algorithms and the pipeline. XXY and YFG applied the simulation experiments. YXW, XX and JYW wrote this manuscript. All authors read and approved the final version of this manuscript.
Correspondence to Jiayin Wang.
Wang, Y., Zhang, X., Xiao, X. et al. Accurately estimating the length distributions of genomic micro-satellites by tumor purity deconvolution. BMC Bioinformatics 21, 82 (2020). https://doi.org/10.1186/s12859-020-3349-5
Genomic micro-satellite
Length distribution estimation
Tumor purity
Sequencing data analysis
Impact of cancer mutational signatures on transcription factor motifs in the human genome
Calvin Wing Yiu Chan1,2,
Zuguang Gu1,
Matthias Bieg1,3,
Roland Eils1,3,4 &
Carl Herrmann ORCID: orcid.org/0000-0003-4989-4722 1,4
BMC Medical Genomics volume 12, Article number: 64 (2019) Cite this article
Somatic mutations in cancer genomes occur through a variety of molecular mechanisms, which contribute to different mutational patterns. To summarize these, mutational signatures have been defined using a large number of cancer genomes, and related to distinct mutagenic processes. Each cancer genome can be compared to this reference dataset and its exposure to one or the other signature be determined. Given the very different mutational patterns of these signatures, we anticipate that they will have distinct impact on genomic elements, in particular motifs for transcription factor binding sites (TFBS).
We used the 30 mutational signatures from the COSMIC database, and derived a theoretical framework to infer the impact of these signatures on the alteration of transcription factor (TF) binding motifs from the JASPAR database. Hence, we translated the trinucleotide mutation frequencies of the signatures into alteration frequencies of specific TF binding motifs, leading either to creation or disruption of these motifs.
Motif families show different susceptibilities to alterations induced by the mutational signatures. For certain motifs, a high correlation is observed between TFBS motif creation and disruption events, related to the information content of the motif. Moreover, we observe striking patterns, for example for the Ets-motif family, on which UV-induced signatures have a high impact. Our model also confirms the susceptibility of specific transcription factor motifs to deamination processes.
Our results show that the mutational signatures have different impact on the binding motifs of transcription factors and that for certain high complexity motifs there is a strong correlation between creation and disruption, related to the information content of the motif. This study represents a background estimation of the alterations due purely to mutational signatures in the absence of additional contributions, e.g. from evolutionary processes.
With the availability of thousands of fully sequenced cancer genomes, genome-wide patterns of somatic mutations can be analyzed to search for potential driver mutations. Such an effort has been exemplified by the recent Pan-Cancer Analysis of Whole Genomes (PCAWG) initiative by the International Cancer Genome Consortium (ICGC) consortium. However, besides coding driver mutations that have been described earlier, non-coding mutations have been under increased scrutiny, in search for additional non-coding drivers, given the extensive number of these mutations in non-coding genomic regions. Several modes of actions can be identified for these mutations, the most likely ones being mutations affecting regulatory elements such as transcription factor binding sites (TFBS), altering in one way or the other (creation or disruption) the binding motif. A spectacular example was identified in several cancer entities involving the promoter of the TERT gene, in which two recurrent mutations have been shown to create new binding sites for Ets-family transcription factors, leading to a strong over-expression of the TERT oncogene [1, 2]. Another example was found in T-ALL, in which a mutation creating a new binding site for MYB leads to the appearance of a super-enhancer driving over-expression of the TAL1 oncogene [3]. Besides these few non-coding drivers, cancer genomes are loaded with thousands of mutations which are termed passenger, as they cannot be individually related to molecular phenotypes, as in the previous cases. However, several studies have shown that these putative passengers contribute to an overall mutational load in the cancer genome, and can, collectively, have an impact [4, 5]. Hence, it is of importance to understand the overall patterns of non-coding mutations, besides the few driving examples.
Patterns of somatic mutations have been analyzed by defining so-called mutational signatures, based on a dimensional reduction approach focusing on the patterns of trinucleotide alterations. The 96 possible types of trinucleotide mutations were summarized into a reduced number of signatures, which each describe a different mutational bias [6]. Some of these mutational signatures can be related to specific mutational processes such as APOBEC mutations, nucleotide mismatch repair or various carcinogens. Recently, nucleotide excision repair (NER), which is related to one mutational signature, has been related to specific patterns of mutations within TFBS in cancer genomes [7]. Once these overall signatures are available, the exposure of cancer types or of individual cancer genomes can be determined. Hence, for example, there is a clear association between the signature related to ultra-violet light and melanoma [8].
In this study, we assess the impact of mutational signatures on motifs of transcription factor binding sites. In particular, we search to understand how a particular mutational signature impacts the large collection of transcription factor binding site motifs. Our objective is to establish, for each signature, a catalogue of binding motif creation and disruption frequencies, which would correspond to an expected background effect of the mutational patterns, in the absence of any additional effect such as selective processes. Our goal is thus to translate the mutational signatures based on trinucleotide alterations into signatures of motif alteration. The result provides a theoretical framework as a baseline model for transcription binding site alteration analysis in cancer genomes.
The following regulatory impact analysis is based on the 30 trinucleotide mutational signatures described by Alexandrov et al. [6]. These mutational signatures were downloaded from the COSMIC database (https://cancer.sanger.ac.uk/cosmic/signatures). Each of the 30 mutational signatures contains the normalized mutational probability across the 96 types of point mutation in a trinucleotide context. In the remainder of this paper the mutational signatures are represented by the 30 x 96 mutational signature matrix SM and the mutational signature vectors \(\vec {s}_{v}(s_{i})\) as follows:
$$ {}S_{M} \,=\, \left[\!\!\begin{array}{c} \leftarrow Trinucleotide\;signature\;1 \rightarrow\\ \leftarrow Trinucleotide\;signature\;2 \rightarrow\\ \vdots\\ \leftarrow Trinucleotide\;signature\;30 \rightarrow\\ \end{array}\!\!\right]\!\! =\!\! \left[\begin{array}{c} \vec{s}_{v}(s_{1})\\ \vec{s}_{v}(s_{2})\\ \vdots\\ \vec{s}_{v}(s_{30})\\ \end{array}\right] $$
$$ {}S_{M} = \left[\begin{array}{cccc} Pr(m_{1}|s_{1}) & Pr(m_{2}|s_{1}) & \dots & Pr(m_{96}|s_{1})\\ Pr(m_{1}|s_{2}) & Pr(m_{2}|s_{2}) & \dots & Pr(m_{96}|s_{2})\\ \vdots & \vdots & \ddots & \vdots \\ Pr(m_{1}|s_{30}) & Pr(m_{2}|s_{30}) & \dots & Pr(m_{96}|s_{30})\\ \end{array}\right] $$
where mi corresponds to the 96 possible trinucleotide mutations (eg. A[C>A]A, A[C>G]A,... etc.). This mutational signature matrix is denoted as the trinucleotide mutational signature in the remaining part of this article.
Transcription Factor Motif Alteration Signature
The procedure of computing the motif binding alteration probability is illustrated in Fig. 1.
Workflow of the method: a Frequency counting procedure; b Transcription factor binding motif alteration signature analysis workflow
We define Pr(a|tfi,mj) as the probability for a motif tfi to undergo an alteration a (i.e. creation or disruption) given a trinucleotide mutation mj. A disruption event is defined as a mutation that turns a binding site into a non-binding site, and a creation event corresponds to the reverse effect. The probability of a transcription factor motif alteration event is computed by assessing the p-value of a given sequence being a binding site before and after the mutation. The 512 JASPAR 2016 vertebrate position weight matrices (PWM) of length 6 to 19 are used to compare the impact on binding affinity of a point mutation [9]. The p-value of a sequence is evaluated using matrix-scan from the Regulatory Sequence Analysis Toolbox (RSAT) [10]. In this study, a p-value of p=0.001 is set as the binding threshold. For a given transcription factor binding PWM of width k, all possible k-mer sequences are scanned to compute the mutational statistics. For a PWM of k=19, this requires scanning a total of 4^19 = 274,877,906,944 sequences to cover all 19-mers. The search space of binding sequences can be reduced by half by combining all reverse complementary sequences.
For each PWM tfi of width k, we enumerate all possible k-mers (considering a k-mer and its reverse complement as the same motif) and, using matrix-scan with the parameters described above, we separate the set of k-mers into binding and non-binding k-mers. The alteration probability is then computed by mutating each trinucleotide in the matching k-mer according to the mutation type mj and searching for the corresponding mutated sequence. A disruption event is identified if the mutated sequence is not found in the list of binding k-mers. Conversely, a creation event is identified if a non-binding k-mer is turned into a binding k-mer. The matched binding sequence is considered the reference sequence in a motif disruption event and the alternative sequence in a motif creation event, and vice versa.
In order to count these events in the human genome, all possible k-mers are extracted from the human genome (version hg19) and their occurrences are counted for k=6 to k=19. The count of each reference sequence in the human genome is recorded according to the type of trinucleotide mutation and the type of alteration event (see pseudo-code in Additional File 4: Figure S4). The probability Pr(a|tfi,mj) is obtained by normalizing the counts by the total number chg19(mj,ref) of occurrences of the reference trinucleotide mj,ref of mutation type mj, scaled by the width w(tfi) of the PWM tfi. The count normalization factor for the alteration probability computation is illustrated in Fig. 1b. Each trinucleotide in the genome is compared w(tfi)−2 times; therefore, the total count detected is divided by the number of reference trinucleotides in the genome multiplied by w(tfi)−2 to obtain the alteration probability.
$$ Pr(a|tf_{i},m_{j}) = \frac{c(a|tf_{i},m_{j})}{c_{hg19}(m_{j,ref}) \cdot (w(tf_{i}) - 2)} $$
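A one-line helper illustrating this normalization; the counts in the example are illustrative placeholders, not measured hg19 statistics:

def alteration_probability(count_event, count_ref_trinuc, motif_width):
    """Normalise an alteration count c(a|tf, m) by the number of times the
    reference trinucleotide is scanned genome-wide: each genomic trinucleotide
    is compared (w - 2) times for a PWM of width w."""
    return count_event / (count_ref_trinuc * (motif_width - 2))

# e.g. 1.2e5 disruption events for one mutation type under a width-10 PWM,
# with ~1.1e8 occurrences of the reference trinucleotide (illustrative only)
print(alteration_probability(1.2e5, 1.1e8, 10))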
Importantly, we also need to determine the binding affinity of k-mers that do not occur in the human genome for the motif creation event, as a mutation could turn a genomic k-mer into one that does not occur in the reference genome. For the disruption probability computation, however, only k-mers occurring in the human genome need to be considered, which dramatically reduces the search space for k>13. The motif alteration probability of a given transcription factor PWM and trinucleotide is stored in the motif alteration probability matrix,
$$ {}\Phi_{A|TF,M} = \left[\begin{array}{cccc} Pr(a|tf_{1},m_{1}) & Pr(a|tf_{1},m_{2}) & \dots & Pr(a|tf_{1},m_{96})\\ Pr(a|tf_{2},m_{1}) & Pr(a|tf_{2},m_{2}) & \dots & Pr(a|tf_{2},m_{96})\\ \vdots & \vdots & \ddots & \vdots \\ Pr(a|tf_{I},m_{1}) & Pr(a|tf_{I},m_{2}) & \dots & Pr(a|tf_{I},m_{96})\\ \end{array}\right] $$
From this, we compute the alteration probability for a mutational signature si using Bayesian inference:
$$\begin{array}{@{}rcl@{}} Pr(a|tf_{j},s_{i})=\sum_{k=1}^{96} Pr(a|tf_{j},m_{k}) \cdot Pr(m_{k}|s_{i}) \end{array} $$
or, in matrix notation:
$$\begin{array}{@{}rcl@{}} S_{TF}=S_{M}\;\Phi_{A|TF,M}^{T} \end{array} $$
$$\begin{array}{@{}rcl@{}} {}S_{TF}\! =\! \left[\begin{array}{cccc} Pr(a|s_{1},tf_{1}) & Pr(a|s_{1},tf_{2}) & \dots & Pr(a|s_{1},tf_{I})\\ Pr(a|s_{2},tf_{1}) & Pr(a|s_{2},tf_{2}) & \dots & Pr(a|s_{2},tf_{I})\\ \vdots & \vdots & \ddots & \vdots \\ Pr(a|s_{30},tf_{1}) & Pr(a|s_{30},tf_{2}) & \dots & Pr(a|s_{30},tf_{I})\\ \end{array}\right] \end{array} $$
The full algorithm for computing the alteration probability is given in the Additional file 4: Figure S4.
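In matrix form the combination is a single product; a toy numpy sketch with random placeholder matrices (not real signature or motif data) illustrating the shapes:

import numpy as np

# Shapes follow the text: S_M is 30 x 96 (signature x trinucleotide mutation),
# Phi is I x 96 (motif x trinucleotide mutation), one such matrix per alteration type.
rng = np.random.default_rng(0)
S_M = rng.random((30, 96)); S_M /= S_M.sum(axis=1, keepdims=True)   # toy signatures
Phi = rng.random((512, 96)) * 1e-6                                  # toy Pr(a|tf, m)

# Bayesian combination: Pr(a|tf_j, s_i) = sum_k Pr(a|tf_j, m_k) * Pr(m_k|s_i)
S_TF = S_M @ Phi.T          # 30 x 512 motif-alteration signature matrix
print(S_TF.shape)           # (30, 512)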
Analysis on Transcription Factor Motif Alteration Probability
The motif alteration probability matrix ΦA|TF,M encapsulates the changes in binding affinity of a given PWM under the perturbation of a single nucleotide point mutation. In order to investigate similarities in the alteration probabilities of different motifs, a hierarchical clustering of motif creation and motif disruption is performed. A Partitioning Around Medoids (PAM) clustering approach is applied to partition the 512 transcription factors, using the silhouette coefficient to determine the optimal number of groups. The clustering is then compared to the TF family annotation downloaded from the JASPAR database. To further investigate the alteration probability of relevant TFs in cancer, all transcription factors present in the COSMIC cancer gene census are extracted and evaluated. The global alteration offset of these COSMIC cancer TFs is computed by subtracting the disruption probability from the creation probability.
To evaluate the similarity of motif alteration probability of multiple transcription factors, a self-organizing map (SOM) analysis is performed using the alteration probability matrix ΦTF|M to validate the result and to gain insight into the alteration similarity among the TF motifs. For this, a 22 x 22 grid was used and resulted in a stack of 96 variable maps corresponding to the trinucleotide mutation type mk. The map dimension of the SOM is selected to maximally retain the resolution of the transcription factor space based on the following criterion:
$$ w_{som} = \max \left\{\, w \in \mathbb{N} : w^{2} < N_{TF} \,\right\} $$
where w_som is the width of the SOM and N_TF is the total number of TFs. There are 512 TFs in the JASPAR vertebrate 2016 database, so the optimal SOM dimension is w_som = 22. This allows the TFs to be distributed evenly on the map in the worst-case scenario, when they are all equally dissimilar to each other.
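The grid-size rule amounts to taking the largest integer whose square stays below the number of motifs; a small, purely illustrative sketch of that calculation is shown below.

```python
import math

def som_width(n_tf: int) -> int:
    """Largest integer w with w**2 < n_tf, as in the criterion above."""
    w = math.isqrt(n_tf)
    return w - 1 if w * w >= n_tf else w

print(som_width(512))  # -> 22, since 22**2 = 484 < 512 <= 23**2
```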
Comparison with the PCAWG Dataset
We compared our motif alteration predictions based on the mutational signatures with real datasets of SNVs in cancer. For this, we used a previously published dataset from the PCAWG study, containing 2708 whole-genome sequencing (WGS) samples. We restricted ourselves to the WGS samples and discarded whole-exome sequencing (WES) samples, since we wanted to make genome-wide predictions of TFBS motif alterations, which lie mostly outside of coding regions [11]. The abbreviations of the tumor subtypes are listed in Additional file 5: Figure S5.
In order to predict motif alterations in the PCAWG dataset, we developed a motif alteration pipeline based on the matrix-scan-quick tool from the RSAT toolbox [10]. The reference sequence surrounding each SNV is extracted to build a second-order background Markov model for PWM matching. Both the reference and the alternative sequence are then matched against the given PWM. An alteration is called only when both of the following conditions are fulfilled: (1) the mutation changes a binding site into a non-binding site (or vice versa for creation), where a threshold of p ≤ 1e−4 is required to consider a sequence a TF binding site; and (2) the match p-value changes by at least 10-fold between the reference and the mutated sequence.
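The two decision rules can be summarized in a few lines. The sketch below is not the pipeline itself (which relies on the RSAT matrix-scan-quick output); it simply encodes the thresholds stated above, with p_ref and p_alt standing for the PWM-match p-values of the reference and mutated sequence.

```python
P_BINDING = 1e-4      # p-value threshold to call a sequence a TF binding site
FOLD_CHANGE = 10.0    # required p-value fold change across the mutation

def classify_alteration(p_ref: float, p_alt: float) -> str:
    """Illustrative re-statement of the two detection rules described in the text."""
    fold = max(p_ref, p_alt) / min(p_ref, p_alt)
    if fold < FOLD_CHANGE:
        return "none"
    if p_ref <= P_BINDING < p_alt:
        return "disruption"   # binding site lost by the mutation
    if p_alt <= P_BINDING < p_ref:
        return "creation"     # binding site gained by the mutation
    return "none"
```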
The PCAWG SNV calls are first matched against the same set of JASPAR PWMs to identify possible transcription factor binding alteration sites. To produce comparable results across all cancer entities, the detection probability of each transcription factor alteration is computed by normalizing the detection counts by the total number of SNVs detected in the corresponding cancer entity.
To match our predicted frequencies of alteration based on the mutational signatures to the cancer dataset, we need, for each cancer entity, to combine the influence of those mutational signatures that contribute to that entity. Here, we used, for each cancer sample, the exposure matrices described in [11], based on the 2708 PCAWG samples. Since the PCAWG dataset is based on a new set of 48 PCAWG mutational signatures, we mapped each of these 48 signatures to the most similar of the 30 COSMIC signatures, with similarity assessed using Pearson correlation across the 96 trinucleotide mutations. When several PCAWG signatures mapped to the same COSMIC signature, we summed up the exposure values corresponding to these signatures. To infer the transcription factor binding site alteration probability for one cancer entity, the transcription factor motif alteration matrix S_TF is multiplied by the normalized exposure matrix E_PID to produce the per-patient exposure prediction matrix Ψ_PID,
$$ \Psi_{PID} = S_{TF}^{T} \; E_{PID} $$
$$\begin{array}{@{}rcl@{}} {}E_{PID} = \left[\begin{array}{cccc} e(s_{1}|pid_{1}) & e(s_{1}|pid_{2}) & \dots & e(s_{1}|pid_{482})\\ e(s_{2}|pid_{1}) & e(s_{2}|pid_{2}) & \dots & e(s_{2}|pid_{482})\\ \vdots & \vdots & \ddots & \vdots \\ e(s_{30}|pid_{1}) & e(s_{30}|pid_{2}) & \dots & e(s_{30}|pid_{482})\\ \end{array}\right] \end{array} $$
where e(s_l|pid_m) is the normalized exposure (or exposure probability) of signature l in patient m. Combining the above, we have
$$ {}\Psi_{PID} \,=\,\! \left[\!\begin{array}{cccc} Pr(a|pid_{1},tf_{1}) & Pr(a|pid_{2},tf_{1}) & \dots & Pr(a|pid_{482},tf_{1})\\ Pr(a|pid_{1},tf_{2}) & Pr(a|pid_{2},tf_{2}) & \dots & Pr(a|pid_{482},tf_{2})\\ \vdots & \vdots & \ddots & \vdots \\ Pr(a|pid_{1},tf_{I}) & Pr(a|pid_{2},tf_{I}) & \dots & Pr(a|pid_{482},tf_{I})\\ \end{array}\!\right] $$
After obtaining the per-patient exposure prediction matrix Ψ_PID, the median TF alteration probability within each entity is compared against the alteration probability from the alteration detection pipeline.
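The signature mapping and the prediction step reduce to a correlation-based assignment followed by a matrix product. The following NumPy sketch assumes the published signature profiles and exposure values are available as arrays; all array names and shapes are placeholders for illustration.

```python
import numpy as np

def map_to_cosmic(pcawg_sigs, cosmic_sigs, exposures):
    """Assign each PCAWG signature (rows of pcawg_sigs, 48 x 96) to its best-correlated
    COSMIC signature (rows of cosmic_sigs, 30 x 96) and sum the exposures accordingly."""
    best = [int(np.argmax([np.corrcoef(p, c)[0, 1] for c in cosmic_sigs])) for p in pcawg_sigs]
    E = np.zeros((cosmic_sigs.shape[0], exposures.shape[1]))
    for i, j in enumerate(best):
        E[j] += exposures[i]
    return E / E.sum(axis=0, keepdims=True)          # normalized exposure matrix E_PID

def predict_alterations(S_TF, E_PID):
    """Per-patient prediction: Psi_PID = S_TF^T E_PID, shape (n_tf x n_patients)."""
    return S_TF.T @ E_PID
```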
Clustering of tumor samples
Given the heterogeneity of the samples within a tumor cohort, we clustered the PCAWG samples based on their exposure values. We first used the UMAP dimensionality reduction method [12] on the table of exposure values of the 2708 samples, and then defined clusters using the hdbscan method [13], as implemented in the largeVis R-package. This defined 21 clusters containing 2360 samples, while 348 samples could not be assigned to one of these clusters. Results are shown in Additional file 3: Figure S3.
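The study used the largeVis R implementation for this step; purely as an illustration, an equivalent workflow in Python could look like the sketch below, assuming the umap-learn and hdbscan packages are installed and that the exposure table is available as a NumPy array (the file name and the min_cluster_size value are hypothetical).

```python
import numpy as np
import umap      # umap-learn
import hdbscan

exposures = np.load("exposures_2708_samples.npy")        # hypothetical (2708 x n_signatures) array
embedding = umap.UMAP(random_state=42).fit_transform(exposures)
labels = hdbscan.HDBSCAN(min_cluster_size=25).fit_predict(embedding)
# hdbscan labels unassigned samples as -1
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(n_clusters, "clusters;", int(np.sum(labels == -1)), "unassigned samples")
```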
Impact of mutational signatures on transcription factor binding motifs
We computed the motif alteration signatures using the conditional probability between the transcription factor alteration probability and the trinucleotide mutational signatures, as described in the Bayesian formulation above. The predicted alteration probabilities for all TF motifs and all 30 COSMIC signatures are given in Additional file 6. The motif alteration results across all 512 JASPAR motifs are shown in Fig. 2a for motif creation (top) and motif disruption (bottom; see Additional file 1: Figure S1 for a more detailed representation including signature names and information on TF families). To capture similar patterns between transcription factor motifs, we applied a PAM clustering to the set of 512 motifs. The silhouette coefficient shows a local maximum at k_clust = 4, indicating that the optimal number of clusters is 4.
Fig. 2 Signature-driven motif alterations: (a) motif alteration signature heatmaps, where the vertical axis corresponds to the 30 mutational signatures and the horizontal axis to the 512 motifs in the JASPAR database, with a k_clust = 4 PAM clustering performed over the TF alteration probabilities; (b) cluster count statistics for each TF family with at least 5 members; (c) familial binding profiles of SOX-related motifs associated with the PAM clusters
The four clusters show markedly different behaviors under the mutational signatures: cluster 3 shows a low sensitivity to any of the mutational signatures (with some exceptions), whereas cluster 4 appears to be strongly impacted by both motif creation and disruption. Interestingly, comparing the disruption and creation heatmaps, we observe that the creation sensitivity of cluster 4 is high over all signatures, whereas for disruption we observe two groups of signatures with different impacts. We then studied the TF motif composition of each cluster. As expected, none of the clusters is dominated by a single TF family (Fig. 2b). However, Fig. 2b shows that some motif families are preferentially associated with one of the clusters. Forkhead motifs are in their vast majority associated with cluster 4 and show a high sensitivity to all mutational signatures. We also observe an obvious mutual exclusivity between clusters 1 and 2 versus clusters 3 and 4 for most TF families. SOX-related factors and the C/EBP-related family, on the other hand, are distributed across several clusters. For SOX-related motifs, these motifs form two major subgroups, as shown in Fig. 2c: SOX-related factors associated with PAM cluster 1 contain an extended binding sequence AACAATKGCAKCAKTGTT, whereas those associated with PAM clusters 2 and 4 contain a shorter AACAATG binding motif.
After this global analysis of the alteration patterns across all 512 motifs, we next focused on motifs of transcription factors that play a role in cancer. We extracted the list of transcription factors from the COSMIC cancer gene census to investigate how the binding motifs of transcription factors mutated in cancer genomes are affected by somatic mutations. We computed the differential motif alteration probabilities of the 30 signatures for the 40 TF motifs with the strongest differential impact (Additional file 2: Figure S2). This map displays two broad groups of transcription factors with opposite behavior, and some interesting patterns can be observed. For example, HNF1A, a liver-specific transcription factor, shows a strong excess of creation under signature 12, which is specifically found in liver cancer. Given the high expression of HNF1A in liver tissue, this excess of new binding sites could result in some novel, functional binding sites in liver cancer.
In order to capture the complexity of the relation between the mutational signatures and the binding motifs, we applied SOM clustering to group motifs showing similar behaviors. We performed SOM clustering over the 192 possible mutational transitions (taking both creation and disruption into account). We also combined these mutational probabilities into the 30 mutational signatures.
The SOM clustering provides a clear picture of the similarity in alteration behavior of transcription factors across all trinucleotide mutation probabilities and the 30 mutational signatures. Transcription factors of the same family often share similar binding sequences, so their motif alteration behavior should also be similar. We observe that, overall, transcription factors of the same family are well clustered together and globally share similar motif alteration probability patterns (Fig. 3a, b). For example, the SOX transcription factors SOX2, SOX4, SOX8, SOX9, SOX10, SOX11, SOX17, and SOX21 are clustered together. However, for specific signatures, differences between related transcription factors do exist. In Fig. 3a, SOX10 appears to have a creation probability very similar to its neighbor SOX2 for signature 10, but shows a much lower creation probability for signature 23 (Fig. 3b).
Fig. 3 Transcription factor motifs of width 6 to 19: (a) SOM of signature 10 creation probability; (b) SOM of signature 23 motif creation probability; (c) SOM using the 30 motif alteration signatures, colour-coded by TF family; (d) SOM using the 96 motif alteration probabilities, colour-coded by TF family
Overall, coloring the cells according to the family of the TF reveals a global clustering of motifs from the same structural class (Fig. 3c, d). However, this does not hold true for all families; some members of the GATA family, highlighted in Fig. 3c, appear dispersed across the SOM map.
Correlation Between Motif Creation and Disruption Signature
The previous results indicate that, for many transcription factors, there is a compensating effect of creation and disruption. Investigating this relationship further, we indeed found that the creation and disruption probabilities are globally strongly correlated across all signatures (Fig. 4a). We found this correlation to be strongly related to the trinucleotide diversity of the motif-matching sequences. To quantify this diversity, the trinucleotide content of each set of matching sequences is determined and its entropy is computed for each TF motif. The similarity between the creation and disruption probabilities of each motif is evaluated using the absolute relative difference:
$$ \left| RD(s,tf)\right| = \frac{2\left|Pr(s,tf,create) - Pr(s,tf,disrupt)\right|}{Pr(s,tf,create) + Pr(s,tf,disrupt)} $$
where s represents a signature and tf a binding motif.
Fig. 4 Demonstration of the correlation between creation and disruption probabilities: (a) scatter plot of the creation versus disruption probabilities for all 512 motifs across the 30 mutational signatures, where each dot corresponds to a TF motif in one of the 30 signatures; (b) difference of creation/disruption probabilities versus motif entropy, with Hoxd8 highlighted in red and PAX9 in blue; (c) boxplot of the absolute relative difference in alteration probability for Hoxd8 versus PAX9; (d) PWM logo of Hoxd8 (top) and alteration probability of Hoxd8 with respect to the 96 trinucleotide mutations (bottom); (e) PWM logo of PAX9 (top) and alteration probability of PAX9 with respect to the 96 trinucleotide mutations (bottom)
The scatter plot in Fig. 4b shows an inverse relation between the complexity of the binding sequences (as measured by the entropy of these sequences) and the difference between creation and disruption: the more complex the motif content, the more strongly the creation and disruption probabilities are correlated.
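Both quantities are straightforward to compute; a minimal Python sketch is given below, with the input format (a list of motif-matching sequences and a pair of probabilities) assumed purely for illustration.

```python
import math
from collections import Counter

def trinucleotide_entropy(sequences):
    """Shannon entropy (bits) of the trinucleotide content of a set of motif-matching sequences."""
    counts = Counter(seq[i:i + 3] for seq in sequences for i in range(len(seq) - 2))
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def abs_relative_difference(p_create: float, p_disrupt: float) -> float:
    """|RD(s, tf)| as defined in the equation above."""
    return 2.0 * abs(p_create - p_disrupt) / (p_create + p_disrupt)
```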
This effect is illustrated using Hoxd8 and PAX9 as examples. To ensure that motif width does not bias the results, both selected motifs have a width of 17 nucleotides. As illustrated in Figs. 4d and 4e, Hoxd8 and PAX9 have very different trinucleotide content, which is obvious from the motif logos; the motif alteration probability with respect to the 96 trinucleotide mutations is shown below each logo. For Hoxd8, the motif alteration probability concentrates on trinucleotide mutations related to TAA, AAT, TTA, and ATT, and the creation and disruption events overlap only at A[T>A]A, A[T>A]T, T[T>A]A, and T[T>A]T.
The alteration probability of PAX9, on the other hand, is distributed across all 96 trinucleotide mutations, so that the creation and disruption probabilities are strongly correlated along the 96 trinucleotide mutations. This translates into a high similarity across the 30 mutational signatures, as shown in Fig. 4c for Hoxd8 and PAX9.
Association of Deamination Signature and TFBS Creation
We next sought to validate our predictions using independent data. In [14], Zemojtel et al. described a set of transcription factors whose binding sites have frequently been created by the CpG deamination process during evolution. These transcription factors include c-Myc (Myc), Nfya, Nfyb, Oct4 (POU5F1B), PAX5, Rxra, Usf1, and YY1. Given that some of the mutational signatures in cancer are related to CpG deamination, we sought to verify whether the same transcription factors are impacted by this process through somatic mutations. The creation probabilities of these transcription factor motifs are plotted in Fig. 5a over all 30 mutational signatures.
Fig. 5 CpG-deamination-associated TF motifs: (a) creation probability for all 30 mutational signatures; (b) creation probability of all TFs of the corresponding TF families
Signature 1 has been described as related to the deamination of 5-methylcytosine [15] and ranks second among all signatures in terms of the creation frequency of these motifs in Fig. 5a. Signature 14 shows a very similar mutational profile, with a strong C→T bias. In addition, it is interesting to note that the POU5F1B motif has a high creation probability compared to other TFs across all signatures. This is due to the high genomic frequency of k-mers that differ from the POU5F1B binding motif by a single mutation and therefore serve as a substrate for motif creation events.
If we extend from these single TFs to the families they belong to, we also observe a much higher creation probability for the other members of the POU family compared to the families of the other impacted TFs (Fig. 5b).
Mutation-associated Mechanisms
There are three main molecular mechanisms leading to single nucleotide mutations in cancer: i) defective DNA mismatch repair (MMR); ii) APOBEC activity; and iii) transcription-coupled nucleotide excision repair (NER). In the catalogue of mutational signatures, several signatures can be related to each of these processes. We wanted to investigate which transcription factor motifs are most impacted by these three different molecular mechanisms.
For each of these mechanisms, we summarized the alteration results over all signatures annotated to that mechanism: APOBEC is related to signatures 2 and 13, MMR to signatures 6, 15, 20, and 26, and NER to signatures 4, 7, 11, and 22. The differential alteration probabilities (creation minus disruption) are displayed in Fig. 6a. Interestingly, a number of transcription factor motifs show opposite behaviors with respect to these three mechanisms. For example, Foxd3 shows a much higher creation probability for APOBEC- and NER-related signatures, whereas the opposite holds for MMR signatures.
Fig. 6 Multiple-signature-driven molecular mechanisms in cancer with associated TFs and the corresponding global alteration offset comparison: (a) top-ranked TF and signature pair for each of the 3 associated mechanisms; (b) APOBEC-associated signatures and Myb
As an example, it has been shown that a mutation associated with the APOBEC signature leads to the creation of a MYB binding site [16]. To understand whether this single driver event might reflect a more general impact of the APOBEC signature on MYB binding sites, we compared the creation bias across signatures and indeed observed that the two APOBEC signatures (signatures 2 and 13) show the highest bias towards MYB motif creation compared to all other signatures (P-value = 0.02, Fig. 6b). Hence, this single driver event might result from an elevated creation rate of potential MYB binding sites under these APOBEC signatures.
We then sought to validate our predictions using a dataset of observed SNVs in cancer genomes, namely 2708 whole-genome sequenced samples covering 40 cancer subtypes. The idea of the validation is to compare the predicted frequencies of motif alteration that are explained purely by the mutational signatures and their biases towards certain trinucleotides with the frequencies of motif alteration observed in the cancer SNV dataset. If the predicted and observed frequencies are comparable, then most of the signal of motif creation or disruption can be explained by the impact of mutational signatures. If, on the other hand, we observe a difference, this discrepancy could be attributed to additional effects in the real dataset, such as positive or negative evolutionary pressure leading to an increased frequency of motif creation or disruption. Hence, our goal is to highlight such potential effects and to use our signature-based prediction as a baseline.
In order to compute a predicted alteration probability per patient, we used the signature exposures of each patient and computed a linear combination of the signature-specific predictions weighted by these exposure values. To account for heterogeneity between tumors within a subtype, we used the clustering of samples based on their exposure to mutational signatures to split each subtype. While some subtypes are rather homogeneous, i.e. lie mostly within one cluster, others are spread across many clusters (see Additional file 3: Figure S3). The differential alteration probability is computed for each motif and each cluster within a subtype by first subtracting the disruption probability from the creation probability for each TF and then taking the median of this difference per cancer entity and TF family. In Fig. 7a, we display the results for all TF families containing at least 10 motifs and all tumor subtypes containing at least 50 samples.
Fig. 7 Inferred differential probability using exposures from the PCAWG dataset: (a) differential probability (creation minus disruption) for all TF families with at least 10 members, across tumor subtypes containing at least 50 samples; the colored dots represent the samples of the different UMAP clusters, with size proportional to the number of these samples; (b) mutational profiles of the 3 signatures most represented within the melanoma (MELA) subtype; (c) differential alteration probability (creation minus disruption) for all 30 signatures for the Ets-related family of motifs, with the UV-related signature 7 highlighted in red
For the 10 TF families, we observed that some show a global excess of predicted disruption events over all cancer subtypes (bHLH, Ets-related), while others show the opposite effect (NK-related, SOX-related, ...). Beyond these general trends, some cancer subtypes deviate from the overall pattern. For example, esophageal adenocarcinoma (ESAD) shows an excess of bHLH-type motif creations, while melanoma (MELA) is predicted to contain an excess of Tal-related disruptions compared with all other cancer types. Looking in more detail, differences within the tumor subtypes are apparent between samples belonging to the different clusters defined previously through the UMAP analysis. A prominent example is the alteration probabilities of Ets-related motifs in melanoma. Melanoma shows by far the highest bias toward disruption of Ets motifs. However, this only holds true for the melanoma samples belonging to cluster 3 (brown in the figure); the other melanoma samples (clusters 1 and 21) show a balance between creation and disruption. These three melanoma clusters differ in their exposure profiles (Fig. 7b). Cluster 3, which displays the high Ets disruption bias, is highly exposed to signature 7, which is related to UV-induced mutations and indeed shows the second highest disruption probability for Ets motifs (Fig. 7c). The relation between UV-induced signatures and Ets binding sites has been described previously [17], where an increase of mutations in TF binding sites was reported, in particular for Ets binding sites and UV-induced mutations. Hence, our model appears to capture this effect, despite the fact that we focus on motif occurrences and not actual binding sites.
Finally, we compared the predictions with the observed alterations across the PCAWG dataset. To perform this comparison, we first determined the observed motif creation and disruption probabilities of all TFs across all samples within a given cohort using the motif alteration calling pipeline described in the methods section. This pipeline predicts, for each SNV, whether it disrupts or creates a potential binding motif, and yields for each patient a creation and a disruption count for each TF motif. We computed the log-ratio of the numbers of observed creation and disruption events determined by the pipeline, summarizing all samples belonging to a tumor subtype and UMAP cluster (see the sketch below). The comparison of this observed alteration bias with the bias predicted by our model is displayed in Fig. 8. Overall, we found a good correlation between the model prediction and the observation. The correlation was significant for all TF families, with some differences. For example, the correlation is very high for Ets-related factors, for which in most cases the direction of the bias was concordant between the model prediction and the observed alteration counts. In other cases, despite a good correlation, the direction of the alteration did not coincide well: for RXR-related factors, we observe a global excess of motif disruptions, even for samples for which our model predicts an excess of creations. The prediction of a strong excess of disruptions over creations for the melanoma samples in cluster 3 is confirmed in the observed alterations.
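The observed bias can be summarized per sample group as a log-ratio of creation to disruption counts and then compared with the predicted differential. The sketch below is illustrative only; in particular, the pseudocount used to stabilize empty groups is an assumption, not part of the published pipeline.

```python
import numpy as np

def observed_log_ratio(n_create, n_disrupt, pseudocount=1.0):
    """Log-ratio of observed creation vs. disruption counts per sample group."""
    return np.log((np.asarray(n_create) + pseudocount) / (np.asarray(n_disrupt) + pseudocount))

def compare_with_prediction(predicted_diff, n_create, n_disrupt):
    """Return the observed log-ratios and their Pearson correlation with the predicted
    differential alteration probabilities (creation minus disruption)."""
    obs = observed_log_ratio(n_create, n_disrupt)
    r = np.corrcoef(np.asarray(predicted_diff), obs)[0, 1]
    return obs, r
```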
Fig. 8 Comparison of the predicted differential alteration (creation minus disruption; x-axis) with the observed log-ratio of motif creation counts versus disruption counts, obtained from the SNVs in each tumor subtype. Correlation coefficients and p-values for the regressions are indicated in the maps. Colored dots represent samples belonging to a tumor subtype and a UMAP cluster; some sample groups of interest are indicated explicitly
The purpose of this work is to translate the patterns of mutational signatures observed in cancer genomes into patterns of alteration of motifs corresponding to transcription factor binding sites. For this purpose, we have established a theoretical framework to compute the probabilities of alteration (creation or disruption) over a large set of known motifs from the JASPAR database, using a k-mer based approach. We observed distinct patterns of creation/disruption events, which are generally shared by the motifs belonging to the same TF family. However, despite this general agreement within a family, differences can be observed, for example for the GATA-family motifs highlighted in Fig. 3. This could possibly shift the number of occurring binding sites from one transcription factor to a different one of the same family, and lead to a partial rewiring of the regulatory network.
This model only takes into account the effect of the mutational signatures and therefore determines an overall expected background of motif alterations, in the absence of any further evolutionary mechanism such as positive or negative selection. Departures from these expected patterns can be interpreted as the effect of additional specific mechanisms impacting the landscape of binding motifs. In this respect, the example of Ets motifs is interesting. We find an overall tendency towards an excess of disruption of motifs over creation of novel motifs across many tumor types, especially in melanoma. This is also confirmed by the overall observed patterns of motif breaks due to single-nucleotide somatic variants. In melanoma, this fits the described excess of mutations within Ets binding sites due to UV radiation [17]. In accordance with this, we only observe this strong disruption bias for tumor samples within a specific cluster strongly exposed to the UV-light signature, but not in the other melanoma samples. This seems to indicate that, although the published effect is restricted to actual Ets binding sites, which are far less abundant than Ets motifs, our signature-based model captures it. However, the most prominent non-coding regulatory mutation described so far affecting Ets binding is actually the creation of novel binding sites within the TERT promoter region [1, 2]. Hence, we find an overall genome-wide excess of Ets motif disruption, but a focal appearance of novel Ets binding sites. We have previously shown that this tendency towards creation of novel Ets binding motifs is found in other gene promoters, such as BCL2 in lymphoma or NEAT1 in liver cancer [4]. These also drive changes in the expression of the corresponding gene (especially TERT and BCL2), highlighting the potential functional significance of these focal events.
Very few non-coding driver mutations have been found in the extensive PCAWG study [18]. Beyond the TERT promoter mutation, a number of recurrent promoter mutations have been identified; however, it is unclear whether they have a functional impact, given the lack of association with gene expression changes (e.g. PAX5). We also expect that the vast majority of the alteration patterns described here will individually have a low, if any, functional impact. These would classically be defined as passenger mutations, which are usually discarded in cancer studies. However, our recent study has highlighted that the mutational load of so-called passenger mutations might contribute globally to a functional impact and to the cancer phenotype [4]. Hence, describing the global pattern of motif alterations induced by mutational signatures sheds new light on the potential impact of mutational signatures in shaping the global load of somatic mutations in cancer genomes.
In this study, we investigated the theoretical impact of cancer mutational signatures on regulatory elements of the non-coding genome and provided a Bayesian framework for motif alteration analysis based on mutational signatures. One of the key findings of this study is the correlation between motif creation and motif disruption, which was found to be strongly and positively associated with motif entropy. Furthermore, previously described effects, such as the impact of specific signatures on families of transcription factors, can be reproduced by our theoretical model. An intriguing finding is that the described non-coding driver leading to MYB site creation in T-ALL could be related to a globally increased creation probability in APOBEC-driven cancer types. Finally, the motif alteration signatures were used to infer the alteration events of each PCAWG cohort using the corresponding signature exposures. We confirmed that Ets motifs in melanoma display a strong excess of motif disruptions over novel motif creations, especially for the samples exposed to the UV-induced signature; this is predicted by our model and validated by the actual SNV disruption counts. These motif alteration signatures can therefore serve as a background model for point mutation analysis in large datasets.
Abbreviations
ICGC: International cancer genome consortium
MMR: DNA mismatch repair
NER: Nucleotide excision repair
PAM: Partitioning around medoids
PCAWG: Pan-cancer analysis of whole genomes
PWM: Position weight matrix
RSAT: Regulatory sequence analysis toolbox
SNV: Single-nucleotide variant
SOM: Self-organizing map
TF: Transcription factor
TFBS: Transcription factor binding site
UMAP: Uniform manifold approximation and projection
WES: Whole-exome sequencing
WGS: Whole-genome sequencing
Killela PJ, Reitman ZJ, Jiao Y, Bettegowda C, Agrawal N, Diaz Jr LA, Friedman AH, Friedman H, Gallia GL, Giovanella BC, Grollman AP, He TC, He Y, Hruban RH, Jallo GI, Mandahl N, Meeker AK, Mertens F, Netto GJ, Rasheed BA, Riggins GJ, Rosenquist TA, Schiffman M, Shih Ie M, Theodorescu D, Torbenson MS, Velculescu VE, Wang TL, Wentzensen N, Wood LD, et al. TERT promoter mutations occur frequently in gliomas and a subset of tumors derived from cells with low rates of self-renewal. Proc Nat Acad Sci USA. 2013; 110(15):6021–6. https://doi.org/10.1073/pnas.1303607110.
Huang FW, Hodis E, Xu MJ, Kryukov GV, Chin L, Garraway LA. Highly Recurrent TERT Promoter Mutations in Human Melanoma. Science. 2013; 339(6122):957–9. https://doi.org/10.1126/science.1229259.
Mansour MR, Abraham BJ, Anders L, Berezovskaya A, Gutierrez A, Durbin AD, Etchin J, Lawton L, Sallan SE, Silverman LB, Loh ML, Hunger SP, Sanda T, Young RA, Look AT. Oncogene regulation. An oncogenic super-enhancer formed through somatic mutation of a noncoding intergenic element. Science. 2014; 346(6215):1373–7. https://doi.org/10.1126/science.1259037.
Kumar S, Warrell J, Li S, McGillivray P, Meyerson W, Salichos L, Harmanci A, Martinez-Fundichely A, Chan CWY, Nielsen M, Lochovsky L, Zhang Y, Li X, Pedersen JS, Herrmann C, Getz G, Khurana E, Gerstein M. Passenger mutations in 2500 cancer genomes: Overall molecular functional impact and consequences. bioRxiv. 2018. https://doi.org/10.1101/280446.
Mcfarland CD, Korolev KS, Kryukov GV, Sunyaev SR. Impact of deleterious passenger mutations on cancer progression. Proc Nat Acad Sci USA. 2013; 110(8):1–6. https://doi.org/10.1073/pnas.1213968110.
Alexandrov LB, Nik-Zainal S, Wedge DC, Aparicio SAJR, Behjati S, Biankin AV, Bignell GR, Bolli N, Borg A, Børresen-Dale A-L, Boyault S, Burkhardt B, Butler AP, Caldas C, Davies HR, Desmedt C, Eils R, Eyfjörd JE, Foekens JA, Greaves M, Hosoda F, Hutter B, Ilicic T, Imbeaud S, Imielinski M, Imielinsk M, Jäger N, Jones DTW, Jones D, Knappskog S, et al. Signatures of mutational processes in human cancer. Nature. 2013; 500(7463):415–21. https://doi.org/10.1038/nature12477.
Sabarinathan R, Mularoni L, Deu-pons J, Gonzalez-perez A, López-bigas N. Nucleotide excision repair is impaired by binding of transcription factors to DNA. Nature. 2016; 532(7598):264–7. https://doi.org/10.1038/nature17661.
Pleasance ED, Cheetham RK, Stephens PJ, McBride DJ, Humphray SJ, Greenman CD, Varela I, Lin M-L, Ordonez GR, Bignell GR, Ye K, Alipaz J, Bauer MJ, Beare D, Butler A, Carter RJ, Chen L, Cox AJ, Edkins S, Kokko-Gonzales PI, Gormley NA, Grocock RJ, Haudenschild CD, Hims MM, James T, Jia M, Kingsbury Z, Leroy C, Marshall J, Menzies A, et al. A comprehensive catalogue of somatic mutations from a human cancer genome. Nature. 2010; 463(7278):191–6.
Mathelier A, Fornes O, Arenillas DJ, Chen CY, Denay G, Lee J, Shi W, Shyr C, Tan G, Worsley-Hunt R, Zhang AW, Parcy F, Lenhard B, Sandelin A, Wasserman WW. JASPAR 2016: A major expansion and update of the open-access database of transcription factor binding profiles. Nucleic Acids Res. 2016; 44(D1):110–5. https://doi.org/10.1093/nar/gkv1176.
Thomas-Chollier M, Defrance M, Medina-Rivera A, Sand O, Herrmann C, Thieffry D, van Helden J. Rsat 2011: regulatory sequence analysis tools. Nucleic Acids Res. 2011; 39(suppl_2):86–91.
Alexandrov L, Kim J, Haradhvala NJ, Huang MN, Ng AWT, Boot A, Covington KR, Gordenin DA, Bergstrom E, Lopez-Bigas N, Klimczak LJ, McPherson JR, Morganella S, Sabarinathan R, Wheeler DA, Mustonen V, Getz G, Rozen SG, Stratton MR. The repertoire of mutational signatures in human cancer. bioRxiv. 2018. https://doi.org/10.1101/322859.
McInnes L, Healy J, Melville J. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. arXiv preprint arXiv:1802.03426. 2018.
McInnes L, Healy J, Astels S. hdbscan: Hierarchical density based clustering. J Open Source Softw. 2017; 2(11). https://doi.org/10.21105/joss.00205.
Zemojtel T, Kiebasa SM, Arndt PF, Behrens S, Bourque G, Vingron M. CpG deamination creates transcription factor-binding sites with high efficiency. Genome Biol Evol. 2011; 3(1):1304–11. https://doi.org/10.1093/gbe/evr107.
Fox EJ, Salk JJ, Loeb LA. Exploring the implications of distinct mutational signatures and mutation rates in aging and cancer. Genome Med. 2016; 8(1):30. https://doi.org/10.1186/s13073-016-0286-z.
Li Z, Abraham B, Berezovskaya A, Farah N, Liu Y, Leon T, Fielding A, Tan SH, Sanda T, Weintraub A, et al. Apobec signature mutation generates an oncogenic enhancer that drives lmo1 expression in t-all. Leukemia. 2017; 31(10):2057.
Mao P, Brown AJ, Esaki S, Lockwood S, Poon GMK, Smerdon MJ, Roberts SA, Wyrick JJ. ETS transcription factors induce a unique UV damage signature that drives recurrent mutagenesis in melanoma. Nat Commun. 2018; 9(1):2626. https://doi.org/10.1038/s41467-018-05064-0.
Rheinbay E, Parasuraman P, Grimsby J, Tiao G, Engreitz JM, Kim J, Lawrence MS, Taylor-Weiner A, Rodriguez-Cuevas S, Rosenberg M, Hess J, Stewart C, Maruvka YE, Stojanov P, Cortes ML, Seepo S, Cibulskis C, Tracy A, Pugh TJ, Lee J, Zheng Z, Ellisen LW, Iafrate AJ, Boehm JS, Gabriel SB, Meyerson M, Golub TR, Baselga J, Hidalgo-Miranda A, Shioda T, et al. Recurrent and functional regulatory mutations in breast cancer. Nature. 2017; 547(7661):55–60. https://doi.org/10.1038/nature22992.
The authors would like to acknowledge the input and assistance of Jules Kerssemakers regarding the implementation of the data processing pipeline, and Ashwini Sharma for his scientific suggestions and input on the presentation of the contents. CWYC acknowledges funding through the NCT3.0 program.
This project was partly funded by DKFZ internal funding and the NCT3.0 "ENHANCE" program through a fellowship to CC. We acknowledge financial support by the Deutsche Forschungsgemeinschaft within the funding programme Open Access Publishing, by the Baden-Württemberg Ministry of Science, Research and the Arts, and by Ruprecht-Karls-Universität Heidelberg. The funding agencies had no influence on the design of the study, the analysis and interpretation of the data, or the writing of the manuscript.
The data generated or analysed during this study are included in this published article in Additional file 6. Scripts are available upon request from the corresponding author (CH).
Division of Theoretical Bioinformatics, German Cancer Research Center (DKFZ), Heidelberg, 69120, Germany
Calvin Wing Yiu Chan, Zuguang Gu, Matthias Bieg, Roland Eils & Carl Herrmann
Faculty of Biosciences, Heidelberg University, Heidelberg, 69120, Germany
Calvin Wing Yiu Chan
Center for Digital Health, Berlin Institute of Health (BIH), Berlin, 10178, Germany
Matthias Bieg & Roland Eils
Health Data Science Unit, Medical Faculty University Heidelberg and BioQuant, Heidelberg, 69120, Germany
Roland Eils & Carl Herrmann
CC, CH performed the analysis. ZG and MB provided assistance for data processing. RE and CH supervised the project.
Correspondence to Carl Herrmann.
Additional file 1: Figure S1. Motif alteration signature heatmaps where the vertical axis corresponds to the 30 mutational signatures and the horizontal axis to the 512 motifs in the JASPAR database, with a k_clust = 4 PAM clustering performed over the TF alteration probabilities. The TF family is indicated as a colored bar on the right-hand side. (pdf 199 kb)
Additional file 2: Figure S2. Differential motif alteration (creation minus disruption) heatmap of TFs in the COSMIC cancer gene census. Shown are the 40 TFs with the highest differential probability across at least 3 signatures. (pdf 28 kb)
Additional file 3: Figure S3. (top) UMAP representation of the 2708 WGS samples from PCAWG, according to their exposure to the mutational signatures; colors indicate the tumor subtype. (bottom) Clustering of the UMAP map using hdbscan; the number of samples within each cluster and their tumor subtypes are indicated as a barplot (bottom right). (pdf 245 kb)
Additional file 4: Figure S4. Pseudo-code to compute the conditional alteration probability. (pdf 104 kb)
Additional file 5: Figure S5. List of abbreviations used in the PCAWG data for tumor subtypes. (pdf 36 kb)
Additional file 6. Excel sheet containing the predicted alteration frequencies for all JASPAR motifs for all 30 COSMIC mutational signatures. (xlsx 1977 kb)
Yiu Chan, C.W., Gu, Z., Bieg, M. et al. Impact of cancer mutational signatures on transcription factor motifs in the human genome. BMC Med Genomics 12, 64 (2019). https://doi.org/10.1186/s12920-019-0525-4
Keywords: SNV, Mutational signature
Acquisition of a 3 min, two-dimensional glacier velocity field with terrestrial radar interferometry
Journal of Glaciology, Volume 63, Issue 240
August 2017, pp. 629–636
DENIS VOYTENKO (a1), TIMOTHY H. DIXON (a2), DAVID M. HOLLAND (a1), RYAN CASSOTTO (a3), IAN M. HOWAT (a4), MARK A. FAHNESTOCK (a5), MARTIN TRUFFER (a5) and SANTIAGO DE LA PEÑA (a4)
1 Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
2 School of Geosciences, University of South Florida, Tampa, FL, USA
3 Department of Earth Sciences, University of New Hampshire, Durham, NH, USA
4 School of Earth Sciences and Byrd Polar Research Center, The Ohio State University, Columbus, OH, USA
5 Geophysical Institute, University of Alaska Fairbanks, Fairbanks, AK, USA
Copyright: © The Author(s) 2017
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
DOI: https://doi.org/10.1017/jog.2017.28
Published online by Cambridge University Press: 06 June 2017
Fig. 1. Map of the field area. Site location relative to Greenland marked by red star (inset). R1 and R2 show the locations of the TRI instruments. Their respective coverage areas are shown by overlapping gray scales, overlain on a LANDSAT image from 11 August 2014 (obtained from landsatlook.usgs.gov).
Fig. 2. Diagram of the field set-up for the derivation of Eqn (1). Radar positions are R1 and R2 (triangles), LOS velocities measured with each radar are V_R1 and V_R2, and the angles from the radars to a parcel of interest on the glacier (stars) are θ_1 and θ_2 (dashed lines). The true velocity of an ice parcel observed by the radars is V_glac, and its north and east components are V_y and V_x. The look vectors of the radars are given by the dotted lines originating from the radars toward the parcel of interest. V_x and V_y are obtained using the measured V_R1 and V_R2 and the known (from the instrument locations) θ_1 and θ_2. Note that the LOS velocities (V_R1 and V_R2) are obtained from a vector projection of V_glac onto the look vectors of R1 and R2. Also note that if the glacier is moving toward the radar, the measured velocity is negative because the distance (slant range) is decreasing between the first and second image in the interferogram.
Fig. 3. Plots of ice velocity derived from the 3 min interferometric data pair (left) and the 3-day feature tracking (right). The velocity magnitudes and directions are similar. The feature-tracking data are truncated near the terminus due to a calving event.
Fig. 4. A pointwise comparison between the overlapping 2-D interferometric and feature-tracking data for components >5 m d−1. The plots compare eastward velocity (a), northward velocity (b), azimuth (c) and velocity magnitude (d). One-to-one relationship lines are shown on every plot. Note that the variability is close to what is described by the Monte Carlo simulation (Figs 6, 7). Error bars show the one SD uncertainties for the interferometric data. The feature-tracking (PIV) uncertainty is assumed to be 0.1 pixels (Huang and others, 1997), which, in this case, is 0.2 m d−1 (0.5 m over a 3-day period, to one significant figure) for each of the velocity components. The feature-tracking uncertainty error bars for the magnitude and azimuth are derived from another Monte Carlo simulation.
Fig. 5. A visual example of a Monte Carlo simulation (1000 runs) for a single point on the glacier surface. Here, the inputs are the radar velocity measurements (V_R1 and V_R2) and their respective view directions (θ_1 and θ_2). Note that we only obtain the distributions of V_x and V_y using the simulation; we then calculate the azimuth and velocity magnitude, along with their respective uncertainties, from the results of the simulation. This process is repeated for every desired data point over the glacier surface to produce Figures 6, 7.
Fig. 6. East (a) and north (b) velocity component maps and their uncertainties derived from interferometry and the Monte Carlo simulation. Note that northward uncertainties (d) are higher than eastward uncertainties (c), likely due to the scan orientations of the two TRIs.
Fig. 7. Direction and velocity magnitudes derived from interferometry (a, b) and their respective uncertainties (c, d) are all calculated from the Monte Carlo simulation. Note that azimuth is defined to be positive clockwise from north.
Fig. 8. Plot of error ellipses for a selected subset of points over the glacier surface overlain on a map of velocity magnitude derived from interferometry. The ellipses (blue) cover two SDs and are scaled relative to the arrows (white) showing the direction of motion and speed. Note that the north uncertainty is much greater than the east uncertainty.
Fig. 9. Contours of loss of digits of precision considering the positions of the two radars (red stars) calculated from the condition number of the matrix of coefficients in Eqns (5) and (6). Most of the terminus experiences between 0.5 and 1.5 digits of loss.
Send article to Kindle
Volume 63, Issue 240
Available formats PDF Please select a format to send.
Send article to Dropbox
To send this article to your Dropbox account, please select one or more formats and confirm that you agree to abide by our usage policies. If this is the first time you use this feature, you will be asked to authorise Cambridge Core to connect with your <service> account. Find out more about sending content to Dropbox.
Send article to Google Drive
To send this article to your Google Drive account, please select one or more formats and confirm that you agree to abide by our usage policies. If this is the first time you use this feature, you will be asked to authorise Cambridge Core to connect with your <service> account. Find out more about sending content to Google Drive.
Outlet glaciers undergo rapid spatial and temporal changes in flow velocity during calving events. Observing such changes requires both high temporal and high spatial resolution methods, something now possible with terrestrial radar interferometry. While a single such radar provides line-of-sight velocity, two radars define both components of the horizontal flow field. To assess the feasibility of obtaining the two-dimensional (2-D) flow field, we deployed two terrestrial radar interferometers at Jakobshavn Isbrae, a major outlet glacier on Greenland's west coast, in the summer of 2012. Here, we develop and demonstrate a method to combine the line-of-sight velocity data from two synchronized radars to produce a 2-D velocity field from a single (3 min) interferogram. Results are compared with the more traditional feature-tracking data obtained from the same radar, averaged over a longer period. We demonstrate the potential and limitations of this new dual-radar approach for obtaining high spatial and temporal resolution 2-D velocity fields at outlet glaciers.
Velocity fields of tidewater glaciers are sensitive indicators of the various driving and resisting forces acting upon them (e.g. Howat and others, 2008). Unfortunately, it is challenging to obtain such data with adequate spatial and temporal resolution. Discrete GPS measurements undersample the velocity field in a spatial sense. Satellite observations give a well-resolved two-dimensional (2-D) (horizontal) velocity field via feature or speckle tracking (e.g. Joughin and others, 2008; Ahn and Howat, 2011) but undersample the temporal variation. Temporal resolution of ice velocity from spaceborne sensors is generally limited to several days or more and thus undersamples short-term fluctuations in the highly dynamic zones of marine-terminating glaciers, where iceberg calving and changes in basal water pressure may be frequent and lead to rapid stress and velocity variations.
Jakobshavn Isbrae is an outlet glacier on the west coast of Greenland (Fig. 1). The main trunk of the glacier is ~5 km wide and moves at ~40 m d−1 (Amundson and others, 2010) with cliff heights of ~100 m (Xie and others, 2016). Jakobshavn drains ~6% of the Greenland ice sheet (Bindschadler, 1984) and is likely to have accounted for ~4% of the increase in sea-level rise rate for the 20th century (Houghton and others, 2001). It represents an important target for research aimed at understanding the overall health of the Greenland ice sheet.
Terrestrial radar interferometers (TRIs) have been used to study a variety of geophysically deforming surfaces at very high (minute-scale) sampling rates. Caduff and others (2015) and Voytenko and others (2015a) review the basic TRI technique. Voytenko and others (2015c) used near-field TRI observations to resolve the vertical component of deformation along a tidewater glacier terminus. Xie and others (2016) observed a calving event at Jakobshavn using a TRI. Feature-tracking techniques have been applied to TRI observations of Jakobshavn's proglacial fjord (Peters and others, 2015). However, such techniques have limited temporal resolution compared with interferometric measurements (hour vs minute-scale) or require very fast motion (e.g. ice mélange during a calving event). Although a single instrument provides only scalar line-of-sight (LOS) measurements, in principle, measurements from two identical synchronized TRIs positioned at different locations but observing a common overlapping area can be combined to define both horizontal components of glacier velocity.
In this study, our TRIs are real-aperture Ku band (1.74 cm wavelength) GAMMA Portable Radar Interferometers (GPRI) (Werner and others, 2008). Each instrument has three antennas (one transmitting and two receiving). The receiving antennas have a 25 cm baseline, and the transmitting and lower receiving antenna (which are the ones used in this study) have a 60 cm baseline, which allow for displacement sensitivity of ~1 mm and an elevation sensitivity of 3 m at a distance of 2 km (Strozzi and others, 2012). The antennas are attached to a rotating frame and scan an arc of a specified angle to image the scene. The maximum range of our TRIs is ~16 km, with a nominal range resolution of 0.75 m and an azimuth resolution of 7 m at 1 km, which linearly widens with distance. TRIs are designed for installation on stable bedrock. Deployments on moving ice are logistically difficult, complicate the measurements, and thus are not ideal. Because the TRI does not move in space during a study period, no topographic phase correction is required when processing the interferograms.
Methods to obtain components of motion from two viewpoints have been developed for weather radars (Lhermitte and Miller, 1970) and subsequently for satellite synthetic aperture radar data from ascending and descending passes (Joughin and others, 1998; Fialko and others, 2005). Here, we present a similar TRI-based approach to resolve the two horizontal components of the surface velocity field at Jakobshavn.
2.1. Derivation of equations to combine data from two radars
Consider a parcel of glacier ice that is moving in two dimensions and is seen by two radars (TRIs). Assume that the vertical motion is insignificant. The components of the velocity vector of the parcel with respect to east and north are V_x and V_y. The TRI, however, only measures the velocity of the parcel in the direction of its LOS, with V_R1 representing the LOS velocity measured by TRI 1 and V_R2 representing the LOS velocity measured by TRI 2. The LOS angles (measured counterclockwise from east by convention) for each azimuth line are θ_1 and θ_2 for TRIs 1 and 2, respectively (Fig. 2).
We derive an equation describing the measured LOS velocity as a function of V_x and V_y of the parcel and the LOS angle θ of the TRI. We do this by rotating the coordinate system of a given vector [V_x, V_y] by the angle θ.
(1) $$\left[ {\matrix{ {V^{\prime}_x} \cr {V^{\prime}_y} \cr}} \right] = \left[ {\matrix{ {{\rm cos}(\theta )} & {{\rm sin}(\theta )} \cr { - {\rm sin}(\theta )} & {{\rm cos}(\theta )} \cr}} \right]\left[ {\matrix{ {V_x} \cr {V_y} \cr}} \right].$$
Note that V′_x is the component of velocity in the direction of the unit vector given by θ, which happens to be the velocity component measured by the TRI. Since this is the only component measured by the TRI, we can ignore the other component of the rotation (V′_y) and obtain an equation for the measured radial velocity:
(2) $$V_R = V_x {\rm cos}(\theta ) + V_y {\rm sin}(\theta ).$$
Therefore, the velocity equations for both TRIs are:
(3) $$V_{R1} = V_x {\rm cos}(\theta _1 ) + V_y {\rm sin}(\theta _1 ),$$
(4) $$V_{R2} = V_x {\rm cos}(\theta _2 ) + V_y {\rm sin}(\theta _2 ).$$
The above equations are sufficient to generate a forward model for the dual TRI velocity problem.
In reality, the two TRIs measure V_R1 and V_R2, where θ_1 and θ_2 are the instrument look angles, known from the locations of the instruments and the georeferenced images. The focused TRI images are by default in polar coordinates, where one axis is azimuth (an angular measure of the scan) and the other is slant range (distance from the origin, measured as the shortest distance from radar to target). Therefore, we can rewrite the TRI velocity equations in the Ax = b form and solve for V_x and V_y at each point using matrix inversion.
(5) $$\left[ {\matrix{ {V_x} \cr {V_y} \cr}} \right] = \left[ {\matrix{ {{\rm cos}(\theta _1 )} & {{\rm sin}(\theta _1 )} \cr {{\rm cos}(\theta _2 )} & {{\rm sin}(\theta _2 )} \cr}} \right]^{ - 1} \left[ {\matrix{ {V_{R1}} \cr {V_{R2}} \cr}} \right].$$
The inverse matrix can be written explicitly with no numerical inversion required:
(6) $$A^{ - 1} = \displaystyle{1 \over {det(A)}}\left[ {\matrix{ {{\rm sin}(\theta _2 )} & { - {\rm sin}(\theta _1 )} \cr { - {\rm cos}(\theta _2 )} & {{\rm cos}(\theta _1 )} \cr}} \right],$$
where det(A) = cos(θ_1)sin(θ_2) − sin(θ_1)cos(θ_2) = sin(θ_2 − θ_1).
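As an illustration of Eqns (5) and (6), the following NumPy sketch recovers the east and north velocity components from the two LOS measurements; the interferometric processing itself was done with the GAMMA software, so this is only a stand-alone numerical example.

```python
import numpy as np

def los_to_horizontal(v_r1, v_r2, theta1, theta2):
    """Solve Eqns (5)-(6): V_x, V_y from two LOS velocities and look angles (radians,
    measured counterclockwise from east). Works elementwise on arrays of pixels."""
    det = np.sin(theta2 - theta1)                      # det(A) = sin(theta2 - theta1)
    v_x = (np.sin(theta2) * v_r1 - np.sin(theta1) * v_r2) / det
    v_y = (-np.cos(theta2) * v_r1 + np.cos(theta1) * v_r2) / det
    return v_x, v_y
```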
2.2. Data acquisition and processing
We set up two TRI instruments on the south side of the Ilulissat fjord ~6 km from the terminus of Jakobshavn Isbrae (Fig. 1) and collected data with a 3 min sampling interval. The radar separation was ~1 km, constrained by the local geography. One radar was at an elevation of 314 m and the other at 270 m. A built-in GPS receiver provides accurate clock information to the TRI. Because of the 3 min sampling rate, we set the acquisition start times on both instruments to be the same (as opposed to being offset by 1 or 2 min). We use a single acquisition pair (2012/08/01 20:01 and 2012/08/01 20:04) to demonstrate the concept of our method, and compare the results with a longer 3-day period (2012/07/31 16:05 to 2012/08/03 16:06) of motion derived by feature tracking.
We prepare the interferograms from non-resampled single-look complex files using the GAMMA software package. During processing, we multilook (spatially average) the TRI data by 12 looks (averaged pixels) in range, smooth the interferograms with an adaptive filter (Goldstein and Werner, 1998), set a phase unwrapping mask focused on the stationary rocks and the main trunk of the glacier (ignoring the ice mélange), and unwrap the phase using a minimum cost flow algorithm. We then convert the unwrapped interferograms (and the multilooked intensity images) into rectangular coordinates with 15 m pixel spacing. This conversion is needed for georeferencing to properly define the angles relative to due east. Due to the small elevation differences between the TRIs and the glacier (~200 m over a horizontal distance of ~5 km, suggesting an incidence angle of ~ 2°), we do not perform a terrain correction for conversion from slant range to ground range. The interferograms are then converted into velocities via a multiplicative factor of − λ/4πΔt, where λ is the wavelength (1.74 cm for the GPRI) and Δt is the time between the images in the interferogram (3 min in our case).
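The phase-to-velocity scaling mentioned above is a single multiplication; a minimal sketch (with the 3 min interval expressed in days so that velocities come out in m d−1) is shown below. The sign convention follows the text: motion toward the radar gives a negative LOS velocity.

```python
import numpy as np

WAVELENGTH = 0.0174          # m, Ku-band GPRI wavelength
DT_DAYS = 3.0 / (60 * 24)    # 3 min interferogram interval expressed in days

def phase_to_velocity(unwrapped_phase):
    """Convert unwrapped interferometric phase (radians) to LOS velocity in m/d."""
    return -WAVELENGTH / (4.0 * np.pi * DT_DAYS) * unwrapped_phase
```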
One of the radar datasets suffers from phase unwrapping issues, likely due to the look direction of the radar relative to the principal direction of glacier motion and the 3 min sampling rate. We correct for this by manually adjusting the resulting unwrapped interferogram by two cycle slips and recalculating the velocity (the offset is determined by examining an area where the look angles of both radars are similar and should therefore measure similar rates). The intensity images are then compared with LANDSAT images and rotated by 39° and 59° to correct for offsets related to the instrument orientations, allowing accurate calculation of the horizontal viewing angle. These offsets are determined by iteratively adjusting the rotation angle of a georeferenced image until salient features (typically rock outcrops) visually overlap with the same features in a LANDSAT image. After georeferencing both velocity datasets to the same image space, we calculate θ_1 and θ_2 for each overlapping pixel using the radar pixel coordinates and the pixel coordinates containing the velocity values measured by each radar, and solve for the east and north components of velocity using Eqn (5) and the analytically derived inverse matrix in Eqn (6).
We use the correlation-based OpenPIV (particle image velocimetry) package (Taylor and others, 2010) to measure offsets over a 3-day period from the TRI images to compare our results. Here, the georeferenced intensity images have 5 m pixel spacing with a search window of 64 pixels and an overlap of 32 pixels. The output of each velocity component is smoothed with a 5-pixel median filter. We then resample the resulting interferometric velocity component maps to the dimensions of the feature-tracking data and smooth them with a 5-pixel median filter for comparison (Figs 3, 4).
We combine the above method with a Monte Carlo simulation to perform the uncertainty analysis, offering a more convenient alternative to analytical error propagation (which would require calculating partial derivatives). The Monte Carlo method (Fig. 5) estimates the probability distribution of the solution by repeatedly solving the equations while sampling from the known or, more commonly, assumed probability distributions of the input parameters; here, these are the radar LOS velocities, their orientations, and their respective standard deviations (SDs). A large enough number of runs is performed to generate meaningful distribution statistics. The results are the velocity components, each with its own probability distribution characterized by a mean and SD.
We assume that there are no phase unwrapping errors (from inspection and after accounting for the offset mentioned earlier) and that the radar-derived velocity and angle values are normally distributed with SDs of 0.5 m d−1 derived from on-rock TRI measurements at Breiðamerkurjökull (Voytenko and others, 2015b) and 0.1° (the smallest eye-detectable difference when manually georeferencing the TRI image to a background LANDSAT image; the TRI azimuth positioner errors are negligible), respectively. We then run a simulation with 1000 samples for every measurement point and calculate the resulting 'average' velocity (in a probabilistic sense) along with an uncertainty (SD) for each component (Fig. 6). We then use the resulting data to obtain the best estimate and uncertainty of the principal direction of glacier motion (azimuth, given by $90{\rm ^\circ} - \arctan (V_y /V_x )$ ) and the velocity magnitude (given by $(V_x^2 + V_y^2 )^{1/2} $ ) (Fig. 7).
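A minimal per-pixel version of this simulation, reusing the assumed projection from the earlier sketch and the stated input distributions (0.5 m d−1 for LOS velocities, 0.1° for angles, 1000 draws), could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_velocity(v1, v2, theta1_deg, theta2_deg,
                         sd_v=0.5, sd_theta=0.1, n_samples=1000):
    """Propagate LOS velocity and orientation uncertainties through the two-radar
    solve by sampling from the assumed normal distributions of the inputs."""
    v1s = rng.normal(v1, sd_v, n_samples)
    v2s = rng.normal(v2, sd_v, n_samples)
    t1s = np.radians(rng.normal(theta1_deg, sd_theta, n_samples))
    t2s = np.radians(rng.normal(theta2_deg, sd_theta, n_samples))
    det = np.cos(t1s) * np.sin(t2s) - np.sin(t1s) * np.cos(t2s)
    vx = (np.sin(t2s) * v1s - np.sin(t1s) * v2s) / det
    vy = (-np.cos(t2s) * v1s + np.cos(t1s) * v2s) / det
    # Mean and SD of each component approximate the 'average' velocity and its uncertainty.
    return vx.mean(), vx.std(), vy.mean(), vy.std()
```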
Since our method involves matrix inversion, we also need to consider precision loss due to the conditioning of the system (Atkinson, 2008). The condition number (κ) represents the orthogonality of the system (a high condition number implies that the vectors inside the matrix are not orthogonal and that a small perturbation of the inputs will result in a large change in the outputs). A condition number of 1 implies orthogonality. We use the condition number of A, the matrix to be inverted (see the matrix of coefficients in Eqns (5) and (6)), and an L2 norm to estimate precision loss for our 2-D velocity results. Note that the matrix only depends on the orientation of the two radars (i.e. the instrument locations and not the actual velocity measurements). In our case, A is ill-conditioned when the look angles of the TRIs are similar.
(7) $$\kappa = \| A \|_2 \, \| A^{-1} \|_2. $$
Following the equation below, we calculate C, the number of digits of precision loss.
(8) $$C = \log_{10} \kappa. $$
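A small sketch of Eqns (7) and (8), with the coefficient matrix built under the same assumed projection as in the sketches above, shows how the precision loss depends only on the two look angles:

```python
import numpy as np

def precision_loss(theta1_deg, theta2_deg):
    """Condition number (L2 norm) of the two-radar coefficient matrix and the
    corresponding number of digits of precision lost, C = log10(kappa)."""
    t1, t2 = np.radians([theta1_deg, theta2_deg])
    A = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    kappa = np.linalg.cond(A, 2)
    return kappa, np.log10(kappa)

# Perpendicular look directions are perfectly conditioned; similar ones are not.
print(precision_loss(0.0, 90.0))   # kappa = 1.0, C = 0.0
print(precision_loss(40.0, 50.0))  # kappa ~ 11,  C ~ 1.1 (about one digit lost)
```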
Most of the ice around the study area flows to the northwest with an azimuth of ~315° and uncertainties (one SD) of ±5° close to the terminus and up to ±15° further up-glacier, with velocity magnitudes ranging from ~50 m d−1 near the ice front to ~25 m d−1 up-glacier and uncertainties of ~±6 m d−1 (Figs 3, 6, 7). The Monte Carlo simulation suggests that east-west uncertainties are ~1 m d−1, while north-south uncertainties are ~6 m d−1 (Fig. 6). This is likely due to the positioning of the radars relative to the main trunk of the glacier, their limited spatial separation (constrained by our site) and the scan orientations.
The interferometric and feature-tracking techniques yield similar magnitudes and directions (Fig. 4), except for slight discrepancies in the direction of motion where interferometric azimuth is systematically ~15° greater than the feature-tracking azimuth. This may be due to different sampling periods, measurement uncertainties and the conditioning of the equations. Additionally, the interferometric results are based on a 3 min period, while the feature-tracking results cover a 72 h period and include a calving event, which could affect the average direction of ice motion and its velocity magnitude (Amundson and others, 2008; Nettles and others, 2008).
We also use the Monte Carlo simulation results to plot error ellipses (covering two SDs) for a selected subset of points (Fig. 8). The method of plotting the error ellipses depends on the uncertainty of each component (in this case, obtained from the Monte Carlo simulations) and a desired confidence interval for the ellipse and is described in detail by Haug (2012) and Voytenko and others (2015b). Note that the shape of the error ellipses agrees with the uncertainties presented in Figure 6.
While examining the condition number of the system of equations, we note that when κ is 10 (i.e. C = 1), we lose one digit of precision. Considering our previous velocity measurement uncertainty assumption of ±0.5 m d−1 (i.e. our measurements are precise to the ones place), losing one digit of precision means that the resulting velocity is only accurate to the tens place, or ±5 m d−1.
Given our instrument locations, most of the terminus loses a little over one digit of precision (Fig. 9), suggesting uncertainties of at least 5 m d−1 for each component, which are more conservative than the uncertainties computed from the Monte Carlo simulation (~1 to ~6 m d−1) (Fig. 6).
At Jakobshavn, locations for both radars that would be optimal for this method are either covered with glacier ice or are close to 16 km away from the terminus (near the useful range of the instrument), making 2-D interferometric measurements with TRI challenging at this location. However, since these precision loss calculations only depend on the instrument locations, they can be used in a forward model prior to future deployments to determine suitable instrument positions at other sites. For applications involving imaging the calving front of marine-terminating glaciers, if geography and logistics allow, the optimum radar separation should be such that the two radars are imaging the target area at close to right angles (to improve the conditioning of the system of equations) and within the operating range of the instruments.
Our work shows that it is possible to obtain minute-scale 2-D velocity fields using TRI. Although feature tracking can be used to obtain 2-D velocity fields from pairs of intensity images from a single radar, the results are likely to cover multiple hours of motion. Instead, our algorithm can be used to obtain results over minute-scale time steps using two radars. This method should be applicable in any location where a favorable site geometry exists. Salient features (e.g. bare rocks), which are visible in the TRI and the background (e.g. LANDSAT) imagery, are also required for visual georeferencing.
Based on results from our deployment and the analysis presented above, we can define an improved experiment design that should yield a robust 2-D glacier velocity field in most situations.
First, the two radars should be spatially separated by an amount such that there is sufficient angular separation for most of their overlapping image areas. Optimal radar placement can be determined before field deployment by modeling radars at different locations and calculating the expected precision loss via the condition number. Ideally, the radar look directions should be perpendicular over the region of interest, the area of which should be well within the ~16 km operating range.
Second, the time interval between radar scans, which should be identical for the two instruments, should be sufficiently short such that phase unwrapping can be done accurately (as close as possible to yield a displacement of half of a wavelength, while letting the radar scan a large enough arc to cover the area of interest). For fast moving glaciers such as Jakobshavn, 1½–2 min TRI scans may be optimum. In our 2012 experiment, we used 3 min scans. This longer time scan may have contributed to some of the phase unwrapping problems we encountered.
Having frequent and spatially dense coverage of 2-D glacier surface velocities at the terminus is necessary to improve our understanding of the calving process. Since GPS deployments on rapidly moving ice are logistically difficult, costly and sparse, and satellite measurements do not have minute-scale revisit times, a combination of two TRI instruments can provide the necessary spatial and temporal resolution.
We thank editor Hamish Pritchard, Reinhard Drews and an anonymous reviewer for their valuable comments. We thank the Gordon and Betty Moore Foundation and NASA's Cryospheric Sciences program. T.H.D. acknowledges support from NASA grant NNX12AK29G. R.C. acknowledges support under the NASA Earth and Space Science Fellowship (NNX14AL29H). D.M.H. and D.V. acknowledge support of NYU Abu Dhabi Research Institute grant G1204, as well as from NASA through the 'Oceans Melting Greenland' program led by JPL with subcontract to NYU, and thank NSF Polar Programs (ARC-1304137) for support.
Ahn, Y and Howat, IM (2011) Efficient automated glacier surface velocity measurement from repeat images using multi-image/multichip and null exclusion feature tracking. IEEE Trans. Geosci. Remote Sens., 49(8), 2838–2846
Amundson, J and 5 others (2008) Glacier, fjord, and seismic response to recent large calving events, Jakobshavn Isbræ, Greenland. Geophys. Res. Lett., 35(22)
Amundson, JM and 5 others (2010) Ice mélange dynamics and implications for terminus stability, Jakobshavn Isbræ, Greenland. J. Geophys. Res.: Earth Surf., 115(F1)
Atkinson, KE (1989) An introduction to numerical analysis 2nd Edn. John Wiley & Sons, New York
Bindschadler, RA (1984) Jakobshavns Glacier drainage basin: a balance assessment. J. Geophys. Res.: Oceans (1978–2012), 89(C2), 2066–2072
Caduff, R, Schlunegger, F, Kos, A and Wiesmann, A (2015) A review of terrestrial radar interferometry for measuring surface change in the geosciences. Earth Surf. Process. Landf., 40(2), 208–228
Fialko, Y, Sandwell, D, Simons, M and Rosen, P (2005) Three-dimensional deformation caused by the Bam, Iran, earthquake and the origin of shallow slip deficit. Nature, 435(7040), 295–299
Goldstein, RM and Werner, CL (1998) Radar interferogram filtering for geophysical applications. Geophys. Res. Lett., 25(21), 4035–4038
Haug, AJ (2012) Bayesian estimation and tracking: a practical guide. John Wiley & Sons
Houghton, J and 7 others (2001) IPCC 2001: climate change 2001. The Climate Change Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change, 159
Howat, IM, Joughin, I, Fahnestock, M, Smith, BE and Scambos, TA (2008) Synchronous retreat and acceleration of southeast Greenland outlet glaciers 2000–06: ice dynamics and coupling to climate. J. Glaciol., 54(187), 646–660
Huang, H, Dabiri, D and Gharib, M (1997) On errors of digital particle image velocimetry. Meas. Sci. Technol., 8(12), 1427
Joughin, I, Kwok, R and Fahnestock, M (1998) Interferometric estimation of three-dimensional ice-flow using ascending and descending passes. IEEE Trans. Geosci. Remote Sens., 36(1), 25–37 (doi: 10.1109/36.655315)
Joughin, I and 7 others (2008) Continued evolution of Jakobshavn Isbrae following its rapid speedup. J. Geophys. Res.: Earth Surf., 113, F04006 (doi: 10.1029/2008JF001023)
Lhermitte, RM and Miller, LJ (1970) Doppler radar methodology for the observation of convective storms. In Preprints, 14th Conference on Radar Meteorology, Tucson, AZ, American Meteor Society, 133–138
Nettles, M and 9 others (2008) Step-wise changes in glacier flow speed coincide with calving and glacial earthquakes at Helheim Glacier, Greenland. Geophys. Res. Lett., 35(24)
Peters, IR and 6 others (2015) Dynamic jamming of iceberg-choked fjords. Geophys. Res. Lett., 42(4), 1122–1129
Strozzi, T, Werner, C, Wiesmann, A and Wegmuller, U (2012) Topography mapping with a portable real-aperture radar interferometer. IEEE Geosci. Remote Sens. Lett., 9(2), 277–281
Taylor, ZJ, Gurka, R, Kopp, GA and Liberzon, A (2010) Long-duration time-resolved PIV to study unsteady aerodynamics. IEEE Trans. Instrum. Meas., 59(12), 3262–3269
Voytenko, D and 7 others (2015a) Multi year observations of Breidamerkurjökull, a marine-terminating glacier in southeastern Iceland, using terrestrial radar interferometry. J. Glaciol., 61(225), 42–54
Voytenko, D and 5 others (2015b) Observations of inertial currents in a lagoon in southeastern Iceland using terrestrial radar interferometry and automated iceberg tracking. Comput. Geosci., 82, 23–30
Voytenko, D and 5 others (2015c) Tidally driven ice speed variation at Helheim Glacier, Greenland, observed with terrestrial radar interferometry. J. Glaciol., 61(226), 301
Werner, C, Strozzi, T, Wiesmann, A and Wegmüller, U (2008) Gamma's portable radar interferometer. In Proceedings of IAG/FIG, Lisbon, Portugal
Xie, S and 5 others (2016) Precursor motion to iceberg calving at Jakobshavn Isbræ, Greenland, observed with terrestrial radar interferometry. J. Glaciol., 62(236), 1134–1142
Biology 2015
The Cell Cycle and Cellular Reproduction
Sylvia S. Mader, Michael Windelspecht
Chapter Questions
For questions 1–4, match each stage of the cell cycle to its correct description.
a. G1 stage
b. S stage
c. G2 stage
d. M (mitotic) stage
At the end of this stage, each chromosome consists of two attached chromatids.
During this stage, daughter chromosomes are distributed to two daughter nuclei.
The cell doubles its organelles and accumulates the materials needed for DNA synthesis.
The cell synthesizes the proteins needed for cell division.
Which is not true of the cell cycle?
a. The cell cycle is controlled by internal/external signals.
b. Cyclin is a signaling molecule that increases and decreases as the cycle continues.
c. DNA damage can stop the cell cycle at the G1 checkpoint.
d. Apoptosis occurs frequently during the cell cycle.
The diploid number of chromosomes
a. is the 2n number.
b. is in a parent cell and therefore in the two daughter cells following mitosis.
c. varies according to the particular organism.
d. is present in most somatic cells.
e. All of these are correct.
The form of DNA that contains genes that are actively being transcribed is called
a. histones.
b. telomeres.
c. heterochromatin.
d. euchromatin.
Histones are involved in
a. regulating the checkpoints of the cell cycle.
b. lengthening the ends of the telomeres.
c. compacting the DNA molecule.
d. cytokinesis.
At the metaphase plate during metaphase of mitosis, there are
a. single chromosomes.
b. duplicated chromosomes.
c. G1 stage chromosomes.
d. always 23 chromosomes
During which mitotic phases are duplicated chromosomes present?
a. all but telophase
b. prophase and anaphase
c. all but anaphase and telophase
d. only during metaphase at the metaphase plate
e. Both a and b are correct.
Which of these is paired incorrectly?
a. prometaphase - the kinetochores become attached to spindle fibers
b. anaphase- daughter chromosomes migrate toward spindle poles
c. prophase - the nucleolus disappears and the nuclear envelope disintegrates
d. metaphase - the chromosomes are aligned in the metaphase plate
e. telophase - a resting phase between cell division cycles
Which of the following is not characteristic of cancer cells?
a. Cancer cells often undergo angiogenesis.
b. Cancer cells tend to be nonspecialized.
c. Cancer cells undergo apoptosis.
d. Cancer cells often have abnormal nuclei.
e. Cancer cells can metastasize.
Which of the following statements is true?
a. Proto-oncogenes cause a loss of control of the cell cycle.
b. The products of oncogenes may inhibit the cell cycle.
c. Tumor suppressor gene products inhibit the cell cycle.
d. A mutation in a tumor suppressor gene may inhibit the cell cycle.
In contrast to a eukaryotic chromosome, a prokaryotic chromosome
a. is shorter and fatter.
b. has a single loop of DNA.
c. never replicates.
d. contains many histones.
Which of the following is the term used to describe asexual reproduction in a single-celled organism?
a. cytokinesis
b. mitosis
c. binary fission
d. All of these are correct.
Peel Off
the extended real line from a topological pov (introduction)
The extended real line, which we will soon define, is a useful setting in the study of measure and integration, for example. But, in that context, it is seen primarily as a tool, and the topological properties one can extract from it remain relatively in the shadows, since the focus is on its useful algebraic properties and simple limit properties. We shall see that one can arrive at interesting results (that hold even when you consider problems on $\mathbb{R}$ itself) quite easily with it.
IDEA: We want to attach two points to the real line and call them $\infty$ and $-\infty$, and we want them to behave as we expect from something we would call $\infty$ and $-\infty$. For that, if we want to talk about the topology of this resulting space, we essentially have to say what are the neighbourhoods of this topology. We still want $(a-\delta,a+\delta)$ to be a neighbourhood of a real number $a$, for example. But we also want neighbourhoods of $\infty$ now. It seems a reasonable attempt to define a neighbourhood of $\infty$ as $(M,\infty]$ for example (note the closed bracket, indicating that $\infty$ is in the set).
Before proceeding, I introduce the concept of a basis of a topology. Essentially, a basis of a topology is a smaller, "controlled" collection of sets that generates the topology - it tells us which sets are open.
Definition: If $X$ is a set, a basis of a topology in $X$ is a collection $\mathcal{B}$ of sets that satisfy the following properties:
(i) For all $x \in X$ there is a set $B$ in $\mathcal{B}$ that contains $x$
(ii) If $x$ is in the intersection of two basis elements $B_1$ and $B_2$, then there is a basis element $B_3$ that contains $x$ and that is contained in the intersection of $B_1$ and $B_2$.
We define the topology $\tau$ generated by $\mathcal{B}$ to be the collection of sets $A$ that satisfy the property that for all $x \in A$, there is a basis element $B$ containing $x$ and contained in $A$.
For example, the balls of a metric space are a basis for its topology (draw it in order to understand!)
Another example of a basis (which is, in fact, a corollary of the balls of metric spaces) are the open intervals in the real line.
Of course, there are technical issues (minor ones, easily solved) that I'll pass over. We have to prove that the topology generated by $\mathcal{B}$ is in fact a topology, as defined in a previous post. If you are interested, you can do it as an exercise.
Now, let's jump into what we wanted!
Definition: Take two points that are not in $\mathbb{R}$ and call them $\infty$ and $-\infty$. Now, define
$$\displaystyle \overline{\mathbb{R}}:=\mathbb{R} \cup \{\infty,-\infty\}$$
Furthermore, define the following basis on $\displaystyle \overline{\mathbb{R}}$:
The basis $\mathcal{B}$ will consist of the open intervals and of the sets $(b, \infty]$ and $[-\infty, a)$ for all $b$ and $a$ real numbers.
That this is in fact a basis (which means that this satisfies the properties listed before) is easy to verify.
Now, in order not to introduce a lot of notation and definitions, I'll not define the subspace topology. It is not a difficult definition, but it may be abstract and not enlightening at first. Hence, I'll just rely on intuition for it, in order to justify the following: it seems clear that, if you have
$\displaystyle \overline{\mathbb{R}}$ and pass from it to $\mathbb{R}$, the topology you inherit is exactly the standard topology of $\mathbb{R}$. We will use this fact.
We arrive now at a change of point of view:
In analysis, one often learns the following definition:
We say a sequence $x_n$ converges if there is a real number $L$ such that $\forall \epsilon >0$ there exists a $N \in \mathbb{N}$ such that $n > N \implies |x_n - L| < \epsilon$. In this case, we call $L= \lim x_n$. Otherwise, we say the sequence diverges.
But we also have the following definition:
($1$) Given a sequence $x_n$, we say $\lim x_n= \infty$ if $\forall A \in \mathbb{R}$ there is a $N \in \mathbb{N}$ such that $n > N \implies x_n> A$.
Note that this is a slight abuse of notation. The sequence $x_n$ above, BY DEFINITION, does not converge. But we say $\lim x_n= \infty$, because it makes sense. To be completely honest, we should write something different, like $L ~x_n= \infty$
But note that, according to our topology, we have that the definition of $L ~x_n= \infty$ is in fact the definition of $\lim x_n= \infty$. In fact, ($1$) is precisely telling: For all neighbourhoods $V$ of infinity, there exists an $N$ such that $n > N \implies x_n \in V$. So, $x_n$ CONVERGES, and REALLY CONVERGES to $\infty$.
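As a quick sanity check (my own worked example, not from the original definitions), take $x_n = n$. Given any basic neighbourhood $V=(M,\infty]$ of $\infty$, choose a natural number $N \geq M$; then
$$n > N \implies x_n = n > M \implies x_n \in V,$$
so $x_n$ converges to $\infty$ in $\overline{\mathbb{R}}$, even though it diverges in the classical sense.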
We come to our first proposition:
Proposition: $\displaystyle \overline{\mathbb{R}}$ is compact.
Proof: Take an open cover $V_i$ of $\displaystyle \overline{\mathbb{R}}$. Choose a $V_{i_1}$ such that it contains $+\infty$, and a $V_{i_2}$ such that it contains $-\infty$. They contain a set of the form $(b,\infty]$ and $[-\infty, a)$ respectively, so the rest of the $V_i$ should cover $[a,b]$, which is contained in the complement of those sets. But, by the Heine-Borel Theorem, $[a,b]$ is compact. Hence, there is a finite subcover of $V_i$ that covers $[a,b]$. So, this finite subcover, together with $V_{i_1}$ and $V_{i_2}$ covers $\displaystyle \overline{\mathbb{R}}$. So, we arrived at a finite subcover for $\displaystyle \overline{\mathbb{R}}$. $\blacksquare$.
Corollary: Every sequence in $\displaystyle \overline{\mathbb{R}}$ has a convergent subsequence.
Note the analogy between the Bolzano–Weierstrass theorem and the corollary above. Bolzano–Weierstrass says every bounded sequence has a convergent subsequence.
We arrive now at a result that does not involve $\displaystyle \overline{\mathbb{R}}$ at first sight:
Proposition: Let $f:[0,\infty) \rightarrow \mathbb{R}$ be a continuous function such that $\displaystyle \lim _{x \rightarrow \infty}f(x) =L$, and $L<f(0)$. Then, $f$ has a maximum.
Proof: Define $\overline{f} :[0,\infty] \rightarrow \mathbb{R}$ as $f(x)$ if $x \in [0,\infty ) $ and $L$ if $x=\infty$. Since $\displaystyle \lim _{x \rightarrow \infty} f(x)=L$, $\overline{f}$ is continuous in $[0,\infty]$. Since $[0,\infty]$ is closed (not proved, but easily seen to be true) and $\displaystyle \overline{\mathbb{R}}$ is compact, $[0,\infty]$ is compact. Hence, $\overline{f}$ reaches a maximum on $[0,\infty]$. This maximum cannot be in $\infty$, since $f(0)>f(\infty)$. Hence, this maximum must be achieved in $[0,\infty)$. $\blacksquare$
Note some things about the previous demonstration:
First, there is nothing special about $0$. Any other point where $f$ is greater than $L$ would do just as well.
Secondly, this requirement (that $f(0)>L$) is just to guarantee that the maximum is in $[0,\infty)$ and not in $[0,\infty]$. In fact, there is always a maximum in $[0,\infty]$. The problem is, sometimes the maximum can be achieved at infinity. Draw an example of this (any monotonic increasing bounded function will do!).
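To spare the reader the drawing, here is one concrete choice (my own example):
$$f(x)=\frac{x}{1+x}, \qquad x \in [0,\infty),$$
which is increasing with $f(x)<1$ for every finite $x$ and $\displaystyle \lim_{x\rightarrow\infty} f(x)=1$, so its extension $\overline{f}$ attains its maximum only at $\infty$.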
We conclude by sketching the proof of the following theorem:
Theorem: A continuous bijective function $f$ on an interval has a continuous inverse.
Sketch of Proof: If the interval is of the form $[a,b]$, it is compact, and we are done.
If the interval is of the form $[a,b)$, since $f$ is continuous and bijective, it is monotonic (by the intermediate value theorem), so $\displaystyle \lim_{x \rightarrow b}f(x)$ exists (it can be $\infty$, no problem!). Pass to the extension $\overline{f}$ of $f$ on $[a,b]$. It is continuous. Hence, since $[a,b]$ is compact, the inverse is continuous. Restrict the inverse by taking away $\overline{f}(b)$. This is precisely the inverse of $f$. The rest is analogous. $\blacksquare$.
Sin and cos
We present a way to define $\sin$ and $\cos$ which is quite traditional, but show a non-canonical way to "prove" that these definitions are equivalent to the geometrical ones.
First, let's define the derivative of a function $f:\mathbb{R} \rightarrow \mathbb{C}$:
Definition: Given a function $f:\mathbb{R} \rightarrow \mathbb{C}$ written as $f(x)=\Re(f(x))+i \Im(f(x))$, define:
$f'(x)=(\Re f)'(x)+i (\Im f)'(x)$
OBS: Note that theorems like "derivative of sum is sum of derivatives" still hold, as well as the definition of derivative by the limit.
OBS: Note also that this ISN'T the derivative of a function $f:\mathbb{C} \rightarrow \mathbb{C}$. We are concerned with functions with real domain.
Now, extend the definition of exponentiation (read the first post on this blog) to complex numbers:
Definition: $\displaystyle e^z:=\sum_{n=0}^{\infty}\frac{z^n}{n!}$
The series converges for every complex $z$ by the ratio test, and the formula $e^{(z+w)}=e^ze^w$ still holds by the Cauchy product formula. Now, let's calculate the derivatives of $e^x$ and $e^{ix}$. Note that $x$ is real.
It's common to do this by theorems of power series. We shall not use them. Instead, we use more elementary methods.
For the derivative of $e^x$:
$\displaystyle \lim_{h\rightarrow 0} \frac{e^{x+h}-e^x}{h}=e^{x}\lim_{h\rightarrow 0} \frac{e^h-1}{h}$
Now, to evaluate the last limit (without using theorems of power series), do the following:
Fix an arbitrary $H >0$.
Now, given an $\epsilon >0$, there exists $n \in \mathbb{N}$ such that:
$$\frac{H^{n}}{(n+1)!}+\frac{H^{n+1}}{(n+2)!}+... \leq \epsilon$$
since the series $\displaystyle \sum_{k=0}^{\infty}\frac{H^k}{(k+1)!}$ converges by the ratio test. But note that multiplying by $h$ with $0<h<H$ gives:
$$\frac{hH^{n}}{(n+1)!}+\frac{hH^{n+1}}{(n+2)!}+... \leq \epsilon.h$$
Since $h<H$:
$$\frac{h^{n+1}}{(n+1)!}+\frac{h^{n+2}}{(n+2)!}+... \leq \frac{hH^{n}}{(n+1)!}+\frac{hH^{n+1}}{(n+2)!}+... \leq \epsilon.h$$
But then, we have:
$$e^h \leq 1+h+\frac{h^2}{2!}+\frac{h^3}{3!}+...+\frac{h^n}{n!} + \epsilon.h$$
Which gives us:
$$\frac{e^h -1}{h} \leq 1+\frac{h}{2!}+\frac{h^2}{3!}+...+\frac{h^{n-1}}{n!} + \epsilon$$
But $1\leq \frac{e^h -1}{h}$ is obvious from the definition of $e^h$. So, taking limits:
$$1 \leq \displaystyle \lim_{h\rightarrow 0^{+}} \frac{e^h -1}{h} \leq 1+\epsilon$$
But $\epsilon>0$ was arbitrary, which gives:
$$\lim_{h\rightarrow 0^{+}} \frac{e^h -1}{h} =1$$
Now, note that:
$$\displaystyle \lim_{h\rightarrow 0^{-}} \frac{e^h -1}{h} =
\lim_{h\rightarrow 0^{+}} \frac{e^{-h}-1}{-h}= \lim_{h\rightarrow 0^{+}} \frac{\frac{1}{e^h}-1}{-h}=
\lim_{h\rightarrow 0^{+}} \frac{e^h-1}{h}.\frac{1}{e^h}=1$$
Hence, the limit equals $1$, and it is proved that the derivative of $e^x$ is $e^x$. $\blacksquare$
Now, we will calculate the derivative of $e^{ix}$:
$\displaystyle \lim_{h\rightarrow 0} \frac{e^{i(x+h)}-e^{ix}}{h}=e^{ix}\lim_{h\rightarrow 0} \frac{e^{ih}-1}{h}$.
But $e^{ih}=1+ih-\frac{h^2}{2!}-i\frac{h^3}{3!}+\frac{h^4}{4!}+...$. Since the series is absolutely convergent, separate it into two pieces: the part with $i$ and the part without $i$. Estimates similar to those used before now apply, and (since the term in $h$ has coefficient $i$) they yield:
$$\lim_{h\rightarrow 0} \frac{e^{ih}-1}{h}=i$$
So, the derivative of $e^{ix}$ is $ie^{ix}$.
You may ask at this point: where are $\cos$ and $\sin$?
$\displaystyle \cos(x):=\frac{e^{ix}+e^{-ix}}{2}$
$\displaystyle \sin(x):=\frac{e^{ix}-e^{-ix}}{2i}$
By the definition of $e^z$, $e^{\overline{z}}=\overline{e^z}$. Then, $\cos$ and $\sin$ are real functions. Moreover, it is evident that:
$$e^{ix}=\cos(x)+ i \sin(x)$$
We also have:
$$|e^{ix}|^2=e^{ix}.\overline{e^{ix}}=e^{ix}e^{-ix}=1$$
which implies:
$$|e^{ix}|=1 \Rightarrow \sin^2(x)+\cos^2(x)=1$$
Also, directly from definition:
$$\cos'(x)=-\sin(x), ~~~~~~\sin'(x)=\cos(x)$$
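To spell out the first identity (a short check, not in the original text), using the derivative of $e^{ix}$ computed above:
$$\cos'(x)=\frac{ie^{ix}-ie^{-ix}}{2}=i\cdot i\,\frac{e^{ix}-e^{-ix}}{2i}=-\sin(x),$$
and the computation for $\sin'$ is analogous.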
And also directly from definition: $\cos(0)=1$, $\sin(0)=0$
Now, why on earth are those definitions the sine and cosine we know?
We will prove they must be. How?
Proposition: Let $c:\mathbb{R} \rightarrow \mathbb{R}$ and $s: \mathbb{R} \rightarrow \mathbb{R}$ be functions such that:
(1) $c(0)=1$, $s(0)=0$
(2)$c'(x)=-s(x)$, $s'(x)=c(x)$.
So, $s(x)=\sin(x)$ and $c(x)=\cos(x)$.
This way, since the functions sine and cosine we know geometrically satisfy those properties, they must be the $\sin$ and $\cos$ we just defined.
Proof: Suppose we have functions $c, s$ satisfying those properties.
Define the function $f(x):=(\cos(x)-c(x))^2+(\sin(x)-s(x))^2$. We have:
$$f'(x)=2(\cos(x)-c(x))(-\sin(x)+s(x))+2(\sin(x)-s(x))(\cos(x)-c(x))=0$$
Therefore, $f$ is constant.
But $f(0)=(1-1)^2+(0-0)^2=0$. So $f(x)=0$ for all $x \in \mathbb{R}$.
But this can only be true if $\sin(x)=s(x)$ and $\cos(x)=c(x)$ for all $x \in \mathbb{R}$. $\blacksquare$.
My name is Aloizio Macedo, and I am a 21-year-old Mathematics student at UFRJ (Universidade Federal do Rio de Janeiro).
Prob. 16, Chap. 6, in Baby Rudin: Some Properties of the Riemann Zeta Function
Here is Prob. 16, Chap. 6, in the book Principles of Mathematical Analysis by Walter Rudin, 3rd edition:
For $1 < s < \infty$, define $$ \zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}. $$ (This is Riemann's zeta function, of great importance in the study of the distribution of prime numbers.) Prove that
(a) $$ \zeta(s) = s \int_1^\infty \frac{[x]}{x^{s+1} } \ \mathrm{d} x $$ and that
(b) $$ \zeta(s) = \frac{s}{s-1} - s \int_1^\infty \frac{x-[x]}{ x^{s+1} } \ \mathrm{d} x. $$ where $[x]$ denotes the greatest integer $\leq x$.
Prove that the integral in (b) converges for all $s > 0$.
Hint: To prove (a), compute the difference between the integral over $[1, N]$ and the $N$th partial sum of the series that defines $\zeta(s)$.
My Attempt:
Here are the links to a couple of relevant posts of mine here on Math SE:
Prob. 8, Chap. 6, in Baby Rudin: The Integral Test for Convergence of Series
Theorem 6.12 (b) in Baby Rudin: If $f_1 \leq f_2$ on $[a, b]$, then $\int_a^b f_1 d\alpha \leq \int_a^b f_2 d\alpha$
Theorem 6.12 (c) in Baby Rudin: If $f\in\mathscr{R}(\alpha)$ on $[a, b]$ and $a<c<b$, then $f\in\mathscr{R}(\alpha)$ on $[a, c]$ and $[c, b]$
Theorem 6.12 (a) in Baby Rudin: If $f\in\mathscr{R}(\alpha)$ on $[a,b]$, then $cf\in\mathscr{R}(\alpha)$ for every constant $c$
Prob. 16 (a)
Let $b$ be a real number such that $b > 2$, and let $N$ be the integer such that $N \leq b < N+1$. Then $N \geq 2$ and we see that $$ \begin{align} s \int_1^b \frac{ [x] }{ x^{s+1} } \ \mathrm{d} x &\geq s \int_1^N \frac{ [x] }{ x^{s+1} } \ \mathrm{d} x \\ &= s \sum_{k=1}^{N-1} \int_k^{k+1} \frac{ [x] }{ x^{s+1} } \ \mathrm{d} x \\ &\geq s \sum_{k=1}^{N-1} \int_k^{k+1} \frac{ k }{ x^{s+1} } \ \mathrm{d} x \\ &= s \sum_{k=1}^{N-1} k \int_k^{k+1} x^{-s-1} \ \mathrm{d} x \\ &= s \sum_{k=1}^{N-1} k \frac{ (k+1)^{-s} - k^{-s} }{ -s } \\ &= \sum_{k=1}^{N-1} k \left( \frac{ 1 }{ k^s } - \frac{ 1 }{ (k+1)^s } \right). \tag{1} \end{align} $$ On the other hand, $$ \begin{align} s \int_1^b \frac{ [x] }{ x^{s+1} } \ \mathrm{d} x &\leq s \int_1^{N+1} \frac{ [x] }{ x^{s+1} } \ \mathrm{d} x \\ &= s \sum_{k=1}^{N} \int_k^{k+1} \frac{ [x] }{ x^{s+1} } \ \mathrm{d} x \\ &\leq s \sum_{k=1}^{N} \int_k^{k+1} \frac{ k+1 }{ x^{s+1} } \ \mathrm{d} x \\ &= s \sum_{k=1}^{N} (k+1) \int_k^{k+1} x^{-s-1} \ \mathrm{d} x \\ &= s \sum_{k=1}^{N} (k+1) \frac{ (k+1)^{-s} - k^{-s} }{ -s } \\ &= \sum_{k=1}^{N} (k+1) \left( \frac{ 1 }{ k^s } - \frac{ 1 }{ (k+1)^s } \right) \\ &= \sum_{k=1}^{N} k \left( \frac{ 1 }{ k^s } - \frac{ 1 }{ (k+1)^s } \right) + \sum_{k=1}^{N} \left( \frac{ 1 }{ k^s } - \frac{ 1 }{ (k+1)^s } \right) \\ &= \sum_{k=1}^{N} k \left( \frac{ 1 }{ k^s } - \frac{ 1 }{ (k+1)^s } \right) + \left( 1- \frac{1}{ (N+1)^s } \right). \tag{2} \end{align} $$
By combining (1) and (2) we obtain $$ \sum_{k=1}^{N-1} k \left( \frac{ 1 }{ k^s } - \frac{ 1 }{ (k+1)^s } \right) \leq s \int_1^b \frac{ [x] }{ x^{s+1} } \ \mathrm{d} x \leq \sum_{k=1}^{N} k \left( \frac{ 1 }{ k^s } - \frac{ 1 }{ (k+1)^s } \right) + \left( 1- \frac{1}{ (N+1)^s } \right) \tag{3} $$ for every real number $b > 2$ and for the natural number $N$ such that $N \leq b < N+1$, (i.e. $N = [b]$) and so $N \geq 2$.
Conversely, for every natural number $N > 2$, we can find a real number $b$ such that $N \leq b < N+1$, and so (3) holds.
Is what I've done so far correct? If so, then how to prove from here the identity asserted in (a)?
Prob. 16 (b)
We now assume that $$ \zeta (s) = s \int_1^\infty \frac{ [ x ] }{ x^{s+1} } \ \mathrm{d} x \tag{4} $$ holds. Then $$ \begin{align} \frac{s}{s-1} - s \int_1^\infty \frac{ x-[x] }{ x^{s+1} } \ \mathrm{d} x &= \frac{s}{s-1} - s \int_1^\infty \frac{ 1 }{ x^s } \ \mathrm{d} x + s \int_1^\infty \frac{ [x] }{ x^{s+1} } \ \mathrm{d} x \\ &= \frac{s}{s-1} - s \lim_{b \to \infty} \int_1^b \frac{ 1 }{ x^s } \ \mathrm{d} x \ + \ \zeta(s) \qquad \mbox{ [ using (4) ] } \\ &= \frac{s}{s-1} - s \lim_{b \to \infty} \frac{ b^{-s + 1} - 1 }{ -s+1 } + \zeta (s) \\ &= \frac{s}{s-1} - s \frac{ \lim_{b \to \infty} b^{-s + 1} \ - \ 1 }{ -s+1 } + \zeta (s) \\ &= \frac{s}{s-1} - s \frac{ 0 - 1 }{ -s+1 } + \zeta (s) \qquad \mbox{ [ because $s > 1$ ] } \\ &= \zeta (s), \end{align} $$ as required.
Now as $0 \leq x - [x] < 1$ for all real numbers $x$ and as $x^{s+1} > 0$ for all $x \geq 1$, so $$ 0 \leq \frac{ x- [x] }{ x^{s+1 } } < \frac{ 1 }{ x^{s+1} }, $$ and then we can conclude (from Theorem 6.12 (b) in Rudin ) that $$ 0 \leq \int_1^b \frac{ x- [x] }{ x^{s+1 } } \ \mathrm{d} x \leq \int_1^b \frac{ 1 }{ x^{s+1 } } \ \mathrm{d} x $$ for every real number $b \geq 1$. Therefore, $$ \begin{align} 0 \leq \int_1^\infty \frac{ x-[x] }{x^{s+1} } \ \mathrm{d} x &= \lim_{b \to \infty} \int_1^b \frac{ x-[x] }{x^{s+1} } \ \mathrm{d} x \\ &\leq \lim_{b \to \infty} \int_1^b \frac{ 1 }{ x^{s+1 } } \ \mathrm{d} x \\ &= \int_1^\infty \frac{ 1 }{ x^{s+1 } } \ \mathrm{d} x. \tag{5} \end{align} $$ Now the integral $\int_1^\infty \frac{ 1 }{ x^{s+1 } } \ \mathrm{d} x$ converges if and only if the series $\sum \frac{1}{n^{s+1} }$ converges (by Prob. 8, Chap. 6, in Rudin), since the function $f$ defined on $[1, \infty)$ by
$$ f(x) \colon= \frac{1}{x^{s+1} } $$ is monotonically decreasing.
Now as $s > 1$, so $ s+1 > 2$ and thus (by Theorem 3.28 in Rudin) the series $\sum \frac{1}{n^{s+1}}$ converges. So the last integral in (5) also converges, and then (5) implies the convergence of $ \int_1^\infty \frac{ x-[x] }{x^{s+1} } \ \mathrm{d} x $, as required.
Is what I've asserted so far correct? Have I used the correct and rigorous enough logic in establishing whatever I have? Or, have I made any mistakes somewhere?
P.S.: After reading the answer and comments below, here is what I've thought of.
Theorem 3.41 in Rudin:
Given two sequences $\left\{ a_n \right\}$, $\left\{ b_n \right\}$, put $$ A_n = \sum_{k=0}^n a_k $$ if $n \geq 0$; put $A_{-1} = 0$. Then, if $0 \leq p \leq q$, we have $$ \sum_{n=p}^q a_n b_n = \sum_{n=p}^{q-1} A_n \left( b_n - b_{n+1} \right) + A_q b_q - A_{p-1} b_p. \tag{A} $$
So if we put $a_k \colon= 1 = k - (k-1) $ and $b_k \colon= \frac{1}{k^s}$ for $k \geq 1$ and $a_0 = 0$ in (A), then for any integer $r \geq 2$ we obtain $$ \begin{align} \sum_{k=1}^r \frac{1}{k^s} &= \sum_{k=1}^{r-1} \frac{k - (k-1) }{ k^s } \\ &= \sum_{k=1}^{r-1} A_k \left( b_k - b_{k+1} \right) + A_r b_r - A_0 b_1 \\ &= \sum_{k=1}^{r-1} k \left( \frac{1}{k^s} - \frac{1}{ (k+1)^s } \right) + r \frac{1}{r^s} \\ &= \sum_{k=1}^{r-1} k \left( \frac{1}{k^s} - \frac{1}{ (k+1)^s } \right) + \frac{1}{r^{s-1} }, \end{align} $$ and so $$ \sum_{k=1}^{r+1} \frac{1}{k^s} = \sum_{k=1}^{r} k \left( \frac{1}{k^s} - \frac{1}{ (k+1)^s } \right) + \frac{1}{ (r+1)^{s-1} }, $$ which implies that $$ \sum_{k=1}^{r} k \left( \frac{1}{k^s} - \frac{1}{ (k+1)^s } \right) = \sum_{k=1}^{r+1} \frac{1}{k^s} \ - \ \frac{1}{ (r+1)^{s-1} }. \tag{6} $$
Then using (6) in (3) we obtain $$ \sum_{k=1}^N \frac{1}{k^s} \ - \ \frac{1}{N^s} \leq s \int_1^b \frac{ [x] }{ x^{s+1} } \ \mathrm{d} x \leq \sum_{k=1}^{N+1} \frac{1}{k^s} + 1 - \frac{2}{ (N+1)^s }. \tag{7} $$ In (7) if we let $b \to \infty$, then $N \to \infty$ also, and then we obtain $$ \sum_{k=1}^\infty \frac{1}{k^s} \leq s \int_1^\infty \frac{ [x] }{ x^{s+1} } \ \mathrm{d} x \leq 1 + \sum_{k=1}^\infty \frac{1}{k^s}, $$
which is the same as $$\zeta(s) \leq s \int_1^\infty \frac{ [x] }{ x^{s+1} } \ \mathrm{d} x \leq 1 + \zeta(s). $$
Is what I've done so far the desired thing? If so, then what next?
@SaaqibMahmuud I've seen many of your posts and I believe that you're determined to solve the life out of Baby Rudin. :P I praise that... good for you. +1 from me. – Marios Gretsas Aug 9 '17 at 12:32
@Marios Gretsas thanks. What proportion of the problems is required to be solved in a typical course taught using Rudin at an American or European university, I wonder? – Saaqib Mahmood Aug 9 '17 at 12:46
I don't know, because Baby Rudin is a book recommended at my university in Greece, especially for introductory courses in advanced calculus in the 3rd and 4th semesters. But this book is not used as teaching material by the professors in my university in such early semesters and ages, because it is quite a difficult book. – Marios Gretsas Aug 9 '17 at 12:50
Most of the people I know use this book for self-study and for its challenging exercises, to gain more mathematical maturity and a concrete picture of rigorous proofs. – Marios Gretsas Aug 9 '17 at 12:52
$$\zeta(s) = \sum_{n=1}^\infty n^{-s} = \sum_{n=1}^\infty (n-(n-1)) n^{-s} = \sum_{n=1}^\infty n (n^{-s}-(n+1)^{-s}) \\= \sum_{n=1}^\infty n \int_n^{n+1} s x^{-s-1}dx = s\int_1^\infty \lfloor x \rfloor x^{-s-1}dx$$
The name of this is "summation by parts" and "Abel summation formula"
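As an informal numerical check of these equalities (my addition, not part of the original answer), one can compare truncations of the first sum and of the summation-by-parts sum:

```python
# Compare partial sums of sum n^{-s} and sum n*(n^{-s} - (n+1)^{-s}) for s > 1.
s, N = 1.5, 10**5

dirichlet = sum(n**-s for n in range(1, N + 1))
by_parts = sum(n * (n**-s - (n + 1)**-s) for n in range(1, N + 1))

# The two agree up to a boundary term of order N^(1-s), which tends to 0 since s > 1.
print(dirichlet, by_parts, dirichlet - by_parts)
```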
How does the third equality in your answer occur? Could you please elaborate? – Saaqib Mahmood Aug 9 '17 at 9:09
@SimplyBeautifulArt I really meant $\sum_{n\ge 1} (n-(n-1))n^{-s}=\sum_{n\ge 1} n \, n^{-s}-\sum_{n\ge 0} n \, (n+1)^{-s}$ at least for $\Re(s) > 2$. It is not hard to adapt it to $\Re(s) > 1$. – reuns Aug 9 '17 at 11:03
Oh, my bad. Summation by parts is the last step... (it could've been that other step too, but yeah, that's overcomplicating) – Simply Beautiful Art Aug 9 '17 at 11:08
@SimplyBeautifulArt ? I don't follow you. The Abel summation formula is $\sum_n n^{-s} = s \int_1^\infty \lfloor x \rfloor x^{-s-1}dx$; summation by parts is $\sum_n n^{-s} = \sum_n n(n^{-s}-(n+1)^{-s})$. – reuns Aug 9 '17 at 11:09
._. I shall go get my coffee and try waking up a bit more. – Simply Beautiful Art Aug 9 '17 at 11:14
Distinct polymer physics principles govern chromatin dynamics in mouse and Drosophila topological domains
Vuthy Ea1,
Tom Sexton2,
Thierry Gostan1,
Laurie Herviou1,
Marie-Odile Baudement1,
Yunzhe Zhang3,
Soizik Berlivet1,
Marie-Noëlle Le Lay-Taha1,
Guy Cathala1,4,
Annick Lesne1,4,5,
Jean-Marc Victor1,4,5,
Yuhong Fan3,
Giacomo Cavalli2,4 &
Thierry Forné1,4
In higher eukaryotes, the genome is partitioned into large "Topologically Associating Domains" (TADs) in which the chromatin displays favoured long-range contacts. While a crumpled/fractal globule organization has received experimental supports at higher-order levels, the organization principles that govern chromatin dynamics within these TADs remain unclear. Using simple polymer models, we previously showed that, in mouse liver cells, gene-rich domains tend to adopt a statistical helix shape when no significant locus-specific interaction takes place.
Here, we use data from diverse 3C-derived methods to explore chromatin dynamics within mouse and Drosophila TADs. In mouse Embryonic Stem Cells (mESC), that possess large TADs (median size of 840 kb), we show that the statistical helix model, but not globule models, is relevant not only in gene-rich TADs, but also in gene-poor and gene-desert TADs. Interestingly, this statistical helix organization is considerably relaxed in mESC compared to liver cells, indicating that the impact of the constraints responsible for this organization is weaker in pluripotent cells. Finally, depletion of histone H1 in mESC alters local chromatin flexibility but not the statistical helix organization. In Drosophila, which possesses TADs of smaller sizes (median size of 70 kb), we show that, while chromatin compaction and flexibility are finely tuned according to the epigenetic landscape, chromatin dynamics within TADs is generally compatible with an unconstrained polymer configuration.
Models issued from polymer physics can accurately describe the organization principles governing chromatin dynamics in both mouse and Drosophila TADs. However, constraints applied on this dynamics within mammalian TADs have a peculiar impact resulting in a statistical helix organization.
During the last decade, the advent of Chromosome Conformation Capture (3C) [1] and its derived technologies (4C, 5C, Hi-C) [2] has made it possible to explore genome organization with unprecedented resolution and accuracy. By capturing all chromatin contacts present at a given time in their physiological nuclear context, and then by averaging these events over several millions of cells, the quantitative 3C method [3] provides access to the relative contact frequencies between chromatin segments in vivo. This feature is key to understanding chromatin dynamics in vivo because this dynamics depends not only on fundamental biophysical parameters of the chromatin (such as compaction and stiffness) that determine its local organization at the nucleosomal scale, but also on constraints that impact its higher-order/supranucleosomal organization. These latter constraints can result either from nuclear determinants that organize chromatin at higher scales ("top-down" constraints) or from some intrinsic locus-specific components of the chromatin that control genomic functions, like epigenetic modifications or the binding of specific factors ("bottom-up" constraints) [4].
Hi-C approaches (which combine 3C assays with high-throughput sequencing) have provided genome-wide profiling of contact frequencies in the yeast (Saccharomyces cerevisiae) [5], fly (Drosophila melanogaster) [6], mouse (Mus musculus domesticus) [7] and human [8, 9] genomes. While these data confirmed that higher-order chromatin dynamics appears to be globally unconstrained in yeast, they showed that this organization level is constrained in higher eukaryotes, where the chromatin is compartmentalized into chromosomal territories that are themselves further partitioned into the so-called "Topologically Associating Domains" (TADs) [10] or contact domains [9]. TADs and contact domains are defined as chromosomal sub-compartments that display preferential contacts in cis. However, they are restricted to interphase cells and disappear in mitotic chromosomes [11], to be re-acquired in the early G1 phase [12]. They are physically delimited by borders that are gene-rich regions enriched in specific factors like the insulator protein CTCF [7, 9, 13, 14]. Notably, the location of TAD borders appears to be quite stable across cell types. It is commonly accepted that, within TADs, chromatin is organized into chromatin loops, via locus-specific interactions, and that this organization is tightly related to genome function [9, 15–17]. It has recently been shown that such interactions occur in the context of fluctuating structures rather than stable loops [18], and we previously showed that, in the absence of strong long-range locus-specific interactions, this underlying chromatin dynamics undergoes constraints in gene-rich regions, resulting in modulated contact frequencies over large genomic distances [4]. While the involvement of locus-specific factors in chromatin-loop formation within TADs is now well established [9], the physical properties that govern the underlying chromatin dynamics at that scale remain unknown.
Here, using quantitative 3C experiments, we report that the modulation of contact frequencies previously described in liver cells [4] is also present in pluripotent mouse Embryonic Stem Cells (mESC), not only in gene-rich TADs, but also in gene-poor and gene-desert domains. Therefore, the constraints that affect higher-order chromatin dynamics in mammals appear to widely affect TADs in diverse genomic contexts. We show that the equilibrium/crumpled globule models do not reproduce chromatin dynamics within mammalian TADs. In contrast, models derived from polymer physics can accurately describe chromatin dynamics at that scale in both mouse and Drosophila TADs. In the mouse, we found that chromatin dynamics is less constrained in ESC than in liver cells, and that this constraint is also strongly attenuated in a TAD spanning a gene-desert compared to gene-poor or gene-rich TADs. In Drosophila melanogaster, using Hi-C data obtained from embryos, we show that, on a local scale, chromatin dynamics is finely tuned according to the epigenetic landscape: the nucleofilament is less compact and more flexible in active than in heterochromatic domains. However, in contrast to mammals, the higher-order chromatin dynamics in Drosophila appears largely unconstrained.
To explore the influence of the genomic context on chromatin dynamics, we first investigated mouse ESC, for which TADs have been finely defined [7]. We focused on three types of domains: five gene-rich TADs, two gene-poor TADs [19] and one gene-desert TAD (Additional file 1a and b). The regions investigated in the two gene-poor TADs (Additional file 1b) are devoid of any known genes or putative regulatory elements, and their homozygous deletion in mouse results in fully viable pups with no obvious alteration [19]. These TADs actually do contain several genes, but the closest ones to the regions analysed are located around 300 kb away. In contrast, the gene-desert TAD contains a single gene located more than 1.5 Mb away from the region analysed (Additional file 1a).
Equilibrium/crumpled globule models do not reproduce chromatin dynamics within mammalian TADs
Equilibrium and crumpled/fractal globule models have been developed to describe chromatin dynamics in vivo. It was shown that, when one plots decreasing contact frequencies as a function of increasing genomic distances in a Log-Log plot, the equilibrium globule model follows a power-law scaling associated with a slope of −3/2 over two orders of magnitude, while the crumpled globule model has a slope of −1 [20]. Using Hi-C data, it was shown that crumpled globule features are characteristic of chromatin dynamics above 1 Mb (chromosome territory/inter-TAD dynamics) but that they may not be valid for separation distances shorter than 100 kb [8].
To assess whether such organization principles apply to chromatin dynamics within TADs, we thus performed quantitative 3C experiments in the different TADs described above and, using Log-Log plots, we showed that gene-rich, as well as gene-poor and gene-desert TADs display slopes greater (i.e. shallower) than −1 (−0.60 to −0.48) (Fig. 1), which are incompatible with the equilibrium or crumpled globule models. Therefore, neither the equilibrium nor the crumpled globule model accurately reproduces chromatin dynamics within mammalian TADs.
Fitting globule models to contact frequencies quantified in mESC. Experimental 3C-qPCR data obtained for wt mESC in gene-rich TADs (Fig. 2) have been displayed in a Log-Log plot and globule models were fitted to the following power law: X(s) = K·s^α (adapted from Eq. 6 and Eq. 9 from ref. [20]), where X(s) is the cross-linking frequency, s (in kb) is the site separation along the genome, K represents the efficiency of cross-linking and the exponent α is the slope associated with this power law. Best fits (using the nls object of the R software) show that the slope associated with our experimental data (red line) is approximately α = −1/2 (−0.52) with a correlation coefficient R2 = 0.47, while the correlation coefficients associated with the equilibrium (α = −3/2) (black line) or crumpled globule (α = −1) (green line) models are much lower
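A rough Python equivalent of this fitting procedure (the original analysis used R's nls; the data arrays below are illustrative placeholders, not the published measurements) might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(s, K, alpha):
    """Contact frequency model X(s) = K * s**alpha, with s in kb."""
    return K * s**alpha

# Placeholder arrays: genomic separations (kb) and measured 3C contact frequencies.
site_separation_kb = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
contact_frequency = np.array([1.0, 0.72, 0.49, 0.35, 0.24])

# Free fit of both K and alpha (as for the red curve in Fig. 1).
(K_hat, alpha_hat), _ = curve_fit(power_law, site_separation_kb, contact_frequency, p0=(1.0, -0.5))

# Fits with alpha fixed at -1 (crumpled globule) or -3/2 (equilibrium globule)
# reduce to estimating K alone, allowing the correlation coefficients to be compared.
print(K_hat, alpha_hat)
```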
Chromatin dynamics is less constrained in pluripotent mESC than in liver cells
We then fitted our data to two models derived from polymer physics that were previously used to describe chromatin dynamics in the yeast Saccharomyces cerevisiae [1, 21] and in mammals [4]. The first model [see equations (eqs.) 1 and 2 in Methods] provides measurements of three key parameters of local chromatin dynamics (nucleosomal scale): K reflects features of the experimental setting (mainly cross-linking efficiency); L is the length of a chromatin segment (in nm) containing 1 kb of genomic DNA, thus reflecting chromatin compaction (in nm/kb); S (the Kuhn's statistical segment, in kb) is a measure of chromatin flexibility [1, 21]. The higher the cross-linking efficiency, chromatin compaction and flexibility, the lower the values of parameters K, L and S will be. This model assumes that, at higher-order organization levels, chromatin does not undergo any special constraints. Therefore, we refer to it as the "unconstrained chromatin" model. The second model is named the "statistical helix" model. It provides measurements of the same parameters of local chromatin dynamics, but it also takes into account constraints that may impact chromatin dynamics at the higher-order level (supranucleosomal scale). In this model, the higher-order chromatin dynamics is described as if constraints imposed onto chromatin were folding, statistically, the chromatin into a helical shape that can be characterized by two parameters: its mean Diameter (D) (in nm) and its mean Pitch (P) (in nm) [eq.2] [4]. These two parameters thus describe the presence of constraints that impact higher-order chromatin dynamics. The weaker the effects of the constraints, the less pronounced the parameters of the statistical helix will be (i.e. larger diameter and/or larger pitch).
As previously found in mouse liver cells [4], gene-rich TADs display modulated contact frequencies and the statistical helix model [eqs.1 and 3] can be very well fitted to our experimental data while the unconstrained chromatin model [eqs.1 and 2] does not fit for site separation larger than 35 kb (Fig. 2a). This confirms that, in both liver cells and mESC, chromatin dynamics in gene-rich TADs undergoes constraints that can be described by polymer models as if, at the supranucleosomal scale, the chromatin was statistically folded into a helix.
Fitting the statistical helix model to contact frequencies quantified in mESC. Quantitative 3C data were obtained from wild-type mouse ESC in five gene-rich TADs (a), two gene-poor TADs (b) and one gene-desert TAD (c) (see genomic maps in Additional file 1). For each type of TAD, data obtained from all the anchor primers used for each locus (Additional file 7) were compiled in a single graph (each locus is represented by a specific color). Error bars are standard errors of the mean of three independent quantitative 3C assays, each quantified at least in triplicate. Dashed lines delimit supranucleosomal domains that encompass separation distances where contact frequencies are alternately lower and higher (see Methods). The graphs show the best-fit analyses obtained with the unconstrained chromatin model [eqs. 1 and 2] (black curves) or the statistical helix model [eqs. 1 and 3] (red curves). Correlation coefficients (R2) are indicated on the graphs. Best-fit parameters, and the genomic distance contained within one statistical helix turn (Sh in kb), are given in the upper part of Table 1. For each supranucleosomal domain, the mean contact frequencies and the number (n) of experimental points are indicated on the graphs. p-values (Mann–Whitney U-test) account for the significance of the differences observed between the experimental means of two adjacent domains (double asterisks indicate a p-value < 0.05 and > 0.01 and triple asterisks a p-value < 0.01)
However, close examination of best-fit parameters indicates that the statistical helix organization of the chromatin in gene-rich TADs is considerably more relaxed in mESC compared to liver cells (Table 1, compare first and second rows). The mean Pitch (P) of the statistical helix is 201 ± 13 nm in mESC while it is only 160 ± 9 nm in liver cells, and the mean diameter (D) is 255 ± 8 nm and 287 ± 5 nm respectively. Consequently, one turn of the statistical helix (Sh) contains 97 ± 1 kb of genomic DNA in liver cells while it encompasses only 85 ± 2 kb in mESC. Therefore, higher-order chromatin dynamics is less constrained in pluripotent mESC than in liver cells. Remarkably, this clear difference is not linked to local chromatin flexibility since the S parameter is identical (S = 2.7 ± 0.1 kb) in both cell types (Table 1, upper part). Finally, the values of the K parameter suggest that cross-linking efficiency is higher in liver cells than in mESC (Table 1, compare first and second rows).
Table 1 Fitting the statistical helix model to the relative contact frequencies observed in wild-type (upper part, rows 2–4) and triple KO (lower part, rows 5–7) mouse ES cells (mESCs)
Effects of constraints on chromatin dynamics correlate with gene density in mESC TADs
Interestingly, inside both gene-poor and gene-desert TADs, chromatin also displayed modulated contact frequencies (see mean contact frequencies in Fig. 2b/c), indicating that the constraints that impact higher-order chromatin dynamics are present in all genomic contexts investigated. However, while the statistical helix model fits again better than the unconstrained chromatin model to gene-poor TAD data (Fig. 2b), both models could be equally well fitted to the gene-desert data (Fig. 2c), indicating that chromatin dynamics in this latter TAD is not subject to strong constraints. Indeed, the statistical helix is relaxed in gene-desert TADs since one helix turn contains only 72 kb of genomic DNA while it encompasses more than 85/87 kb in gene-rich or gene-poor TADs (Table 1, compare the fourth row with the second and third rows). Globally, these results indicate that the shape of the statistical helix is progressively more elongated as we go from gene-rich and gene-poor to gene-desert TADs approaching an unconstrained chromatin configuration. Therefore, while the constraints impacting chromatin dynamics can be detected in all genomic contexts investigated, their effects are clearly stronger in gene-rich and in gene-poor than in gene-desert TADs.
The models also show that, at the nucleosomal scale, the chromatin is much less flexible in gene-poor (S = 3.7 ± 0.3 kb) and gene-desert (S = 3.8 ± 0.3 kb) TADs than in gene-rich TADs (S = 2.7 ± 0.1 kb) (Table 1, compare the third and fourth rows with the second row). However, these changes in chromatin flexibility do not necessarily translate into changes in higher-order chromatin dynamics. Indeed, gene-poor and gene-desert TADs have similar flexibility but different statistical helix organization: one helix turn encompasses 85/87 kb of genomic DNA in gene-poor TADs but only 72 kb in the gene-desert TAD (Table 1, compare third and fourth rows). Conversely, gene-rich and gene-poor TADs have different chromatin flexibility but very similar statistical helix: Pitch (P) is around 200 nm, diameter (D) is about 250 nm and one helix turn encompasses 85/87 kb of genomic DNA (Table 1, compare second and third rows). Finally, as we noted above, the statistical helix in gene-rich TADs is in a much more open configuration in mESC than in liver while chromatin flexibility is identical in both cell types (Table 1, compare first and second rows). Therefore, the variations of the higher-order chromatin dynamics observed in vivo in different genomic contexts appear to be largely independent of chromatin flexibility.
Histone H1 depletion alters chromatin flexibility but not statistical helix organization
To ascertain that variations of chromatin flexibility do not necessarily impact higher-order chromatin dynamics, we performed quantitative 3C experiments in mESC that are Triple Knock-Out (H1 TKO) for the histone H1 genes H1c, H1d and H1e [22]. Indeed, since it binds between nucleosomes, the linker histone H1 is thought to be a major factor regulating chromatin compaction and flexibility at the nucleosomal scale [23, 24], but a precise evaluation of its potential role in chromatin dynamics at the supranucleosomal scale is missing. Its depletion should thus allow us to assess whether altering chromatin stiffness impacts higher-order chromatin dynamics. Mice lacking H1c, H1d and H1e die during embryonic development, but H1 TKO mESC lines can be established, which bear various chromatin structure changes [22]. Identical experiments as described above were thus performed in H1 TKO mESC (Fig. 3) and best-fit parameters of the statistical helix model were obtained for each category of TADs (Table 1, lower part).
Fitting the statistical helix model to contact frequencies quantified in mouse H1 TKO ESC. Quantitative 3C data were obtained from mouse ESC that are Triple Knock-Out for Histone H1 genes (H1 TKO), for five gene-rich TADs (a), two gene-poor TADs (b) and one gene-desert TAD (c). The graphs show the best-fit analyses obtained with the unconstrained chromatin model [eqs. 1 and 2] (black curves) or the statistical helix model [eqs. 1 and 3] (red curves). The data (see Additional file 8) were analyzed and are depicted as described in the legend of Fig. 2. Best-fit parameters, and the genomic distance contained within one statistical helix turn (Sh in kb), are given in the lower part of Table 1
In both gene-poor and gene-desert TADs (Fig. 3b and c respectively), where histone H1 density is very high [25], identical results were obtained in both H1 TKO and wild-type (WT) mESC (Table 1, compare third with sixth rows and fourth with seventh rows respectively). In these TADs, histone H1 depletion was apparently not sufficient to alter chromatin flexibility. One can note, however, that the values of the K parameter are higher in H1 TKO than in WT mESC (Table 1, compare third with sixth rows and fourth with seventh rows) indicating that cross-linking efficiency is lower upon partial histone H1 depletion.
In gene-rich TADs (Fig. 3a), where histone H1 density is lower [25], histone H1 depletion in mESC resulted in a very significant decrease in chromatin flexibility compared to WT mESC (S = 3.1 ± 0.1 kb and 2.7 ± 0.1 kb respectively) (Table 1, compare fifth and second rows). This result is in agreement with previous findings indicating that the stiffness of a disordered and poorly condensed chromatin fiber (as in H1 TKO mESC) is large, being directly influenced by the high stiffness of the embedded DNA stretch, while a more organized and condensed fiber (as in WT mESC) is far more flexible [26], provided that nucleosome stacking does not occur (as in gene deserts where histone H1 density is very high) [27]. However, despite the significant decrease in chromatin flexibility observed in gene-rich TADs, the parameters of the statistical helix (diameter D, pitch P, DNA in one helix turn Sh) were not significantly altered. The shape of the statistical helix tends to be slightly more elongated in H1 TKO mESC than in WT mESC, but this apparent tendency is not strong enough to be considered significant. Therefore, the results presented in Fig. 3a demonstrate that altering chromatin flexibility at the nucleosomal scale in gene-rich TADs, where the statistical helix is prominent, does not necessarily have a significant impact on the higher-order chromatin organization of these regions.
This indicates that chromatin dynamics at the nucleosomal and supranucleosomal scales are somewhat uncoupled, suggesting that the constraints imposed on higher-order chromatin dynamics within TADs may not necessarily rely on intrinsic local features of the chromatin, like the presence of H1 linker histone or histone epigenetic modifications, which would affect its nucleosomal organization and oligonucleosome compaction [22]. Therefore, this raises the question of the role of the epigenetic landscapes on chromatin dynamics.
Higher-order chromatin dynamics within Drosophila TADs is unconstrained
To investigate the influence of the epigenetic contexts on chromatin dynamics, we generated and used Hi-C data from the fly Drosophila melanogaster, for which epigenetic domains have been extensively described [6]. The Drosophila genome is relatively small, allowing chromatin contacts to be mapped at ultra-high genomic resolution. Five billion paired-end Hi-C reads were obtained from late Drosophila embryos [28] and normalized Hi-C data were processed in order to produce thousands of "virtual 3C" profiles providing relative contact frequencies at 5 kb resolution throughout the Drosophila genome (see Methods).
To check whether some constraints impact chromatin dynamics in Drosophila, we first focused our analyses on a subset of "virtual 3C" profiles spanning separation distances of at least 65 kb without crossing any TAD borders. Among the 2236 "virtual 3C" profiles that could be appropriately fitted to the unconstrained chromatin model [eqs. 1 and 2] (0 < R2 < 1), 66 % had a correlation coefficient (R2) above 0.5. This result indicates that the unconstrained chromatin model fits appropriately to most "virtual 3C" profiles and thus, in contrast to previous observations made in mammals [4] (Fig. 2), chromatin dynamics within Drosophila TADs appears to be globally unconstrained, and hence non-helical, at the scale of several tens of kilobases.
Local properties of Drosophila chromatin are finely tuned according to the epigenetic landscape
"Virtual 3C" generated were then classified according to chromosomal location and to the previously defined epigenomic domains (D1 to D4) [6]: D1 ("red chromatin") corresponds to domains with "active" epigenetic marks, D2 ("black chromatin") displays no specific epigenetic modifications, D3 ("blue chromatin") is Polycomb (PcG) associated chromatin and D4 ("green chromatin") is HP1/heterochromatin. Finally, for each "virtual 3C", the unconstrained chromatin model was fitted and the three best-fit parameters were extracted (see Additional file 2 for representative examples). For each chromosome, statistical analyses of best-fit parameters were performed separately according to the epigenetic domains.
Box-plots in Fig. 4 show the results of statistical analyses of best-fit parameters obtained for chromosome 2L. We found that "active" domains (D1, "red chromatin") are less compact (median value of L parameter = 10.81 nm/kb), more efficiently cross-linked (median value of K parameter = 0.85) and more flexible (median value of S parameter = 4.15 kb) than the other domains (L = 10.56/10.66/10.32 nm/kb for D2/D3/D4 respectively while K = 1.49/1.34/2.40 and S = 4.92/4.84/5.30 kb for D2/D3/D4 respectively) (Fig. 4). As expected, we found that HP1/heterochromatin (D4) is much less flexible and more compact than any other type of chromatin. However, "black" (D2) and PcG (D3) chromatins have very similar flexibility and compaction, suggesting that PcG proteins do not significantly impact local chromatin dynamics (Fig. 4). Identical results were found for all the other Drosophila chromosomes, except for the tiny chromosome 4, which displayed quite flexible and poorly compacted chromatin despite being entirely heterochromatic (Table 2) (full data are in Additional file 3. Additional file 5 gives Wilcox p-values of differences observed between the different epigenetic domains for parameters shown in Table 2). This finding is consistent with recent work demonstrating that chromosome 4 displays distinct epigenetic profiles compared to both pericentric heterochromatin and euchromatic regions and that enrichment of HP1a on chromosome 4 genes creates an alternate chromatin structure which is critical for their regulation [29]. Globally, these experiments confirm that the epigenetic contexts significantly influence local chromatin dynamics in vivo. However, quantitatively, their effects on chromatin compaction and flexibility appear to be quite limited. Indeed, the largest variations observed (between the "active" and HP1/heterochromatin domains) for chromatin compaction and flexibility are 10.76 to 9.99 nm/kb, i.e. about 7 %, on chromosome 2R, and 4.090 to 5.382 kb, i.e. about 24 %, on chromosome 3L, respectively (Table 2). Therefore, the epigenetic landscape in the fly appears to be involved in fine-tuning the local chromatin dynamics.
Epigenetic landscapes and chromatin dynamics of the Drosophila chromosome 2L. "Virtual 3C", obtained from Hi-C experiments in Drosophila, were classified according to the four previously defined epigenetic domains (D1 to D4) [6]: D1 ("red chromatin") corresponds to domains with "active" epigenetic marks, D2 ("black chromatin") displays no specific epigenetic modifications, D3 ("blue chromatin") is PcG associated chromatin and D4 ("green chromatin") is HP1/heterochromatin. The unconstrained chromatin model [eqs. 1 and 2] was then fitted and the three best-fit parameters (K = crosslinking efficiency; L = compaction; S = flexibility) were recovered from each "virtual 3C". Statistical analyses of best-fit parameters were performed separately according to the epigenetic domains. Box-plots show the results obtained for each type of domain on chromosome 2L. Stars indicate statistically significant differences: single asterisk indicates a p-value < 0.05 and > 0.01, a double asterisk a p-value < 0.01 and > 0.001 and a triple asterisk a p-value < 0.001 (all p-values are given in Additional file 5). The number of best-fits (n) performed in each domain is as follows: D1: n = 990; D2: n = 2481; D3: n = 624; D4: n = 239. The results obtained from the other Drosophila chromosomes are given in Additional file 3
Table 2 Fitting the unconstrained model on Drosophila Hi-C data
Modulated contact frequencies, the statistical helix and their relevance for genome functions
Our work shows that, in the mouse, a modulation in contact frequency over large genomic distances can be detected in all three genomic contexts investigated: gene-rich, gene-poor and gene-desert TADs. This demonstrates that the constraints responsible for the emergence of the statistical helix apply widely to the mammalian genome (Fig. 2; Table 1, upper part). However, their effects on higher-order chromatin dynamics are progressively attenuated as we shift from gene-rich and gene-poor to gene-desert TADs where, in this latter case, an unconstrained polymer model can be fitted appropriately to contact frequency data (Fig. 2; Table 1, upper part). This situation is reminiscent of experiments performed in the yeast Saccharomyces cerevisiae [21] where the unconstrained model could be fitted appropriately in AT-rich regions while the statistical helix model provides better fits in GC-rich regions [4].
Furthermore, the statistical helix organization, and its underlying dynamics, seems to be finely tuned according to the cell-type. Indeed, chromatin appears to be less constrained in mESC than in mouse liver cells (the statistical helix is more "elongated" in mESC) (Table 1, upper part). This finding is in agreement with several pieces of evidence indicating that, in mESC, chromatin is characterized by an abundance of active chromatin marks [30, 31] and that it displays less compact heterochromatin domains [30, 32, 33]. Therefore, the configuration of the genome makes it more accessible in mESC than in differentiated cells. It is assumed that this specific chromatin organization is essential to establish pluripotency by maintaining the genome in an open, readily accessible state, allowing for maximum plasticity [16].
"Virtual 3C" profiles reconstructed from 5C data obtained in mESC [10] also shows the presence of a very significant modulation in contact frequencies in a 572 kb gene-poor region displaying no apparent locus-specific interaction (chrX:102,338,477-102,910,171) (Additional file 4). Interestingly, here again, the statistical helix model fits better to these data (R 2 = 0.52) than the unconstrained chromatin model (R 2 = 0.40). Therefore, 5C, as well as quantitative 3C data (Fig. 2), are able to evidence, in mESC, a long-range modulation in contact frequencies which is best described by the statistical helix model.
As previously indicated [4], the existence of a modulation in contact frequencies has important functional implications, at least in the gene-rich TADs where it is prominent. Indeed, locus-specific functional interactions in these TADs necessarily emerge from this underlying chromatin dynamics. Therefore, any constraint that intrinsically favours the probability of contact between two genomic regions will also favour the probability of interaction between the regulatory elements that they contain. Long-range interactions should thus tend to occur at preferred relative separation distances where the probability of contact is the highest. We previously showed that, in loci containing co-expressed genes, conserved elements (UCSC database) are overrepresented at a distance of ~100 kb from the surrounding Transcriptional Start Sites (TSS) [4]. Along the same lines, ChIP-seq experiments at 885 loci containing genes overexpressed in the mouse forebrain showed that p300 peaks linked to enhancer activities are more significantly enriched at separation distances of about 70 to 80 kb from the nearest TSS [34]. Finally, extensive 5C experiments focusing on the ENCODE pilot project regions (representing 1 % of the human genome) have recently shown that long-range interactions between TSS and distal elements display a marked asymmetry with a bias for interactions with elements located about 120 kb upstream of the TSS [35]. Altogether, these observations are in agreement with the existence of a long-range (~100 kb) modulation of contact frequencies in gene-rich TADs, suggesting that the constraints governing statistical helix organization underlie higher-order chromatin dynamics of a very significant part of the genome.
Simple polymer-physics principles govern chromatin dynamics within TADs
In addition to polymer models such as those used in the present work, several other physical models, like the equilibrium and crumpled/fractal globule models, have been developed to describe chromatin dynamics in vivo [20]. Crumpled globule features are characteristic of chromatin dynamics above 1 Mb (chromosome territory/inter-TAD dynamics) [8]. However, at that scale, simple polymer-physics models, like the "strings and binders switch" (SBS) model [36], can also reproduce crumpled globule conformations, and globule features of chromatin organization within TADs remain unexplored. Using quantitative 3C data (Fig. 2), we showed that, in the absence of any strong locus-specific interaction, contact profiles obtained in gene-rich TADs (Fig. 1) follow a power-law scaling associated with a slope of −1/2. A similar value has been described for mitotic chromosomes for separation distances encompassing 40 kb to 10 Mb [12]. However, our samples are devoid of mitotic chromosomes (interphasic nucleus preparations) and therefore, as previously suggested for distances shorter than 100 kb [8], the contact profiles observed in gene-rich TADs are incompatible with the equilibrium or crumpled globule models. In contrast, they are in good agreement with a more compact conformation as suggested by the SBS model [36]. Therefore, our work reinforces the idea that simple polymer-physics models of chromatin are sufficient to describe chromatin dynamics in vivo [4, 37] and shows that such models and principles also apply within TADs, both in mammals and in the fly Drosophila melanogaster. Importantly, neither the equilibrium nor the crumpled globule models, nor the "unconstrained chromatin" model, nor so far any other known globule or polymer model, including the SBS model, is able to describe the discrete modulation in contact frequencies that we consistently observed within mammalian TADs in diverse experimental and cellular contexts (3C data in Fig. 2; 5C data in Additional file 4; [4]). Only the statistical helix model is able to account for this feature and it is thus, so far, the simplest model to accurately describe the fundamental chromatin dynamics observed within mammalian TADs. However, this model is clearly not sufficient to describe chromatin dynamics when significant locus-specific interactions take place and, in such conditions, more complex polymer models may indeed be required, taking into account chromatin contacts with nuclear compartments and/or attachment of diffusible factors to binding sites on the chromatin [37].
Finally, while the existence of modulated contact frequencies has important implications for chromatin dynamics in a cell population, its interpretation as a helical organization may be far from the reality of an individual conformation at a given time in a single cell. One can note, however, that this model may also be valid to describe chromatin dynamics at the single cell level if the ergodicity of the fluctuations could be verified (i.e. if the average fluctuations observed at a given time in a cell population can recapitulate the average fluctuations over time of an individual conformation).
Two general types of constraints could contribute to the emergence of the statistical helix organization within mammalian TADs: "bottom-up" constraints, inherent to some intrinsic constituents of the chromatin, or "top-down" constraints imposed by higher-order superstructures, like chromosome territories and TADs. Despite its remarkable impact on chromatin flexibility in gene-rich TADs, histone H1 depletion does not significantly affect statistical helix parameters in mESC (Fig. 3; Table 1, lower part). This indicates that chromatin dynamics at the nucleosomal and supranucleosomal scales could be somewhat uncoupled, suggesting that the constraints imposed on higher-order chromatin dynamics during interphase may not necessarily rely on intrinsic factors of the chromatin that would affect its nucleosomal organization ("bottom-up" constraints).
Hi-C data have shown that contact frequencies across TAD borders are extremely low [7]. The statistical helix organization observed in mammals is thus necessarily confined within TADs and cannot extend across TAD borders. It is therefore tempting to speculate that, in mammals, TAD borders may represent "top-down" constraints impacting chromatin dynamics at higher-order levels by restricting the space that the chromatin could possibly explore at that scale, thus contributing to the emergence of the statistical helix organization. However, this hypothesis is challenged by the fact that no such constraints are observed in Drosophila TADs.
How to explain such a difference between these two organisms? Rather than speculating that genome organization principles are intrinsically different (which would appear unlikely for two metazoans), it seems more realistic to postulate that the underlying organization principles are similar, but that constraints applied to higher-order chromatin dynamics have different impacts because of distinct critical features of TAD organization in these two organisms. Indeed, Drosophila TADs display a median size of 70 kb [6], which is considerably smaller than that of mammalian TADs. With a median size of more than 800 kb [7], mammalian TADs are more prone to constraints that impact chromatin dynamics at higher-order levels, i.e. over large genomic distances. Therefore, we propose that, beyond locus-specific interactions, higher-order chromatin dynamics in higher eukaryotes may also rely on "top-down" constraints whose effects depend on the exact size and organization of the TADs.
Mouse breeding
All experimental designs and procedures are in agreement with the guidelines of the animal ethics committee of the French "Ministère de l'Agriculture" (European directive 2010/63/EU).
mESC were cultured in serum/LIF conditions as previously described [22].
Quantitative 3C / SYBR Green assays
3C assays were performed from nucleus preparations as previously described [3, 38, 39]. 3C products were quantified (during the linear amplification phase) on a LightCycler 480 II apparatus (Roche) (10 min. at 95 °C followed by 45 cycles of 10 s. at 95 °C/8 s. at 69 °C/14 s. at 72 °C) using the Hot-Start Platinum® Taq DNA Polymerase from Invitrogen (10966–034), the GoTaq® Hot-Start Polymerase from Promega (M5005) and a standard 10X qPCR mix [40] where the usual 300 μM dNTP have been replaced by 1500 μM of CleanAmp dNTP (Tebu-bio 040 N-9501-10). Standard curves for qPCR have been generated from BACs (RP series from Invitrogen) as previously described [4]: RP23 55I2 for the Usp22 locus; RP23 117C15 for the Dlk1 locus; RP23 463 J10 and RP23 331E7 for the Lnp locus; RP23 117 N21 for the Mtx2 locus; RP23 131E7 for the Emb locus; RP23 30H4 and RP23 247C7 for the 3qH2 and 19qC2 gene-poor regions respectively; and a sub-clone derived from RP23 3D5 for the 11qA5 gene-desert region (also see Additional file 1a). Quantitative 3C primer sequences are given in Additional file 6. Data obtained from these experiments are included in Additional file 7 (WT mESC) and Additional file 8 (H1 TKO mESC). The number of sites analysed in each experiment was as follows (Additional file 1b). For WT mESC: Usp22 locus, for anchor sites F1 and F7, 33 and 35 sites were analysed respectively; Dlk1 locus, for anchor sites F3/F5/F14 and F16, 9/16/21 and 26 sites were analysed respectively; Emb locus, for anchor sites R4 and R7, 30 sites were analysed for each anchor; Lnp locus, for anchor site R35, 49 sites were analysed; Mtx2 locus, for anchor sites R2 and R56, 52 and 50 sites were analysed respectively; 3qH2 gene-poor locus, for anchor sites R6 and R27, 25 sites were analysed for each anchor; 19qC2 gene-poor locus, for anchor sites R41 and R59, 33 sites were analysed for each anchor, and for the 11qA5 gene-desert locus, for anchor sites F5/F25/F35 and F48, 21/20/21 and 20 sites were analysed respectively. For H1 TKO mESC: Usp22 locus, for anchor sites F1 and F7, 33 and 34 sites were analysed respectively; Dlk1 locus, for anchor sites F3/F5/F14 and F16, 9/16/21 and 24 sites were analysed respectively; Emb locus, for anchor sites R4 and R7, 29 and 30 sites were analysed respectively; Lnp locus, for anchor site R35, 49 sites were analysed; Mtx2 locus, for anchor sites R2 and R56, 52 and 49 sites were analysed respectively; 3qH2 gene-poor locus, for anchor sites R6 and R27, 25 sites were analysed for each anchor; 19qC2 gene-poor locus, for anchor sites R41 and R59, 33 sites were analysed for each anchor, and for the 11qA5 gene-desert locus, for anchor sites F5/F25/F35 and F48, 18/20/21 and 19 sites were analysed respectively.
Supranucleosomal domains
The supranucleosomal domains (D.I to D.VI) encompass separation distances where random collision frequencies are alternately lower and higher; they were assessed by statistical analyses (Mann–Whitney U tests) performed on data shown in Figs. 2 and 3. For gene-rich and gene-poor loci: 0 to 35 kb (domain I), 35–70 kb (domain II), 70–115 kb (domain III), 115–160 kb (domain IV), 160–205 kb (domain V) and 205–250 kb (domain VI). For the gene-desert region: 0 to 25 kb (domain I), 25–50 kb (domain II), 50–75 kb (domain III), 75–100 kb (domain IV), 100–125 kb (domain V) and 125–150 kb (domain VI).
Mathematical methods
We used a model that combines the Freely Jointed Chain/Kratky-Porod worm-like chain models as described in reference [41]. This combined model (equation 3 of reference [21]), which expresses the relation between the cross-linking frequency X(s) (in mol × liter−1 × nm3) and the site separation s (in kb) along the genome, is as follows:
$$ X(s)=K\times 0.53\times \beta^{-3/2}\times \exp\left(-\frac{2}{\beta^{2}}\right)\times \left(L\times S\right)^{-3} $$
with, for an unconstrained polymer:
$$ \beta =\frac{s}{S}\quad \left(\mathrm{unconstrained\ chromatin\ model}\right) $$
In equation [1], the linear mass density L is the length of the chromatin in nm that contains 1 kb of genomic DNA. We used different L values estimated from a packing ratio of 6 nucleosomes per 11 nm of chromatin in solution at physiological salt concentrations [42, 43] and a nucleosome repeat length (NRL) of 194 base pairs as found in mouse liver [44] or NRL = 189 and 174 bp for wild-type and TKO mESC respectively [22]. This led to values of L = 9.45 nm/kb for mouse liver cells, L = 9.70 nm/kb for mESC and L = 10.53 nm/kb for TKO mESC. S is the length of the Kuhn's statistical segment in kb, which is a measure of the flexibility of the chromatin. The parameter K represents the efficiency of cross-linking which reflects experimental variations [1].
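As a quick worked example of how these compaction values follow from the numbers just quoted (11 nm of fibre per 6 nucleosomes, one nucleosome per NRL of genomic DNA):

$$ L=\frac{11\ \text{nm}}{6\times \mathrm{NRL}}\quad\Longrightarrow\quad L_{\text{liver}}=\frac{11}{6\times 0.194\ \text{kb}}\approx 9.45\ \text{nm/kb},\qquad L_{\text{H1 TKO mESC}}=\frac{11}{6\times 0.174\ \text{kb}}\approx 10.53\ \text{nm/kb} $$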
We previously showed that mammalian chromatin undergoes constraints that results in a modulation of contact frequencies along some regions of the chromatin [4]. This modulation can be described by a specific polymer model, called the statistical helix model, where the following β term is used in equation [1] (see ref. [4]):
$$ \beta =\frac{\sqrt{D^2\times { \sin}^2\left[\frac{\pi \times L\times s}{\sqrt{\pi^2\times {D}^2+{P}^2}}\right]+\left[\frac{P^2\times {L}^2\times {s}^2}{\pi^2\times {D}^2+{P}^2}\right]}}{L\times S}\ \left(\mathrm{statistical}\ \mathrm{helix}\ \mathrm{model}\right) $$
where P is the mean pitch and D the mean diameter in nm of the statistical helix. The length of one turn of the statistical helix, Sh, in kb (Table 1) was calculated using the best-fit parameters and equation [4]:
$$ Sh = \frac{\sqrt{{\left(\pi \times D\right)}^2+\left({P}^2\right)}}{L}\ \left(\mathrm{kb}\right) $$
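Purely as an illustration of equations [1] to [4] (the published fits were done in R; the function and variable names below are ours, and the example parameter values are round numbers of the same order as Table 1, not Table 1 entries), the two models can be transcribed as:

```python
import numpy as np

def beta_unconstrained(s, S):
    """Eq. 2: reduced site separation for an unconstrained chain (s and S in kb)."""
    return s / S

def beta_helix(s, S, L, D, P):
    """Eq. 3: reduced separation when the chain path follows a statistical helix
    of diameter D and pitch P (nm); L is the compaction in nm/kb."""
    c = np.pi**2 * D**2 + P**2
    arc = np.sqrt(D**2 * np.sin(np.pi * L * s / np.sqrt(c))**2 + P**2 * L**2 * s**2 / c)
    return arc / (L * S)

def x_of_s(s, K, S, L, D=None, P=None):
    """Eq. 1: relative cross-linking frequency X(s); statistical helix mode if D and P are given."""
    beta = beta_unconstrained(s, S) if D is None else beta_helix(s, S, L, D, P)
    return K * 0.53 * beta**-1.5 * np.exp(-2.0 / beta**2) * (L * S)**-3

def helix_turn_kb(D, P, L):
    """Eq. 4: genomic DNA (in kb) contained within one statistical helix turn, Sh."""
    return np.sqrt((np.pi * D)**2 + P**2) / L

s = np.linspace(5, 250, 50)                              # separation distances in kb
x_unc   = x_of_s(s, K=1.0, S=3.0, L=9.7)                 # unconstrained chromatin model
x_helix = x_of_s(s, K=1.0, S=3.0, L=9.7, D=250, P=200)   # statistical helix model
print(helix_turn_kb(D=250, P=200, L=9.7))                # about 84 kb per helix turn
```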
Best-fit analyses of quantitative 3C data from mouse ESC
Best-fit analyses were implemented under the R software (R Development Core Team 2008, http://www.R-project.org), as previously described [4]. We used the "nls object" (package stats version 2.8.1) which determines the nonlinear (weighted) least-squares estimates of the parameters of nonlinear models.
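For readers who prefer Python, a minimal sketch of the same nonlinear least-squares idea, using scipy's curve_fit on a hypothetical contact-frequency profile (synthetic data and illustrative parameter values only, not the authors' R implementation), might look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

def unconstrained_model(s, K, S, L=9.7):
    """Eqs. 1 and 2 with the compaction L (nm/kb) held fixed; s in kb."""
    beta = s / S
    return K * 0.53 * beta**-1.5 * np.exp(-2.0 / beta**2) * (L * S)**-3

# Hypothetical profile: separation distances (kb) and noisy relative contact frequencies.
rng = np.random.default_rng(0)
s_obs = np.array([10, 20, 40, 60, 90, 120, 160, 200], dtype=float)
x_obs = unconstrained_model(s_obs, K=1.2, S=3.0) * rng.normal(1.0, 0.1, s_obs.size)

# Nonlinear least squares, the Python counterpart of R's nls(); only K and S are fitted.
(K_fit, S_fit), _ = curve_fit(unconstrained_model, s_obs, x_obs, p0=[1.0, 3.0])
print(f"K = {K_fit:.2f}, S (Kuhn segment) = {S_fit:.2f} kb")
```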
Best-fit analyses of "Virtual 3C" in the Drosophila melanogaster
Hi-C data were obtained from total Drosophila embryos and normalized tag numbers were assembled into 5 kb bins as previously described [6, 28]. Datasets have been submitted to Gene Expression Omnibus (GEO) under accession no [GSE61471] (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE61471). The relative contact frequencies used to construct the "virtual 3C" profiles were obtained by assembling these 5 kb bins into larger 25 kb (5*5 kb) bins that were analyzed with a step of 5 kb along the chromosomes. For each 25 kb bin, the relative contact frequencies were calculated every 5 kb within a region extending 400 kb (80*5 kb bins) from the start of the 25 kb bin. For each "virtual 3C", the unconstrained chromatin model [eqs. 1 and 2] was fitted to the first 70 kb (14*5 kb bins) using the "nls2 object" under the R software (R Development Core Team 2008, http://www.R-project.org), and the best-fit parameters were extracted. Statistical analyses of these parameters were performed separately on each chromosome and according to the type of epigenetic domain (Fig. 2 and Additional file 2). Wilcox p-values were calculated to assess the significance of differences observed between the values obtained in each case (Additional files 1 and 9).
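A schematic numpy sketch of this binning logic is shown below; it is not the authors' pipeline (which operated on normalized Hi-C tag counts with full chromosome bookkeeping), and the toy matrix simply stands in for real data:

```python
import numpy as np

BIN = 5          # kb, resolution of the normalized Hi-C bins
ANCHOR = 5       # number of 5-kb bins pooled into one 25-kb anchor
REACH = 80       # number of 5-kb bins (400 kb) scanned from the anchor start
FIT_BINS = 14    # first 70 kb used for fitting the unconstrained model

def virtual_3c_profiles(hic):
    """Yield (start_kb, separations_kb, contact_profile) for each 25-kb anchor,
    stepping the anchor by one 5-kb bin, as described in the text.
    `hic` is a square matrix of normalized contact counts at 5-kb resolution."""
    n = hic.shape[0]
    seps = np.arange(1, REACH + 1) * BIN                 # 5, 10, ..., 400 kb
    for start in range(0, n - ANCHOR - REACH):
        anchor = slice(start, start + ANCHOR)
        # relative contact frequency every 5 kb downstream of the anchor
        profile = np.array([hic[anchor, start + ANCHOR + k].mean()
                            for k in range(REACH)])
        yield start * BIN, seps, profile

# Placeholder stand-in for a normalized Hi-C matrix (symmetric, decaying with distance).
n_bins = 300
idx = np.arange(n_bins)
toy_hic = 1.0 / (1.0 + np.abs(idx[:, None] - idx[None, :]))

start_kb, seps, prof = next(virtual_3c_profiles(toy_hic))
print(start_kb, seps[:FIT_BINS], prof[:FIT_BINS])        # the 70-kb window used for fitting
```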
The data set supporting the results of this article is available in the Gene Expression Omnibus repository, [GSE61471, http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE61471].
TAD: Topologically Associating Domains
3C: Chromosome Conformation Capture
mESC: mouse Embryonic Stem Cells
H1 TKO: Triple Knock-Out of Histone H1 genes
qPCR: Real-time quantitative polymerase chain reaction
Dekker J, Rippe K, Dekker M, Kleckner N. Capturing chromosome conformation. Science. 2002;295:1306–11.
de Wit E, de Laat W. A decade of 3C technologies: insights into nuclear organization. Genes Dev. 2012;26:11–24.
Hagège H, Klous P, Braem C, Splinter E, Dekker J, Cathala G, et al. Quantitative analysis of chromosome conformation capture assays (3C-qPCR). Nat Protoc. 2007;2:1722–33.
Court F, Miro J, Braem C, Lelay-Taha M-N, Brisebarre A, Atger F, et al. Modulated contact frequencies at gene-rich loci support a statistical helix model for mammalian chromatin organization. Genome Biol. 2011;12:R42.
Duan Z, Andronescu M, Schutz K, McIlwain S, Kim YJ, Lee C, et al. A three-dimensional model of the yeast genome. Nature. 2010;465:363–7.
Sexton T, Yaffe E, Kenigsberg E, Bantignies F, Leblanc B, Hoichman M, et al. Three-dimensional folding and functional organization principles of the Drosophila genome. Cell. 2012;148:458–72.
Dixon JR, Selvaraj S, Yue F, Kim A, Li Y, Shen Y, et al. Topological domains in mammalian genomes identified by analysis of chromatin interactions. Nature. 2012;485:376–80.
Lieberman-Aiden E, van Berkum NL, Williams L, Imakaev M, Ragoczy T, Telling A, et al. Comprehensive mapping of long-range interactions reveals folding principles of the human genome. Science. 2009;326:289–93.
Rao SS, Huntley MH, Durand NC, Stamenova EK, Bochkov ID, Robinson JT, et al. A 3D Map of the human genome at kilobase resolution reveals principles of chromatin looping. Cell. 2014;159:1665–80.
Nora EP, Lajoie BR, Schulz EG, Giorgetti L, Okamoto I, Servant N, et al. Spatial partitioning of the regulatory landscape of the X-inactivation centre. Nature. 2012;485(7398):381–5.
Giorgetti L, Servant N, Heard E. Changes in the organization of the genome during the mammalian cell cycle. Genome Biol. 2013;14:142.
Naumova N, Imakaev M, Fudenberg G, Zhan Y, Lajoie BR, Mirny LA, et al. Organization of the mitotic chromosome. Science. 2013;342:948–53.
Van Bortle K, Nichols MH, Li L, Ong CT, Takenaka N, Qin ZS, et al. Insulator function and topological domain border strength scale with architectural protein occupancy. Genome Biol. 2014;15:R82.
Zuin J, Dixon JR, van der Reijden MI, Ye Z, Kolovos P, Brouwer RW, et al. Cohesin and CTCF differentially affect chromatin architecture and gene expression in human cells. Proc Natl Acad Sci U S A. 2014;111:996–1001.
Andrey G, Montavon T, Mascrez B, Gonzalez F, Noordermeer D, Leleu M, et al. A switch between topological domains underlies HoxD genes collinearity in mouse limbs. Science. 2013;340:1234167.
Cavalli G, Misteli T. Functional implications of genome topology. Nat Struct Mol Biol. 2013;20:290–9.
Gibcus JH, Dekker J. The hierarchy of the 3D genome. Mol Cell. 2013;49:773–82.
Giorgetti L, Galupa R, Nora EP, Piolot T, Lam F, Dekker J, et al. Predictive polymer modeling reveals coupled fluctuations in chromosome conformation and transcription. Cell. 2014;157:950–63.
Nobrega MA, Zhu Y, Plajzer-Frick I, Afzal V, Rubin EM. Megabase deletions of gene deserts result in viable mice. Nature. 2004;431:988–93.
Mirny LA. The fractal globule as a model of chromatin architecture in the cell. Chromosome Res. 2011;19:37–51.
Dekker J. Mapping in vivo chromatin interactions in yeast suggests an extended chromatin fiber with regional variation in compaction. J Biol Chem. 2008;283:34532–40.
Fan Y, Nikitina T, Zhao J, Fleury TJ, Bhattacharyya R, Bouhassira EE, et al. Histone H1 depletion in mammals alters global chromatin structure but causes specific changes in gene regulation. Cell. 2005;123:1199–212.
Happel N, Doenecke D. Histone H1 and its isoforms: contribution to chromatin structure and function. Gene. 2009;431:1–12.
Recouvreux P, Lavelle C, Barbi M, Conde E, Silva N, Le Cam E, et al. Linker histones incorporation maintains chromatin fiber plasticity. Biophys J. 2011;100:2726–35.
Cao K, Lailler N, Zhang Y, Kumar A, Uppal K, Liu Z, et al. High-resolution mapping of h1 linker histone variants in embryonic stem cells. PLoS Genet. 2013;9:e1003417.
Ben-Haim E, Lesne A, Victor JM. Chromatin: a tunable spring at work inside chromosomes. Phys Rev E Stat Nonlin Soft Matter Phys. 2001;64:051921.
Wedemann G, Langowski J. Computer simulation of the 30-nanometer chromatin fiber. Biophys J. 2002;82:2847–59.
Schuettengruber B, Oded Elkayam N, Sexton T, Entrevan M, Stern S, Thomas A, et al. Cooperativity, specificity, and evolutionary stability of polycomb targeting in Drosophila. Cell Rep. 2014;9:219–33.
Riddle NC, Jung YL, Gu T, Alekseyenko AA, Asker D, Gui H, et al. Enrichment of HP1a on Drosophila chromosome 4 genes creates an alternate chromatin structure critical for regulation in this heterochromatic domain. PLoS Genet. 2012;8:e1002954.
Meshorer E, Yellajoshula D, George E, Scambler PJ, Brown DT, Misteli T. Hyperdynamic plasticity of chromatin proteins in pluripotent embryonic stem cells. Dev Cell. 2006;10:105–16.
Mikkelsen TS, Ku M, Jaffe DB, Issac B, Lieberman E, Giannoukos G, et al. Genome-wide maps of chromatin state in pluripotent and lineage-committed cells. Nature. 2007;448:553–60.
Efroni S, Duttagupta R, Cheng J, Dehghani H, Hoeppner DJ, Dash C, et al. Global transcription in pluripotent embryonic stem cells. Cell Stem Cell. 2008;2:437–47.
Fussner E, Djuric U, Strauss M, Hotta A, Perez-Iratxeta C, Lanner F, et al. Constitutive heterochromatin reorganization during somatic cell reprogramming. EMBO J. 2011;30:1778–89.
Visel A, Blow MJ, Li Z, Zhang T, Akiyama JA, Holt A, et al. ChIP-seq accurately predicts tissue-specific activity of enhancers. Nature. 2009;457:854–8.
Sanyal A, Lajoie BR, Jain G, Dekker J. The long-range interaction landscape of gene promoters. Nature. 2012;489:109–13.
Barbieri M, Chotalia M, Fraser J, Lavitas LM, Dostie J, Pombo A, et al. Complexity of chromatin folding is captured by the strings and binders switch model. Proc Natl Acad Sci U S A. 2012;109:16173–8.
Barbieri M, Fraser J, Lavitas LM, Chotalia M, Dostie J, Pombo A, et al. A polymer model explains the complexity of large-scale chromatin folding. Nucleus. 2013;4:267–73.
Braem C, Recolin B, Rancourt RC, Angiolini C, Barthes P, Branchu P, et al. Genomic matrix attachment region and chromosome conformation capture quantitative real time PCR assays identify novel putative regulatory elements at the imprinted Dlk1/Gtl2 locus. J Biol Chem. 2008;283:18612–20.
Court F, Baniol M, Hagège H, Petit JS, Lelay-Taha M-N, Carbonell F, et al. Long-range chromatin interactions at the mouse Igf2/H19 locus reveal a novel paternally expressed long non-coding RNA. Nucleic Acids Res. 2011;39:5893–906.
Lutfalla G, Uzé G. Performing quantitative reverse-transcribed polymerase chain reaction experiments. Methods Enzymol. 2006;410:386–400.
Rippe K. Making contacts on a nucleic acid polymer. Trends Biochem Sci. 2001;26:733–40.
Gerchman SE, Ramakrishnan V. Chromatin higher-order structure studied by neutron scattering and scanning transmission electron microscopy. Proc Natl Acad Sci U S A. 1987;84:7802–6.
Ghirlando R, Felsenfeld G. Hydrodynamic studies on defined heterochromatin fragments support a 30-nm fiber having six nucleosomes per turn. J Mol Biol. 2008;376(5):1417–25.
Dalal Y, Fleury TJ, Cioffi A, Stein A. Long-range oscillation in a periodic DNA sequence motif may influence nucleosome array formation. Nucleic Acids Res. 2005;33(3):934–45.
We thank Hidemasa Kato for stimulating discussions, Françoise Carbonell, Cosette Rebouissou and the staff from the IGMM animal unit for technical assistance. This work was supported by grants from the Institut National du Cancer [PLBIO 2012–129, INCa_5960 to T.F.], the Association pour la Recherche contre le Cancer [SFI20101201555 to T.F.], the Ligue contre le cancer (comité Hérault) and the Centre National de la Recherche Scientifique. Y.F. & Y.Z. are supported in part by a Georgia Research Alliance Distinguished Cancer Scholar Award (to Y.F.) and by a US National Institutes of Health grant GM085261 (to Y.F.).
Institut de Génétique Moléculaire de Montpellier, UMR5535, CNRS, Université de Montpellier, 1919 Route de Mende, 34293, Montpellier, Cedex 5, France
Vuthy Ea, Thierry Gostan, Laurie Herviou, Marie-Odile Baudement, Soizik Berlivet, Marie-Noëlle Le Lay-Taha, Guy Cathala, Annick Lesne, Jean-Marc Victor & Thierry Forné
Institut de Génétique Humaine, UPR 1142, CNRS, Montpellier, France
Tom Sexton & Giacomo Cavalli
School of Biology and the Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, Georgia, USA
Yunzhe Zhang & Yuhong Fan
CNRS GDR 3536 UPMC, Sorbonne universités, Paris, France
Guy Cathala, Annick Lesne, Jean-Marc Victor, Giacomo Cavalli & Thierry Forné
Laboratoire de Physique de la Matière Condensée, CNRS UMR 7600, UPMC, Sorbonne universités, Paris, France
Annick Lesne & Jean-Marc Victor
Vuthy Ea
Tom Sexton
Thierry Gostan
Laurie Herviou
Marie-Odile Baudement
Yunzhe Zhang
Soizik Berlivet
Marie-Noëlle Le Lay-Taha
Guy Cathala
Annick Lesne
Jean-Marc Victor
Yuhong Fan
Giacomo Cavalli
Thierry Forné
Correspondence to Thierry Forné.
VE participated in the design of the study, performed 3C-qPCR experiments and best-fit analyses. TS participated in the design of the study and performed Hi-C experiments. TG performed best-fit and statistical analyses of Hi-C data. LH performed 3C-qPCR experiments. MOB contributed analytic tools used for 3C-qPCR primer design. YZ carried out H1 TKO experiments. SB performed 5C data analyses. MNLT and GCa participated in the design of the study and performed 3C-qPCR experiments. AL and JMV participated in the design of the study and contributed to the development of physical models. YF and GC participated in the design of the study and edited the manuscript. TF conceived and designed the study, contributed to the development of physical models, performed best-fit analyses and wrote the manuscript. All authors read and approved the final manuscript.
Genomic maps of the TADs investigated in the present study. (a) Map of the TADs containing the loci analyzed in mESC. The Hi-C data (http://yuelab.org/hi-c/database.php) ([1]) are displayed on the top of each map. Gene locations are presented as visualized in the UCSC browser. The black squares in the Hi-C data and the location of the BACs (black bars below the genes) help to demarcate the regions analyzed in our quantitative 3C experiments. Red and green rectangles indicate a negative or a positive directionality index respectively, as defined in ref. [1]. Blue rectangles are located at the borders of each TAD. (b) Detailed map of the loci investigated by quantitative 3C. Genes are indicated by full boxes and promoters by thick black arrows above these boxes. The scale-bar indicates the size of 10 kb of sequence. The names of the loci and chromosomal location are indicated above each map. The HindIII (Usp22, Emb, Lnp, Mtx2, 19qC2, 3qH2 and 11qA5 loci) or EcoRI (Dlk1 locus) sites investigated are indicated on the maps. Arrows labeled with a "F" (forward) or a "R" (reverse) indicate the positions of the primers used as anchors in quantitative 3C experiments. The location (mm9), size (in Mb) and gene density (TSS/Mb) of each TAD investigated are indicated on the right. Note that the very low contact frequencies observed for regions investigated on chromosomes 3 and 11qA5 impair the accurate location of TAD borders. TAD sizes provided here are those determined from data published in ref. [1] (Additional file 9). (PDF 300 kb)
Nine examples of fits of the unconstrained chromatin model [eqs. 1 and 2] on "virtual 3C" data obtained on Drosophila melanogaster chromosome 2L. Because of the small size of TADs in Drosophila, the unconstrained chromatin model was fitted to the first 70 kb of the data. Note that the fit is generally in very good agreement with the unconstrained chromatin model even for larger separation distances. (PDF 250 kb)
Epigenetic landscapes and chromatin dynamics of the Drosophila chromosomes. "Virtual 3C" were obtained and analysed from each chromosome as described in Fig. 4. Statistical analyses of best-fit parameters were performed separately according to the epigenetic domains (note that chromosome 4 is exclusively composed of D4 domains, while chromosome 3 is devoid of such domains). Box-plots show the results obtained for each type of domain on each chromosome. Chromosome X was excluded from the analyses because dosage compensation affects epigenetic landscapes of this chromosome in males and the embryos used in the Hi-C experiments were not sexed. The number of best-fits (n) performed in each domain is as follows: for chromosome 2R, D1: n = 1391; D2: n = 1943; D3: n = 350; D4: n = 290; for chromosome 3L, D1: n = 1096; D2: n = 2650; D3: n = 609; D4: n = 272; for chromosome 3R, D1: n = 1525; D2: n = 3197; D3: n = 755; for chromosome 4, D4: n = 225. (PDF 260 kb)
Fitting the statistical helix model to contact frequencies quantified by 5C experiments in mESC: (a) 5C matrix from data obtained in mESC [2] indicating the 572 kb gene-poor region (region 1) with no apparent locus-specific interaction (chrX:102,338,477-102,910,171) that we used to fit polymer models. (b) "Virtual 3C" profiles were reconstructed from region 1 and data were compiled in a single graph. Error bars are standard error of the mean of two 5C experiments. Dashed lines delimit supranucleosomal domains that encompass separation distances where contact frequencies are alternately lower and higher (see Methods). The graph shows the best-fit analyses obtained with the unconstrained chromatin model [eqs. 1 and 2] (black curve) or the statistical helix model [eqs. 1 and 3] (red curve). Correlation coefficients (R2) are indicated on the graph. For each supranucleosomal domain, the mean contact frequencies and the number (n) of experimental points are indicated on the graph. p-values (Mann–Whitney U-test) account for the significance of the differences observed between the experimental means of two adjacent domains. (PDF 200 kb)
Statistical tests. This table gives, for each chromosome and each domain, the Wilcox p-values for the differences observed between the median values of the three parameters presented in Table 2 (k = crosslinking efficiency; L = compaction; S = flexibility). (PDF 72 kb)
Quantitative 3C primer sequences. (XLS 56 kb)
Quantitative 3C dataset for WT mouse embryonic stem cells. (XLS 89 kb)
Quantitative 3C dataset for H1 TKO mouse embryonic stem cells. (XLS 86 kb)
Additional references. (PDF 108 kb)
Ea, V., Sexton, T., Gostan, T. et al. Distinct polymer physics principles govern chromatin dynamics in mouse and Drosophila topological domains. BMC Genomics 16, 607 (2015). https://doi.org/10.1186/s12864-015-1786-8
Chromatin dynamics
Polymer models
Topological domains
H1 histone
Human and rodent genomics
Quantization
Quantization is a phenomenon where the constraints of a physical system have the effect that some physical quantity only appears in discrete jumps, while all values in between are physically forbidden.
The easiest example is a rope that is held under constant tension by two hands.
The thing is, no matter how the two hands try to make the rope vibrate, the rope will only vibrate with a quantized set of modes. The two hands fix the rope at both ends. As a result of this constraint, the rope can only vibrate with a fixed set of frequencies; the frequencies in between are physically impossible.
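This is just the standard textbook result for standing waves: for a rope of length $\ell$ fixed at both ends, with transverse wave speed $v$, the allowed wavelengths are $\lambda_n = 2\ell/n$ and hence the allowed frequencies are the discrete set

$$ f_n = \frac{n\,v}{2\ell}, \qquad n = 1, 2, 3, \dots $$

and nothing in between can be sustained.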
The same thing also happens in quantum mechanics.
Here we describe particles using waves. If we then consider, for example, a particle in a box, we notice that only a specific discrete set of wave functions is physically possible. Physically this means that the energy levels within the box are quantized in quantum mechanics, as a result of the constraints imposed by the box.
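Concretely, for the textbook case of a particle of mass $m$ in a one-dimensional box of width $\ell$, the allowed wave functions and energies are

$$ \psi_n(x) = \sqrt{\tfrac{2}{\ell}}\,\sin\!\left(\frac{n\pi x}{\ell}\right), \qquad E_n = \frac{n^{2}\pi^{2}\hbar^{2}}{2m\ell^{2}}, \qquad n = 1, 2, 3, \dots $$

so the walls of the box play exactly the role that the two hands play for the rope.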
Quantization in the context of quantum mechanics
In quantum physics, quantization is the process of constructing the quantum formulation of a system from its classical description. Starting with a classical system, we wish to formulate a quantum theory which, in an appropriate limit, reduces back to the classical system.
Classical Mechanics → Quantum Mechanics (via quantization)
The inverse procedure is called dequantization, or simply "taking the classical limit". Here we start with a quantum theory and arrive back at its classical counterpart.
Quantum Mechanics → Classical Mechanics (via dequantization)
After electrons and holes, the simplest example of the emergence of particles in rocks is sound quantization. This astonishing phenomenon is the closest thing to real magic I know. Sound is familiar to everyone as the vibration of elastic matter, typically air but also solids.
Sound quantization is a particularly instructive example of particle emergence because it can be worked out exactly, in all its detail, starting from the underlying laws of quantum mechanics obeyed by atoms, provided the atoms are first postulated to have perfectly crystallized. This is what we mean by quantized sound being a universal feature of crystallinity. This phenomenon is the prototypical example of Goldstone's theorem, the statement that particles necessarily emerge in any matter exhibiting spontaneous broken symmetry. The analysis also reveals that the particles of sound acquire more and more integrity as the corresponding tone is lowered in pitch, and become exact in the limit of low tone. Very high-pitched sound quanta propagating through a solid can decay probabilistically into two or more quanta of sound with a lower pitch, this decay being aptly analogous to that of a radioactive nucleus or an elementary particle such as a pion. Their decay turns out to be the same thing as elastic nonlinearity: the failure of distortion of the solid to be proportional to the stress on it when the stress is large, such as occurs just before fracture. But these nonlinearities matter less and less as the sound wavelength increases; the time scale for the decay increases as the tone is lowered and eventually becomes infinite. Sound quantization is a beautiful case of magic in physics revealed by thoughtful analysis to be not magic at all but a failure of intuition.
The quantum properties of sound are identical to those of light. This fact is important, for it is not at all obvious, given that sound is a collective motion of elastic matter while light ostensibly is not. The analogy is revealed most simply and directly by heat capacity. The ability of crystalline insulators to store heat drops universally in cryogenic environments as the cube of the temperature. This effect is a consequence of quantum mechanics, for it is easy to show that the heat capacity would have to be constant and large (as it is at room temperature) if all the atoms obeyed Newton's laws. The heat capacity of empty space follows the rule precisely. Space is not empty when it is heated of course, but filled with light, the color and intensity of which depend on the temperature. This effect is familiar from the red glow emitted from hot embers and the white light blazing from a light bulb filament or the surface of the sun. A warm crystal is likewise filled with sound.
In either case the specific temperature dependence of the heat capacity is accounted for quantitatively by the Planck law, a simple formula derived from the assumption that light or sound can be created or annihilated only in discrete amounts. In fact, the formula for the heat capacity of a crystalline solid is simply that of empty space, with the speed of sound substituted for the speed of light.
The analogy between phonons and photons raises the obvious question of whether light itself might be emergent.
The emergent quantum of sound, known as a phonon, is aptly analogous to the quantum of light, the photon.
The similarity between sound and light requires explanation, for there is no obvious reason for their quantum mechanics to be the same. In the case of sound, quantization may be deduced from the underlying laws of quantum mechanics obeyed by the atoms. In the case of light it must be postulated. This logical loose end is enormously embarrassing, and is something we physicists prefer to disguise in formal language. Thus we say that light and sound obey the Planck law by virtue of canonical quantization and the bosonic nature of the underlying degrees of freedom. But this is no explanation at all, for the reasoning is circular. Stripped of its complexity, "canonical quantization" simply boils down to requiring light to have properties modeled after those of sound.
Light has a vexing aspect, the gauge effect, that has no analogue in sound and is often used to argue that light cannot be emergent. This argument is false, since there are plenty of ways one could imagine that light might emerge, but the effect is nonetheless a serious conceptual matter that points to an important physical distinction between light and sound. Its simplest manifestation is in heat capacity. When a sound wave passes by, a given atom is displaced a bit from its static position in the lattice. There are three distinct ways it can do this (left-right, up-down, and forward-backward), each of which contributes separately to the heat capacity, effectively multiplying the final answer by three. But the corresponding multiplication factor for light is only two, even though light is also the displacement of something. On one of the three axes the stuff of the universe, whatever it is, simply cannot vibrate (at least on time scales relevant to experimentally accessible temperatures) or store heat. The underlying microscopic reason is not known and is treated in modern physics as a postulate.
"A different Universe" by Robert Laughlin
Canonical quantization means that we replace the classical Poisson brackets with commutator brackets
\begin{eqnarray} \text{Poisson bracket } \{\cdot,\cdot\} &\longrightarrow \text{commutator } i[\cdot,\cdot]\\ \end{eqnarray}
The procedure is called canonical because we are considering canonical variables here, e.g. the canonical momentum. A different way to describe a quantum theory is the path integral approach, which is also known as functional quantization.
Source: page 5 in Klauber's Student friendly QFT book
First Quantization = Quantization in Particle Theories
First quantization is used to construct, from a classical particle theory, the corresponding quantum theory. The procedure can be summarized in the following two steps:
We use the same Hamiltonian for the quantum particles that we would use if they were behaving classically.
We replace the classical Poisson brackets for conjugate variables with commutator brackets (divided by $i\hbar$): $$ \{q_i,p_j\} = \delta_{ij} \longrightarrow [\hat{q}_i,\hat{p}_j]= \hat{q}_i\hat{p}_j -\hat{p}_j \hat{q}_i = i \hbar \delta_{ij} $$
Now a crucial observation is that ordinary numbers and functions commute: $[f(x),g(x)]=0$ or also $[3,5]=3\cdot 5 - 5\cdot 3 =0$. Therefore, we learn here that the location $\hat{q}_i$ and the momentum $\hat{p}_j$ can no longer be mere numbers or functions, but must be operators. Operators are always denoted by a hat.
The operators are then interpreted as measurement operators. This means, for example, that when we act with the momentum operator $\hat{p}_j$ on the wave function $\Psi$, which is the object that we use to describe, say, a particle, we get as a result the momentum that the particle has in the $j$ direction: $\hat{p}_j \Psi = p_j \Psi$. (This only works this simply when the particle is in a momentum eigenstate, i.e. has a definite momentum; otherwise the result is a superposition and more complicated.) For more on this, see the page about quantum mechanics.
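A quick numerical illustration of why $\hat{q}$ and $\hat{p}$ must be operators rather than numbers: represent them on a grid and evaluate the commutator directly. The sketch below (our own construction, with $\hbar = 1$ and a central-difference derivative) reproduces $[\hat{q},\hat{p}]\psi \approx i\hbar\,\psi$ away from the grid boundaries.

```python
import numpy as np

hbar = 1.0
N, L = 2000, 20.0
x, dx = np.linspace(-L / 2, L / 2, N, retstep=True)

def p_op(psi):
    """Momentum operator -i*hbar d/dx via central differences (interior points only)."""
    dpsi = np.zeros_like(psi, dtype=complex)
    dpsi[1:-1] = (psi[2:] - psi[:-2]) / (2 * dx)
    return -1j * hbar * dpsi

def q_op(psi):
    """Position operator: multiplication by x."""
    return x * psi

psi = np.exp(-x**2)                      # a smooth test state (unnormalized Gaussian)
commutator = q_op(p_op(psi)) - p_op(q_op(psi))

# Away from the boundaries the result is i*hbar*psi, as required by [q, p] = i*hbar.
interior = slice(10, -10)
err = np.max(np.abs(commutator[interior] - 1j * hbar * psi[interior]))
print("max deviation from i*hbar*psi:", err)
```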
Second Quantization = Quantization in Field Theories
Canonical second quantization works analogously:
We again use the same Hamiltonian for the quantum fields as if we were dealing with a classical field.
We replace the classical fields with operators and the Poisson brackets with commutators: $$\{\phi_s(t,\vec{x}),\pi_r(t,\vec{y}\,')\} = \delta_{sr}\, \delta(\vec{x} - \vec{y}\,') \longrightarrow [\phi_s(t,\vec{x}),\pi_r(t,\vec{y}\,')]=i \hbar\, \delta_{sr}\, \delta(\vec{x}-\vec{y}\,') $$
Again, these equations tell us that our dynamical variables, here the field and its conjugate momentum, are no longer mere functions but operators. However, the interpretation of these operators is a bit more difficult and is best understood by looking at an explicit example.
The action functional $S[\phi(x)]$ for a free real scalar field of mass $m$ is \begin{eqnarray} S[\phi(x)]\equiv \int d^{4}x \,\mathcal{L}(\phi,\partial_{\mu}\phi)= {1\over 2}\int d^{4}x \,\left(\partial_{\mu}\phi\partial^{\mu}\phi- {m^{2}}\phi^2\right). \end{eqnarray} The equations of motion are obtained by using the Euler-Lagrange equations \begin{eqnarray} \partial_{\mu}\left[{\partial\mathcal{L}\over \partial(\partial_{\mu}\phi)} \right]-{\partial\mathcal{L}\over \partial\phi}=0 \quad \Longrightarrow \quad (\partial_{\mu}\partial^{\mu}+m^{2})\phi=0. \label{eq:eomKG} \end{eqnarray}
The momentum canonically conjugated to the field $\phi(x)$ is given by \begin{eqnarray} \pi(x)\equiv {\partial\mathcal{L}\over \partial(\partial_{0}\phi)} ={\partial\phi\over\partial t}. \end{eqnarray}
We can solve the equations of motion by using the Fourier transform \begin{eqnarray} (\partial_{\mu}\partial^{\mu}+m^{2})\phi(x)=0 \quad \Longrightarrow \quad (-p^{2}+m^{2})\widetilde{\phi}(p)=0. \end{eqnarray} The general solution can then be written as \begin{eqnarray} \phi(x)&=&\int {d^{4}p\over (2\pi)^{4}}(2\pi)\delta(p^{2}-m^{2})\theta(p^{0}) \left[\alpha(p)e^{-ip\cdot x}+\alpha(p)^{*}e^{ip\cdot x}\right] \nonumber \\ &=& \int {d^{3}p\over (2\pi)^{3}}{1\over 2\omega_{p}} \left[\alpha(\vec{p}\,)e^{-i\omega_{p}t + i\vec{p}\cdot \vec{x}} +\alpha(\vec{p}\,)^{*}e^{i\omega_{p}t- i\vec{p}\cdot \vec{x}}\right] \label{eq:general_sol_phi} \end{eqnarray} The conjugate momentum is then \begin{eqnarray} \pi(x)= -{i\over 2}\int {d^{3}p\over (2\pi)^{3}} \left[\alpha(\vec{p}\,)e^{-i\omega_{p}t + i\vec{p}\cdot \vec{x}} -\alpha(\vec{p}\,)^{*}e^{i\omega_{p}t- i\vec{p}\cdot \vec{x}}\right]. \end{eqnarray}
Now $\phi(x)$ and $\pi(x)$ are promoted to operators by replacing the functions $\alpha(\vec{p})$, $\alpha(\vec{p})^{*}$ by the operators \begin{eqnarray} \alpha(\vec{p}\,)\longrightarrow \widehat{\alpha}(\vec{p}\,), \alpha(\vec{p}\,)^{*}\longrightarrow \widehat{\alpha}^{\dagger}(\vec{p}\,). \end{eqnarray} Moreover, demanding $[\phi(t,\vec{x}),\pi(t,\vec{x}\,')]= i\delta(\vec{x}-\vec{x}\,')$ tells us that the operators $\widehat{\alpha}(\vec{p})$, $\widehat{\alpha}(\vec{p})^{\dagger}$ fulfill the commutation relations
\begin{eqnarray} {}[\alpha(\vec{p}),\alpha^{\dagger}(\vec{p}\,')]&=&(2\pi)^{3} (2\omega_{p})\delta(\vec{p}-\vec{p}\,'), \nonumber \\ {}[\alpha(\vec{p}),\alpha(\vec{p}\,')] &=&[\alpha^{\dagger}(\vec{p}),\alpha^{\dagger}(\vec{p}\,')]=0. \end{eqnarray}
The interpretation of these operators is that they create particles. Concretely, we get a state describing a particle when we act with the creation operator $\alpha^{\dagger}(\vec{p})$ on the vacuum state (= the state without any particles) $|0\rangle$, which satisfies \begin{eqnarray} \langle 0|0\rangle=1, \quad \widehat{P}^{\mu}|0\rangle=0, \quad \mathcal{U}(\Lambda)|0\rangle=|0\rangle, \quad \forall \Lambda \in {\rm SO}(1,3). \end{eqnarray} Therefore, we can write a general one-particle state $|f\rangle\in\mathcal{H}_{1}$ as \begin{eqnarray} |f\rangle=\int {d^{3}p\over (2\pi)^{3}}{1\over 2\omega_{p}}f(\vec{p}) \alpha^{\dagger}(\vec{p})|0\rangle, \end{eqnarray} while we write an $n$-particle state $|f\rangle\in\mathcal{H}_{1}^{\otimes\,n}$ as \begin{eqnarray} |f\rangle=\int \prod_{i=1}^{n}{d^{3}p_{i}\over (2\pi)^{3}} {1\over 2\omega_{p_{i}}}f(\vec{p}_{1},\ldots,\vec{p}_{n}) \alpha^{\dagger}(\vec{p}_{1})\ldots\alpha^{\dagger}(\vec{p}_{n})|0\rangle. \end{eqnarray}
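The algebra of creation and annihilation operators is easy to play with numerically for a single mode. The sketch below (a finite-dimensional stand-in, normalized so that $[a,a^{\dagger}]=1$ rather than with the $(2\pi)^{3}\,2\omega_{p}$ factor used above) builds truncated ladder matrices, checks the commutator, and creates one- and two-particle states from the vacuum.

```python
import numpy as np

dim = 12                                   # truncation of the single-mode Fock space
n = np.arange(dim)

a = np.diag(np.sqrt(n[1:]), k=1)           # annihilation operator: a|n> = sqrt(n)|n-1>
a_dag = a.conj().T                         # creation operator

# [a, a_dag] = 1 on every Fock state except the last one, which feels the truncation.
comm = a @ a_dag - a_dag @ a
print(np.allclose(np.diag(comm)[:-1], 1.0))   # True
print(np.diag(comm)[-1])                      # -(dim - 1): a pure truncation artifact

vacuum = np.zeros(dim); vacuum[0] = 1.0       # |0>, annihilated by a
print(np.allclose(a @ vacuum, 0.0))           # True

one_particle = a_dag @ vacuum                          # |1>
two_particle = a_dag @ a_dag @ vacuum / np.sqrt(2)     # |2>, normalized
number_op = a_dag @ a
print(one_particle @ number_op @ one_particle)         # ~1.0
print(two_particle @ number_op @ two_particle)         # ~2.0
```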
For more on this, see the page about quantum field theory.
A nice summary of the different quantization procedures can be found on pages 4 and 5 of Klauber's Student Friendly QFT.
Also nice is http://www.scottaaronson.com/democritus/lec9.html
There are many obstacles to making the process of quantization mathematically precise.
Already from first principles one encounters difficulties. Given that the classical description of a system is an approximation to its quantum description, obtained in a macroscopic limit (when $\hbar \to 0$), one expects that some information is lost in the limit. So quantization should somehow have to compensate for this. But how can a given quantization procedure select, from amongst the myriad of quantum theories all of which have the same classical limit, the physically correct one? Obstructions to Quantization by Mark J. Gotay
For this reason, many quantization schemes have been developed over the years which all have their individual shortcomings:
Canonical quantization and its modern reformulation geometric quantization. Here we try to map the classical observables, which are functions in phase space, onto operators in a corresponding Hilbert space. The algebra of the observables in classical mechanics is given by the Poisson bracket, whereas the operators in Hilbert space obey the commutator algebra. (Both are instances of the Heisenberg algebra).
Weyl quantization and its modern reformulation, deformation quantization. Here we try to map the phase space formulation of classical mechanics (=Hamiltonian mechanics) onto the phase space formulation of quantum mechanics. This means that we map the Poisson bracket onto the Moyal bracket, which can be understood as a deformation of the Poisson bracket. In turn, the Moyal bracket reduces to the Poisson bracket in the limit $\hbar \to 0$ (a small symbolic sketch of this limit follows the list below).
Path integral quantization.
Group-theoretic approach to quantization, which is closely related to geometric quantization.
Asymptotic quantization
Stochastic quantization is based, roughly speaking, on the idea that the quantum indeterminacy is a result of a stochastic process.
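To make the deformation idea from the Weyl/deformation quantization item above concrete, here is a small sympy sketch (my own construction; the observables and truncation order are arbitrary choices) comparing the Moyal bracket with the Poisson bracket for two cubic phase-space functions. Truncating the star product at third order in $\hbar$ is exact for these observables, and the output shows an $O(\hbar^{2})$ correction that disappears in the limit $\hbar \to 0$:

```python
import sympy as sp

q, p, hbar = sp.symbols('q p hbar', real=True)

def bidiff(f, g, n):
    # n-th power of the Poisson bidifferential operator applied to (f, g)
    return sum(sp.binomial(n, k) * (-1)**k
               * sp.diff(sp.diff(f, q, n - k), p, k)
               * sp.diff(sp.diff(g, p, n - k), q, k)
               for k in range(n + 1))

def star(f, g, order=3):
    # Moyal star product f * g, truncated at the given order in hbar
    return sum((sp.I*hbar/2)**n / sp.factorial(n) * bidiff(f, g, n)
               for n in range(order + 1))

def moyal_bracket(f, g, order=3):
    return sp.expand((star(f, g, order) - star(g, f, order)) / (sp.I*hbar))

def poisson_bracket(f, g):
    return sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)

f, g = q**2*p, q*p**2          # cubic observables, so order=3 is exact here
print(poisson_bracket(f, g))                                     # 3*p**2*q**2
print(sp.simplify(moyal_bracket(f, g) - poisson_bracket(f, g)))  # hbar**2/2
```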
A nice summary can be found in Quantization Methods: A Guide for Physicists and Analysts by S. Twareque Ali, Miroslav Engliš. See also Obstructions to Quantization by Mark J. Gotay and Landsman, N.P.: Mathematical topics between classical and quantum mechanics. Springer Monographs in Mathematics. Springer-Verlag, New York, 1998.
Canonical Quantization
Whenever you have a classical phase space (symplectic manifold to mathematicians), functions on the phase space give an infinite dimensional Lie algebra, with Poisson bracket the Lie bracket. Dirac's basic insight about quantization ("Poisson bracket goes to commutators") was just that a quantum theory is supposed to be a unitary representation of this Lie algebra.
For a general symplectic manifold, how to produce such a representation is a complicated story (see the theory of "geometric quantization"). For a finite-dimensional linear phase space, the story is given in detail in the notes: it turns out that there's only one interesting irreducible representation (Stone-von Neumann theorem), it's determined by how you quantize linear functions, and you can't extend it to functions beyond quadratic ones (Groenewold-van Hove no-go theorem). This is the basic story of canonical quantization.
For the infinite-dimensional linear phase spaces of quantum field theory, Stone-von Neumann is no longer true, and the fact that knowing the operator commutation relations no longer determines the state space is one source of the much greater complexity of QFT.
http://www.math.columbia.edu/~woit/wordpress/?p=7108
Canonical quantization is the original approach to quantization and goes back to Weyl, von Neumann, and Dirac.
The procedure consists in assigning to the observables of classical mechanics (=real-valued functions $f(p,q)$ of $(p,q)=(p_1,\dots,p_n,q_1,\dots, q_n)\in\mathbb{R}^n\times\mathbb{R}^n$ (the phase space)), self-adjoint operators $Q_f$ on the Hilbert space $L^2(\mathbb{R}^n)$ in such a way that
the correspondence $f\mapsto Q_f$ is linear;
$Q_1=I$, where $1$ is the constant function, equal to one everywhere, and $I$ the identity operator;
for any function $\phi:\mathbb{R}\to\mathbb{R}$ for which $Q_{\phi\circ f}$ and $\phi(Q_f)$ are well-defined, $Q_{\phi\circ f}=\phi(Q_f)$; and
the operators $Q_{p_j}$ and $Q_{q_j}$ corresponding to the coordinate functions $p_j,q_j$ ($j=1,\dots,n$) are given by\begin{equation} Q_{q_j} \psi = q_j \psi, \qquad Q_{p_j} \psi =-\frac{ih}{2\pi} \,\frac{\partial \psi} {\partial q_j} \qquad \text{for } \psi \in L^2(\mathbb{R}^n,dq). \end{equation}
The Stone and von Neumann theorem then states that up to unitary equivalence, these operators are the unique operators acting on a Hilbert space $\mathcal H$, which satisfy an irreducibility condition.
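As a small concreteness check on the operators $Q_{q_j}$ and $Q_{p_j}$ above (written here for a single degree of freedom, with $\hbar = h/2\pi$ and a Gaussian test function of my own choosing), a minimal sympy sketch verifies the canonical commutation relation $[Q_q, Q_p]\psi = i\hbar\,\psi$:

```python
import sympy as sp

q, hbar = sp.symbols('q hbar', real=True, positive=True)
psi = sp.exp(-q**2/2)                       # any smooth test function will do

Qq = lambda f: q*f                          # position: multiplication by q
Qp = lambda f: -sp.I*hbar*sp.diff(f, q)     # momentum: -i*hbar d/dq

commutator = Qq(Qp(psi)) - Qp(Qq(psi))
print(sp.simplify(commutator / psi))        # prints I*hbar
```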
The standard reference is Dirac, P.A.M. [1967] The Principles of Quantum Mechanics. Revised Fourth Ed. (Oxford Univ. Press, Oxford).
Geometric Quantization
For a short summary, see https://physics.stackexchange.com/questions/46015/why-quantum-mechanics/75775#75775
For geometric quantization, great introductions are http://math.ucr.edu/home/baez//quantization.html and "Quantization is a mystery" by Ivan Todorov
see also Geometric quantization; a crash course by Eugene Lerman
Tuynman, G. What is prequantization, and what is geometric quantization?, in Proceedings, Seminar 1989 1990, Mathematical Structures in Field Theory is a very well-written introduction to the topic, and contains many references to further work.
The standard textbook is "Geometric Quantization" by Woodhouse, see also the discussion about good literature on geometric quantization here.
Higher prequantum geometry by Urs Schreiber
Kirillov, A.A. [1990] Geometric quantization. In: Dynamical Systems IV: Symplectic Geometry and Its Applications. Arnol'd, V.I. and Novikov, S.P., Eds. Encyclopædia Math. Sci. IV. (Springer, New York) 137-172
Souriau, J.-M. [1997] Structure of Dynamical Systems. (Birkhäuser, Boston).
Woodhouse, N.M.J. [1992] Geometric quantization. Second Ed. (Clarendon Press, Oxford).
Weyl Quantization
see Folland, G.B. [1989] Harmonic Analysis in Phase Space. Ann. Math. Ser. 122 (Princeton University Press, Princeton)
Deformation Quantization
These developments encourage attempts to view quantum mechanics as a theory of functions or distributions on phase space, with deformed products and brackets. We suggest that quantization be understood as a deformation of the structure of the algebra of classical observables, rather than as a radical change in the nature of the observables. Incidentally, the nontriviality of the deformations throws some light on the nontrivial [...]. Deformation theory and quantization I by Bayen, F., Flato, M., Fronsdal, C., Lichnerowicz, A., & Sternheimer, D. [1978]
Deformations, stable theories and fundamental constants by R Vilela Mendes
Deformation quantization: a survey by M Bordemann
Bayen, F., Flato, M., Fronsdal, C., Lichnerowicz, A., & Sternheimer, D. [1978] Deformation theory and quantization I, II. Ann. Phys. 110, 61– 110, 111–151.
Deformation Quantization in the Teaching of Quantum Mechanics by Allen C. Hirshfeld and Peter Henselder
Deformation quantization in the teaching of Lie group representations by Alexander J. Balsomoa and Job A. Nable
Rieffel, M.A. [1990] Deformation quantization and operator algebras. Proc. Sym. Pure Math. 45, 411–423.
Rieffel, M.A. [1993] Quantization and C ∗ -algebras. In: C ∗ -Algebras: 1943- 1993, A Fifty Year Celebration. Doran, R.S., Ed. Contemp. Math. 167, 67–97.
Rieffel, M.A. [1997] Questions on quantization. Preprint quant-ph/ 9712009.
Path Integral Quantization
see Path Integral Quantum Mechanics
Group-theoretic Approach to Quantization
In a previous paper we have briefly outlined a method of quantization which follows closely the traditional method of geometric quantization of Souriau, Kostant, and others. The underlying rationale, however, is rather different: Instead of searching for quantizations of previously defined classical systems, the new approach tends to build directly, and without any ingredient other than a group law, the dynamical quantum systems. Thus, their quantum character is already determined by the symmetry group. The method is based on the close relation which exists among the spatial and dynamical properties of a system and its symmetry group, as well as of the slightly different character which the symmetry groups of classical and quantum systems present. Clearly, the difficulty of the procedure is the determination of the group for the case of interacting systems; in this respect, it fares no better than the conventional Kostant-Souriau (KS) theory. Nevertheless, our method may be applied directly in configuration space, thus avoiding the problem of characterizing the manifold of solutions of the classical system in order to quantize it. Quantization as a consequence of the symmetry group by Aldaya, V. & Azcarraga, J.A.
Aldaya, V. & Azcarraga, J.A. [1982] Quantization as a consequence of the symmetry group: An approach to geometric quantization. J. Math. Phys. 23, 1297–1305.
Isham, C.J. [1984] Topological and global aspects of quantum theory. In: Relativity, Groups, and Topology II. DeWitt, B.S. & Stora, R., Eds. (North-Holland, Amsterdam) 1059–1290.
Stochastic Quantization
see Review of stochastic mechanics by Edward Nelson
Quantization of Gauge Theories
The standard book on the topic is Quantization of Gauge Systems by Marc Henneaux and Claudio Teitelboim
There are at least four extant approaches to quantization of gauge theories.
The first is gauge fixing: fix a gauge and quantize in that gauge. But when one tries to do this for Yang–Mills theories using the analogues of familiar gauge conditions (e.g. Lorentz gauge) the procedure may break down. The difficulty is explained by the fact that the gauge condition may fail to define a global transversal in the constraint surface, i.e. a hypersurface that meets each of the gauge orbits exactly once.
A second approach is reduced phase space quantization. Quotient out the gauge orbits to produce the reduced phase space. If this procedure goes smoothly (see section 10 below) the normal method of quantization can be applied to the resulting unconstrained Hamiltonian system. This approach faces the practical difficulty of having to solve the constraints, and even if one overcomes this difficulty one may find that the reduced phase space has features that complicate the quantization (see section 10 below).
The third approach is called Dirac constraint quantization. Here the procedure is to promote the first-class constraints to operators on a Hilbert space and then require that the vectors in the physical sector of this Hilbert space be annihilated by the constraint operators. Of course, the forming of the constraint operators is subject to operator ordering ambiguities. But even modulo such ambiguities, it can happen that the resulting Dirac quantization is inequivalent to that obtained by reduced phase space quantization. In such a case, which is the correct quantization? And how would one tell? I will return to these matters below.
The fourth approach is called BRST quantization after Becchi, Rouet, Stora, and Tyutin. The idea is to mirror the original gauge symmetry by a symmetry transformation on an extended phase space obtained by adding auxiliary variables. The additional phase space variables are chosen so that the BRST symmetry has a simple form that facilitates quantization.
http://pitt.edu/~jearman/Earman2003a.pdf
Quantization of Gravity
see also Quantum Gravity
What Quantization Scheme to Use? Well, in the 80 years that people have pursued quantum gravity, they have tried every quantization scheme you could think of!
To get a sense of how complicated quantum gravity is (and an overview of the different approaches), I recommend reading Rovelli's "Notes for a brief history of quantum gravity" arXiv:gr-qc/0006061, which is an easy read.
Just a few remarks about quantization schemes: quantization is always problematical on its own, in the sense that presumably nature is already quantum and formulating a procedure to go from classical to quantum is nonsensical. This is discussed in many articles, I'll give a few free good references, e.g., S Twareque Ali and Miroslav Engliš' "Quantization Methods: A Guide for Physicists and Analysts" (arXiv:math-ph/0405065) and MJ Gotay's " Obstructions to Quantization" (arXiv:math-ph/9809011).
http://physics.stackexchange.com/a/34410/37286
It is generally acknowledged that quantization is an ill-defined procedure which cannot be consistently applied to all classical systems. Obstructions to Quantization by Mark J. Gotay
Quantization is what we call the difference between classical theories and quantum theories. A classical theory becomes a quantum theory through quantization.
Quantization is an art form which, when applied to classical physical theories, yields predictions of subatomic behavior which are in spectacular agreement with experiments. Ron Y. Donagi
Let me begin with a reminder that the quantum revolution did not erase our reliance on the earlier, classical physics. Indeed, when proposing a theory, we begin with classical concepts and construct models according to the rules of classical, pre-quantum physics. We know, however, such classical reasoning is not in accordance with quantum reality. Therefore, the classical model is reanalyzed by the rules of quantum physics, which comprise the true laws of Nature. This two-step procedure is called quantization.
The Unreasonable Effectiveness of Quantum Field Theory by R. Jackiw
Is a third quantization possible?
see https://physics.stackexchange.com/questions/625/is-a-third-quantization-possible
Why there is no unique recipe for quantization of a classical theory?
see https://physics.stackexchange.com/questions/323937/why-there-is-no-unique-recipe-for-quantization-of-a-classical-theory
How can we understand the transition from quantum to classical physics?
For a nice discussion, see chapter 2 in Sleeping Beauties in Theoretical Physics by Thanu Padmanabhan:
"Particles do not follow trajectories. They are described by wavefunctions but under appropriate circumstances the constructive interference of the phases of the wavefunction will single out a path which we call a classical trajectory. The Hamilton-Jacobi Equation is just the lowest-order Schrodinger equation if we use the ansatz in Eq. (2.1). The ¨ mysterious procedure in Hamilton-Jacobi theory — of differentiating the solution to Hamilton-Jacobi equation and equating it to a constant — is just the condition for constructive interference of the phases of waves differing slightly in the parameter E. The procedure based on Hamilton-Jacobi theory works in classical mechanics because it is supported by the Schrodinger equation"
See also: Decoherence and the Transition from Quantum to Classical—Revisited by Zurek and The quantum-to-classical transition and decoherence by Maximilian Schlosshauer
What does quantization is not a functor really mean?
see https://mathoverflow.net/questions/8606/what-does-quantization-is-not-a-functor-really-mean
"I went back to Cambridge at the beginning of October 1925, and resumed my previous style of life, intense thinking about these problems during the week and relaxing on Sunday, going for a long walk in the country alone. The main purpose of these long walks was to have a rest so that I would start refreshed on the following Monday. It was during one of the Sunday walks in October 1925, when I was thinking about this (uv- vu), in spite of my intention to relax, that I thought about Poisson brackets. I remembered something which I had read up previously, and from what I could remember, there seemed to be a close similarity between a Poisson bracket of two quantities and the commutator. The idea came in a flash, I suppose, and provided of course some excitement, and then came the reaction "No, this is probably wrong". I did not remember very well the precise formula for a Poisson bracket, and only had some vague recollections. But there were exciting possibilities there, and I thought that I might be getting to some big idea. It was really a very disturbing situation, and it became imperative for me to brush up on my knowledge of Poisson brackets. Of course, I could not do that when I was right out in the countryside. I just had to hurry home and see what I could find about Poisson brackets. I looked through my lecture notes, the notes that I had taken at various lectures, and there was no reference there anywhere to Poisson brackets. The textbooks which I had at home were all too elementary to mention them. There was nothing I could do, because it was Sunday evening then and the libraries were all closed. I just had to wait impatiently through that night without knowing whether this idea was really any good or not, but I still think that my confidence gradually grew during the course of the night. The next morning I hurried along to one of the libraries as soon as it was open, and then I looked up Poisson brackets in Whitackers Analytical Dynamics, and I found that they were just what I needed." Dirac
In my opinion, canonical quantization looks like an algorithm to invert the classical limit of quantum theory, and not a fundamental principle. Quantum Theory as an Emergent Phenomenon: Foundations and Phenomenology by Stephen Adler
Now it doesn't seem to be true that God created a classical universe on the first day and then quantized it on the second day. John Baez
quantum mechanics, quantum field theory
When calculating weights using inverse probability weighting, should the mean of the distribution of weights =1?
I am trying to calculate stabilized weights using inverse probability weighting by "dropout" from my cohort study to try to account for selection bias due to follow-up. I read from one source that the distribution of the stabilized weights should have a mean of 1, that the sum of the unstabilized weights should be double the sum of the stabilized weights, and that the range of the unstabilized weights should be greater than that of the stabilized weights. However, no matter what I've tried with my code, the mean of the distribution of the stabilized weights is not 1. Are these criteria valid? I have not been able to find these criteria written anywhere else.
probability distributions logistic
Courtney
There is no reason that the sum of inverse probability weights should be equal to 1, but you can always normalize each weight by the total sum of weights to obtain a sum equal to 1.
mirimo
The true stabilized weights should have a mean of 1. This is explained in Hernán and Robins (2006). Their proof is the following:
$$E \left[\frac{P(A=1)}{P(A=1|L)}\right] = E \left\{E \left[ \frac{P(A=1)}{P(A=1|L)}|L\right] \right\}= E \left\{E \left[ \frac{P(A=1|L)}{P(A=1|L)}\right] \right\}=1$$
where $A$ is the treatment and $L$ is the covariate vector, and $\frac{P(A=1)}{P(A=1|L)}$ is the stabilized weight. Cole and Hernán (2008) use whether the estimated standardized weight has a mean of 1 as a diagnostic for the adequacy of the propensity score model.
In my experience, this is not a mainstream practice. The preferred way to assess the adequacy of the propensity score model is to see whether it yields balance between the groups on the covariates. As @mirimo mentioned, it's also always possible to force weights to have a given sum by normalizing them, which doesn't change their properties but does change their mean and standard deviation. The heuristic can be useful when the weights are estimated using propensity scores, but there are several other ways of estimating weights that don't require the estimation of the propensity score, and for these methods, the mean of the weights is arbitrary.
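For readers who want to see the mean-of-one property numerically, here is a minimal simulation sketch (the data-generating process, sample size, and variable names are all illustrative, not taken from the question or the references) that fits a logistic propensity model, builds stabilized weights, and checks their mean, alongside the unstabilized weights mentioned in the question:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000
L = rng.normal(size=(n, 1))                        # a single confounder
p_true = 1 / (1 + np.exp(-(0.5 + 0.8 * L[:, 0])))  # true P(A=1 | L)
A = rng.binomial(1, p_true)                        # treatment / dropout indicator

ps_model = LogisticRegression().fit(L, A)
p_hat = ps_model.predict_proba(L)[:, 1]            # estimated P(A=1 | L)
p_marg = A.mean()                                  # estimated P(A=1)

denom = np.where(A == 1, p_hat, 1 - p_hat)         # P(A = a_i | L_i)
numer = np.where(A == 1, p_marg, 1 - p_marg)       # P(A = a_i)
sw = numer / denom                                 # stabilized weights
usw = 1.0 / denom                                  # unstabilized weights

print(round(sw.mean(), 3))    # close to 1, as the proof above implies
print(round(usw.mean(), 3))   # close to 2 for a binary treatment
```

If the mean of the estimated stabilized weights is far from 1, the usual suspects are a misspecified propensity model or weights built on the wrong scale (for example, using P(A=1|L) in the denominator for everyone rather than P(A=a_i|L)).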
Cole, S. R., & Hernán, M. A. (2008). Constructing Inverse Probability Weights for Marginal Structural Models. American Journal of Epidemiology, 168(6), 656–664. https://doi.org/10.1093/aje/kwn164
Hernán, M. A., & Robins, J. M. (2006). Estimating causal effects from epidemiological data. Journal of Epidemiology and Community Health (1979-), 60(7), 578–586.
Noah
Thank you for your response! – Courtney Aug 12 '20 at 8:57
Government debt, also known as public interest, public debt, national debt and sovereign debt,[1][2] contrasts with the annual government budget deficit, which is a flow variable that equals the difference between government receipts and spending in a single year. The debt is a stock variable, measured at a specific point in time, and it is the accumulation of all prior deficits.
Government debt can be categorized as internal debt (owed to lenders within the country) and external debt (owed to foreign lenders). Another common division of government debt is by duration until repayment is due. Short term debt is generally considered to be for one year or less, and long term debt is for more than ten years. Medium term debt falls between these two boundaries. A broader definition of government debt may consider all government liabilities, including future pension payments and payments for goods and services which the government has contracted but not yet paid.
Governments create debt by issuing government bonds and bills. Less creditworthy countries sometimes borrow directly from a supranational organization (e.g. the World Bank) or international financial institutions.
In a monetarily sovereign country such as the United States of America or the United Kingdom (as in most other countries), government debt held in the home currency can be viewed as savings accounts held at the central bank. In this way this "debt" has a very different meaning from the debt acquired by households, which are restricted by their income. Monetarily sovereign governments issue their own currencies and do not need this income to finance spending.
A central government with its own currency can pay for its nominal spending by creating money ex novo,[3] although typical arrangements leave money creation to central banks. In this instance, a government issues securities to the public not to raise funds, but instead to remove excess bank reserves (caused by government spending that is higher than tax receipts) and '...create a shortage of reserves in the market so that the system as a whole must come to the [central] Bank for liquidity.' [4]
The sealing of the Bank of England Charter (1694)
During the Early Modern era, European monarchs would often default on their loans or arbitrarily refuse to pay them back. This generally made financiers wary of lending to the king and the finances of countries that were often at war remained extremely volatile.
The creation of the first central bank in England--an institution designed to lend to the government--was initially an expedient by William III of England for the financing of his war against France. He engaged a syndicate of city traders and merchants to offer for sale an issue of government debt. This syndicate soon evolved into the Bank of England, eventually financing the wars of the Duke of Marlborough and later Imperial conquests.
A new way to pay the National Debt, James Gillray, 1786. King George III, with William Pitt handing him another moneybag.
The establishment of the bank was devised by Charles Montagu, 1st Earl of Halifax, in 1694, to the plan which had been proposed by William Paterson three years before, but had not been acted upon.[5] He proposed a loan of £1.2m to the government; in return the subscribers would be incorporated as The Governor and Company of the Bank of England with long-term banking privileges including the issue of notes. The Royal Charter was granted on 27 July through the passage of the Tonnage Act 1694.[6]
The founding of the Bank of England revolutionised public finance and put an end to defaults such as the Great Stop of the Exchequer of 1672, when Charles II had suspended payments on his bills. From then on, the British Government would never fail to repay its creditors.[7] In the following centuries, other countries in Europe and later around the world adopted similar financial institutions to manage their government debt.
In 1815, at the end of the Napoleonic Wars, British government debt reached a peak of more than 200% of GDP.[8]
In 2018, the global government debt reached the equivalent of $66 trillion, or about 80% of global GDP.[9]
Government and sovereign bonds
Public debt as a percent of GDP, evolution for USA, Japan and the main EU economies.
Public debt as a percent of GDP by CIA (2012)
A government bond is a bond issued by a national government. Such bonds are most often denominated in the country's domestic currency. Sovereigns can also issue debt in foreign currencies: almost 70% of all debt in a sample of developing countries from 1979 through 2006 was denominated in US dollars.[10] Government bonds are sometimes regarded as risk-free bonds, because national governments can if necessary create money de novo to redeem the bond in their own currency at maturity. Although many governments are prohibited by law from creating money directly (that function having been delegated to their central banks), central banks may provide finance by buying government bonds, sometimes referred to as monetizing the debt.
Government debt, synonymous with sovereign debt,[11] can be issued either in domestic or foreign currencies. Investors in sovereign bonds denominated in foreign currency have exchange rate risk: the foreign currency might depreciate against the investor's local currency. Sovereigns issuing debt denominated in a foreign currency may furthermore be unable to obtain that foreign currency to service debt. In the 2010 Greek debt crisis, for example, the debt is held by Greece in Euros, and one proposed solution (advanced notably by World Pensions Council (WPC) financial economists) is for Greece to go back to issuing its own drachma.[12][13] This proposal would only address future debt issuance, leaving substantial existing debts denominated in what would then be a foreign currency, potentially doubling their cost[14]
General government debt as percent of GDP, United States, Japan, Germany.
Interest burden of public debt with respect to GDP.
National Debt Clock outside the IRS office in NYC, April 20, 2012
Public debt is the total of all borrowing of a government, minus repayments denominated in a country's home currency. CIA's World Factbook lists the debt as a percentage of GDP; the total debt and per capita amounts have been calculated in the table below using the GDP (PPP) and population figures from the same report.
The debt-to-GDP ratio is a commonly accepted method for assessing the significance of a nation's debt. For example, one of the criteria of admission to the European Union's euro currency is that an applicant country's debt should not exceed 60% of that country's GDP. The GDP calculations of many leading industrial countries include taxes such as the value-added tax, which increase the total amount of the gross domestic product and thus reduce the debt-to-GDP ratio.[15][16]
National public debts greater than 0.5% of world public debt, 2012 estimates (CIA World Factbook 2013)[17]
Columns: Country | Debt (billion USD) | % of GDP | Debt per capita (USD) | % of world public debt
World 56,308 64% 7,936 100.0%
United States* 17,607 74% 55,630 31.3%
Japan 9,872 214% 77,577 17.5%
China 3,894 32% 2,885 6.9%
Germany 2,592 82% 31,945 4.6%
Italy 2,334 126% 37,956 4.1%
France 2,105 90% 31,915 3.7%
United Kingdom 2,064 89% 32,553 3.7%
Brazil 1,324 55% 6,588 2.4%
Spain 1,228 85% 25,931 2.2%
Canada 1,206 84% 34,902 2.1%
India 995 52% 830 1.8%
Mexico 629 35% 5,416 1.1%
South Korea 535 34% 10,919 1.0%
Turkey 489 40% 6,060 0.9%
Netherlands 488 69% 29,060 0.9%
Egypt 479 85% 5,610 0.9%
Greece 436 161% 40,486 0.8%
Poland 434 54% 11,298 0.8%
Belgium 396 100% 37,948 0.7%
Singapore 370 111% 67,843 0.7%
Taiwan 323 36% 13,860 0.6%
Argentina 323 42% 7,571 0.6%
Indonesia 311 25% 1,240 0.6%
Russia 308 12% 2,159 0.6%
Portugal 297 120% 27,531 0.5%
Thailand 292 43% 4,330 0.5%
Pakistan 283 50% 1,462 0.5%
* US data exclude debt issued by individual US states, as well as intra-governmental debt; intra-governmental debt consists of Treasury borrowings from surpluses in the trusts for Federal Social Security, Federal Employees, Hospital Insurance (Medicare and Medicaid), Disability and Unemployment, and several other smaller trusts; if data for intra-government debt were added, "Gross Debt" would increase by about one-third of GDP. The debt of the United States over time is documented online at the Department of the Treasury's website TreasuryDirect.Gov[18] as well as current totals.[19]
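As a quick arithmetic check on how the table's columns relate to one another (using the World row above: debt of 56,308 billion USD, 64% of GDP, 7,936 USD per capita), the implied world GDP and population come out to plausible 2012 values:

```python
# Figures taken from the "World" row of the table above.
debt_bn = 56_308              # billion USD
debt_share_of_gdp = 0.64
debt_per_capita = 7_936       # USD

implied_gdp_bn = debt_bn / debt_share_of_gdp       # ~88,000 billion USD (PPP)
implied_population_bn = debt_bn / debt_per_capita  # ~7.1 billion people
print(round(implied_gdp_bn), round(implied_population_bn, 2))
```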
Outdated Tables
Public Debt Top 20, 2010 estimate (CIA World Factbook 2011)[20]
Columns: Country | Debt (billion USD) | % of GDP | Debt per capita (USD) | Note (2008 estimate: debt, % of GDP)
USA $9,133 62% $29,158 ($5,415, 38%)
Japan $8,512 198% $67,303 ($7,469, 172%)
Germany $2,446 83% $30,024 ($1,931, 66%)
Italy $2,113 119% $34,627 ($1,933, 106%)
India $2,107 52% $ 1,489 ($1,863, 56%)
China $1,907 19% $ 1,419 ($1,247, 16%)
France $1,767 82% $27,062 ($1,453, 68%)
UK $1,654 76% $26,375 ($1,158, 52%)
Brazil $1,281 59% $ 6,299 ($ 775, 39%)
Canada $1,117 84% $32,829 ($ 831, 64%)
Spain $ 823 60% $17,598 ($ 571, 41%)
Mexico $ 577 37% $ 5,071 ($ 561, 36%)
Greece $ 454 143% $42,216 ($ 335, 97%)
Netherlands $ 424 63% $25,152 ($ 392, 58%)
Turkey $ 411 43% $ 5,218 ($ 362, 40%)
Belgium $ 398 101% $38,139 ($ 350, 90%)
Egypt $ 398 80% $ 4,846 ($ 385, 87%)
Poland $ 381 53% $ 9,907 ($ 303, 45%)
South Korea $ 331 23% $ 6,793 ($ 326, 24%)
Singapore $ 309 106% $65,144
Taiwan $ 279 34% $12,075
Debt of sub-national governments
Municipal, provincial, or state governments may also borrow. Municipal bonds, "munis" in the United States, are debt securities issued by local governments (municipalities).
In 2016, U.S. state and local governments owed $3 trillion and have another $5 trillion in unfunded liabilities.[21]
Denominated in reserve currencies
Governments often borrow money in a currency in which the demand for debt securities is strong. An advantage of issuing bonds in a currency such as the US dollar, the pound sterling, or the euro is that many investors wish to invest in such bonds. Countries such as the United States, Germany, Italy and France have only issued in their domestic currency (or in the Euro in the case of Euro members).
Relatively few investors are willing to invest in currencies that do not have a long track record of stability. A disadvantage for a government issuing bonds in a foreign currency is that there is a risk that it will not be able to obtain the foreign currency to pay the interest or redeem the bonds. In 1997 and 1998, during the Asian financial crisis, this became a serious problem when many countries were unable to keep their exchange rate fixed due to speculative attacks.
Although a national government may choose to default for political reasons, lending to a national government in the country's own sovereign currency is generally considered "risk free" and is done at a so-called "risk-free interest rate". This is because the debt and interest can be repaid by raising tax receipts (either by economic growth or raising tax revenue), a reduction in spending, or by creating more money. However, it is widely considered that this would increase inflation and thus reduce the value of the invested capital (at least for debt not linked to inflation). This has happened many times throughout history, and a typical example of this is provided by Weimar Germany of the 1920s, which suffered from hyperinflation when the government massively printed money, because of its inability to pay the national debt deriving from the costs of World War I.
In practice, the market interest rate tends to be different for debts of different countries. An example is in borrowing by different European Union countries denominated in euros. Even though the currency is the same in each case, the yield required by the market is higher for some countries' debt than for others. This reflects the views of the market on the relative solvency of the various countries and the likelihood that the debt will be repaid. Further, there are historical examples where countries defaulted, i.e., refused to pay their debts, even when they had the ability of paying it with printed money. This is because printing money has other effects that the government may see as more problematic than defaulting.
A politically unstable state is anything but risk-free as it may--being sovereign--cease its payments. Examples of this phenomenon include Spain in the 16th and 17th centuries, which nullified its government debt seven times during a century, and revolutionary Russia of 1917, which refused to accept responsibility for Imperial Russia's foreign debt.[22] Another political risk is caused by external threats. It is uncommon for invaders to accept responsibility for the national debt of the annexed state, or that of an organization it considers to be rebels. For example, all borrowings by the Confederate States of America were left unpaid after the American Civil War. On the other hand, in the modern era, the transition from dictatorship and illegitimate governments to democracy does not automatically free the country of the debt contracted by the former government. Today's highly developed global credit markets would be less likely to lend to a country that negated its previous debt, or might require punishing levels of interest rates that would be unacceptable to the borrower.
U.S. Treasury bonds denominated in U.S. dollars are often considered "risk free" in the U.S. This disregards the risk to foreign purchasers of depreciation in the dollar relative to the lender's currency. In addition, a risk-free status implicitly assumes the stability of the US government and its ability to continue repayments during any financial crisis.
Lending to a national government in a currency other than its own does not give the same confidence in the ability to repay, but this may be offset by reducing the exchange rate risk to foreign lenders. On the other hand, national debt in foreign currency cannot be disposed of by starting a hyperinflation;[23] and this increases the credibility of the debtor. Usually, small states with volatile economies have most of their national debt in foreign currency. For countries in the Eurozone, the euro is the local currency, although no single state can trigger inflation by creating more currency.
Lending to a local or municipal government can be just as risky as a loan to a private company, unless the local or municipal government has sufficient power to tax. In this case, the local government could to a certain extent pay its debts by increasing the taxes, or reduce spending, just as a national one could. Further, local government loans are sometimes guaranteed by the national government, and this reduces the risk. In some jurisdictions, interest earned on local or municipal bonds is tax-exempt income, which can be an important consideration for the wealthy.
Clearing and defaults
Public debt clearing standards are set by the Bank for International Settlements, but defaults are governed by extremely complex laws that vary from jurisdiction to jurisdiction. Globally, the International Monetary Fund can take certain steps to intervene to prevent anticipated defaults. It is sometimes criticized for the measures it advises nations to take, which often involve cutting back on government spending as part of an economic austerity regime. In triple bottom line analysis, this can be seen as degrading capital on which the nation's economy ultimately depends.
Those considerations do not apply to private debts, by contrast: credit risk (or the consumer credit rating) determines the interest rate, more or less, and entities go bankrupt if they fail to repay. Governments need a far more complex way of managing defaults because they cannot really go bankrupt (and suddenly stop providing services to citizens), albeit in some cases a government may disappear as it happened in Somalia or as it may happen in cases of occupied countries where the occupier doesn't recognize the occupied country's debts.
Smaller jurisdictions, such as cities, are usually guaranteed by their regional or national levels of government. When New York City declined into what would have been a bankrupt status during the 1970s (had it been a private entity), by the mid-1970s a "bailout" was required from New York State and the United States. In general, such measures amount to merging the smaller entity's debt into that of the larger entity and thereby giving it access to the lower interest rates the larger entity enjoys. The larger entity may then assume some agreed-upon oversight in order to prevent recurrence of the problem.
Economic policy basis
Wolfgang Stützel showed with his Saldenmechanik (Balances Mechanics) how a comprehensive debt redemption would compulsorily force a corresponding indebtedness of the private sector, due to a negative Keynes-multiplier leading to crisis and deflation.[24]
In the dominant economic policy generally ascribed to theories of John Maynard Keynes, sometimes called Keynesian economics, there is tolerance for fairly high levels of public debt to pay for public investment in lean times, which, if boom times follow, can then be paid back from rising tax revenues. Empirically, however, sovereign borrowing in developing countries is procyclical, since developing countries have more difficulty accessing capital markets in lean times.[25]
As this theory gained global popularity in the 1930s, many nations took on public debt to finance large infrastructural capital projects--such as highways or large hydroelectric dams. It was thought that this could start a virtuous cycle and a rising business confidence since there would be more workers with money to spend. Some[who?] have argued that the greatly increased military spending of World War II really ended the Great Depression. Of course, military expenditures are based upon the same tax (or debt) and spend fundamentals as the rest of the national budget, so this argument does little to undermine Keynesian theory. Indeed, some[who?] have suggested that significantly higher national spending necessitated by war essentially confirms the basic Keynesian analysis (see Military Keynesianism).
Nonetheless, the Keynesian scheme remained dominant, thanks in part to Keynes' own pamphlet How to Pay for the War, published in the United Kingdom in 1940. Since the war was being paid for, and being won, Keynes and Harry Dexter White, Assistant Secretary of the United States Department of the Treasury, were, according to John Kenneth Galbraith, the dominating influences on the Bretton Woods agreements. These agreements set the policies for the Bank for International Settlements (BIS), International Monetary Fund (IMF), and World Bank, the so-called Bretton Woods Institutions, launched in the late 1940s for the last two (the BIS was founded in 1930).
These are the dominant economic entities setting policies regarding public debt. Due to its role in setting policies for trade disputes, the World Trade Organization also has immense power to affect foreign exchange relations, as many nations are dependent on specific commodity markets for the balance of payments they require to repay debt.
Structure and risk of a public debt
Understanding the structure of public debt and analyzing its risk requires one to:
Assess the expected value of any public asset being constructed, at least in future tax terms if not in direct revenues. A choice must be made about its status as a public good--some public "assets" end up as public bads, such as nuclear power plants which are extremely expensive to decommission--these costs must also be worked into asset values.
Determine whether any public debt is being used to finance consumption, which includes all social assistance and all military spending.
Determine whether triple bottom line issues are likely to lead to failure or defaults of governments--say due to being overthrown.
Determine whether any of the debt being undertaken may be held to be odious debt, which might permit it to be disavowed without any effect on a country's credit status. This includes any loans to purchase "assets" such as leaders' palaces, or the people's suppression or extermination. International law does not permit people to be held responsible for such debts--as they did not benefit in any way from the spending and had no control over it.
Determine if any future entitlements are being created by expenditures--financing a public swimming pool for instance may create some right to recreation where it did not previously exist, by precedent and expectations.
Sovereign debt problems have been a major public policy issue since World War II, including the treatment of debt related to that war, the developing country "debt crisis" in the 1980s, and the shocks of the 1998 Russian financial crisis and Argentina's default in 2001.
Effect on future economic growth
In 2013, the World Bank Group issued a report which analyzed the debt levels of 100 developed and developing countries from 1980 to 2008 and found that debt-to-GDP ratios above 77% for developed countries (64% for developing countries) reduced future annual economic growth by roughly 0.017 (developed) to 0.02 (developing) percentage points for each percentage point of debt above the threshold.[26][27]
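To make the threshold result concrete, here is a short back-of-the-envelope sketch (the 90% figure is a hypothetical country of my own choosing) applying the developed-country numbers cited above:

```python
threshold = 77.0           # developed-country debt-to-GDP threshold, % of GDP
penalty_per_point = 0.017  # pp of annual real growth lost per point above it

debt_ratio = 90.0          # hypothetical developed country, % of GDP
growth_drag = max(0.0, debt_ratio - threshold) * penalty_per_point
print(round(growth_drag, 3))   # ~0.221 pp lower annual real growth
```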
Implicit debt
Government "implicit" debt is the promise by a government of future payments from the state. Usually, this refers to long-term promises of social payments such as pensions and health expenditure; not promises of other expenditure such as education or defense (which are largely paid on a "quid pro quo" basis to government employees and contractors).
A problem with these implicit government insurance liabilities is that it is hard to cost them accurately, since the amounts of future payments depend on so many factors. First of all, the social security claims are not "open" bonds or debt papers with a stated time frame, "time to maturity", "nominal value", or "net present value".
In the United States, as in most other countries, there is no money earmarked in the government's coffers for future social insurance payments. This insurance system is called PAYGO (pay-as-you-go). Alternative social insurance strategies might have included a system that involved save and invest.
Furthermore, population projections predict that when the "baby boomers" start to retire, the working population in the United States, and in many other countries, will be a smaller percentage of the population than it is now, for many years to come. This will increase the burden on the country of these promised pension and other payments--larger than the 65 percent[28] of GDP that it is now. The "burden" of the government is what it spends, since it can only pay its bills through taxes, debt, and increasing the money supply (government spending = tax revenues + change in government debt held by public + change in monetary base held by the public). "Government social benefits" paid by the United States government during 2003 totaled $1.3 trillion.[29] According to official government projections, the Medicare is facing a $37 trillion unfunded liability over the next 75 years, and the Social Security is facing a $13 trillion unfunded liability over the same time frame.[30][31]
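The budget identity quoted above can be illustrated with toy numbers (mine, not the article's): whatever part of spending is covered neither by taxes nor by money creation must show up as new debt held by the public.

```python
# government spending = tax revenues + change in debt held by the public
#                       + change in monetary base held by the public
spending = 1300.0              # e.g., billions of currency units
tax_revenues = 1000.0
monetary_base_change = 50.0

debt_change = spending - tax_revenues - monetary_base_change
print(debt_change)             # 250.0: the portion financed by new borrowing
```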
In 2010 the European Commission required EU Member Countries to publish their debt information in standardized methodology, explicitly including debts that were previously hidden in a number of ways to satisfy minimum requirements on local (national) and European (Stability and Growth Pact) level.[32]
A simple model of sovereign debt dynamics
The following model of sovereign debt dynamics comes from Romer (2018).[33]
Assume that the dynamics of a country's sovereign debt $D$ over time $t$ may be modeled as a continuous, deterministic process consisting of the interest paid on current debt and net borrowing:
$$\dot{D} = \underbrace{rD}_{\text{Interest on Debt}} + \underbrace{G(t) - T(t)}_{\text{Net Borrowing}}$$
where $r$ is a time-dependent interest rate, $G(t)$ is government spending, and $T(t)$ is total tax collections. In order to solve this differential equation, we assume a solution $D = Mu$ and introduce the integrating factor $M$:
$$M = e^{R(t)}, \qquad R(t) = \int_{0}^{t} r(\tau)\, d\tau$$
This substitution leads to the equation:
$$\dot{u} = e^{-R(t)}\left[G(t) - T(t)\right]$$
Integrating this equation over $t \in [0, \infty)$, we find that:
$$e^{-R(\infty)} D_{\infty} = D_{0} + \int_{0}^{\infty} e^{-R(t)}\left[G(t) - T(t)\right] dt$$
A problem arises now that we've solved this equation: at $t = \infty$, it is impossible for the present value of a country's debt to be positive. Otherwise, the country could borrow an infinite amount of money. Therefore, it is necessary to impose the No Ponzi condition:
$$\lim_{s \to \infty} e^{-R(s)} D_{s} \leq 0$$
Therefore, it follows that:
$$\int_{0}^{\infty} e^{-R(t)}\left[T(t) - G(t)\right] dt \geq D_{0}$$
In other words, this last equation shows that the present value of taxes minus the present value of government spending must be at least equal to the initial sovereign debt.
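A minimal numerical sketch of the model (parameter values are illustrative, not from the text): with constant flows the present value of primary surpluses is $(T-G)/r$, which the derived constraint requires to be at least $D_0$, and a forward-Euler integration of the debt path shows the debt staying bounded in that case.

```python
r = 0.03                 # constant interest rate
G, T = 20.0, 23.2        # constant government spending and tax receipts
D0 = 100.0               # initial debt

# Present-value check for constant flows: integral of e^{-rt}(T-G) dt = (T-G)/r.
print((T - G) / r >= D0)          # True: about 106.7 >= 100

# Forward-Euler integration of dD/dt = r*D + G - T over 50 years.
D, dt = D0, 0.01
for _ in range(int(50 / dt)):
    D += (r * D + G - T) * dt
print(round(D, 1))                # roughly 76.8: debt ends below its initial level
```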
Ricardian equivalence
The representative household's budget constraint is that the present value of its consumption cannot exceed its initial wealth plus the present value of its after-tax income.
$$\underbrace{\int_{0}^{\infty} e^{-R(t)} C(t)\, dt}_{\text{Consumption}} \leq \underbrace{D_{0} + K_{0}}_{\text{Initial Capital}} + \underbrace{\int_{0}^{\infty} e^{-R(t)}\left[W(t) - T(t)\right] dt}_{\text{After-Tax Income}}$$
Assuming that the present value of taxes equals the present value of government spending, then this last equation may be rewritten as:
$$\int_{0}^{\infty} e^{-R(t)} C(t)\, dt \leq K_{0} + \int_{0}^{\infty} e^{-R(t)}\left[W(t) - G(t)\right] dt$$
This equation shows that the household budget constraint may be defined in terms of government purchases, without regard to debt or taxes. Moreover, this is the famous result known as Ricardian equivalence: only the quantity of government purchases affects the economy, not the method of financing (i.e., through debt or taxes).
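A two-period toy example (the numbers are mine) of the equivalence stated above: whether period-1 purchases are financed by current taxes or by debt repaid with period-2 taxes, the present value of the household's after-tax income is unchanged.

```python
r = 0.05                        # one-period interest rate
W1, W2 = 100.0, 100.0           # pre-tax income in periods 1 and 2
G1 = 30.0                       # government purchases in period 1

# Scheme A: tax-financed (T1 = G1, T2 = 0).
pv_A = (W1 - G1) + W2 / (1 + r)

# Scheme B: debt-financed (T1 = 0; borrow G1, repay (1+r)*G1 out of period-2 taxes).
pv_B = W1 + (W2 - (1 + r) * G1) / (1 + r)

print(abs(pv_A - pv_B) < 1e-9, round(pv_A, 2))   # True 165.24
```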
Government finance:
Generational accounting
Financial repression
Sovereign default
Sovereign credit
Specific:
1980s austerity policy in Romania
Latin American debt crisis
2010 European sovereign debt crisis
United States public debt
National debt of the United States
Bond (finance)
Credit default swap
Warrant (of Payment)
List of countries by credit rating
List of countries by external debt
List of countries by net international investment position
List of countries by public debt
^ "Bureau of the Public Debt Homepage". United States Department of the Treasury. Archived from the original on October 13, 2010. Retrieved 2010.
^ "FAQs: National Debt". United States Department of the Treasury. Archived from the original on October 21, 2010. Retrieved 2010.
^ The Economics of Money, Banking, and the Financial Markets 7ed, Frederic S. Mishkin
^ Tootell, Geoffrey. "The Bank of England's Monetary Policy" (PDF). Federal Reserve Bank of Boston. Retrieved 2017.
^ Committee of Finance and Industry 1931 (Macmillan Report) description of the founding of Bank of England. 1979. ISBN 9780405112126. Retrieved 2010. "Its foundation in 1694 arose out the difficulties of the Government of the day in securing subscriptions to State loans. Its primary purpose was to raise and lend money to the State and in consideration of this service it received under its Charter and various Act of Parliament, certain privileges of issuing bank notes. The corporation commenced, with an assured life of twelve years after which the Government had the right to annul its Charter on giving one year's notice. Subsequent extensions of this period coincided generally with the grant of additional loans to the State"
^ H. Roseveare, The Financial Revolution 1660-1760 (1991, Longman), p. 34
^ Ferguson, Niall (2008). The Ascent of Money: A Financial History of the World. Penguin Books, London. p. 76. ISBN 9780718194000.
^ UK public spending Retrieved September 2011
^ "Government debt hits record $66 trillion, 80% of global GDP, Fitch says". CNBC. 23 January 2019.
^ "Empirical Research on Sovereign Debt and Default" (PDF). Federal Reserve Board of Chicago. Retrieved .
^ "FT Lexicon" – The Financial Times
^ M. Nicolas J. Firzli, "Greece and the Roots the EU Debt Crisis" The Vienna Review, March 2010
^ "EU accused of 'head in sand' attitude to Greek debt crisis". Telegraph.co.uk. Retrieved .
^ "Why leaving the euro would still be bad for both Greece and the currency area" – The Economist, 2015-01-17
^ "GROSS DOMESTIC PRODUCT (GDP)". Organisation for Economic Co-operation and Development (OECD). Retrieved 2019.
^ "GROSS DOMESTIC PRODUCT (GDP)". U.S. Bureau of Economic Analysis (BEA). Retrieved 2019.
^ "Country Comparison :: Public debt". Central Intelligence Agency. Archived from the original on May 13, 2013. Retrieved 2013.
^ "Government - Historical Debt Outstanding - Annual". Treasurydirect.gov. 2010-10-01. Retrieved .
^ "Debt to the Penny (Daily History Search Application)". Treasurydirect.gov. Retrieved .
^ "Country Comparison :: Public debt". cia.gov. Archived from the original on October 4, 2008. Retrieved 2011.
^ "Debt Myths, Debunked". U.S. News. December 1, 2016.
^ Hedlund, Stefan (2004). "Foreign Debt". Encyclopedia of Russian History (reprinted in Encyclopedia.com). Retrieved 2010.
^ Cox, Jeff (2019-11-25). "Fed analysis warns of 'economic ruin' when governments print money to pay off debt". CNBC. Retrieved .
^ Wolfgang Stützel: Volkswirtschaftliche Saldenmechanik Tübingen : Mohr Siebeck, 2011, Nachdr. der 2. Aufl., Tübingen, Mohr, 1978, S. 86
^ "The Economics and Law of Sovereign Debt and Default" (PDF). Journal of Economic Literature. 2009. Retrieved .
^ Grennes, Thomas; Caner, Mehmet; Koehler-Geib, Fritzi (2013-06-22). "Finding The Tipping Point -- When Sovereign Debt Turns Bad". World Bank Group. Policy Research Working Papers. doi:10.1596/1813-9450-5391. hdl:10986/3875. Retrieved . The present study addresses these questions with the help of threshold estimations based on a yearly dataset of 101 developing and developed economies spanning a time period from 1980 to 2008. The estimations establish a threshold of 77 percent public debt-to-GDP ratio. If debt is above this threshold, each additional percentage point of debt costs 0.017 percentage points of annual real growth. The effect is even more pronounced in emerging markets where the threshold is 64 percent debt-to-GDP ratio. In these countries, the loss in annual real growth with each additional percentage point in public debt amounts to 0.02 percentage points.
^ Kessler, Glenn (2020-09-09). "Mnuchin's claim that the pre-pandemic economy 'would pay down debt over time'". The Washington Post. Retrieved . The debt-to-GDP ratio is considered a good guide to a country's ability to pay off its debts. The World Bank has calculated that 77 percent public debt-to-GDP is about the highest a developed country should have before debt begins to hamper economic growth.
^ "Report for Selected Countries and Subjects". International Monetary Fund. Retrieved . (General government gross debt 2008 estimates rounded to one decimal place)
^ "Government Social Benefits Table". Archived from the original on November 1, 2004.
^ Capretta, James C. (June 16, 2018). "The financial hole for Social Security and Medicare is even deeper than the experts say". MarketWatch.
^ Mauldin, John (March 25, 2019). "The Real US National Debt Might Be $230 Trillion". Newsmax.
^ "Council Regulation (EC) No 479/2009". Retrieved .
^ Romer, David (2018). Advanced Macroeconomics. McGraw-Hill Economics. New York, NY: McGraw-Hill Education. pp. 662-672. ISBN 978-1260185218.
The IMF Public Financial Management Blog
OECD government debt statistics
Japan's Central Government Debt
Riksgäldskontoret - Swedish national debt office
United States Treasury, Bureau of Public Debt - The Debt to the Penny and Who Holds It
Slaying the Dragon of Debt, Regional Oral History Office, The Bancroft Library, University of California, Berkeley
A historical collection of documents on or referring to government spending and fiscal policy, available on FRASER
Eisner, Robert (1993). "Federal Debt". In David R. Henderson (ed.). Concise Encyclopedia of Economics (1st ed.). Library of Economics and Liberty. OCLC 317650570, 50016270, 163149563
"Government's Borrowing Power". DebatedWisdom. 3IVIS GmbH. Retrieved 2016.
US Debt Clock
CLYPS dataset on public debt level and composition in Latin America
June 2017, 22(4): 1645-1671. doi: 10.3934/dcdsb.2017079
Attractors for non-autonomous reaction-diffusion equations with fractional diffusion in locally uniform spaces
Gaocheng Yue
Department of Mathematics, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
Received December 2015 Revised November 2016 Published February 2017
Fund Project: The author is supported by NSF of China under Grant 11501289
In this paper, we first prove the well-posedness of the nonautonomous reaction-diffusion equations with fractional diffusion in the locally uniform spaces framework. Then, under very minimal assumptions, we study the asymptotic behavior of solutions of such equations and show the existence of an $(H^{2(\alpha-\varepsilon),q}_U(\mathbb{R}^N),H^{2(\alpha-\varepsilon),q}_\varphi(\mathbb{R}^N))$ $(0<\varepsilon<\alpha<1)$-uniform (w.r.t. $g\in\mathcal{H}_{L^q_U(\mathbb{R}^N)}(g_0)$) attractor $\mathcal{A}_{\mathcal{H}_{L^q_U(\mathbb{R}^N)}(g_0)}$ with locally uniform external forces that are translation uniformly bounded but not translation compact in $L_b^p(\mathbb{R};L^q_U(\mathbb{R}^N))$. The key to these extensions is a new space-time estimate in locally uniform spaces for the linear fractional power dissipative equation.
Keywords: Reaction-diffusion equations, uniform attractors, locally uniform spaces, fractional powers of sectorial operator.
Mathematics Subject Classification: Primary: 35K57, 35B40; Secondary: 35B41.
Citation: Gaocheng Yue. Attractors for non-autonomous reaction-diffusion equations with fractional diffusion in locally uniform spaces. Discrete & Continuous Dynamical Systems - B, 2017, 22 (4) : 1645-1671. doi: 10.3934/dcdsb.2017079
|
CommonCrawl
|
The Birthday Paradox - On Jupiter, and Beyond!
A classic puzzle/paradox is to ask:
You may feel like you've seen this before. If you have, skip to the end where I talk about a formula, and discuss the relevance to parameters in computer hashing algorithms. In particular, I'm going to talk about how large hash spaces should be to avoid collisions with a given level of probability.
If there are just two people in a room, it's very unlikely that they will have the same birthday. On the other hand, if there are 1000 people in a room, it's absolutely certain that there will be shared birthdays, as there simply aren't enough days to go round without repeats. So as we add people to a room the chances of a shared birthday rise from 0 to 1, and at some point will pass through the halfway mark.
We assume that birthdays are distributed uniformly at random throughout the year.
How many people do you need to have in a room before the chances are more than 50% of a shared birthday?
If you haven't seen this before then I urge you to have a guess now. Is it 180 people? More than that? Fewer than that? What do you think?
Most people who haven't seen this before get it very wrong, and then are surprised by the answer.
In case you are one of the many, many people who get this wrong and are astonished, you may be thinking of a different question. My birthday is a specific date. If we assume birthdays are uniformly distributed, how many people must you ask before you find someone who has the same birthday as me?
On average, about 182.
If you ask some 150 to 200 people, the odds are about 50% that you'll find someone who shares my specific birthday.
But that's not the question I asked. They don't just need to share my birthday, it can be that any birthday is shared among them, and that changes the odds dramatically.
Johnny Carson got it wrong - here's a video clip:
http://www.cornell.edu/video/?videoid=2334
The number of people?
Just 23.
Yes, as soon as you have 23 people in a room the chances that two of them share a birthday are just over 50%. This is true, even if you allow 366 days in the year, and it actually becomes more true (whatever that means!) if you take into account that birthdays are not evenly distributed throughout the year.
Why is it so small?
Suppose we have one person in the room, and consider what happens as we add more people. The first extra person only has to avoid one birthday, and that's not so hard. But then the next person has to avoid two existing birthdays, and the next has to avoid three existing birthdays. Not only must everyone who has come before have succeeded in avoiding the coincidence, but it gets harder as we go along. This accumulation of avoided coincidences eventually gets you, and sooner than you may think.
Indeed, another way to view this is to look at how many pairings there are of the people in the room. With 23 people there are 253 possible pairs, more than half the number of days in the year, so suddenly it's not all that implausible that we have a shared birthday. (That's quite a lot more than half 365, but we may have three people sharing a birthday, or more, and the sums get quite complicated).
And normal people stop there. They might be surprised by the result, and some don't even believe it, but certainly most people stop.
Mathematicians aren't normal people ...
But mathematicians aren't normal people, and they might ask - what happens in general? What if we were on Jupiter, for example, where there are around $10\frac{1}{2}$ thousand "days" in a "year"?
How would it change then?
This is relevant when we talk about avoiding hash collisions in computing systems, so it makes sense to talk about huge numbers of possible "birthdays" and huge numbers of "people." We can ask, when we have a huge hash space, how many keys can we choose before there's a 1% chance of a collision? (for some value of 1%).
In case you don't know about hashes and hash spaces, a "hash function" is a way of taking a sort of "fingerprint" of an object. The result is the hash of the object, and a hash is usually designed to be a random-looking collection of bits. Hashes used to be of size 32 or 64 bits, because that fitted nicely into computer storage units, but these days hashes tend to be much larger.
Hashes usually have the property that changing the object ever so slightly changes the hash pretty much apparently at random. Cryptographic hashes also have the property that it's really, really hard to deduce anything about the original object, even if you know exactly how the hash is computed.
http://en.wikipedia.org/wiki/Hash_function
So what happens as we deal with larger numbers, and with different probabilities? Let's see how we get to the answer of 23 people given 365 possible birthdays, and then generalise from there.
To do the analysis we turn this around and ask - what is the probability that we have no collisions? For just one person the answer is 1 - there is no chance of sharing a birthday, so there is a 100% chance of no shared birthday.
With two people, the second person must avoid the birthday of the first person. Assuming 365 days in the year this is then 364/365, being the number of days allowed, divided by the number of days from which we must choose.
Now add another person. To avoid a coincidence we must, in addition to the first coincidence dodged, now avoid two existing days. That has a chance of (365-2)/365. And so on, and these accumulate. So the chance of 10 people avoiding each other's birthdays is:
$\left(\frac{365-1}{365}\right)\times\left(\frac{365-2}{365}\right)\times\ldots\times\left(\frac{365-9}{365}\right)$
We can make that a bit neater by saying that the first person has 0 days to avoid, so that makes it:
$\left(\frac{365-0}{365}\right)\times\left(\frac{365-1}{365}\right)\times\left(\frac{365-2}{365}\right)\times...\times\left(\frac{365-9}{365}\right)$
and there are 10 terms in total. By extension we can see that for k people, the chances they all avoid each other's birthdays is:
$\left(\frac{365-0}{365}\right)\times\left(\frac{365-1}{365}\right)\times\left(\frac{365-2}{365}\right)\times...\times\left(\frac{365-({k}-1)}{365}\right)$
If we also replace 365 by N to make this even more general, and we observe that:
Just as a reminder, we use n! to mean $n(n-1)(n-2)...3.2.1,$ so that means $6!=6.5.4.3.2.1,$ which is 720. The exclamation mark is sometimes pronounced "pling" and this operation is called "factorial".
Note that here, as in some places elsewhere I use the period to represent multiplication, instead of using "x".
$N(N-1)(N-2)...(N-({k}-1))=\frac{N!}{(N-k)!}$
(where the pling represents the factorial operation) then we can write this fairly succinctly as:
$P(avoid~clash)=\frac{N!}{(N-k)!N^k}\quad\quad\quad[1]$
So now we can ask - for a given value of N, what value of k first pushes the chance of a clash above 50% (that is, first makes P(avoid clash) drop below 50%)?
What do we mean by the interpolated answer?
We never hit exactly 50%. So for some value of k we're below, and for k+1 we're above. What we can then do is interpolate between those values of k to get a non-integer that would be (almost exactly) the right answer. I'm using linear interpolation because we're working over a small range and it seems reasonable.
http://en.wikipedia.org/wiki/Linear_interpolation
We can do this "exactly" by just computing the answer for lots of values of k and then interpolating between them to get an answer. This is a numerical technique, and if we do it lots and lots we might see a pattern emerging. It seems that the value k that gives us a probability of 50% of a collision is about $\sqrt{N}.$ Looking more closely, we can see that there's a constant c so that if we choose $c\sqrt{N}$ numbers, then our probability of a clash is about 50%.
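To see that pattern emerge concretely, here is a small numerical sketch (my addition, not part of the original post; plain Python with made-up helper names). It computes the exact clash probability, finds the interpolated 50% crossing point for a few values of N, and prints the ratio of that crossing point to $\sqrt{N},$ which settles down near 1.177 as N grows:

import math

def p_no_clash(N, k):
    # exact probability that k people all have distinct "birthdays"
    # when there are N equally likely days
    p = 1.0
    for i in range(k):
        p *= (N - i) / N
    return p

def threshold(N, target=0.5):
    # smallest k whose clash probability exceeds `target`, plus the
    # linearly interpolated (non-integer) crossing point
    k = 1
    while 1 - p_no_clash(N, k) <= target:
        k += 1
    lo = 1 - p_no_clash(N, k - 1)
    hi = 1 - p_no_clash(N, k)
    k_interp = (k - 1) + (target - lo) / (hi - lo)
    return k, k_interp

for N in (365, 1000, 10500, 10**6):
    k, k_interp = threshold(N)
    print(N, k, round(k_interp, 2), round(k_interp / math.sqrt(N), 4))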
And c turns out to be about 1.17741...
Now, you probably don't recognise that number, certainly I didn't. But after a bit of futzing about I found it to be remarkably close to $\sqrt{ln(4)}.$
Where does that come from? Let's find out.
We need to analyse this:
$P(avoid~clash)=\frac{N!}{(N-k)!N^k}\quad\quad\quad\quad[1]$
Factorials are quite nasty to deal with, but we can use Stirling's approximation which says that:
$n!\approx{n^n}{e^{-n}}{\sqrt{2{\pi}n}}$
Substituting that into equation (1), expanding and simplifying results in this somewhat scary beast:
$P\approx{e^{-k}}\left(1-\frac{k}{N}\right)^{k-N-1/2}$
That's actually much simpler than we might otherwise expect, but it's still pretty tough to see where to go from here. However, the experienced eye will see something that looks familiar.
We see something like this: $\left(1-\frac{k}{N}\right)^{-N}$
We've seen this before. Or at least, I have, and anyone else who has done some significant calculus, or combinatorics, or pretty much any more advanced mathematics. We know that as x gets larger:
$(1+1/x)^x\rightarrow{e}$
More than that, if k is constant then we have:
$(1-k/N)^{-N}\rightarrow{e^k}$
But k is the number of things we choose, and that doesn't stay constant as N gets bigger. Still, all that means is that we have to work a little harder.
The above rules come from recognising that there is a series for logarithms. In particular, the Taylor expansion for logs is:
$ln(1+x)\;\approx\;x-x^2/2+x^3/3-\ldots$
That means, after simplification:
$ln(1-k/N)\;\approx\;-\left[\left(\frac{k}{N}\right)~+~\frac{k^2}{2N^2}~+~\frac{k^3}{3N^3}~+~...\right]$
Now, with some trepidation, we can go back to our approximation $P\;\approx\;{e^{-k}}\left(1-\frac{k}{N}\right)^{k-N-1/2}.$
Taking logs of both sides:
$ln(P)\;\approx\;-k+(k-N-1/2)ln\left(1-\frac{k}{N}\right)$
Substitute our expression for $ln\left(1-\frac{k}{N}\right)$ (watch out for the minus signs!) and we get:
$ln(P)\;\approx\;-k+\left(N-k+\frac{1}{2}\right)\left[\left(\frac{k}{N}\right)~+~\frac{k^2}{2N^2}~+~\frac{k^3}{3N^3}~+~...\right]$
We're going to assume that k is "small" compared with N. In fact, we're going to assume that k is about $\sqrt{N}.$
We can prove that, or carry more terms, but it's not appropriate here.
This expands and simplifies to:
$k^2\;\approx\;-2N.ln(P)$
Isn't that amazingly simple?
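For the record, here is the leading-order bookkeeping behind that last step (my unpacking, not in the original post): multiply out the bracket and keep only terms up to $k^2/N,$ discarding everything of order $k/N$ or $k^3/N^2,$ which is legitimate when k is around $\sqrt{N}:$

$ln(P)\;\approx\;-k+\left(N-k+\frac{1}{2}\right)\left[\frac{k}{N}~+~\frac{k^2}{2N^2}~+~...\right]\;=\;-k+k+\frac{k^2}{2N}-\frac{k^2}{N}+\ldots\;\approx\;-\frac{k^2}{2N}$

Rearranging the last expression gives $k^2\approx-2N.ln(P)$ as stated.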
Analysis of the analysis
A rant on the side ...
I'm going to have a bit of a rant here.
In the above calculations there were at least three occasions where I recognised things because I was familiar with them, had used them, had played with them, and they were, in a sense, my "friends."
People often ask "Why did you make that approximation?" or "How did you know that would work?" The short answer is often "I didn't, but it felt right."
People often ask why they need to memorise formulas, or why they need to practice solving equations, when they can simply look stuff up whenever they need it, and on-line computer algebra systems can solve equations faster than they can, and more reliably.
But this is an example of why the ability simply to look stuff up is near useless on its own. Searches are deep and wide, and you need intuition to guide you. You need to recognise what might work, things you've seen before, directions to take that are more likely to be fruitful.
Or profitable.
The day probably will come when computers can do all of that better than we can, but that day isn't here yet. We still need human intuition, built from experience and practice, to guide the computer searches, to know what is more likely to work.
If you already know how to do this sort of calculation then you're probably nodding. If you don't, and you can't see how someone can possibly do this kind of stuff, this comment is for you. Practice and experience.
Play.
Once you play with things, the ability to invent and improvise is unleashed.
So if we're looking at a 50% chance of a coincidence, then P=0.5, and in that case ln(P)=-ln(2). Our formula then tells us that for large N, the number of people needed to have a 50% chance of duplicate birthdays is about $\sqrt{2N.ln(2)}$ or $c\sqrt{N},$ where $c=\sqrt{ln(4)}.$
That's about 1.17741 times $\sqrt{N},$ and that explains our initial observation.
It also explains clearly where the ln(4) comes from. It's actually -2.ln(0.5) and the 2 comes from the Taylor Expansion of log, and the 0.5 is the probability.
Our formula is also pretty accurate, even for only moderate values of N. For N=365, our original birthday problem, that's just a shade under 22.5, whereas the interpolated answer is 22.77. If we pretend that we have 1000 days in a year, the interpolated true answer is 37.5 people, and the formula gives 37.233.
So if we have a pool of N items and we select from them at random, with the possibility of repeats, by the time we have selected 1.2 times $\sqrt{N}$ items we have a 50% chance of a collision. This is relevant when designing hash tables in computing, and using hashes to represent items.
Our formula gives us more, though, because we can substitute any given desired probability of a clash. Suppose you want a 90% chance of no collision. Then
$k\;\approx\;\sqrt{-2N.ln(0.9)},$ which for $N$ equal to a million works out to about 459.
For a million items, we have a 10% chance of no collisions with 2146 selections, we have a 50% chance of no collisions with 1178 selections, and if you want 90% chance of no collisions then you can only select 459 items.
That's quite small, which is why hash spaces have to be so huge to avoid collisions. It used to be quite common to use 40-bit CRC checks, but with only a million objects there's a 36% chance of a hash collision. Even using 64 bit hashes it only takes a billion objects to have a 2.6% chance of a collision, and 2 billion to have a 10% chance of a collision.
In summary, choosing from N items, with N large, and wanting a probability of T of having no collisions, how many items can you choose at random with replacement?
Answer: $k\approx\sqrt{-2{N}.ln(T)}$
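As a quick sanity check of that formula (again my sketch, not the author's; the function names are invented), it reproduces, up to rounding, the hash-sizing numbers quoted above:

import math

def max_picks(N, no_collision_prob):
    # roughly how many items can be drawn (with replacement) from a pool
    # of N before the chance of *no* collision falls to no_collision_prob
    return math.sqrt(-2 * N * math.log(no_collision_prob))

for T in (0.1, 0.5, 0.9):
    print(T, round(max_picks(10**6, T)))    # roughly 2146, 1177 and 459

def collision_prob(N, k):
    # approximate chance of at least one collision among k draws,
    # using P(no clash) ~ exp(-k^2 / 2N)
    return 1 - math.exp(-k * k / (2 * N))

print(collision_prob(2**40, 10**6))         # about 0.36 for 40-bit hashes
print(collision_prob(2**64, 10**9))         # about 0.027 for 64-bit hashes
print(collision_prob(2**64, 2 * 10**9))     # about 0.10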
Credits and acknowledgements
My thanks (in no particular order) to Wendy Grossman, Patrick Chkoreff, David Bedford, @ImC0rmac, @tombutton, @Risk_Carver, @standupmaths, @hornmaths, @snapey1979, and @pozorvlak for comments on early drafts.
November, 2012: Someone has at last found the deliberate error! Congratulations to @jimmykiselak. I'll leave it in place for now for others to ponder over.
March, 2019: And another one! Congratulations to Adam Atkinson.
|
CommonCrawl
|
March 2011, 10(2): 625-638. doi: 10.3934/cpaa.2011.10.625
On the collapsing sandpile problem
S. Dumont 1, and Noureddine Igbida 2,
LAMFA CNRS UMR 6140, Université de Picardie Jules Verne, 33 rue Saint-Leu 80039 Amiens cedex
LAMFA, CNRS UMR 6140, Universite de Picardie Jules Verne, 33 rue Saint Leu, 80039 Amiens Cedex, France
Received May 2010 Revised September 2010 Published December 2010
We are interested in the modeling of collapsing sandpiles. We use the collapsing model introduced by Evans, Feldman and Gariepy in [13] to provide a description of the phenomenon in terms of a composition of projections onto interlocked convex sets around the set of stable sandpiles.
Keywords: dual formulation, p-Laplacian operator, subdifferential operator, numerical approximation of projection.
Mathematics Subject Classification: Primary: 35K55, 65M60; Secondary: 35B40, 65K1.
Citation: S. Dumont, Noureddine Igbida. On the collapsing sandpile problem. Communications on Pure & Applied Analysis, 2011, 10 (2) : 625-638. doi: 10.3934/cpaa.2011.10.625
[1] G. Aronson, L. C. Evans and Y. Wu, Fast/Slow diffusion and growing sandpiles, J. Differential Equations, 131 (1996), 304. doi: 10.1006/jdeq.1996.0166.
[2] P. Bak, C. Tang and K. Weisenfeld, Self-organized criticality, Phys. Rev. A, 38 (1988), 364. doi: 10.1103/PhysRevA.38.364.
[3] J. W. Barrett and L. Prigozhin, Dual formulation in critical state problems, Interfaces and Free Boundaries, 8 (2006), 349. doi: 10.4171/IFB/147.
[4] Ph. Bénilan, M. G. Crandall and A. Pazy, "Evolution Equations Governed by Accretive Operators," preprint book.
[5] Ph. Bénilan, L. C. Evans and R. F. Gariepy, On some singular limits of homogeneous semigroups, J. Evol. Equ., 3 (2003), 203.
[6] J. P. Bouchaud, M. E. Cates, J. Ravi Prakash and S. F. Edwards, A model for the Dynamic of Sandpile Surfaces, J. Phys. I France, 4 (1994), 1383.
[7] G. Bouchitté, G. Buttazzo and P. Seppecher, Energies with respect to a Measure and Applications to Low Dimensional Structures, Calc. Var. Partial Differential Equations, 5 (1997), 37.
[8] H. Brézis, "Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert" (French), North-Holland Mathematics Studies, 1973.
[9] S. Dumont and N. Igbida, On a Dual Formulation for the Growing Sandpile Problem, European Journal Applied Math., 20 (2009), 169. doi: 10.1017/S0956792508007754.
[10] I. Ekeland and R. Témam, "Convex Analysis and Variational Problems," Classics in Applied Mathematics, 1999.
[11] L. C. Evans, Application of nonlinear semigroup theory to certain partial differential equations, Nonlinear evolution equations (Proc. Sympos.), 1977, 163.
[12] L. C. Evans, Partial differential equations and Monge-Kantorovich mass transfer, Current developments in mathematics, 1997, 65.
[13] L. C. Evans, M. Feldman and R. F. Gariepy, Fast/Slow diffusion and collapsing sandpiles, J. Differential Equations, 137 (1997), 166. doi: 10.1006/jdeq.1997.3243.
[14] L. C. Evans and F. Rezakhanlou, A stochastic model for sandpiles and its continuum limit, Comm. Math. Phys., 197 (1998), 325. doi: 10.1007/s002200050453.
[15] N. Igbida, Equivalent formulations for Monge-Kantorovich equation, submitted.
[16] L. Prigozhin, Variational model of sandpile growth, Euro. J. Appl. Math., 7 (1996), 225. doi: 10.1017/S0956792500002321.
[17] J. E. Roberts and J.-M. Thomas, "Mixed and Hybrid Methods" (P. G. Ciarlet and J. L. Lions eds.), 1991.
[18] R. E. Showalter, "Monotone Operators in Banach Space and Nonlinear Partial Differential Equations," Mathematical Surveys and Monographs, 49 (1997).
|
CommonCrawl
|
What is the difference between TeX and LaTeX?
I know LaTeX and I've heard that LaTeX is a set of macros in TeX. But what does it exactly mean?
tex-core latex-misc
David Carlisle
Łukasz Lew
You should really read tug.org/levels.html – Martin Schröder Jan 10 '12 at 18:36
tex.stackexchange.com/questions/79609/…. Are these the examples of programs using TeX library? – Mistha Jan 3 '15 at 15:12
You can use pdftex to generate a .pdf file directly without the intermediate .dvi step (depending on the contents of the .tex file of course) – user31729 Jan 3 '15 at 15:18
TeX is a typesetting system. Syntax can be as simple as Hello, world! \bye. LaTeX is a very common document markup language written in TeX. Its syntax is more verbose, like \documentclass{book} \begin{document} Hello, world! \end{document}. Either can write DVI or PDF directly (I've almost never had to write a DVI, however). – Mike Renfro Jan 3 '15 at 15:24
latex is written in tex so "latex" is "tex with some pre-defined functions (macros)" – David Carlisle Jul 10 '19 at 19:33
TeX is both a program (which does the typesetting, tex-core) and format (a set of macros that the engine uses, plain-tex). Looked at in either way, TeX gives you the basics only. If you read the source for The TeXBook, you'll see that Knuth wrote more macros to be able to typeset the book, and made a format for that.
LaTeX is a generalised set of macros to let you do many things. Most people don't want to have to program TeX, especially to set up things like sections, title pages, bibliographies and so on. LaTeX provides all of that: these are the 'macros' that it is made up of.
Joseph Wright♦
To clarify, the set of macros is called "plain TeX" or "plain", and it's a bit of a misnomer, since one would expect that to refer to nothing but the primitives. – SamB Dec 1 '10 at 5:23
So LaTeX is like a standard library for TeX? Is it quite simple and modular then? Maybe I can even read it and learn about good practice for writing TeX macros? – Thomas Ahle Sep 27 '15 at 17:14
@ThomasAhle LaTeX is a set of macros: saying it's standard is a bit tricky as plain users don't use LaTeX macros! – Joseph Wright♦ Sep 27 '15 at 17:22
@JosephWright What does the La part mean? – smwikipedia Oct 11 '16 at 8:02
@smwikipedia Nothing, or at least Leslie Lamport has never formally specified. Most people assume it's 'LAmport TeX' or similar, but ... – Joseph Wright♦ Oct 11 '16 at 10:07
In short TeX is all about formatting, for document/template designers, while LaTeX is all about content, for document writers.
TeX is a typesetting system. It provides many commands which allow you to specify the format of your document with great detail (e.g. font styles, spacing, kerning, ligatures, etc.), and has specialized algorithms to compute the optimal flow of text in your document (e.g. where to cut lines, pages, etc.). TeX is all about giving you powerful algorithms and commands to specify even the tiniest detail to make your documents look pretty.
LaTeX is a set of macros built on top of TeX. The idea behind LaTeX is to shift the focus from the format to the content of your document. In LaTeX commands are all about giving a structure to the content of your document (e.g. sections, emphasis, tables, indices, etc.). In LaTeX you just say \section{...} instead of: selecting a larger font, a different font style, and inserting appropriate spaces before and after the section heading. As LaTeX is built on top of TeX you also get, of course, a beautiful document as your output; but, more importantly, your source input can also be well structured, easier to read (and write!) for humans.
Juan A. Navarro
"LaTeX is all about content" is correct, but that's only with LaTeX2e, because in LaTeX 2.09 the distinction between formatting and content was not clear cut. In fact, it was only when LaTeX2e was introduced that I stopped using TeX in favour of LaTeX. – José Figueroa-O'Farrill Aug 4 '10 at 23:50
@JoséFigueroa-O'Farrill surely latex 2.09 (declared obsolete in 1994) can be regarded as no longer germane to current discussions? (note, the faq, first published in 1995, iirc, still sometimes talks as if latex 2e is a shiny new object ... something more to remember when proof reading (a task like the old "painting the forth bridge") – wasteofspace Nov 9 '12 at 11:07
Good answer, but there is a bit of ambiguity in the nomenclature. The TeX term for "a set of macros built on top of TeX" is format. Plain TeX is a format, LaTeX is a format, ConTeXt is a format. So maybe it would be better to say TeX is all about formatting. – Matthew Leingang Apr 30 '13 at 12:01
Is every TeX document a valid LaTeX document? Can we say that LaTeX is just the sample as TeX but with a lot of extra macros predefined? – Aaron McDaid Aug 31 '14 at 18:09
@AaronMcDaid It would be nice if this comment would point to a new question on this site. – Maarten Bodewes Dec 19 '15 at 14:48
It's important to distinguish between typesetting "engines", "formats", and "packages".
The engine is the actual program. Nowadays, the most commonly used engines that are distributed with TeXlive and MiKTeX are pdfTeX, XeTeX, and LuaTeX. The "engines" make use of a number of so-called "primitive" instructions to accomplish the job of processing user inputs. Examples of "primitive" instructions provided by the original TeX engine (and also provided by the more recent engines!) are \def, \outer, \expandafter, \noexpand, \futurelet, \relax, \catcode, \vbox, \hbox, \accent and \kern. The primitive instructions are very powerful, but many of them are so low-level that using them directly in a document would be rather tricky, to put it politely.
A format is a collection of macros that make the TeX primitives usable for typesetting purposes by humans. For instance, Plain TeX is a set of macros created by Don Knuth (the creator of the TeX program) to typeset his books, including the TeXbook. (Aside: The TeXbook uses additional macros, besides those set up in the Plain-TeX format, to handle various formatting-related tasks.) LaTeX2e, which has been around for more than 20 years, is probably the most commonly used format these days. Both Plain TeX and LaTeX2e can be "mated" to various engines -- specifically, pdfTeX, XeTeX, and LuaTeX. The current version of ConTeXt is a format that builds on the LuaTeX engine; it will not run under either pdfTeX or XeTeX.
For the most part, the macros defined in the Plain TeX format are also defined in the LaTeX and LaTeX2e formats. However, quite a few Plain-TeX macros -- especially those associated with changing the appearance of fonts in a document, such as \bf, \it, and \tt -- are considered deprecated and should no longer be used in a LaTeX-based document; use LaTeX2e-based macros such as \textbf and \itshape instead. (To be precise, the macros \bf and \it are not defined in the LaTeX2e kernel, but "only" in some LaTeX2e document classes.)
A huge number of packages -- a few thousand, maybe even tens of thousands -- have been written over the years to either accomplish new typesetting-related tasks or to simplify other tasks. Many packages require the LaTeX2e format. A few, though, work equally well with both Plain TeX and LaTeX2e. Some newer packages, such as fontspec, run only under XeLaTeX and LuaLaTeX.
To be sure, decisions about which typesetting tasks should be handled by engines, formats, and packages can be a bit arbitrary and are frequently history-dependent. For instance, in 1994, when LaTeX2e was first circulated broadly, the ability to hyper-link pieces of text within a document and across documents was not considered to be a core typesetting job. I'm sure that the hyperref package -- which came along only long after LaTeX2e was (essentially) frozen -- would be much more streamlined and easier to maintain had various "hooks" and important design decisions related to hyperlinking been built into the LaTeX2e format from the start.
Another example: TeX (the engine) has powerful, and generally very successful, paragraph-building algorithms. However, it is not possible for users (or package writers) to tweak or modify these algorithms directly if they use TeX, pdfTeX, or XeTeX as the underlying engine. In contrast, with LuaTeX important components of the paragraph building algorithms have been "opened up" to programmers. As a result, we're starting to see new packages -- which obviously require the use of LuaTeX as the engine -- that provide additional typesetting capabilities that were simply infeasible so far.
When you execute an instruction such as
pdflatex myfile
at a command line, what's actually run is the pdfTeX program in a way that first loads the LaTeX format and then processes what's in myfile.tex to create a file called myfile.pdf.
Here are three ways to print "Hello World". The first requires (Plain) TeX, the second LaTeX2e, and the third ConTeXt.
Hello World.
\bye

\documentclass{article}
\begin{document}
Hello World.
\end{document}

\starttext
Hello World.
\stoptext
Assume the input file is named hello.tex in all three cases. To generate a pdf file, you'd compile the first file by typing pdftex hello, the second by typing pdflatex hello, and the third by typing context hello.
For more information on the subject of TeX engines and formats, I recommend Section 1 of the document A guide to LuaLaTeX by Manuel Pégourié-Gonnard. It features a handy table that categorizes the potential interactions between four engines, two formats, and two ways of creating dvi and pdf files.
Mico
Thanks for the wonderfull explanation. I just have one question. You gave example for Hello World in both TeX and LaTeX, My question is how will we make TeX realize that it is a book, a article as LaTeX defines using \documentclass{book} – Mistha Jan 4 '15 at 6:51
@Mistha - The following is clearly just "informed" speculation. (I've only been using TeX and LaTeX for 23 years...) The advisability of creating hierarchies of TeX macros to ease the task of generating various types of document must surely have been a main reason for the initial popularity of LaTeX (since the mid-1980s) and the subsequent broad success of LaTeX2e (since 1994). Separating matters of overall layout and structure of a document from matters related to content is certainly fully possible with TeX. However, doing so is much more straightforward with LaTeX2e. – Mico Jan 4 '15 at 7:44
This answer mentions an important point: the latex2e format is not a pure superset of plain tex. When compiling latex you can not assume plain tex macros are defined. – jiggunjer Jul 21 '16 at 16:21
But if latex and plain tex are sets of macros using same primitive instructions underneath — why do they use different executables? – Hi-Angel Oct 10 '17 at 16:01
@Mico oh, you're right. I didn't check it because because it'd lead to a confusion — if binary the same, why to make those links in the first place, instead of calling it always by the same name. Which it did. Still, on my system latex is a symlink to pdftex, but running latex with your latex code works, whilst running pdftex with your latex code causes an error. – Hi-Angel Oct 10 '17 at 20:26
TeX is a typesetting engine which has a macro language available. This macro language is very different from other, more typical, languages. The TeX engine reads text, font metrics and does the typesetting. This means that it decides where the characters from the loaded font will be on the page.
There are several extensions over classical TeX: pdfTeX, XeTeX, LuaTeX. They are able to produce PDF output. The classical TeX is able to produce only DVI output, which is mostly not used today.
TeX (and its extensions) have (for example) the \def command which is the core of macro programming. A macro programmer can declare new control sequences used by the author in the document. A set of such declared control sequences is a macro package. LaTeX is a macro package. For example it provides the control sequence \documentclass, because that was declared by \def\documentclass....
When you process your document, then you run TeX (or one of its extensions) with a macro package preprocessed into binary form, which is called a "format" in TeX terminology. For example, LaTeX is preprocessed from the latex.ltx text file into the latex.fmt binary file (roughly speaking). This is done by the command tex -ini or something similar when a TeX distribution is installed. This is done automatically because there are many complicated historical aspects of this iniTeX processing and there is good reason to hide this from the common user.
When the user runs latex document, then (in reality) the tex -fmt latex.fmt document (or something similar) is processed. It means that TeX is run and it first reads the preprocessed macro package latex.fmt and after that it reads the document prepared by the user.
If vanilla TeX is run (without a preprocessed macro package) then only about 300 primitive commands are available (extensions of TeX provide additional commands). But if TeX with preprocessed latex.fmt is run then about 2000 new control sequences are available. Additional macro packages can be loaded at the beginning of the document and this increases the number of available control sequences.
The common macro packages (LaTeX, ConTeXt) are based on the first macro package (format) created by Knuth, plain.tex. This enlarges vanilla TeX by about 900 control sequences (most of them are mathematical characters aka \alpha, \sum, etc.). The plain TeX is preprocessed into the binary format file tex.fmt (or similar depending on the extension) and it is read by the TeX program automatically if no another format file is specified. This means that tex document runs TeX plus tex.fmt plus document, or pdftex document runs pdfTeX plus pdftex.fmt (which is the result of preprocessing a somewhat extended plain.tex) plus document.
My text above only simplifies the reality, sorry. The reality is much more varied due to historical reasons.
wipet
A very understandable explanation. Should be the accepted answer. – 0x450 Jun 13 '17 at 9:42
Don Knuth provide both the TeX typesetting program and the plain TeX macros (the file plain.tex).
TeX the program does the typesetting. The preloaded macros (and the input documents, of course) control what is typeset.
Typically, the macros and input document place items on a horizontal list and then, when the \par command (for example, a blank line) is issued TeX the program breaks the paragraphs into lines.
If you run TeX with \tracingall you can see what are macros and what are TeX commands. I suggest you try this with
$ tex '\relax \tracingall \input story \end'
rather than a LaTeX document. LaTeX does a lot of macro processing, particularly for font selection.
Jonathan Fine
This was originally the answer to Difference between TeX and LaTeX, so the wording may seem a bit off context to this question, but the message remains.
Surprisingly all of those are kind of true. Here's a very brief summary to try to make things clear for you:
TeX is a program, and is the underlying program in all of the TeX family. However TeX, as a program, does a great deal of different things which lead people to call it other names, but also use the name TeX to those things.
TeX, the program, is primarily a typesetting system, which means that its fundamental job is to take your text and put it into a printable document. However when TeX reads an input file to see what you wrote it uses an embedded programming language which is also referred to as "TeX", leading to all this confusion. The TeX programming language is a Turing-complete macro[1] expansion language, so all those statements you mention are true. And finally, there is the plain TeX format[2], which is a file (plain.tex) written in the TeX programming language, to make some tasks easier while you're using the TeX language to write using the TeX program.
[1] A "macro" is a command defined in the TeX language which "expands" to something else. When you do \def\say#1{Hello #1!} you create a macro, and when you use \say{world} to get Hello world! TeX expands the macro \say.
[2] A "format", in the TeX vocabulary, is a bunch of macros which are pre-loaded when the program starts to speed up the process.
So if you want to say the word "TeX" 5 times in a single sentence you can say that: when you run tex from the command line, TeX (the program) starts with the plain TeX format loaded and then reads a TeX (the language) input file and uses the TeX typesetting system to write a printable document.
If you understood that, LaTeX is easy: it's a format, just like plain TeX. There is a file, latex.ltx, written using the TeX language, which is preloaded when you use the latex (or pdflatex or xelatex or...) executable. It's just a (very large) bunch of macros operating on top of TeX.
Finally, there were along the years additions to TeX. These additions can't be called TeX, so their creators give them a different name (pdfTeX, XeTeX, LuaTeX, etc.) and these are different TeX engines. They all have most of the same common features as TeX (the typesetting system, the programming language, etc.) but have some fundamental changes which make them a rather different program under the hood (pdfTeX produces PDF output, LuaTeX has the Lua scripting language embedded), so they are called TeX engines.
Phelype Oleinik
I'd add that the name actually shows the difference: LaTeX = "Layman's TeX". This also shows that it's enough to master LaTeX to get (most) things done, but TeX to be a real master. As the saying goes: there are TeXnicians and TeXperts...
EDIT: I know this answer is inaccurate (it's one of my first contributions to TeX.SX, I have learned a lot since, thanks mostly to the community behind this site). I kept it deliberately, because this is a quite widely accepted belief, among newbies especially. The comments and the downvote I received put things right, therefore I don't believe this should be edited as it was, namely to LaTeX = "Lamport's TeX" (Leslie Lamport was the initial developer of LaTeX).
I don't know where the comments are gone (I can't see them anymore), but the edit skews the original answer and cancels its message. As I said, I kept this for educational/informative reasons. We learn not only from things that are done right, but also from mistakes. If we are smart enough, from those of our peers, too.
Count Zero
TeX, LaTeX, ConTeXt - different languages / syntax for typesetting.
For each of these there are many engines available that can process the above syntax and generate dvi, ps, pdf, html, svg.... and what not.
To confuse things even more, there are engines called tex and latex, which can be used to process tex and latex syntaxes respectively to produce dvi outputs.
@José Figueroa-O'Farrill:
Treating the system as blackbox: ConTeXt, LaTeX and TeX have significantly different syntax and different compilers hence they are different. And most users do not need to know that one is using the other behind the scene, that's an implementation detail. The beginner user needs to know that (a) they are different (b) they all produce awesome quality documents (c) each package has its own "native" way to do stuff (d) you can seek for all of them here.
edited Aug 5 '10 at 0:04
Dima
I don't understand the downvotes. This is somewhat terse, but there's nothing particularly wrong with this. What happened to "please consider adding a comment if you think this post can be improved"? – ShreevatsaR Jul 28 '10 at 18:15
I'm voting this up. It's a fine answer. – PersonX Aug 4 '10 at 21:55
I didn't vote this down, but I don't agree that TeX and LaTeX are different languages. LaTeX (as of today) is still a set of TeX macros. – José Figueroa-O'Farrill Aug 4 '10 at 22:41
And: there is no engine called latex. It is just the format. (I didn't downvote either, but the answer is misleading). – topskip Aug 5 '10 at 5:37
Please read tug.org/levels.html – Martin Schröder Jan 10 '12 at 18:39
|
CommonCrawl
|
Universality and uncomputability
The universal machine/program - "one program to rule them all"
A fundamental result in computer science and mathematics: the existence of uncomputable functions.
The halting problem: the canonical example of an uncomputable function.
Introduction to the technique of reductions.
Rice's Theorem: A "meta tool" for uncomputability results, and a starting point for much of the research on compilers, programming languages, and software verification.
"A function of a variable quantity is an analytic expression composed in any way whatsoever of the variable quantity and numbers or constant quantities.", Leonhard Euler, 1748.
"The importance of the universal machine is clear. We do not need to have an infinity of different machines doing different jobs. … The engineering problem of producing various machines for various jobs is replaced by the office work of 'programming' the universal machine", Alan Turing, 1948
One of the most significant results we showed for Boolean circuits (or equivalently, straight-line programs) is the notion of universality: there is a single circuit that can evaluate all other circuits. However, this result came with a significant caveat. To evaluate a circuit of \(s\) gates, the universal circuit needed to use a number of gates larger than \(s\). It turns out that uniform models such as Turing machines or NAND-TM programs allow us to "break out of this cycle" and obtain a truly universal Turing machine \(U\) that can evaluate all other machines, including machines that are more complex (e.g., more states) than \(U\) itself. (Similarly, there is a Universal NAND-TM program \(U'\) that can evaluate all NAND-TM programs, including programs that have more lines than \(U'\).)
It is no exaggeration to say that the existence of such a universal program/machine underlies the information technology revolution that began in the latter half of the 20th century (and is still ongoing). Up to that point in history, people had produced various special-purpose calculating devices such as the abacus, the slide rule, and machines that compute various trigonometric series. But as Turing (who was perhaps the one to see most clearly the ramifications of universality) observed, a general purpose computer is much more powerful. Once we build a device that can compute the single universal function, we have the ability, via software, to extend it to do arbitrary computations. For example, if we want to simulate a new Turing machine \(M\), we do not need to build a new physical machine, but rather can represent \(M\) as a string (i.e., using code) and then input \(M\) to the universal machine \(U\).
Beyond the practical applications, the existence of a universal algorithm also has surprising theoretical ramifications, and in particular can be used to show the existence of uncomputable functions, upending the intuitions of mathematicians over the centuries from Euler to Hilbert. In this chapter we will prove the existence of the universal program, and also show its implications for uncomputability; see Figure 8.1.
8.1: In this chapter we will show the existence of a universal Turing machine and then use this to derive first the existence of some uncomputable function. We then use this to derive the uncomputability of Turing's famous "halting problem" (i.e., the \(\ensuremath{\mathit{HALT}}\) function), from which a host of other uncomputability results follow. We also introduce reductions, which allow us to use the uncomputability of a function \(F\) to derive the uncomputability of a new function \(G\).
Universality or a meta-circular evaluator
We start by proving the existence of a universal Turing machine. This is a single Turing machine \(U\) that can evaluate arbitrary Turing machines \(M\) on arbitrary inputs \(x\), including machines \(M\) that can have more states and larger alphabet than \(U\) itself. In particular, \(U\) can even be used to evaluate itself! This notion of self reference will appear time and again in this course, and as we will see, leads to several counter-intuitive phenomena in computing.
There exists a Turing machine \(U\) such that on every string \(M\) which represents a Turing machine, and \(x\in \{0,1\}^*\), \(U(M,x)=M(x)\).
That is, if the machine \(M\) halts on \(x\) and outputs some \(y\in \{0,1\}^*\) then \(U(M,x)=y\), and if \(M\) does not halt on \(x\) (i.e., \(M(x)=\bot\)) then \(U(M,x)=\bot\).
8.2: A Universal Turing Machine is a single Turing Machine \(U\) that can evaluate, given input the (description as a string of) arbitrary Turing machine \(M\) and input \(x\), the output of \(M\) on \(x\). In contrast to the universal circuit depicted in Figure 5.6, the machine \(M\) can be much more complex (e.g., more states or tape alphabet symbols) than \(U\).
There is a "universal" algorithm that can evaluate arbitrary algorithms on arbitrary inputs.
Once you understand what the theorem says, it is not that hard to prove. The desired program \(U\) is an interpreter for Turing machines. That is, \(U\) gets a representation of the machine \(M\) (think of it as source code), and some input \(x\), and needs to simulate the execution of \(M\) on \(x\).
Think of how you would code \(U\) in your favorite programming language. First, you would need to decide on some representation scheme for \(M\). For example, you can use an array or a dictionary to encode \(M\)'s transition function. Then you would use some data structure, such as a list, to store the contents of \(M\)'s tape. Now you can simulate \(M\) step by step, updating the data structure as you go along. The interpreter will continue the simulation until the machine halts.
Once you do that, translating this interpreter from your favorite programming language to a Turing machine can be done just as we have seen in Chapter 7. The end result is what's known as a "meta-circular evaluator": an interpreter for a programming language written in that same language. This is a concept that has a long history in computer science, starting from the original universal Turing machine. See also Figure 8.3.
Proving the existence of a universal Turing Machine
To prove (and even properly state) Theorem 8.1, we need to fix some representation for Turing machines as strings. For example, one potential choice for such a representation is to use the equivalence between Turing machines and NAND-TM programs and hence represent a Turing machine \(M\) using the ASCII encoding of the source code of the corresponding NAND-TM program \(P\). However, we will use a more direct encoding.
Let \(M\) be a Turing machine with \(k\) states and a size \(\ell\) alphabet \(\Sigma = \{ \sigma_0,\ldots,\sigma_{\ell-1} \}\) (we use the convention \(\sigma_0 = 0\),\(\sigma_1 = 1\), \(\sigma_2 = \varnothing\), \(\sigma_3=\triangleright\)). We represent \(M\) as the triple \((k,\ell,T)\) where \(T\) is the table of values for \(\delta_M\):
\[T = \left(\delta_M(0,\sigma_0),\delta_M(0,\sigma_1),\ldots,\delta_M(k-1,\sigma_{\ell-1})\right) \;,\]
where each value \(\delta_M(s,\sigma)\) is a triple \((s',\sigma',d)\) with \(s'\in [k]\), \(\sigma'\in \Sigma\) and \(d\) a number in \(\{0,1,2,3 \}\) encoding one of \(\{ \mathsf{L},\mathsf{R},\mathsf{S},\mathsf{H} \}\). Thus such a machine \(M\) is encoded by a list of \(2 + 3k\cdot\ell\) natural numbers. The string representation of \(M\) is obtained by concatenating a prefix-free representation of each of these integers. If a string \(\alpha \in \{0,1\}^*\) does not represent a list of integers in the form above, then we treat it as representing the trivial Turing machine with one state that immediately halts on every input.
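For concreteness, here is one way such an encoding could be carried out in Python. This is only an illustrative sketch: the particular prefix-free code below (double every bit and terminate with "01") is our own arbitrary choice, and any other prefix-free encoding of the integers would serve equally well.

def prefix_free(n):
    # Encode the natural number n prefix-freely: write n in binary,
    # double every bit, and terminate with the (non-doubled) pair "01".
    return "".join(c*2 for c in bin(n)[2:]) + "01"

def represent(k, l, T):
    # Flatten the description (k, l, T) of a Turing machine into a list of
    # 2 + 3*k*l natural numbers and concatenate their encodings. Each entry
    # of T is a triple (s2, sigma2, d), where the written symbol sigma2 is
    # given by its index in the alphabet and d encodes L/R/S/H as 0/1/2/3.
    nums = [k, l]
    for (s2, sigma2, d) in T:
        nums += [s2, sigma2, d]
    return "".join(prefix_free(n) for n in nums)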
The details of the representation scheme of Turing machines as strings are immaterial for almost all applications. What you need to remember are the following points:
We can represent every Turing machine as a string.
Given the string representation of a Turing machine \(M\) and an input \(x\), we can simulate \(M\)'s execution on the input \(x\). (This is the content of Theorem 8.1.)
An additional minor issue is that for convenience we make the assumption that every string represents some Turing machine. This is very easy to ensure by just mapping strings that would otherwise not represent a Turing machine into some fixed trivial machine. This assumption is not very important, but does make a few results (such as Rice's Theorem: Theorem 8.16) a little less cumbersome to state.
Using this representation, we can formally prove Theorem 8.1.
We will only sketch the proof, giving the major ideas. First, we observe that we can easily write a Python program that, on input a representation \((k,\ell,T)\) of a Turing machine \(M\) and an input \(x\), evaluates \(M\) on \(x\). Here is the code of this program for concreteness, though you can feel free to skip it if you are not familiar with (or interested in) Python:
# constants
def EVAL(δ,x):
    '''Evaluate TM given by transition table δ
    on input x'''
    Tape = ["▷"] + [a for a in x]
    i = 0; s = 0 # i = head pos, s = state
    while True: # simulate the machine one step at a time
        s, Tape[i], d = δ[(s,Tape[i])]
        if d == "H": break
        if d == "L": i = max(i-1,0)
        if d == "R": i += 1
        if i >= len(Tape): Tape.append('Φ') # extend tape with a blank

    j = 1; Y = [] # produce output
    while j < len(Tape) and Tape[j] != 'Φ':
        Y.append(Tape[j])
        j += 1
    return Y
On input a transition table \(\delta\) this program will simulate the corresponding machine \(M\) step by step, at each point maintaining the invariant that the array Tape contains the contents of \(M\)'s tape, and the variable s contains \(M\)'s current state.
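As a small sanity check (this example machine is ours, not part of the text), we can feed EVAL the transition table of a one-state machine that flips every bit of its input and halts when it reaches a blank:

δ = {
    (0, "▷"): (0, "▷", "R"),  # skip the start-of-tape marker
    (0, "0"): (0, "1", "R"),  # flip 0 to 1 and move right
    (0, "1"): (0, "0", "R"),  # flip 1 to 0 and move right
    (0, "Φ"): (0, "Φ", "H"),  # halt on the first blank symbol
}
print(EVAL(δ, "101"))         # prints ['0', '1', '0']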
The above does not prove the theorem as stated, since we need to show a Turing machine that computes \(\ensuremath{\mathit{EVAL}}\) rather than a Python program. With enough effort, we can translate this Python code line by line to a Turing machine. However, to prove the theorem we don't need to do this, but can use our "eat the cake and have it too" paradigm. That is, while we need to evaluate a Turing machine, in writing the code for the interpreter we are allowed to use a richer model such as NAND-RAM (since it is equivalent in power to Turing machines per Theorem 7.1).
Translating the above Python code to NAND-RAM is truly straightforward. The only issue is that NAND-RAM doesn't have the dictionary data structure built in, which we have used above to store the transition function δ. However, we can represent a dictionary \(D\) of the form \(\{ key_0:val_0 , \ldots, key_{m-1}:val_{m-1} \}\) as simply a list of pairs. To compute \(D[k]\) we can scan over all the pairs until we find one of the form \((k,v)\), in which case we return \(v\). Similarly, we scan the list to update the dictionary with a new value, either modifying an existing pair or appending the pair \((key,val)\) at the end.
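The following sketch shows how such a list-of-pairs dictionary might look in Python; in NAND-RAM the same idea would of course be expressed with arrays and index variables rather than Python lists, so this is only meant to convey the scanning logic.

def dict_get(pairs, key):
    # Scan the list of (key, value) pairs until the key is found.
    for (k, v) in pairs:
        if k == key:
            return v
    raise KeyError(key)

def dict_set(pairs, key, val):
    # Overwrite an existing pair if the key is present, otherwise append.
    for idx, (k, _) in enumerate(pairs):
        if k == key:
            pairs[idx] = (key, val)
            return
    pairs.append((key, val))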
The argument in the proof of Theorem 8.1 is a very inefficient way to implement the dictionary data structure in practice, but it suffices for the purpose of proving the theorem. Reading and writing to a dictionary of \(m\) values in this implementation takes \(\Omega(m)\) steps, but it is in fact possible to do this in \(O(\log m)\) steps using a search tree data structure or even \(O(1)\) (for "typical" instances) using a hash table. NAND-RAM and RAM machines correspond to the architecture of modern electronic computers, and so we can implement hash tables and search trees in NAND-RAM just as they are implemented in other programming languages.
Since universal Turing machines play such a central role in both the theory and practice of computing, it is worth pausing to discuss some of their implications.
Implications of universality (discussion)
8.3: a) A particularly elegant example of a "meta-circular evaluator" comes from John McCarthy's 1960 paper, where he defined the Lisp programming language and gave a Lisp function that evaluates an arbitrary Lisp program (see above). Lisp was not initially intended as a practical programming language and this example was merely meant as an illustration that the Lisp universal function is more elegant than the universal Turing machine. It was McCarthy's graduate student Steve Russell who suggested that it can be implemented. As McCarthy later recalled, "I said to him, ho, ho, you're confusing theory with practice, this eval is intended for reading, not for computing. But he went ahead and did it. That is, he compiled the eval in my paper into IBM 704 machine code, fixing a bug, and then advertised this as a Lisp interpreter, which it certainly was". b) A self-replicating C program from the classic essay of Thompson (Thompson, 1984) .
There is more than one Turing machine \(U\) that satisfies the conditions of Theorem 8.1, but the existence of even a single such machine is already extremely fundamental to both the theory and practice of computer science. Theorem 8.1's impact reaches beyond the particular model of Turing machines. Because we can simulate every Turing Machine by a NAND-TM program and vice versa, Theorem 8.1 immediately implies there exists a universal NAND-TM program \(P_U\) such that \(P_U(P,x)=P(x)\) for every NAND-TM program \(P\). We can also "mix and match" models. For example since we can simulate every NAND-RAM program by a Turing machine, and every Turing Machine by the \(\lambda\) calculus, Theorem 8.1 implies that there exists a \(\lambda\) expression \(e\) such that for every NAND-RAM program \(P\) and input \(x\) on which \(P(x)=y\), if we encode \((P,x)\) as a \(\lambda\)-expression \(f\) (using the \(\lambda\)-calculus encoding of strings as lists of \(0\)'s and \(1\)'s) then \((e\; f)\) evaluates to an encoding of \(y\). More generally we can say that for every \(\mathcal{X}\) and \(\mathcal{Y}\) in the set \(\{\) Turing Machines, RAM Machines, NAND-TM, NAND-RAM, \(\lambda\)-calculus, JavaScript, Python, \(\ldots\) \(\}\) of Turing equivalent models, there exists a program/machine in \(\mathcal{X}\) that computes the map \((P,x) \mapsto P(x)\) for every program/machine \(P \in \mathcal{Y}\).
The idea of a "universal program" is of course not limited to theory. For example compilers for programming languages are often used to compile themselves, as well as programs more complicated than the compiler. (An extreme example of this is Fabrice Bellard's Obfuscated Tiny C Compiler which is a C program of 2048 bytes that can compile a large subset of the C programming language, and in particular can compile itself.) This is also related to the fact that it is possible to write a program that can print its own source code, see Figure 8.3. There are universal Turing machines known that require a very small number of states or alphabet symbols, and in particular there is a universal Turing machine (with respect to a particular choice of representing Turing machines as strings) whose tape alphabet is \(\{ \triangleright, \varnothing, 0, 1 \}\) and has fewer than \(25\) states (see Section 8.7).
Is every function computable?
In Theorem 4.12, we saw that NAND-CIRC programs can compute every finite function \(f:\{0,1\}^n \rightarrow \{0,1\}\). Therefore a natural guess is that NAND-TM programs (or equivalently, Turing Machines) could compute every infinite function \(F:\{0,1\}^* \rightarrow \{0,1\}\). However, this turns out to be false. That is, there exists a function \(F:\{0,1\}^* \rightarrow \{0,1\}\) that is uncomputable!
The existence of uncomputable functions is quite surprising. Our intuitive notion of a "function" (and the notion most mathematicians had until the 20th century) is that a function \(f\) defines some implicit or explicit way of computing the output \(f(x)\) from the input \(x\). The notion of an "uncomputable function" thus seems to be a contradiction in terms, but yet the following theorem shows that such creatures do exist:
There exists a function \(F^*:\{0,1\}^* \rightarrow \{0,1\}\) that is not computable by any Turing machine.
The idea behind the proof follows quite closely Cantor's proof that the reals are uncountable (Theorem 2.5), and in fact the theorem can also be obtained fairly directly from that result (see Exercise 6.11). However, it is instructive to see the direct proof. The idea is to construct \(F^*\) in a way that will ensure that every possible machine \(M\) will in fact fail to compute \(F^*\). We do so by defining \(F^*(x)\) to equal \(0\) if \(x\) describes a Turing machine \(M\) which satisfies \(M(x)=1\) and defining \(F^*(x)=1\) otherwise. By construction, if \(M\) is any Turing machine and \(x\) is the string describing it, then \(F^*(x) \neq M(x)\) and therefore \(M\) does not compute \(F^*\).
The proof is illustrated in Figure 8.4. We start by defining the following function \(G:\{0,1\}^* \rightarrow \{0,1\}\):
For every string \(x\in\{0,1\}^*\), if \(x\) satisfies (1) \(x\) is a valid representation of some Turing machine \(M\) (per the representation scheme above) and (2) when the program \(M\) is executed on the input \(x\) it halts and produces an output, then we define \(G(x)\) as the first bit of this output. Otherwise (i.e., if \(x\) is not a valid representation of a Turing machine, or the machine \(M_x\) never halts on \(x\)) we define \(G(x)=0\). We define \(F^*(x) = 1 - G(x)\).
We claim that there is no Turing machine that computes \(F^*\). Indeed, suppose, towards the sake of contradiction, there exists a machine \(M\) that computes \(F^*\), and let \(x\) be the binary string that represents the machine \(M\). On one hand, since by our assumption \(M\) computes \(F^*\), on input \(x\) the machine \(M\) halts and outputs \(F^*(x)\). On the other hand, by the definition of \(F^*\), since \(x\) is the representation of the machine \(M\), \(F^*(x) = 1 - G(x) = 1 - M(x)\), hence yielding a contradiction.
8.4: We construct an uncomputable function by defining for every two strings \(x,y\) the value \(1-M_y(x)\) which equals \(0\) if the machine described by \(y\) outputs \(1\) on \(x\), and \(1\) otherwise. We then define \(F^*(x)\) to be the "diagonal" of this table, namely \(F^*(x)=1-M_x(x)\) for every \(x\). The function \(F^*\) is uncomputable, because if it was computable by some machine whose string description is \(x^*\) then we would get that \(M_{x^*}(x^*)=F(x^*)=1-M_{x^*}(x^*)\).
There are some functions that can not be computed by any algorithm.
The proof of Theorem 8.6 is short but subtle. I suggest that you pause here and go back to read it again and think about it - this is a proof that is worth reading at least twice if not three or four times. It is not often the case that a few lines of mathematical reasoning establish a deeply profound fact - that there are problems we simply cannot solve.
The type of argument used to prove Theorem 8.6 is known as diagonalization since it can be described as defining a function based on the diagonal entries of a table as in Figure 8.4. The proof can be thought of as an infinite version of the counting argument we used for showing a lower bound for NAND-CIRC programs in Theorem 5.3. Namely, we show that it's not possible to compute all functions from \(\{0,1\}^*\) to \(\{0,1\}\) by Turing machines simply because there are more such functions than there are Turing machines.
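The diagonalization argument can be acted out concretely on any finite list of total functions. The following toy sketch (with made-up example functions) shows how the "diagonal" function is guaranteed to differ from every function on the list; the full proof applies the same idea to the infinite list of all Turing machines.

# A finite list standing in for "all machines"; each entry is a total
# function from strings to {0,1}.
machines = [
    lambda x: 0,
    lambda x: 1,
    lambda x: len(x) % 2,
]

def diagonal(i):
    # Differ from the i-th function on the i-th "input" (we simply use
    # the decimal string representation of i as that input).
    return 1 - machines[i](str(i))

# diagonal disagrees with every listed function on the diagonal entry:
for i, m in enumerate(machines):
    assert diagonal(i) != m(str(i))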
As mentioned in Remark 6.5, many texts use the "language" terminology and so will call a set \(L \subseteq \{0,1\}^*\) an undecidable or non-recursive language if the function \(F:\{0,1\}^* \rightarrow \{0,1\}\) such that \(F(x)=1 \leftrightarrow x\in L\) is uncomputable.
The Halting problem
Theorem 8.6 shows that there is some function that cannot be computed. But is this function the equivalent of the "tree that falls in the forest with no one hearing it"? That is, perhaps it is a function that no one actually wants to compute. It turns out that there are natural uncomputable functions:
Let \(\ensuremath{\mathit{HALT}}:\{0,1\}^* \rightarrow \{0,1\}\) be the function such that for every string \(M\in \{0,1\}^*\), \(\ensuremath{\mathit{HALT}}(M,x)=1\) if Turing machine \(M\) halts on the input \(x\) and \(\ensuremath{\mathit{HALT}}(M,x)=0\) otherwise. Then \(\ensuremath{\mathit{HALT}}\) is not computable.
Before turning to prove Theorem 8.7, we note that \(\ensuremath{\mathit{HALT}}\) is a very natural function to want to compute. For example, one can think of \(\ensuremath{\mathit{HALT}}\) as a special case of the task of managing an "App store". That is, given the code of some application, the gatekeeper for the store needs to decide if this code is safe enough to allow in the store or not. At a minimum, it seems that we should verify that the code would not go into an infinite loop.
One way to think about this proof is as follows:
\[ \text{Uncomputability of $F^*$} \;+\; \text{Universality} \;=\; \text{Uncomputability of $\ensuremath{\mathit{HALT}}$} \]
That is, we will use the universal Turing machine that computes \(\ensuremath{\mathit{EVAL}}\) to derive the uncomputability of \(\ensuremath{\mathit{HALT}}\) from the uncomputability of \(F^*\) shown in Theorem 8.6. Specifically, the proof will be by contradiction. That is, we will assume towards a contradiction that \(\ensuremath{\mathit{HALT}}\) is computable, and use that assumption, together with the universal Turing machine of Theorem 8.1, to derive that \(F^*\) is computable, which will contradict Theorem 8.6.
If a function \(F\) is uncomputable we can show that another function \(H\) is uncomputable by giving a way to reduce the task of computing \(F\) to computing \(H\).
The proof will use the previously established result Theorem 8.6. Recall that Theorem 8.6 shows that the following function \(F^*: \{0,1\}^* \rightarrow \{0,1\}\) is uncomputable:
\[ F^*(x) = \begin{cases}1 & x(x)=0 \\ 0 & \text{otherwise} \end{cases} \]
where \(x(x)\) denotes the output of the Turing machine described by the string \(x\) on the input \(x\) (with the usual convention that \(x(x)=\bot\) if this computation does not halt).
We will show that the uncomputability of \(F^*\) implies the uncomputability of \(\ensuremath{\mathit{HALT}}\). Specifically, we will assume, towards a contradiction, that there exists a Turing machine \(M\) that can compute the \(\ensuremath{\mathit{HALT}}\) function, and use that to obtain a Turing machine \(M'\) that computes the function \(F^*\). (This is known as a proof by reduction, since we reduce the task of computing \(F^*\) to the task of computing \(\ensuremath{\mathit{HALT}}\). By the contrapositive, this means the uncomputability of \(F^*\) implies the uncomputability of \(\ensuremath{\mathit{HALT}}\).)
Indeed, suppose that \(M\) is a Turing machine that computes \(\ensuremath{\mathit{HALT}}\). Algorithm 8.8 describes a Turing Machine \(M'\) that computes \(F^*\). (We use "high level" description of Turing machines, appealing to the "have your cake and eat it too" paradigm, see Big Idea 9.)
Algorithm 8.8: \(F^*\) to \(\ensuremath{\mathit{HALT}}\) reduction
Input: \(x\in \{0,1\}^*\)
Output: \(F^*(x)\)
# Assume access to a Turing machine \(M_{HALT}\) that computes \(\ensuremath{\mathit{HALT}}\).
Let \(z \leftarrow M_{HALT}(x,x)\). # Assume \(z=\ensuremath{\mathit{HALT}}(x,x)\).
if \(z=0\) then return \(0\) endif # \(x\) does not halt on \(x\), so \(x(x)\neq 0\)
Let \(y \leftarrow U(x,x)\) # \(U\) universal TM, i.e., \(y=x(x)\); safe to run since \(x\) halts on \(x\)
if \(y=0\) then return \(1\) endif
return \(0\)
We claim that Algorithm 8.8 computes the function \(F^*\). Indeed, suppose that \(x(x)=0\) (and hence \(F^*(x)=1\)). In this case, \(\ensuremath{\mathit{HALT}}(x,x)=1\) and hence, under our assumption that \(M(x,x)=\ensuremath{\mathit{HALT}}(x,x)\), the value \(z\) will equal \(1\), and hence Algorithm 8.8 will set \(y=x(x)=0\), and output the correct value \(1\).
Suppose otherwise that \(x(x) \neq 0\) (and hence \(F^*(x)=0\)). In this case there are two possibilities:
Case 1: The machine described by \(x\) does not halt on the input \(x\). In this case, \(\ensuremath{\mathit{HALT}}(x,x)=0\). Since we assume that \(M\) computes \(\ensuremath{\mathit{HALT}}\) it means that on input \(x,x\), the machine \(M\) must halt and output the value \(0\). This means that Algorithm 8.8 will set \(z=0\) and output \(0\).
Case 2: The machine described by \(x\) halts on the input \(x\) and outputs some \(y' \neq 0\). In this case, since \(\ensuremath{\mathit{HALT}}(x,x)=1\), under our assumptions, Algorithm 8.8 will set \(y=y' \neq 0\) and so output \(0\).
We see that in all cases, \(M'(x)=F^*(x)\), which contradicts the fact that \(F^*\) is uncomputable. Hence we reach a contradiction to our original assumption that \(M\) computes \(\ensuremath{\mathit{HALT}}\).
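The reduction can also be phrased as a few lines of (necessarily hypothetical) Python. In the sketch below, HALT stands for the assumed halting tester and U for a universal evaluator as in Theorem 8.1; neither can actually exist as a total computable procedure, which is exactly the point of the argument.

def F_star_via_HALT(HALT, U, x):
    # If HALT(x,x) and U(x,x) behaved as advertised, this would compute F*.
    if HALT(x, x) == 0:
        return 0              # x does not halt on itself, hence x(x) != 0
    y = U(x, x)               # safe to evaluate: we know x halts on x
    return 1 if y == 0 else 0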
Once again, this is a proof that's worth reading more than once. The uncomputability of the halting problem is one of the fundamental theorems of computer science, and is the starting point for much of the investigations we will see later. An excellent way to get a better understanding of Theorem 8.7 is to go over Section 8.3.2, which presents an alternative proof of the same result.
Is the Halting problem really hard? (discussion)
Many people's first instinct when they see the proof of Theorem 8.7 is to not believe it. That is, most people do believe the mathematical statement, but intuitively it doesn't seem that the Halting problem is really that hard. After all, being uncomputable only means that \(\ensuremath{\mathit{HALT}}\) cannot be computed by a Turing machine.
But programmers seem to solve \(\ensuremath{\mathit{HALT}}\) all the time by informally or formally arguing that their programs halt. It's true that their programs are written in C or Python, as opposed to Turing machines, but that makes no difference: we can easily translate back and forth between this model and any other programming language.
While every programmer encounters at some point an infinite loop, is there really no way to solve the halting problem? Some people argue that they personally can, if they think hard enough, determine whether any concrete program that they are given will halt or not. Some have even argued that humans in general have the ability to do that, and hence humans have inherently superior intelligence to computers or anything else modeled by Turing machines.1
The best answer we have so far is that there truly is no way to solve \(\ensuremath{\mathit{HALT}}\), whether using Macs, PCs, quantum computers, humans, or any other combination of electronic, mechanical, and biological devices. Indeed this assertion is the content of the Church-Turing Thesis. This of course does not mean that for every possible program \(P\), it is hard to decide if \(P\) enters an infinite loop. Some programs don't even have loops at all (and hence trivially halt), and there are many other far less trivial examples of programs that we can certify to never enter an infinite loop (or programs that we know for sure that will enter such a loop). However, there is no general procedure that would determine for an arbitrary program \(P\) whether it halts or not. Moreover, there are some very simple programs for which no one knows whether they halt or not. For example, the following Python program will halt if and only if Goldbach's conjecture is false:
def isprime(p):
    return all(p % i for i in range(2,p-1))

def Goldbach(n):
    return any( (isprime(p) and isprime(n-p))
                for p in range(2,n-1))

n = 4
while True: # loop over the even numbers; stop only at a counterexample
    if not Goldbach(n): break
    n += 2
Given that Goldbach's Conjecture has been open since 1742, it is unclear that humans have any magical ability to say whether this (or other similar programs) will halt or not.
8.5: SMBC's take on solving the Halting problem.
A direct proof of the uncomputability of \(\ensuremath{\mathit{HALT}}\) (optional)
It turns out that we can combine the ideas of the proofs of Theorem 8.6 and Theorem 8.7 to obtain a short proof of the latter theorem, that does not appeal to the uncomputability of \(F^*\). This short proof appeared in print in a 1965 letter to the editor of The Computer Journal, written by Christopher Strachey:
To the Editor, The Computer Journal.
An Impossible Program
A well-known piece of folk-lore among programmers holds that it is impossible to write a program which can examine any other program and tell, in every case, if it will terminate or get into a closed loop when it is run. I have never actually seen a proof of this in print, and though Alan Turing once gave me a verbal proof (in a railway carriage on the way to a Conference at the NPL in 1953), I unfortunately and promptly forgot the details. This left me with an uneasy feeling that the proof must be long or complicated, but in fact it is so short and simple that it may be of interest to casual readers. The version below uses CPL, but not in any essential way.
Suppose T[R] is a Boolean function taking a routine (or program) R with no formal or free variables as its arguments and that for all R, T[R] = True if R terminates if run and that T[R] = False if R does not terminate.
Consider the routine P defined as follows
rec routine P
§L: if T[P] go to L
Return §
If T[P] = True the routine P will loop, and it will only terminate if T[P] = False. In each case T[P] has exactly the wrong value, and this contradiction shows that the function T cannot exist.
C. Strachey
Churchill College, Cambridge
Try to stop and extract the argument for proving Theorem 8.7 from the letter above.
Since CPL is not as common today, let us reproduce this proof. The idea is the following: suppose for the sake of contradiction that there exists a program T such that T(f,x) equals True iff f halts on input x. (Strachey's letter considers the no-input variant of \(\ensuremath{\mathit{HALT}}\), but as we'll see, this is an immaterial distinction.) Then we can construct a program P and an input x such that T(P,x) gives the wrong answer. The idea is that on input x, the program P will do the following: run T(x,x), and if the answer is True then go into an infinite loop, and otherwise halt. Now you can see that T(P,P) will give the wrong answer: if P halts when it gets its own code as input, then T(P,P) is supposed to be True, but then P(P) will go into an infinite loop. And if P does not halt, then T(P,P) is supposed to be False but then P(P) will halt. We can also code this up in Python:
def CantSolveMe(T):
    """
    Gets function T that claims to solve HALT.
    Returns a pair (P,x) of code and input on which
    T(P,x) ≠ HALT(x)
    """
    def fool(x):
        if T(x,x):
            while True: pass
        return "I halted"

    return (fool,fool)
For example, consider the following naive Python program T that guesses that a given function does not halt if its source code contains while or for:
def T(f,x):
    """Crude halting tester - decides it doesn't halt if it contains a loop."""
    import inspect
    source = inspect.getsource(f)
    if "while" in source: return False
    if "for" in source: return False
    return True
If we now set (f,x) = CantSolveMe(T), then T(f,x)=False but f(x) does in fact halt. This is of course not specific to this particular T: for every program T, if we run (f,x) = CantSolveMe(T) then we'll get an input on which T gives the wrong answer to \(\ensuremath{\mathit{HALT}}\).
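Concretely, if the definitions of T and CantSolveMe above are saved in a script (inspect.getsource needs the source file to be available), the following two lines exhibit the failure:

f, x = CantSolveMe(T)
print(T(f, x))   # False: T claims that f does not halt on input x
print(f(x))      # prints "I halted": f does in fact halt on x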
The Halting problem turns out to be a linchpin of uncomputability, in the sense that Theorem 8.7 has been used to show the uncomputability of a great many interesting functions. We will see several examples of such results in this chapter and the exercises, but there are many more such results (see Figure 8.6).
8.6: Some uncomputability results. An arrow from problem X to problem Y means that we use the uncomputability of X to prove the uncomputability of Y by reducing computing X to computing Y. All of these results except for the MRDP Theorem appear in either the text or exercises. The Halting Problem \(\ensuremath{\mathit{HALT}}\) serves as our starting point for all these uncomputability results as well as many others.
The idea behind such uncomputability results is conceptually simple but can at first be quite confusing. If we know that \(\ensuremath{\mathit{HALT}}\) is uncomputable, and we want to show that some other function \(\ensuremath{\mathit{BLAH}}\) is uncomputable, then we can do so via a contrapositive argument (i.e., proof by contradiction). That is, we show that if there exists a Turing machine that computes \(\ensuremath{\mathit{BLAH}}\) then there exists a Turing machine that computes \(\ensuremath{\mathit{HALT}}\). (Indeed, this is exactly how we showed that \(\ensuremath{\mathit{HALT}}\) itself is uncomputable, by reducing this fact to the uncomputability of the function \(F^*\) from Theorem 8.6.)
For example, to prove that \(\ensuremath{\mathit{BLAH}}\) is uncomputable, we could show that there is a computable function \(R:\{0,1\}^* \rightarrow \{0,1\}^*\) such that for every pair \(M\) and \(x\), \(\ensuremath{\mathit{HALT}}(M,x)=\ensuremath{\mathit{BLAH}}(R(M,x))\). The existence of such a function \(R\) implies that if \(\ensuremath{\mathit{BLAH}}\) was computable then \(\ensuremath{\mathit{HALT}}\) would be computable as well, hence leading to a contradiction! The confusing part about reductions is that we are assuming something we believe is false (that \(\ensuremath{\mathit{BLAH}}\) has an algorithm) to derive something that we know is false (that \(\ensuremath{\mathit{HALT}}\) has an algorithm). Michael Sipser describes such results as having the form "If pigs could whistle then horses could fly".
A reduction-based proof has two components. For starters, since we need \(R\) to be computable, we should describe the algorithm to compute it. The algorithm to compute \(R\) is known as a reduction since the transformation \(R\) modifies an input to \(\ensuremath{\mathit{HALT}}\) to an input to \(\ensuremath{\mathit{BLAH}}\), and hence reduces the task of computing \(\ensuremath{\mathit{HALT}}\) to the task of computing \(\ensuremath{\mathit{BLAH}}\). The second component of a reduction-based proof is the analysis of the algorithm \(R\): namely a proof that \(R\) does indeed satisfy the desired properties.
Reduction-based proofs are just like other proofs by contradiction, but the fact that they involve hypothetical algorithms that don't really exist tends to make reductions quite confusing. The one silver lining is that at the end of the day the notion of reductions is mathematically quite simple, and so it's not that bad even if you have to go back to first principles every time you need to remember what is the direction that a reduction should go in.
A reduction is an algorithm, which means that, as discussed in Remark 0.3, a reduction has three components:
Specification (what): In the case of a reduction from \(\ensuremath{\mathit{HALT}}\) to \(\ensuremath{\mathit{BLAH}}\), the specification is that function \(R:\{0,1\}^* \rightarrow \{0,1\}^*\) should satisfy that \(\ensuremath{\mathit{HALT}}(M,x)=\ensuremath{\mathit{BLAH}}(R(M,x))\) for every Turing machine \(M\) and input \(x\). In general, to reduce a function \(F\) to \(G\), the reduction should satisfy \(F(w)=G(R(w))\) for every input \(w\) to \(F\).
Implementation (how): The algorithm's description: the precise instructions how to transform an input \(w\) to the output \(R(w)\).
Analysis (why): A proof that the algorithm meets the specification. In particular, in a reduction from \(F\) to \(G\) this is a proof that for every input \(w\), the output \(y\) of the algorithm satisfies that \(F(w)=G(y)\).
Example: Halting on the zero problem
Here is a concrete example for a proof by reduction. We define the function \(\ensuremath{\mathit{HALTONZERO}}:\{0,1\}^* \rightarrow \{0,1\}\) as follows. Given any string \(M\), \(\ensuremath{\mathit{HALTONZERO}}(M)=1\) if and only if \(M\) describes a Turing machine that halts when it is given the string \(0\) as input. A priori \(\ensuremath{\mathit{HALTONZERO}}\) seems like a potentially easier function to compute than the full-fledged \(\ensuremath{\mathit{HALT}}\) function, and so we could perhaps hope that it is not uncomputable. Alas, the following theorem shows that this is not the case:
\(\ensuremath{\mathit{HALTONZERO}}\) is uncomputable.
The proof of Theorem 8.10 is below, but before reading it you might want to pause for a couple of minutes and think how you would prove it yourself. In particular, try to think of what a reduction from \(\ensuremath{\mathit{HALT}}\) to \(\ensuremath{\mathit{HALTONZERO}}\) would look like. Doing so is an excellent way to get some initial comfort with the notion of proofs by reduction, which is a technique we will be using time and again in this book.
8.7: To prove Theorem 8.10, we show that \(\ensuremath{\mathit{HALTONZERO}}\) is uncomputable by giving a reduction from the task of computing \(\ensuremath{\mathit{HALT}}\) to the task of computing \(\ensuremath{\mathit{HALTONZERO}}\). This shows that if there was a hypothetical algorithm \(A\) computing \(\ensuremath{\mathit{HALTONZERO}}\), then there would be an algorithm \(B\) computing \(\ensuremath{\mathit{HALT}}\), contradicting Theorem 8.7. Since neither \(A\) nor \(B\) actually exists, this is an example of an implication of the form "if pigs could whistle then horses could fly".
The proof is by reduction from \(\ensuremath{\mathit{HALT}}\), see Figure 8.7. We will assume, towards the sake of contradiction, that \(\ensuremath{\mathit{HALTONZERO}}\) is computable by some algorithm \(A\), and use this hypothetical algorithm \(A\) to construct an algorithm \(B\) to compute \(\ensuremath{\mathit{HALT}}\), hence obtaining a contradiction to Theorem 8.7. (As discussed in Big Idea 9, following our "have your cake and eat it too" paradigm, we just use the generic name "algorithm" rather than worrying whether we model them as Turing machines, NAND-TM programs, NAND-RAM, etc.; this makes no difference since all these models are equivalent to one another.)
Since this is our first proof by reduction from the Halting problem, we will spell it out in more detail than usual. Such a proof by reduction consists of two steps:
Description of the reduction: We will describe the operation of our algorithm \(B\), and how it makes "function calls" to the hypothetical algorithm \(A\).
Analysis of the reduction: We will then prove that under the hypothesis that Algorithm \(A\) computes \(\ensuremath{\mathit{HALTONZERO}}\), Algorithm \(B\) will compute \(\ensuremath{\mathit{HALT}}\).
Algorithm 8.11: \(\ensuremath{\mathit{HALT}}\) to \(\ensuremath{\mathit{HALTONZERO}}\) reduction
Input: Turing machine \(M\) and string \(x\).
Output: Turing machine \(M'\) such that \(M\) halts on \(x\) iff \(M'\) halts on zero.
procedure \(N_{M,x}(w)\) # Description of the T.M. \(N_{M,x}\)
  return \(\ensuremath{\mathit{EVAL}}(M,x)\) # Ignore the input \(w\); evaluate \(M\) on \(x\).
endproc
return \(N_{M,x}\) # We do not execute \(N_{M,x}\); we only output its description.
Our Algorithm \(B\) works as follows: on input \(M,x\), it runs Algorithm 8.11 to obtain a Turing Machine \(M'\), and then returns \(A(M')\). The machine \(M'\) ignores its input \(z\) and simply runs \(M\) on \(x\).
In pseudocode, the program \(N_{M,x}\) will look something like the following:
def N(z):
M = r'.......'
# a string constant containing desc. of M
x = r'.......'
# a string constant containing x
return eval(M,x)
# note that we ignore the input z
That is, if we think of \(N_{M,x}\) as a program, then it is a program that contains \(M\) and \(x\) as "hardwired constants", and given any input \(z\), it simply ignores the input and always returns the result of evaluating \(M\) on \(x\). The algorithm \(B\) does not actually execute the machine \(N_{M,x}\). \(B\) merely writes down the description of \(N_{M,x}\) as a string (just as we did above) and feeds this string as input to \(A\).
The above completes the description of the reduction. The analysis is obtained by proving the following claim:
Claim: For every strings \(M,x,z\), the machine \(N_{M,x}\) constructed by Algorithm \(B\) in Step 1 satisfies that \(N_{M,x}\) halts on \(z\) if and only if the program described by \(M\) halts on the input \(x\).
Proof of Claim: Since \(N_{M,x}\) ignores its input and evaluates \(M\) on \(x\) using the universal Turing machine, it will halt on \(z\) if and only if \(M\) halts on \(x\).
In particular if we instantiate this claim with the input \(z=0\) to \(N_{M,x}\), we see that \(\ensuremath{\mathit{HALTONZERO}}(N_{M,x})=\ensuremath{\mathit{HALT}}(M,x)\). Thus if the hypothetical algorithm \(A\) satisfies \(A(M)=\ensuremath{\mathit{HALTONZERO}}(M)\) for every \(M\) then the algorithm \(B\) we construct satisfies \(B(M,x)=\ensuremath{\mathit{HALT}}(M,x)\) for every \(M,x\), contradicting the uncomputability of \(\ensuremath{\mathit{HALT}}\).
In the proof of Theorem 8.10 we used the technique of "hardwiring" an input \(x\) to a program/machine \(P\). That is, modifying a program \(P\) so that it uses "hardwired constants" for some or all of its input. This technique is quite common in reductions and elsewhere, and we will often use it again in this course.
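In Python, this hardwiring trick is most naturally expressed with a closure. The sketch below is only illustrative: EVAL stands for a universal evaluator as before, and the returned function plays the role of (the code of) \(N_{M,x}\).

def hardwire(EVAL, M, x):
    # Return a program with M and x "baked in" as constants.
    def N(z):
        return EVAL(M, x)   # the input z is ignored
    return N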
Rice's Theorem and the impossibility of general software verification
The uncomputability of the Halting problem turns out to be a special case of a much more general phenomenon. Namely, that we cannot certify semantic properties of general purpose programs. "Semantic properties" mean properties of the function that the program computes, as opposed to properties that depend on the particular syntax used by the program.
An example for a semantic property of a program \(P\) is the property that whenever \(P\) is given an input string with an even number of \(1\)'s, it outputs \(0\). Another example is the property that \(P\) will always halt whenever the input ends with a \(1\). In contrast, the property that a C program contains a comment before every function declaration is not a semantic property, since it depends on the actual source code as opposed to the input/output relation.
Checking semantic properties of programs is of great interest, as it corresponds to checking whether a program conforms to a specification. Alas it turns out that such properties are in general uncomputable. We have already seen some examples of uncomputable semantic functions, namely \(\ensuremath{\mathit{HALT}}\) and \(\ensuremath{\mathit{HALTONZERO}}\), but these are just the "tip of the iceberg". We start by observing one more such example:
Let \(\ensuremath{\mathit{ZEROFUNC}}:\{0,1\}^* \rightarrow \{0,1\}\) be the function such that for every \(M\in \{0,1\}^*\), \(\ensuremath{\mathit{ZEROFUNC}}(M)=1\) if and only if \(M\) represents a Turing machine such that \(M\) outputs \(0\) on every input \(x\in \{0,1\}^*\). Then \(\ensuremath{\mathit{ZEROFUNC}}\) is uncomputable.
Despite the similarity in their names, \(\ensuremath{\mathit{ZEROFUNC}}\) and \(\ensuremath{\mathit{HALTONZERO}}\) are two different functions. For example, if \(M\) is a Turing machine that on input \(x \in \{0,1\}^*\), halts and outputs the OR of all of \(x\)'s coordinates, then \(\ensuremath{\mathit{HALTONZERO}}(M)=1\) (since \(M\) does halt on the input \(0\)) but \(\ensuremath{\mathit{ZEROFUNC}}(M)=0\) (since \(M\) does not compute the constant zero function).
The proof is by reduction from \(\ensuremath{\mathit{HALTONZERO}}\). Suppose, towards the sake of contradiction, that there was an algorithm \(A\) such that \(A(M)=\ensuremath{\mathit{ZEROFUNC}}(M)\) for every \(M \in \{0,1\}^*\). Then we will construct an algorithm \(B\) that solves \(\ensuremath{\mathit{HALTONZERO}}\), contradicting Theorem 8.10.
Given a Turing machine \(N\) (which is the input to \(\ensuremath{\mathit{HALTONZERO}}\)), our Algorithm \(B\) does the following:
Construct a Turing Machine \(M\) which on input \(x\in\{0,1\}^*\), first runs \(N(0)\) and then outputs \(0\).
Return \(A(M)\).
Now if \(N\) halts on the input \(0\) then the Turing machine \(M\) computes the constant zero function, and hence under our assumption that \(A\) computes \(\ensuremath{\mathit{ZEROFUNC}}\), \(A(M)=1\). If \(N\) does not halt on the input \(0\), then the Turing machine \(M\) will not halt on any input, and so in particular will not compute the constant zero function. Hence under our assumption that \(A\) computes \(\ensuremath{\mathit{ZEROFUNC}}\), \(A(M)=0\). We see that in both cases, \(\ensuremath{\mathit{ZEROFUNC}}(M)=\ensuremath{\mathit{HALTONZERO}}(N)\) and hence the value that Algorithm \(B\) returns in step 2 is equal to \(\ensuremath{\mathit{HALTONZERO}}(N)\) which is what we needed to prove.
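In the same hypothetical style as before, Algorithm \(B\) can be written as a short Python sketch, where A stands for the assumed \(\ensuremath{\mathit{ZEROFUNC}}\)-solver and EVAL for a universal evaluator; the point is only to make the construction of \(M\) explicit.

def haltonzero_via_zerofunc(A, EVAL, N):
    def M(x):
        EVAL(N, "0")   # first run N on the input 0 (this may never return)
        return 0       # if it did return, output 0 regardless of x
    # M computes the constant-zero function iff N halts on 0,
    # so feeding (the code of) M to A answers HALTONZERO(N).
    return A(M)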
Another result along similar lines is the following:
The following function is uncomputable
\[ \ensuremath{\mathit{COMPUTES}}\text{-}\ensuremath{\mathit{PARITY}}(P) = \begin{cases} 1 & P \text{ computes the parity function } \\ 0 & \text{otherwise} \end{cases} \]
We leave the proof of Theorem 8.14 as an exercise (Exercise 8.6). I strongly encourage you to stop here and try to solve this exercise.
Rice's Theorem
Theorem 8.14 can be generalized far beyond the parity function. In fact, this generalization rules out verifying any type of semantic specification on programs. We define a semantic specification on programs to be some property that does not depend on the code of the program but just on the function that the program computes.
For example, consider the following two C programs
int First(int k) {
    return 2*k;
}

int Second(int n) {
    int i = 0;
    int j = 0;
    while (j<n) {
        i = i + 2;
        j = j + 1;
    }
    return i;
}
First and Second are two distinct C programs, but they compute the same function. A semantic property would be either true for both programs or false for both programs, since it depends on the function the programs compute and not on their code. An example of a semantic property that both First and Second satisfy is the following: "The program \(P\) computes a function \(f\) mapping integers to integers satisfying that \(f(n) \geq n\) for every input \(n\)".
A property is not semantic if it depends on the source code rather than the input/output behavior. For example, properties such as "the program contains the variable k" or "the program uses the while operation" are not semantic. Such properties can be true for one of the programs and false for others. Formally, we define semantic properties as follows:
A pair of Turing machines \(M\) and \(M'\) are functionally equivalent if for every \(x\in \{0,1\}^*\), \(M(x)=M'(x)\). (In particular, \(M(x)=\bot\) iff \(M'(x)=\bot\) for all \(x\).)
A function \(F:\{0,1\}^* \rightarrow \{0,1\}\) is semantic if for every pair of strings \(M,M'\) that represent functionally equivalent Turing machines, \(F(M)=F(M')\). (Recall that we assume that every string represents some Turing machine, see Remark 8.3)
There are two trivial examples of semantic functions: the constant one function and the constant zero function. For example, if \(Z\) is the constant zero function (i.e., \(Z(M)=0\) for every \(M\)) then clearly \(Z(M)=Z(M')\) for every pair of functionally equivalent Turing machines \(M\) and \(M'\). Here is a non-trivial example:
Prove that the function \(\ensuremath{\mathit{ZEROFUNC}}\) is semantic.
Recall that \(\ensuremath{\mathit{ZEROFUNC}}(M)=1\) if and only if \(M(x)=0\) for every \(x\in \{0,1\}^*\). If \(M\) and \(M'\) are functionally equivalent, then for every \(x\), \(M(x)=M'(x)\). Hence \(\ensuremath{\mathit{ZEROFUNC}}(M)=1\) if and only if \(\ensuremath{\mathit{ZEROFUNC}}(M')=1\).
Often the properties of programs that we are most interested in computing are the semantic ones, since we want to understand the programs' functionality. Unfortunately, Rice's Theorem tells us that these properties are all uncomputable:
Let \(F:\{0,1\}^* \rightarrow \{0,1\}\). If \(F\) is semantic and non-trivial then it is uncomputable.
The idea behind the proof is to show that every semantic non-trivial function \(F\) is at least as hard to compute as \(\ensuremath{\mathit{HALTONZERO}}\). This will conclude the proof since by Theorem 8.10, \(\ensuremath{\mathit{HALTONZERO}}\) is uncomputable. If a function \(F\) is non-trivial then there are two machines \(M_0\) and \(M_1\) such that \(F(M_0)=0\) and \(F(M_1)=1\). So, the goal would be to take a machine \(N\) and find a way to map it into a machine \(M=R(N)\), such that (i) if \(N\) halts on zero then \(M\) is functionally equivalent to \(M_1\) and (ii) if \(N\) does not halt on zero then \(M\) is functionally equivalent to \(M_0\).
Because \(F\) is semantic, if we achieved this, then we would be guaranteed that \(\ensuremath{\mathit{HALTONZERO}}(N) = F(R(N))\), and hence would show that if \(F\) was computable, then \(\ensuremath{\mathit{HALTONZERO}}\) would be computable as well, contradicting Theorem 8.10.
We will not give the proof in full formality, but rather illustrate the proof idea by restricting our attention to a particular semantic function \(F\). However, the same techniques generalize to all possible semantic functions. Define \(\ensuremath{\mathit{MONOTONE}}:\{0,1\}^* \rightarrow \{0,1\}\) as follows: \(\ensuremath{\mathit{MONOTONE}}(M)=1\) if there does not exist \(n\in \mathbb{N}\) and two inputs \(x,x' \in \{0,1\}^n\) such that for every \(i\in [n]\) \(x_i \leq x'_i\), but \(M(x)\) outputs \(1\) and \(M(x')=0\). That is, \(\ensuremath{\mathit{MONOTONE}}(M)=1\) if it's not possible to find an input \(x\) such that flipping some bits of \(x\) from \(0\) to \(1\) will change \(M\)'s output in the other direction from \(1\) to \(0\). We will prove that \(\ensuremath{\mathit{MONOTONE}}\) is uncomputable, but the proof will easily generalize to any semantic function.
We start by noting that \(\ensuremath{\mathit{MONOTONE}}\) is neither the constant zero nor the constant one function:
The machine \(\ensuremath{\mathit{INF}}\) that simply goes into an infinite loop on every input satisfies \(\ensuremath{\mathit{MONOTONE}}(\ensuremath{\mathit{INF}})=1\), since \(\ensuremath{\mathit{INF}}\) is not defined anywhere and so in particular there are no two inputs \(x,x'\) where \(x_i \leq x'_i\) for every \(i\) but \(\ensuremath{\mathit{INF}}(x)=1\) and \(\ensuremath{\mathit{INF}}(x')=0\).
The machine \(\ensuremath{\mathit{PAR}}\) that computes the XOR or parity of its input is not monotone (e.g., \(\ensuremath{\mathit{PAR}}(1,1,0,0,\ldots,0)=0\) but \(\ensuremath{\mathit{PAR}}(1,0,0,\ldots,0)=1\)) and hence \(\ensuremath{\mathit{MONOTONE}}(\ensuremath{\mathit{PAR}})=0\).
(Note that \(\ensuremath{\mathit{INF}}\) and \(\ensuremath{\mathit{PAR}}\) are machines and not functions.)
We will now give a reduction from \(\ensuremath{\mathit{HALTONZERO}}\) to \(\ensuremath{\mathit{MONOTONE}}\). That is, we assume towards a contradiction that there exists an algorithm \(A\) that computes \(\ensuremath{\mathit{MONOTONE}}\) and we will build an algorithm \(B\) that computes \(\ensuremath{\mathit{HALTONZERO}}\). Our algorithm \(B\) will work as follows:
Algorithm \(B\):
Input: String \(N\) describing a Turing machine. (Goal: Compute \(\ensuremath{\mathit{HALTONZERO}}(N)\))
Assumption: Access to Algorithm \(A\) to compute \(\ensuremath{\mathit{MONOTONE}}\).
Construct the following machine \(M\): "On input \(z\in \{0,1\}^*\) do: (a) Run \(N(0)\), (b) Return \(\ensuremath{\mathit{PAR}}(z)\)".
Return \(1-A(M)\).
To complete the proof we need to show that \(B\) outputs the correct answer, under our assumption that \(A\) computes \(\ensuremath{\mathit{MONOTONE}}\). In other words, we need to show that \(\ensuremath{\mathit{HALTONZERO}}(N)=1-\ensuremath{\mathit{MONOTONE}}(M)\). Suppose that \(N\) does not halt on zero. In this case the program \(M\) constructed by Algorithm \(B\) enters into an infinite loop in step (a) and will never reach step (b). Hence in this case \(M\) is functionally equivalent to \(\ensuremath{\mathit{INF}}\). (The machine \(M\) is not the same machine as \(\ensuremath{\mathit{INF}}\): its description or code is different. But it does have the same input/output behavior (in this case) of never halting on any input. Also, while the program \(M\) will go into an infinite loop on every input, Algorithm \(B\) never actually runs \(M\): it only produces its code and feeds it to \(A\). Hence Algorithm \(B\) will not enter into an infinite loop even in this case.) Thus in this case, \(\ensuremath{\mathit{MONOTONE}}(M)=\ensuremath{\mathit{MONOTONE}}(\ensuremath{\mathit{INF}})=1\).
If \(N\) does halt on zero, then step (a) in \(M\) will eventually conclude and \(M\)'s output will be determined by step (b), where it simply outputs the parity of its input. Hence in this case, \(M\) computes the non-monotone parity function (i.e., is functionally equivalent to \(\ensuremath{\mathit{PAR}}\)), and so we get that \(\ensuremath{\mathit{MONOTONE}}(M)=\ensuremath{\mathit{MONOTONE}}(\ensuremath{\mathit{PAR}})=0\). In both cases, \(\ensuremath{\mathit{MONOTONE}}(M)=1-HALTONZERO(N)\), which is what we wanted to prove.
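Here too the construction can be summarized in a short hypothetical Python sketch, with A the assumed \(\ensuremath{\mathit{MONOTONE}}\)-solver, EVAL a universal evaluator, and PAR a routine computing the parity of its input:

def haltonzero_via_monotone(A, EVAL, PAR, N):
    def M(z):
        EVAL(N, "0")     # step (a): may loop forever
        return PAR(z)    # step (b): otherwise behave exactly like PAR
    # M is functionally equivalent to INF (if N does not halt on 0)
    # or to PAR (if it does), so 1 - A(M) equals HALTONZERO(N).
    return 1 - A(M)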
An examination of this proof shows that we did not use anything about \(\ensuremath{\mathit{MONOTONE}}\) beyond the fact that it is semantic and non-trivial. For every semantic non-trivial \(F\), we can use the same proof, replacing \(\ensuremath{\mathit{PAR}}\) and \(\ensuremath{\mathit{INF}}\) with two machines \(M_0\) and \(M_1\) such that \(F(M_0)=0\) and \(F(M_1)=1\). Such machines must exist if \(F\) is non trivial.
Rice's Theorem is so powerful and such a popular way of proving uncomputability that people sometimes get confused and think that it is the only way to prove uncomputability. In particular, a common misconception is that if a function \(F\) is not semantic then it is computable. This is not at all the case.
For example, consider the following function \(\ensuremath{\mathit{HALTNOYALE}}:\{0,1\}^* \rightarrow \{0,1\}\). This is a function that on input a string that represents a NAND-TM program \(P\), outputs \(1\) if and only if both (i) \(P\) halts on the input \(0\), and (ii) the program \(P\) does not contain a variable with the identifier Yale. The function \(\ensuremath{\mathit{HALTNOYALE}}\) is clearly not semantic, as it will output two different values when given as input one of the following two functionally equivalent programs:
Yale[0] = NAND(X[0],X[0])
Y[0] = NAND(X[0],Yale[0])
Harvard[0] = NAND(X[0],X[0])
Y[0] = NAND(X[0],Harvard[0])
However, \(\ensuremath{\mathit{HALTNOYALE}}\) is uncomputable, since every program \(P\) can be transformed into an equivalent (and in fact improved :)) program \(P'\) that does not contain the variable Yale. Hence if we could compute \(\ensuremath{\mathit{HALTNOYALE}}\) then we could determine halting on zero for NAND-TM programs (and hence for Turing machines as well).
Moreover, as we will see in Chapter 10, there are uncomputable functions whose inputs are not programs, and hence for which the adjective "semantic" is not applicable.
Properties such as "the program contains the variable Yale" are sometimes known as syntactic properties. The terms "semantic" and "syntactic" are used beyond the realm of programming languages: a famous example of a syntactically correct but semantically meaningless sentence in English is Chomsky's "Colorless green ideas sleep furiously." However, formally defining "syntactic properties" is rather subtle and we will not use this terminology in this book, sticking to the terms "semantic" and "non semantic" only.
Halting and Rice's Theorem for other Turing-complete models
As we saw before, many natural computational models turn out to be equivalent to one another, in the sense that we can transform a "program" of one model (such as a \(\lambda\) expression, or a game-of-life configuration) into a program of another model (such as a NAND-TM program). This equivalence implies that we can translate the uncomputability of the Halting problem for NAND-TM programs into uncomputability of Halting in other models. For example:
Let \(\ensuremath{\mathit{NANDTMHALT}}:\{0,1\}^* \rightarrow \{0,1\}\) be the function that on input strings \(P\in\{0,1\}^*\) and \(x\in \{0,1\}^*\) outputs \(1\) if the NAND-TM program described by \(P\) halts on the input \(x\) and outputs \(0\) otherwise. Then \(\ensuremath{\mathit{NANDTMHALT}}\) is uncomputable.
Once again, this is a good point for you to stop and try to prove the result yourself before reading the proof below.
We have seen in Theorem 6.12 that for every Turing machine \(M\), there is an equivalent NAND-TM program \(P_M\) such that for every \(x\), \(P_M(x)=M(x)\). In particular this means that \(\ensuremath{\mathit{HALT}}(M)= \ensuremath{\mathit{NANDTMHALT}}(P_M)\).
The transformation \(M \mapsto P_M\) that is obtained from the proof of Theorem 6.12 is constructive. That is, the proof yields a way to compute the map \(M \mapsto P_M\). This means that the proof yields a reduction from the task of computing \(\ensuremath{\mathit{HALT}}\) to the task of computing \(\ensuremath{\mathit{NANDTMHALT}}\), and hence, since \(\ensuremath{\mathit{HALT}}\) is uncomputable, \(\ensuremath{\mathit{NANDTMHALT}}\) is uncomputable as well.
The same proof carries over to other computational models such as the \(\lambda\) calculus, two-dimensional (or even one-dimensional) automata, etc. Hence, for example, there is no algorithm to decide if a \(\lambda\) expression computes the identity function, and no algorithm to decide whether an initial configuration of the game of life will result in eventually coloring the cell \((0,0)\) black or not.
Indeed, we can generalize Rice's Theorem to all these models. For example, if \(F:\{0,1\}^* \rightarrow \{0,1\}\) is a non-trivial function such that \(F(P)=F(P')\) for every functionally equivalent NAND-TM programs \(P,P'\) then \(F\) is uncomputable, and the same holds for NAND-RAM programs, \(\lambda\)-expressions, and all other Turing complete models (as defined in Definition 7.5), see also Exercise 8.12.
Is software verification doomed? (discussion)
Programs are increasingly being used for mission critical purposes, whether it's running our banking system, flying planes, or monitoring nuclear reactors. If we can't even give a certification algorithm that a program correctly computes the parity function, how can we ever be assured that a program does what it is supposed to do? The key insight is that while it is impossible to certify that a general program conforms with a specification, it is possible to write a program in the first place in a way that will make it easier to certify. As a trivial example, if you write a program without loops, then you can certify that it halts. Also, while it might not be possible to certify that an arbitrary program computes the parity function, it is quite possible to write a particular program \(P\) for which we can mathematically prove that \(P\) computes the parity. In fact, writing programs or algorithms and providing proofs for their correctness is what we do all the time in algorithms research.
The field of software verification is concerned with verifying that given programs satisfy certain conditions. These conditions can be that the program computes a certain function, that it never writes into a dangerous memory location, that it respects certain invariants, and others. While the general task of verifying such properties may be uncomputable, researchers have managed to do so for many interesting cases, especially if the program is written in the first place in a formalism or programming language that makes verification easier. That said, verification, especially of large and complex programs, remains a highly challenging task in practice as well, and the number of programs that have been formally proven correct is still quite small. Moreover, even phrasing the right theorem to prove (i.e., the specification) is often a highly non-trivial endeavor.
Figure 8.8: The set \(\mathbf{R}\) of computable Boolean functions (Definition 6.4) is a proper subset of the set of all functions mapping \(\{0,1\}^*\) to \(\{0,1\}\). In this chapter we saw a few examples of elements in the latter set that are not in the former.
There is a universal Turing machine (or NAND-TM program) \(U\) such that on input a description of a Turing machine \(M\) and some input \(x\), \(U(M,x)\) halts and outputs \(M(x)\) if (and only if) \(M\) halts on input \(x\). Unlike in the case of finite computation (i.e., NAND-CIRC programs / circuits), the input to the program \(U\) can be a machine \(M\) that has more states than \(U\) itself.
Unlike the finite case, there are actually functions that are inherently uncomputable in the sense that they cannot be computed by any Turing machine.
These include not only some "degenerate" or "esoteric" functions but also functions that people deeply care about and had conjectured could be computed.
If the Church-Turing thesis holds then a function \(F\) that is uncomputable according to our definition cannot be computed by any means in our physical world.
Let \(\ensuremath{\mathit{NANDRAMHALT}}:\{0,1\}^* \rightarrow \{0,1\}\) be the function such that on input \((P,x)\) where \(P\) represents a NAND-RAM program, \(\ensuremath{\mathit{NANDRAMHALT}}(P,x)=1\) iff \(P\) halts on the input \(x\). Prove that \(\ensuremath{\mathit{NANDRAMHALT}}\) is uncomputable.
Let \(\ensuremath{\mathit{TIMEDHALT}}:\{0,1\}^* \rightarrow \{0,1\}\) be the function that on input (a string representing) a triple \((M,x,T)\), \(\ensuremath{\mathit{TIMEDHALT}}(M,x,T)=1\) iff the Turing machine \(M\), on input \(x\), halts within at most \(T\) steps (where a step is defined as one sequence of reading a symbol from the tape, updating the state, writing a new symbol and (potentially) moving the head).
Prove that \(\ensuremath{\mathit{TIMEDHALT}}\) is computable.
Let \(\ensuremath{\mathit{SPACEHALT}}:\{0,1\}^* \rightarrow \{0,1\}\) be the function that on input (a string representing) a triple \((M,x,T)\), \(\ensuremath{\mathit{SPACEHALT}}(M,x,T)=1\) iff the Turing machine \(M\), on input \(x\), halts before its head reaches the \(T\)-th location of its tape. (We don't care how many steps \(M\) makes, as long as the head stays inside locations \(\{0,\ldots,T-1\}\).)
Prove that \(\ensuremath{\mathit{SPACEHALT}}\) is computable. See footnote for hint.2
Suppose that \(F:\{0,1\}^* \rightarrow \{0,1\}\) and \(G:\{0,1\}^* \rightarrow \{0,1\}\) are computable functions. For each one of the following functions \(H\), either prove that \(H\) is necessarily computable or give an example of a pair \(F\) and \(G\) of computable functions such that \(H\) will not be computable. Prove your assertions.
\(H(x)=1\) iff \(F(x)=1\) OR \(G(x)=1\).
\(H(x)=1\) iff there exist two nonempty strings \(u,v \in \{0,1\}^*\) such that \(x=uv\) (i.e., \(x\) is the concatenation of \(u\) and \(v\)), \(F(u)=1\) and \(G(v)=1\).
\(H(x)=1\) iff there exists a list \(u_0,\ldots,u_{t-1}\) of nonempty strings such that \(F(u_i)=1\) for every \(i\in [t]\) and \(x=u_0u_1\cdots u_{t-1}\).
\(H(x)=1\) iff \(x\) is a valid string representation of a NAND++ program \(P\) such that for every \(z\in \{0,1\}^*\), on input \(z\) the program \(P\) outputs \(F(z)\).
\(H(x)=1\) iff \(x\) is a valid string representation of a NAND++ program \(P\) such that on input \(x\) the program \(P\) outputs \(F(x)\).
\(H(x)=1\) iff \(x\) is a valid string representation of a NAND++ program \(P\) such that on input \(x\), \(P\) outputs \(F(x)\) after executing at most \(100\cdot |x|^2\) lines.
Prove that the following function \(\ensuremath{\mathit{FINITE}}:\{0,1\}^* \rightarrow \{0,1\}\) is uncomputable. On input \(P\in \{0,1\}^*\), we define \(\ensuremath{\mathit{FINITE}}(P)=1\) if and only if \(P\) is a string that represents a NAND++ program such that there are only a finite number of inputs \(x\in \{0,1\}^*\) s.t. \(P(x)=1\).3
Prove Theorem 8.14 without using Rice's Theorem.
Let \(\ensuremath{\mathit{EQ}}:\{0,1\}^* \rightarrow \{0,1\}\) be the function defined as follows: given a string representing a pair \((M,M')\) of Turing machines, \(\ensuremath{\mathit{EQ}}(M,M')=1\) iff \(M\) and \(M'\) are functionally equivalent as per Definition 8.15. Prove that \(\ensuremath{\mathit{EQ}}\) is uncomputable.
Note that you cannot use Rice's Theorem directly, as this theorem only deals with functions that take a single Turing machine as input, and \(\ensuremath{\mathit{EQ}}\) takes two machines.
For each of the following two functions, say whether it is computable or not:
Given a NAND-TM program \(P\), an input \(x\), and a number \(k\), when we run \(P\) on \(x\), does the index variable i ever reach \(k\)?
Given a NAND-TM program \(P\), an input \(x\), and a number \(k\), when we run \(P\) on \(x\), does \(P\) ever write to an array at index \(k\)?
Let \(F:\{0,1\}^* \rightarrow \{0,1\}\) be the function that is defined as follows. On input a string \(P\) that represents a NAND-RAM program and a string \(M\) that represents a Turing machine, \(F(P,M)=1\) if and only if there exists some input \(x\) such that \(P\) halts on \(x\) but \(M\) does not halt on \(x\). Prove that \(F\) is uncomputable. See footnote for hint.4
Define a function \(F:\{0,1\}^* \rightarrow \{0,1\}\) to be recursively enumerable if there exists a Turing machine \(M\) such that for every \(x\in \{0,1\}^*\), if \(F(x)=1\) then \(M(x)=1\), and if \(F(x)=0\) then \(M(x)=\bot\). (i.e., if \(F(x)=0\) then \(M\) does not halt on \(x\).)
Prove that every computable \(F\) is also recursively enumerable.
Prove that there exists \(F\) that is not computable but is recursively enumerable. See footnote for hint.5
Prove that there exists a function \(F:\{0,1\}^* \rightarrow \{0,1\}\) such that \(F\) is not recursively enumerable. See footnote for hint.6
Prove that there exists a function \(F:\{0,1\}^* \rightarrow \{0,1\}\) such that \(F\) is recursively enumerable but the function \(\overline{F}\) defined as \(\overline{F}(x)=1-F(x)\) is not recursively enumerable. See footnote for hint.7
In this exercise we will prove Rice's Theorem in the form that it is typically stated in the literature.
For a Turing machine \(M\), define \(L(M) \subseteq \{0,1\}^*\) to be the set of all \(x\in \{0,1\}^*\) such that \(M\) halts on the input \(x\) and outputs \(1\). (The set \(L(M)\) is known in the literature as the language recognized by \(M\). Note that \(M\) might either output a value other than \(1\) or not halt at all on inputs \(x\not\in L(M)\). )
Prove that for every Turing Machine \(M\), if we define \(F_M:\{0,1\}^* \rightarrow \{0,1\}\) to be the function such that \(F_M(x)=1\) iff \(x\in L(M)\) then \(F_M\) is recursively enumerable as defined in Exercise 8.10.
Use Theorem 8.16 to prove that for every \(G:\{0,1\}^* \rightarrow \{0,1\}\), if (a) \(G\) is neither the constant zero nor the constant one function, and (b) for every \(M,M'\) such that \(L(M)=L(M')\), \(G(M)=G(M')\), then \(G\) is uncomputable. See footnote for hint.8
Let \(\mathcal{F}\) be the set of all partial functions from \(\{0,1\}^*\) to \(\{0,1\}\) and \(\mathcal{M}:\{0,1\}^* \rightarrow \mathcal{F}\) be a Turing-equivalent model as defined in Definition 7.5. We define a function \(F:\{0,1\}^* \rightarrow \{0,1\}\) to be \(\mathcal{M}\)-semantic if there exists some \(\mathcal{G}:\mathcal{F} \rightarrow \{0,1\}\) such that \(F(P) = \mathcal{G}(\mathcal{M}(P))\) for every \(P\in \{0,1\}^*\).
Prove that for every \(\mathcal{M}\)-semantic \(F:\{0,1\}^* \rightarrow \{0,1\}\) that is neither the constant one nor the constant zero function, \(F\) is uncomputable.
The cartoon of the Halting problem in Figure 8.1 is taken from Charles Cooper's website.
Section 7.2 in (Moore, Mertens, 2011) gives a highly recommended overview of uncomputability. Gödel, Escher, Bach (Hofstadter, 1999) is a classic popular science book that touches on uncomputability and unprovability, and specifically on Gödel's Theorem, which we will see in Chapter 10. See also the recent book by Holt (Holt, 2018).
The history of the definition of a function is intertwined with the development of mathematics as a field. For many years, a function was identified (as per Euler's quote above) with the means to calculate the output from the input. In the 1800's, with the invention of the Fourier series and with the systematic study of continuity and differentiability, people started looking at more general kinds of functions, but the modern definition of a function as an arbitrary mapping was not yet universally accepted. For example, in 1899 Poincaré wrote "we have seen a mass of bizarre functions which appear to be forced to resemble as little as possible honest functions which serve some purpose. … they are invented on purpose to show that our ancestors' reasoning was at fault, and we shall never get anything more than that out of them". Some of this fascinating history is discussed in (Grabiner, 1983) (Kleiner, 1991) (Lützen, 2002) (Grabiner, 2005).
The existence of a universal Turing machine, and the uncomputability of \(\ensuremath{\mathit{HALT}}\) was first shown by Turing in his seminal paper (Turing, 1937) , though closely related results were shown by Church a year before. These works built on Gödel's 1931 incompleteness theorem that we will discuss in Chapter 10.
Some universal Turing machines with a small alphabet and number of states are given in (Rogozhin, 1996), including a single-tape universal Turing machine with the binary alphabet and with fewer than \(25\) states; see also the survey (Woods, Neary, 2009). Adam Yedidia has written software to help in producing Turing machines with a small number of states. This is related to the recreational pastime of "Code Golfing", which is about solving a given computational task using as short a program as possible.
The diagonalization argument used to prove uncomputability of \(F^*\) is derived from Cantor's argument for the uncountability of the reals discussed in Chapter 2.
Christopher Strachey was an English computer scientist and the inventor of the CPL programming language. He was also an early artificial intelligence visionary, programming a computer to play Checkers and even write love letters in the early 1950's, see this New Yorker article and this website.
Rice's Theorem was proven in (Rice, 1953) . It is typically stated in a form somewhat different than what we used, see Exercise 8.11.
We do not discuss in the chapter the concept of recursively enumerable languages, but it is covered briefly in Exercise 8.10. As usual, we use function, as opposed to language, notation.
The cartoon of the Halting problem in Figure 8.1 is copyright 2019 Charles F. Cooper.
This argument has also been connected to the issues of consciousness and free will. I am personally skeptical of its relevance to these issues. Perhaps the reasoning is that humans have the ability to solve the halting problem but they exercise their free will and consciousness by choosing not to do so.
A machine with alphabet \(\Sigma\) can have at most \(|\Sigma|^T\) choices for the contents of the first \(T\) locations of its tape. What happens if the machine repeats a previously seen configuration, in the sense that the tape contents, the head location, and the current state, are all identical to what they were in some previous state of the execution?
Hint: You can use Rice's Theorem.
Hint: While it cannot be applied directly, with a little "massaging" you can prove this using Rice's Theorem.
\(\ensuremath{\mathit{HALT}}\) has this property.
You can either use the diagonalization method to prove this directly or show that the set of all recursively enumerable functions is countable.
\(\ensuremath{\mathit{HALT}}\) has this property: show that if both \(\ensuremath{\mathit{HALT}}\) and \(\overline{\ensuremath{\mathit{HALT}}}\) were recursively enumerable then \(\ensuremath{\mathit{HALT}}\) would in fact be computable.
Show that any \(G\) satisfying (b) must be semantic.
DOI:10.1080/00029890.2009.11920980
Ramanujan Primes and Bertrand's Postulate
@article{Sondow2009RamanujanPA,
  title={Ramanujan Primes and Bertrand's Postulate},
  author={Jonathan Sondow},
  journal={The American Mathematical Monthly},
  year={2009},
  pages={630--635}
}
J. Sondow
The $n$th Ramanujan prime is the smallest positive integer $R_n$ such that if $x \ge R_n$, then there are at least $n$ primes in the interval $(x/2,x]$. For example, Bertrand's postulate is $R_1 = 2$. Ramanujan proved that $R_n$ exists and gave the first five values as 2, 11, 17, 29, 41. In this note, we use inequalities of Rosser and Schoenfeld to prove that $2n \log 2n < R_n < 4n \log 4n$ for all $n$, and we use the Prime Number Theorem to show that $R_n$ is asymptotic to the $2n$th prime. We…
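As a concrete illustration of this definition (not part of the paper), the following sketch computes the first few Ramanujan primes by sieving up to the paper's upper bound $4n\log 4n$ and tracking the number of primes in $(x/2, x]$ incrementally; the function name and the counting trick are our own choices.

```python
import math

def ramanujan_primes(n_max):
    """Return R_1..R_{n_max}, where R_n is the smallest integer such that
    pi(x) - pi(x/2) >= n for all x >= R_n. Sondow's bound R_n < 4n log 4n
    (proved in the paper for all n) is assumed in order to size the sieve."""
    limit = int(4 * n_max * math.log(4 * n_max)) + 2
    is_prime = [False, False] + [True] * (limit - 1)   # sieve of Eratosthenes
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p::p] = [False] * len(is_prime[p * p::p])

    last = [0] * n_max
    count = 0                      # number of primes in the interval (k/2, k]
    for k in range(1, limit + 1):
        if is_prime[k]:
            count += 1             # prime k enters the interval
        if k % 2 == 0 and is_prime[k // 2]:
            count -= 1             # prime k/2 drops out of the interval
        if count < n_max:
            last[count] = k        # last k at which only `count` primes remain
    return [k + 1 for k in last]   # R_{s+1} is one past the last such k

print(ramanujan_primes(5))         # [2, 11, 17, 29, 41]
```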
Ramanujan Primes: Bounds, Runs, Twins, and Gaps
J. Sondow, John W. Nicholson, Tony D. Noe
The $n$th Ramanujan prime is the smallest positive integer $R_n$ such that if $x \ge R_n$, then the interval $(x/2,x]$ contains at least $n$ primes. We sharpen Laishram's theorem that $R_n < p_{3n}$…
On the number of primes up to the $n$th Ramanujan prime
Christian Axler
The $n$th Ramanujan prime is the smallest positive integer $R_n$ such that for all $x \geq R_n$ the interval $(x/2, x]$ contains at least $n$ primes. In this paper we undertake a study of the…
On the estimates of the upper and lower bounds of Ramanujan primes
Shichun Yang, A. Togbé
For $n\ge 1$, the $n$th Ramanujan prime is defined as the least positive integer $R_{n}$ such that for all $x\ge R_{n}$, the interval $(\frac{x}{2}, x]$ has at least $n$ primes.…
On generalized Ramanujan primes
In this paper, we establish several results concerning the generalized Ramanujan primes. For $n\in \mathbb{N}$ and $k \in \mathbb{R}_{> 1}$, we give estimates for the $n$th…
New upper bounds for Ramanujan primes
A. Srinivasan, P. Ares
Glasnik Matematicki
For $n\ge 1$, the $n^{\rm th}$ Ramanujan prime is defined as the smallest positive integer $R_n$ such that for all $x\ge R_n$, the interval $(\frac{x}{2}, x]$ has at least $n$ primes. We show that…
Generalized Ramanujan Primes
Nadine Amersi, Olivia Beckwith, S. Miller, Ryan Ronan, J. Sondow
In 1845, Bertrand conjectured that for all integers x ≥ 2, there exists at least one prime in (x∕2, x]. This was proved by Chebyshev in 1860 and then generalized by Ramanujan in 1919. He showed that…
On a conjecture on Ramanujan primes
S. Laishram
For n ≥ 1, the nth Ramanujan prime is defined to be the smallest positive integer Rn with the property that if x ≥ Rn, then $\pi(x)-\pi(\frac{x}{2})\ge n$ where $\pi(x)$ is the number of primes not…
An Upper Bound for Ramanujan Primes
A. Srinivasan
This work states that for each $\epsilon > 0$, there exists an $N$ such that $R_n < p_{[2n(1+\epsilon)]}$ for all $n > N$.
Insulated primes.
Abhimanyu Kumar, Anuraag Saxena
The degree of insulation $D(p)$ of a prime $p$ is defined as the largest interval around the prime $p$ in which no other prime is present. Based on this, the $n$-th prime $p_{n}$ is said to be…
Set Families with Low Pairwise Intersection
Calvin Beideman, Jeremiah Blocki
The explicit construction of weak $\left(n,\ell,\gamma\right)$-sharing set families can be used to obtain a parallelizable pseudorandom number generator with a low memory footprint by using the pseudorandom number generator of Nisan and Wigderson.
Ramanujan and I
P. Erdös
Perhaps the title "Ramanujan and the birth of Probabilistic Number Theory" would have been more appropriate and personal, but since Ramanujan's work influenced me greatly in other subjects too, I…
The Book of Prime Number Records
P. Ribenboim
1. How Many Prime Numbers Are There?.- I. Euclid's Proof.- II. Kummer's Proof.- III. Polya's Proof.- IV. Euler's Proof.- V. Thue's Proof.- VI. Two-and-a-Half Forgotten Proofs.- A. Perott's Proof.- B.…
Existence of Ramanujan primes for GL(3)
D. Ramakrishnan
In this Note we show that given any cusp form \pi on GL(3) over the rationals, there exist an infinite number of primes p which are Ramanujan for \pi, i.e., that the local components \pi_p are…
Lecture Notes in Mathematics
A. Dold, B. Eckmann
Vol. 72: The Syntax and Semantics of Infinitary Languages. Edited by J. Barwise. IV, 268 pages. 1968. DM 18 / $5.00. Vol. 73: P. E. Conner, Lectures on the Action of a Finite Group. IV, 123 pages.…
Sharper bounds for the Chebyshev functions $\theta (x)$ and $\psi (x)$
J. Rosser, L. Schoenfeld
Abstract: The authors demonstrate a wider zero-free region for the Riemann zeta function than has been given before. They give improved methods for using this and a recent determination that…
The On-Line Encyclopedia of Integer Sequences
N. Sloane
Electron. J. Comb.
The On-Line Encyclopedia of Integer Sequences (or OEIS) is a database of some 130000 number sequences which serves as a dictionary, to tell the user what is known about a particular sequence and is widely used.
Collected Papers of Srinivasa Ramanujan
J. Littlewood
RAMANUJAN was born in India in December 1887, came to Trinity College, Cambridge, in April 1914, was ill from May 1917 onwards, returned to India in February 1919, and died in April 1920. He was a…
Mathematical constants
S. Finch
Encyclopedia of mathematics and its applications
UCBL-20418 This collection of mathematical data consists of two tables of decimal constants arranged according to size rather than function, a third table of integers from 1 to 1000, giving some of…
The nth prime is greater than n ln n
Proc. London Math. Soc
Malnutrition as predictor of survival from anti-retroviral treatment among children living with HIV/AIDS in Southwest Ethiopia: survival analysis
Abdu Oumer1,
Mina Edo Kubsa1 &
Berhanu Abebaw Mekonnen2
Approximately 70% of HIV-positive people live in Africa, where food insecurity and under nutrition are endemic. However, the impact of malnutrition on treatment outcome is not clear. This study assessed the effect of under nutrition on anti-retroviral therapy (ART) treatment outcome among children living with HIV/AIDS in public hospitals of Southwest Ethiopia.
A retrospective cohort study was conducted on the records of 242 children on ART in Guraghe zone public hospitals. The median, mean, standard deviation and interquartile range were calculated, and the life table, hazard function and survival function were plotted. Log-rank tests with 95% confidence intervals of mean survival time were performed. Nutritional status data were managed with WHO AnthroPlus, and BMI-for-age Z scores were calculated. To assess the effect of nutritional status on mortality, bivariate and multivariate Cox proportional hazards regressions were conducted, reporting crude (CHR) and adjusted hazard ratios (AHR) with 95% confidence intervals and p values. A p value of less than 0.05 was used as the cut-off for statistical significance.
A total of 243 pediatric ART records, with a mean age of 11.6 (± 3.8) years, were reviewed. About 178 (73.3%) children received therapeutic feeding during the course of ART treatment, and a substantial number, 163 (67.1%), were reported to have eating problems. A total of 13 (5.3%) children died, giving an incidence density of 11.2 deaths per 1000 person-years. Mean survival time was significantly higher among well-nourished children (11.1 years; 95% CI: 10.8 to 11.4) than among underweight children (9.76 years; 95% CI: 9.19 to 10.32). Underweight children had an almost threefold higher incidence of death (AHR = 3.01; 95% CI: 0.80–11.4). Similarly, children with anemia had a higher incidence of death than children without anemia (AHR = 1.55; 95% CI: 0.49–4.84).
Low nutritional status at the start of ART, evidenced by underweight and anemia, was found to predict survival among HIV-positive children. There should be improved, sustained and focused nutritional screening, care and treatment for children on ART follow-up.
Nutrition is a critical component of treatment, care, and support in chronic human immunodeficiency virus (HIV) care, especially among children [1]. HIV/acquired immune deficiency syndrome (AIDS) and under nutrition interact in a vicious cycle: HIV-induced immune impairment and increased risk of infection can worsen malnutrition and lead to nutritional deficiencies through decreased food intake, malabsorption syndromes, increased nutrient needs and higher nutrient losses [2]. Children with HIV/AIDS have reduced appetite and ability to consume food, as well as a higher incidence of diarrhea resulting in malabsorption and nutrient losses [2, 3], which makes malnutrition a common phenomenon [4]. In addition, nutritional status affects adherence to ART, which is important for good treatment outcome and increased life expectancy [5].
Nutritional status modulates the immunological response to HIV infection, affecting the overall clinical outcomes [6, 7]. Weight loss is an important predictor of death from AIDS. The links between nutrition and HIV/AIDS increase the negative effects of HIV infection on human development at individual, household, community and national levels [8].
Early mortality with advanced HIV disease is a common feature among individuals already enrolled on ART [9]. In people living with HIV/AIDS (PLWHA) who are co-infected with tuberculosis (TB), metabolic stress and the risk of malnutrition are even greater [10].
Approximately 36.9 million people globally live with HIV/AIDS, of whom 3.2 million are children under 15 years, including 220,000 new pediatric cases. Sub-Saharan Africa harbors 70% of the global burden, in a setting where food insecurity and under nutrition are endemic [1].
Under nutrition is a significant factor affecting HIV care and treatment in resource-limited settings [11]. Poor nutritional status and its intersection with food insecurity, poverty, and co-infections also pose a serious threat to efforts to combat HIV/AIDS, mainly by hindering favorable treatment outcomes [5, 12]. Undernourished children living with HIV/AIDS at the start of ART had a 2–6 times increased risk of death in the first 6 months of ART [3, 13].
In addition, untreated HIV infection increases energy needs by as much as 10% in asymptomatic adults, 20–30% in symptomatic adults, and 50–100% in children with weight loss [14]. A study from Northwest Ethiopia reported an overall prevalence of malnutrition, underweight, stunting and wasting of 42.9%, 41.7%, 65% and 5.8%, respectively [15].
Reported ART treatment outcomes vary: one study found an incidence rate of 16.9 deaths per 1000 child-years, with a death rate of 4.81% [16], while another study [17] showed cumulative probabilities of survival at 3, 6, 12, and 24 months of ART of 0.96, 0.94, 0.93, and 0.92, respectively, with the majority (90.2%) of deaths occurring within the first year of treatment.
Studies have shown that baseline CD4 level, WHO staging and other parameters are important predictors of child survival on ART [15,16,17]. Children aged less than 18 months (AHR = 4.39 (1.15–17.41)), with a CD4 percentage < 10 (AHR = 2.98 (1.12–7.94)), or with advanced WHO clinical stage (III & IV) (AHR = 4.457 (1.01–19.66)) were found to have significantly poorer survival from HIV/AIDS [16]. Similarly, anemia (hemoglobin level < 10 g/dl) increased the hazard of mortality (AHR = 2.44, 95% CI: 1.26, 4.73) [17]. Malnourished patients were 1.5 times more likely to die than the well-nourished group (AHR = 1.5; 95% CI: 0.648, 3.287), although this was not statistically significant [18].
Previous studies in Ethiopia have addressed predictors of mortality among ART clients, yet there are few studies in this specific age group, and proper assessment of nutritional status was often not used. Furthermore, there is insufficient evidence in the study area regarding the effect of malnutrition (using BMI-for-age for children) on the survival of children with HIV/AIDS. This study therefore aimed to determine the magnitude of malnutrition and, more specifically, its effect on the survival of children with HIV/AIDS in this particular area, providing an important input for the zone, the region and the country.
Guraghe zone is one of the administrative zones of the Southern Nations, Nationalities, and Peoples' Region (SNNPR). It has 13 districts and two town administrations. Wolkite town, the capital of Guraghe zone, is located 425 km from Hawassa and 158 km from Addis Ababa on the way to Jimma. There are five hospitals: four governmental and one private (non-governmental). A total of 758 HIV-positive children were registered in these public hospitals from August 2013 to September 2017 (the past 5 years). This study was conducted from June 1 to 20, 2018.
A retrospective cohort study design was used, based on records from August 2013 to September 2017. All pediatric patients under the age of 18 years on ART in Guraghe zone, SNNPR, Ethiopia were the source population.
All randomly selected records of pediatric patients (6 months – 18 years) on ART from the selected hospitals in Guraghe zone, SNNPR, Ethiopia were the study population. Records with incomplete data on nutritional status (weight, height), age, and/or ART treatment outcome were excluded, as were records in which the intake form, register or follow-up form lacked these variables. Cases transferred to other facilities (transfer-outs) and deaths confirmed to be accidental and due to injury (unrelated competing causes of death) were also excluded.
Sample size determination
The sample size for the first objective was calculated using the single population proportion formula, with the incidence of mortality among ART clients (P = 2.3%) from a study done in Zewditu Hospital [19], a margin of error of 5% and a 95% confidence interval, as follows.
$$ n=\frac{{\left( Z_{\alpha /2}\right)}^2\, P\left(1-P\right)}{d^2} $$
$$ n=\frac{(1.96)^2 \times 0.023\left(1-0.023\right)}{(0.05)^2} $$
This gave a sample size of 35. For the second specific objective, the sample size was calculated using Stata version 13.0 (StataCorp) with 80% power, a 5% type I error and estimates of the outcome, giving a sample size of 347. However, as the number of available pediatric ART records (243) was below the calculated sample size, all records of eligible children were included in the study.
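For reference, a minimal numerical check of the single-proportion formula above (the study itself used Stata and SPSS; this snippet is only illustrative):

```python
# Quick check of the single population proportion sample-size formula.
from scipy.stats import norm
import math

p = 0.023                 # assumed incidence of mortality among ART clients
d = 0.05                  # margin of error
z = norm.ppf(0.975)       # Z_{alpha/2} for a 95% confidence interval (~1.96)
n = math.ceil(z ** 2 * p * (1 - p) / d ** 2)
print(n)                  # 35
```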
Sampling procedures
Since there are five hospitals in Guraghe zone, we first stratified the hospitals and selected the two governmental hospitals (out of four) with the highest case load. Using the average 5-year case load of each hospital, we proportionally allocated the sample size, and records were to be selected by simple random sampling using a computer random number generator. Clients' cards were then retrieved using the unique ART number or medical record number (MRN). However, because the total number of eligible study participants was below the calculated sample size, all records of children (243) from the selected hospitals were included in the study.
Data collection methods
Quantitative data were collected using a structured and cross-checked checklist, which included information on weight, socio-demographic characteristics, treatment-related issues and ART treatment outcome (death or survival, from the card or register). Data were collected by 30 trained health professionals (BSc nurses and health officers) with good knowledge of ART therapy. The number of data collectors assigned to each hospital was based on its case load.
Variables of the study
The dependent variable of this study was ART treatment outcome (time to death), while age at enrollment, sex, parental survival status, nutritional status (BMI-for-age Z score), functional status, ART adherence, baseline CD4 count, baseline WHO stage, cotrimoxazole prophylaxis, INH prophylaxis, co-morbidities, ART regimen and eligibility criteria (CD4 or WHO staging) were the independent variables.
Operational definitions
Death: the child had a confirmed record of death in the medical record that was not due to accidental, unrelated causes. Censored: records of ART clients who had not developed the outcome (death) by the end of the study. Malnutrition (under nutrition): an enrollment BMI-for-age Z score below −2 SD (thinness or underweight), using the World Health Organization (WHO) 2006 and 2007 reference standards.
Data quality control
Two days of training were given to the data collectors before the actual data collection. Principal investigators and supervisors monitored and checked the daily progress of data collection. The information collected was cross-checked against different sources (intake forms, the ART register and the ART chronic follow-up form). The data were entered into EpiData software with legal values and other restrictions to minimize errors. The data were entered by two independent data entry clerks and then cross-checked for possible entry errors; mismatched data were checked against the hard copy and corrected accordingly.
Data processing and analysis
The raw data were entered into EpiData software version 3.1 and exported to SPSS version 20 for analysis. Data were presented as frequencies, percentages, tables and graphs. Survival analysis was used to analyze time to death among ART clients from the day of enrolment; the median, mean, standard deviation and interquartile range were calculated, and the life table, hazard function and survival function were plotted. To compare mean survival time across different characteristics, the log-rank test was used, with 95% confidence intervals of mean survival time and the corresponding p values. Patients were followed retrospectively from the day of enrolment on ART until they developed the outcome or were censored.
The nutritional status data (age, weight, height) were entered into WHO AnthroPlus software, which calculated the BMI-for-age Z score automatically; the results were then exported to SPSS version 20. Nutritional status was categorized as malnourished (BMI-for-age Z score below −2) or normal (BMI-for-age Z score ≥ −2, including overweight and obese), since over nutrition is rare among clients with retroviral infection (RVI).
To assess the effect of nutritional status on mortality, both bivariate and multivariate Cox proportional hazards regressions were conducted, reporting crude (CHR) and adjusted hazard ratios (AHR) with 95% confidence intervals and p values. A p value of less than 0.05 was used as the cut-off for statistical significance.
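A minimal sketch of this analysis pipeline is shown below, written with the Python lifelines package rather than the SPSS workflow actually used; the file name and column names (art_children.csv, bmi_for_age_z, time_years, died, anemia, who_stage) are hypothetical placeholders.

```python
# Hypothetical sketch of the survival analysis described above (the study used
# WHO AnthroPlus and SPSS); all file and column names are illustrative only.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("art_children.csv")                     # hypothetical data file

# Recode nutritional status: underweight = BMI-for-age Z score below -2
df["underweight"] = (df["bmi_for_age_z"] < -2).astype(int)

# Kaplan-Meier survival curves by nutritional status
kmf = KaplanMeierFitter()
for status, group in df.groupby("underweight"):
    kmf.fit(group["time_years"], event_observed=group["died"],
            label=f"underweight={status}")
    print(status, kmf.median_survival_time_)

# Multivariate Cox proportional hazards model (a crude model would include one
# covariate at a time); print_summary reports HRs with 95% CIs and p values.
cols = ["time_years", "died", "underweight", "anemia", "who_stage"]
cph = CoxPHFitter()
cph.fit(df[cols], duration_col="time_years", event_col="died")
cph.print_summary()
cph.check_assumptions(df[cols])    # check of the proportional hazards assumption
```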
The study was approved by the Wolkite University Institutional and Health Ethical Review Committee. A letter of cooperation was obtained from the university and presented to the zonal health office and then to the respective hospitals. Before data collection, informed consent was obtained from the respective hospital managers after full explanation of the study procedure. During data collection, all hard-copy and soft-copy data were kept fully protected by the investigators, and the collected data will not be used for any purpose other than the study's primary objectives. Written informed consent was obtained from the hospital administrators on behalf of the ART data in their hospitals. No tissue samples or human experiments were involved.
Socio demographic characteristics
A total of 243 pediatric ART records, with a mean age of 11.6 (± 3.8) years, were reviewed and included in the study. More than half of the children (55.1%) came from rural areas. Only about 41% of the children's parents lived together, while 10.7% and 30.5% of the children had lost their mother and father, respectively. About 209 (86%) of the children lived with their parents, while the others lived with relatives or alone. Almost all, 240 (98.8%), had a caregiver, of whom 86% were a parent (father or mother) of the child (Table 1).
Table 1 Socio demographic characteristics of children on ART follow up in Guraghe zone public hospitals, 2019
A total of 239 (98.4%) children reportedly received nutritional counseling during their ART follow-up period, and about 178 (73.3%) received therapeutic feeding during the course of ART treatment. A substantial number of children, 163 (67.1%), were reported to have eating problems; of these, 156 (95.7%), 68 (41.7%) and 81 (49.7%) had loss of appetite, swallowing difficulty and vomiting, respectively (Table 2).
Table 2 Nutrition related characteristics of ART children on ART follow up in Guraghe zone public hospitals, 2019
A total of 191 (78.6%) children were reported to have at least one opportunistic disease; of these, 100 (52.4%), 21 (11%) and 95 (49.7%) had pneumonia, tuberculosis and diarrheal disease, respectively (Table 3). About 156 (64.2%) of the children started ART based on the WHO criterion of being under 15 years of age, while the others were enrolled based on CD4 or WHO staging criteria, with a median age at ART start of 5 years. A total of 84 (34.6%) children were put on ART on the same day as HIV confirmation, while 70.2% started within 1 month of confirmation. The majority of children, 174 (71.6%), were on the AZT-3TC-NVP ART regimen, with a mean baseline hemoglobin of 12 g/dl and CD4 count of 483 cells/mm3.
Table 3 ART related conditions of children on ART follow up in Guraghe zone public hospitals, 2019
A total of 41.2% and 32.2% of children on ART were in WHO stage I and II, respectively, at enrollment in chronic HIV care, while only 2 children were classified as stage IV. Regarding developmental milestones, 109 (44.9%) children had normal development, while 38 (15.6%) had regressed developmental milestones at enrollment in ART care (Fig. 1).
Baseline WHO stage of children on ART follow-up in Guraghe zone public hospitals, 2019. (Blue refers to the frequency, i.e., the number of participants in each category)
Concerning ART adherence at the last ART visit, 202 (83.3%) children had good adherence, while 2.9% had poor adherence requiring adherence support. In addition, 88.5% and 90.5% of children had taken INH and cotrimoxazole prophylaxis, respectively. A total of 33 children (13.6%) were reported to have anemia, and 25 (10.3%) had virological failure, defined as a persistent viral load above 1000 copies/ml. Accordingly, 13 (5.3%) clients were shifted from a first-line to a second-line ART regimen, mainly due to clinical and immunologic failure (Table 3) (Fig. 2).
Baseline functional status of children on ART follow-up in Guraghe zone public hospitals, 2019. (Blue refers to appropriate for age; grey refers to delayed development for age; and blue-black refers to regressed normal development)
ART treatment outcome
A total of 200 (82.3%) children were active on ART follow-up, while 13 (5.3%) died during the course of their chronic HIV care, giving an incidence density of 11.2 deaths per 1000 person-years of follow-up (Fig. 3) (Table 4).
ART treatment outcomes of children on ART follow-up in Guraghe zone public hospitals, 2019. (Blue refers to the frequency, i.e., the number of participants in each category)
Table 4 Life table for time to death among children on chronic ART follow up in Guraghe zone public hospitals in 2019
The majority of deaths among children were observed in the early period after ART start, with 12 out of 13 deaths occurring during the first 2 years (Fig. 4). Although the pattern of deaths over time was similar, a higher number of deaths was observed among malnourished children (9 deaths) than among well-nourished children (4 deaths).
Survival function for children by nutritional status on ART follow-up at public hospitals in Guraghe zone, 2019. (Blue refers to the survival curve for undernourished children, while green refers to the survival curve for well-nourished children)
In the Kaplan-Meier comparison of mean survival time by nutritional status, survival time was significantly higher among well-nourished children (11.1 years; 95% CI: 10.8 to 11.4) than among underweight children (9.76 years; 95% CI: 9.19 to 10.32) (Table 5).
Table 5 Kaplan-Meier test for comparison of mean survival time by nutritional status
Predictors of survival among ART children
Cox proportional hazards regression was performed using both categorical and numerical covariates. The proportional hazards assumption was checked and fulfilled using the global test (p value above 0.05). Younger age was associated with higher hazards of death: for each one-year increase in the child's age, the hazard of death decreased by 14% (CHR = 0.86; 95% CI: 0.75–0.99). Children from rural areas had an almost twofold increase in the hazard of death (CHR = 1.90; 95% CI: 0.58–6.12). Underweight children (defined as a BMI-for-age Z score below −2 relative to the WHO growth reference) had an almost fourfold increased risk of early death compared with well-nourished children (CHR = 3.36; 95% CI: 1.03–10.9). Children with fair (CHR = 4.73; 95% CI: 1.4–14.9) and poor (CHR = 4.73; 95% CI: 1.4–14.9) ART adherence had more than four times the hazard of death compared with well-adhering clients. In addition, clients who were not supplemented with therapeutic foods had a twofold higher hazard of death (CHR = 2.18; 95% CI: 0.48–9.84) (Table 6).
Table 6 Cox regression showing Predictors of survival among ART children in Southwest Ethiopia, 2019
Clients who started ART immediately, on the same day as HIV confirmation, had a lower hazard of early death than those with delayed initiation (CHR for delayed initiation = 2.84; 95% CI: 0.63–12.82). The hazard of death was higher among children diagnosed with anemia than among non-anemic children (CHR = 2.38; 95% CI: 1.38–4.1); similarly, each unit rise in hemoglobin level was associated with a 22% decline in the hazard of death (CHR = 0.78; 95% CI: 0.63–0.95). Significantly higher hazards of death were observed among clients with advanced-stage disease (WHO stage II and above) than among those at WHO stage I (Table 6).
A multivariate Cox proportional hazards regression was then fitted to adjust for confounding variables. Advanced WHO stage, nutritional status, duration of pre-ART follow-up and presence of anemia were associated with increased hazards of death. Underweight children had an almost threefold increase in the hazard of death (AHR = 3.01; 95% CI: 0.80–11.4). Similarly, children with anemia had a higher hazard of death than children without anemia (AHR = 1.55; 95% CI: 0.49–4.84) (Table 6).
The findings of this study showed that 13 (5.3%) children died during the course of their chronic HIV care, giving an incidence density of 11.2 deaths per 1000 person-years of follow-up, and that the majority of deaths occurred during the first 2 years after ART start. Similarly, national-level review data showed mortality rates of 5 to 8% at 6 and 24 months after ART start [19]. The hazard of death was higher among males than females, which is comparable to a study in Iran showing a lower cumulative proportion surviving among males than females (P = 0.0001) [20]. A study in Addis Ababa also showed a mortality rate of about 8.8% among children, with the majority of deaths occurring during the early period of ART follow-up [21], and a study from Northern Ethiopia reported a mortality rate of about 8% [17].
In addition, poor nutritional status, evidenced by underweight and low hemoglobin, was associated with increased hazards of death: underweight children had an almost threefold increase in the hazard of death (AHR = 3.01; 95% CI: 0.80–11.4), and children with anemia had a higher hazard of death than children without anemia (AHR = 1.55; 95% CI: 0.49–4.84). Similarly, a study among adult clients on HAART showed that being malnourished increased the hazard of early death (AHR = 1.46; 95% CI: 0.648, 3.287; p = 0.361) [18], and poor pre-ART nutritional status, evidenced by low weight, increased the hazard of death (AHR = 5.4; 95% CI: 3.03–9.58) [22]. Another study in Ethiopia showed that being malnourished (low weight) (AHR = 4.99; 95% CI: 2.4–10.2; P < 0.001) and having low hemoglobin (HR = 2.92; 95% CI: 1.3–6.7; P = 0.001) increased the hazards of death among HIV clients [17]. This emphasizes that low weight and low hemoglobin are usually indicative of advanced-stage disease, which decreases child survival significantly.
Thus, since malnutrition contributes to almost half of under-five mortality and affects up to 40% of children, maintaining good nutritional status is one of the primary proxy indicators of good treatment outcome [23]. Malnutrition in turn aggravates the vicious cycle of HIV and malnutrition, leading to advanced disease and poor adherence to chronic HIV care [23, 24]. HIV-infected children had a significantly higher prevalence of underweight (77% versus 35%), stunting (65% versus 61%) and wasting (63% versus 26%) [7].
Despite nutritional rehabilitation, a fourfold increase in mortality was observed among HIV-positive children, indicating the need for improved and focused nutritional care to achieve better treatment outcomes. In addition, high-energy, micronutrient-rich foods prepared locally in the community, or preparations such as ready-to-use therapeutic foods, should be readily available to ART clients.
It is estimated that about 3.2 million children live with HIV/AIDS, almost 90% of whom are in sub-Saharan African countries, where malnutrition is a common problem facing children [23]. HIV and malnutrition interact to create a double burden among children, in that malnutrition reduces HAART effectiveness and drug response while increasing the risk of early death.
In addition, the majority of children did not start ART within 6 months. Children thus usually start care late, after developing advanced-stage disease, including advanced WHO stage-defining illnesses, leading to lower survival [6]. Early initiation of ART usually improves the clinical outcomes of children. Conversely, in areas where ART coverage is only about 54% [19, 25], late detection and initiation of treatment deteriorate the nutritional status of children, causing early mortality and immunological failure.
Despite its strengths, the findings of this study should, as with any study, be seen in the light of some limitations. Because the number of children on ART was small, the sample size was somewhat low, which might affect the power of the study. In addition, owing to the secondary nature of the data, some factors such as iron status and other nutritional indices were unavailable.
Conclusion and recommendations
Low nutritional status at the start of ART, evidenced by underweight and anemia, was found to predict survival among HIV-positive children. There should be improved, sustained and focused nutritional screening, care and treatment for children on ART follow-up. In addition, strong adherence support for chronic HIV care should be emphasized, as it is a significant input for improved nutritional status and, ultimately, improved survival on treatment. Health extension workers and HIV support groups, in collaboration with ART care providers, should educate and encourage caregivers to enhance nutritional support and care for children in addition to adherence to HAART.
A/CHR: Adjusted/Crude hazard ratio
ART: Anti-retroviral treatment
HIV/AIDS: Human immunodeficiency virus/Acquired immune deficiency syndrome
PLWHA: People living with HIV/AIDS
UNAIDS, Global nutrition report multi-sectoral nutrition strategy technical brief: nutrition, food security and HIV. 2015.
PINSTRUP-ANDERSEN P. (Ed.). The African Food System and Its Interactions with Human Health and Nutrition. Cornell University Press; 2010. Retrieved from https://www.jstor.org/stable/10.7591/j.ctt7zd0x.
Weiser S, Kimberly A, Eirikka K, Viviane D, Aranka A, et al. The association between food insecurity and mortality among HIV-infected individuals on HAART. J Acquir Immune Defic Syndr. 2009;52(3):342–9.
Rajshree Thapa AA, Bam K, Newman MS. Nutritional status and its association with quality of life among people living with HIV attending public anti-retroviral therapy sites of KathmanduValley, Nepal. AIDS Res Ther. 2014;12:14.
Kalofonos IA. "All I eat is ARVs": the paradox of AIDS treatment interventions in central Mozambique. Med Anthropol Q. 2010;24(3):363–80.
Jesson J, Leroy V. Challenges of malnutrition care among HIV-infected children on antiretroviral treatment in Africa. Med Mal Infect. 2015;45(5):149–56.
Muenchhoff M, Healy M, Singh R, Roider J, Groll A, et al. Malnutrition in HIV-infected children is an indicator of severe disease with an impaired response to antiretroviral therapy. AIDS Res Hum Retroviruses. 2018;34(1):46–53.
Colecraft E. HIV/AIDS: nutritional implications and impact on human development. Proc Nutr Soc. 2008;67(01):109–13.
Chesney MA. The elusive gold standard: future perspectives for HIV adherence assessment and intervention. J Acquir Immune Defic Syndr. 2006;43:S149–S55.
Negassie B, Alemayehu. Effect of nutritional factors on adherence to antiretroviral therapy among HIV infected adults: a case control study in Northern Ethiopia. BMC Infect Dis. 2013;13:233.
Semba RD, Darnton-Hill I, de Pee S. Addressing tuberculosis in the context of malnutrition and HIV coinfection. Food Nutr Bull. 2010;31(Supplement 4):345S–64S.
Talam NC, Rotich J, Kimaiyo S. Factors affecting antiretroviral drug adherence among HIV/AIDS adult patients attending HIV/AIDS clinic at Moi Teaching and Referral Hospital, Eldoret, Kenya. East Afr J Public Health. 2008;5(2):74–8.
Paton N, Earnest A, Bellamy R. The impact of malnutrition on survival and the CD4 count response in HIV-infected patients starting antiretroviral therapy. HIV Med. 2006;7(5):2–6.
World Health Organization (WHO). Nutrient requirements for people living with HIV/AIDS: report of a technical consultation 13–15 May 2003. Nutrition, Editor: Geneva. 2003;3:12–35.
Megabiaw B, Wassie B, Rogers NL. Malnutrition among HIV positive children at two referral hospitals in Northwest Ethiopia. Official Journal of College of Medicine and Health Sciences, UOG: University of Gondar; 2011.
Gebremedhin A, Gebremariam S, Haile F, Weldearegawi B, Decotelli C. Predictors of mortality among HIV infected children on anti-retroviral therapy in Mekelle Hospital, Northern Ethiopia: a retrospective cohort study. BMC Public Health. 2013;13(1047):1–4.
Koye DN, Ayele TA, Zeleke BM. Predictors of mortality among children on Antiretroviral Therapy at a referral hospital, Northwest Ethiopia: A retrospective follow up study. BMC Pediatr. 2012;12(161):2–5.
Hussen S, Belachew T, Hussien N. Nutritional status and its effect on treatment outcome among HIV infected clients receiving HAART in Ethiopia: a cohort study. AIDS Res Ther. 2016;13(32):1–5.
Assefa Y, Kiflie A, Tesfaye D, Mariam DH, Kloos H, et al. Outcomes of antiretroviral treatment program in Ethiopia: retention of patients in care is a major challenge and varies across health facilities. BMC Health Serv Res. 2011;11(81):1–6.
Akbari M, Fararouei M, Haghdoost AA, Gouya MM, Kazerooni PA. Survival and associated factors among people living with HIV/AIDS: A 30-year national survey in Iran. J Res Med Sci. 2019;24(5):2–6.
Taye B, Shiferaw S, Enquselassie F. The impact of malnutrition in survival of HIV infected children after initiation of antiretroviral treatment (ART). Ethiop Med J. 2010;48(1):1–10.
Tesfamariam K, Baraki N, Kedir H. Pre-ART nutritional status and its association with mortality in adult patients enrolled on ART at Fiche Hospital in North Shoa, Oromia region, Ethiopia: a retrospective cohort study. BMC Res Notes. 2016;9(1):512–5.
Poda GG, Hsu C-Y, Chao JC-J. Malnutrition is associated with HIV infection in children less than 5 years in Bobo-Dioulasso City, Burkina Faso A case–control study. Medicine. 2017;96(21):1–5.
Uthman OA. Prevalence and pattern of HIV-related malnutrition among women in sub-Saharan Africa: a meta-analysis of demographic health surveys. BMC Public Health. 2008;8(226):1–7.
Assefa Y, Gilks CF, Lynen L, Williams O, Hill PS, et al. Performance of the antiretroviral treatment program in Ethiopia, 2005–2015: strengths and weaknesses toward ending AIDS. Int J Infect Dis. 2017:60(1):70–76.
We thank the zonal health office, Wolkite University and the data collectors for their valuable contributions to the accomplishment of this study.
We thank the Wolkite University Research and Community Service Directorate for providing financial and administrative support.
Department of Public Health, College of Health Sciences and Medicine, Wolkite University, Wolkite, Ethiopia
Abdu Oumer & Mina Edo Kubsa
Department of Nutrition and Dietetics, School of Public Health, Bahir Dar University, Bahir Dar, Ethiopia
Berhanu Abebaw Mekonnen
Abdu Oumer
Mina Edo Kubsa
MEK, AO and BAM participated in all stages, from initial inception, proposal writing, preparation of the data collection tool, pretesting, data collection, data analysis and write-up of results, through to preparation and submission of the manuscript. All authors read and approved the final manuscript.
Correspondence to Mina Edo Kubsa.
The research was reviewed and ethically approved by the Wolkite University independent research ethical review committee. Written informed consent, addressing voluntary participation and the confidentiality of information, was obtained on a separate form before participation. All applicable ethical safeguards were followed throughout the conduct of the research project.
Oumer, A., Kubsa, M.E. & Mekonnen, B.A. Malnutrition as predictor of survival from anti-retroviral treatment among children living with HIV/AIDS in Southwest Ethiopia: survival analysis. BMC Pediatr 19, 474 (2019). https://doi.org/10.1186/s12887-019-1823-x
Under nutrition
GATA: a graphic alignment tool for comparative sequence analysis
David A Nix, Michael B Eisen
BMC Bioinformatics, 2005, DOI: 10.1186/1471-2105-6-9
Abstract: To address some of these issues, we created a stand-alone, platform-independent, graphic alignment tool for comparative sequence analysis (GATA, http://gata.sourceforge.net/). GATA uses the NCBI-BLASTN program and extensive post-processing to identify all small sub-alignments above a low cut-off score. These are graphed as two shaded boxes, one for each sequence, connected by a line using the coordinate system of their parent sequence. Shading and colour are used to indicate score and orientation. A variety of options exist for querying, modifying and retrieving conserved sequence elements. Extensive gene annotation can be added to both sequences using a standardized General Feature Format (GFF) file. GATA uses the NCBI-BLASTN program in conjunction with post-processing to exhaustively align two DNA sequences. It provides researchers with a fine-grained alignment and visualization tool aptly suited for non-coding, 0–200 kb, pairwise sequence analysis. It functions independently of sequence feature ordering or orientation, and readily visualizes both large and small sequence inversions, duplications, and segment shuffling. Since the alignment is visual and does not contain gaps, gene annotation can be added to both sequences to create a thoroughly descriptive picture of DNA conservation that is well suited for comparative sequence analysis. The most widely used methods for aligning DNA sequences rely on dynamic programming algorithms initially developed by Smith-Waterman and Needleman-Wunsch [1,2]. These algorithms create the mathematically best possible alignment of two sequences by inserting gaps in either sequence to maximize the score of base pair matches and minimize penalties for base pair mismatches and sequence gaps. Although these methods have proven invaluable in understanding sequence conservation and gene relatedness, they make several assumptions. One of their assumptions in generating the "best" alignment is that sequence features are collinear. For
Empirical methods for controlling false positives and estimating confidence in ChIP-Seq peaks
David A Nix, Samir J Courdy, Kenneth M Boucher
BMC Bioinformatics, 2008, DOI: 10.1186/1471-2105-9-523
Abstract: Here we present a package of algorithms and software that makes use of control input data to reduce false positives and estimate confidence in ChIP-Seq peaks. Several different methods were compared using two simulated spike-in datasets. Use of control input data and a normalized difference score were found to more than double the recovery of ChIP-Seq peaks at a 5% false discovery rate (FDR). Moreover, both a binomial p-value/q-value and an empirical FDR were found to predict the true FDR within 2–3 fold and are more reliable estimators of confidence than a global Poisson p-value. These methods were then used to reanalyze Johnson et al.'s neuron-restrictive silencer factor (NRSF) ChIP-Seq data without relying on extensive qPCR validated NRSF sites and the presence of NRSF binding motifs for setting thresholds. The methods developed and tested here show considerable promise for reducing false positives and estimating confidence in ChIP-Seq data without any prior knowledge of the chIP target. They are part of a larger open source package freely available from http://useq.sourceforge.net/. Chromatin immunoprecipitation (chIP) is a well-characterized technique for enriching regions of DNA that are marked with a modification (e.g. methylation), display a particular structure (e.g. DNase hypersensitivity), or are bound by a protein (e.g. transcription factor, polymerase, modified histone), in vivo, across an entire genome [1]. Chromatin is typically prepared by fixing live cells with a DNA-protein cross-linker, lysing the cells, and randomly fragmenting the DNA. An antibody that selectively binds the target of interest is then used to immunoprecipitate the target and any associated nucleic acid. The cross-linker is then reversed and DNA fragments of approximately 200–500 bp in size are isolated. The final chIP DNA sample contains primarily background input DNA plus a small amount (<1%) of additional immunoprecipitated target DNA. Several methods have been used to ide
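As a rough illustration of two ideas from this abstract — a per-window binomial p-value computed against the control input, and an empirical FDR obtained by swapping the ChIP and control labels — here is a schematic sketch. It is not the USeq implementation; the function names and the windowed read counts it presupposes are our own assumptions.

```python
# Schematic sketch (not the USeq code) of a binomial window p-value and an
# empirical FDR based on swapping the roles of the ChIP and control libraries.
from scipy.stats import binom

def window_pvalue(chip_reads, input_reads, chip_total, input_total):
    """One-sided binomial p-value that a window is enriched for ChIP reads.

    Each read in the window is a trial; under the null, the chance it came
    from the ChIP library equals the ChIP share of total sequenced reads."""
    n = chip_reads + input_reads
    p = chip_total / (chip_total + input_total)
    return binom.sf(chip_reads - 1, n, p)        # P(X >= chip_reads)

def empirical_fdr(real_scores, swapped_scores, threshold):
    """Empirical FDR at a score threshold: windows passing it in the swapped
    (control treated as IP) comparison divided by windows passing it in the
    real ChIP-vs-control comparison."""
    false_calls = sum(s >= threshold for s in swapped_scores)
    real_calls = sum(s >= threshold for s in real_scores)
    return false_calls / real_calls if real_calls else 0.0
```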
Flexible promoter architecture requirements for coactivator recruitment
Derek Y Chiang, David A Nix, Ryan K Shultzaberger, Audrey P Gasch, Michael B Eisen
BMC Molecular Biology, 2006, DOI: 10.1186/1471-2199-7-16
Abstract: A Cbf1 binding site was required upstream of a Met31/32 binding site for full reporter gene expression. Distance constraints on coactivator recruitment were more flexible than those for cooperatively binding transcription factors. Distances from 18 to 50 bp between binding sites support efficient recruitment of Met4, with only slight modulation by helical phasing. Intriguingly, we found that certain sequences located between the binding sites abolished gene expression.These results yield insight to the influence of both binding site architecture and local DNA flexibility on gene expression, and can be used to refine computational predictions of gene expression from promoter sequences. In addition, our approach can be applied to survey promoter architecture requirements for arbitrary combinations of transcription factor binding sites.In most eukaryotes, the sequences that regulate transcription integrate multiple signals, through the binding of different transcription factors, to modulate levels of gene expression. When bound to DNA, transcription factors anchor the assembly of multiprotein complexes that influence the recruitment of RNA polymerase. Efficient assembly depends on optimally spaced protein-protein interactions among transcription factors and auxiliary proteins [1-4]. Since transcription factors recognize specific sites on DNA, the distance between these binding sites can influence how transcription factors interact with each other and other proteins. For example, overlapping sites may prevent two transcription factors from binding simultaneously, while sites too distant from each other may hinder bound transcription factors from recruiting necessary cofactors. Furthermore, some distantly spaced sites can only properly interact when the DNA between them is looped, a process influenced by the composition of the looped DNA.Computational approaches take into account the multifactorial nature of transcriptional regulation when discovering transcription facto
ENCODE Tiling Array Analysis Identifies Differentially Expressed Annotated and Novel 5′ Capped RNAs in Hepatitis C Infected Liver
Milan E. Folkers, Don A. Delker, Christopher I. Maxwell, Cassie A. Nelson, Jason J. Schwartz, David A. Nix, Curt H. Hagedorn
Abstract: Microarray studies of chronic hepatitis C infection have provided valuable information regarding the host response to viral infection. However, recent studies of the human transcriptome indicate pervasive transcription in previously unannotated regions of the genome and that many RNA transcripts have short or lack 3′ poly(A) ends. We hypothesized that using ENCODE tiling arrays (1% of the genome) in combination with affinity purifying Pol II RNAs by their unique 5′ m7GpppN cap would identify previously undescribed annotated and unannotated genes that are differentially expressed in liver during hepatitis C virus (HCV) infection. Both 5′-capped and poly(A)+ populations of RNA were analyzed using ENCODE tiling arrays. Sixty-four annotated genes were significantly increased in HCV cirrhotic as compared to control liver; twenty-seven (42%) of these genes were identified only by analyzing 5′ capped RNA. Thirty-one annotated genes were significantly decreased; sixteen (50%) of these were identified only by analyzing 5′ capped RNA. Bioinformatic analysis showed that capped RNA produced more consistent results, provided a more extensive expression profile of intronic regions and identified upregulated Pol II transcriptionally active regions in unannotated areas of the genome in HCV cirrhotic liver. Two of these regions were verified by PCR and RACE analysis. qPCR analysis of liver biopsy specimens demonstrated that these unannotated transcripts, as well as IRF1, TRIM22 and MET, were also upregulated in hepatitis C with mild inflammation and no fibrosis. The analysis of 5′ capped RNA in combination with ENCODE tiling arrays provides additional gene expression information and identifies novel upregulated Pol II transcripts not previously described in HCV infected liver. This approach, particularly when combined with new RNA sequencing technologies, should also be useful in further defining Pol II transcripts differentially regulated in specific disease states and in studying RNAs regulated by changes in pre-mRNA splicing or 3′ polyadenylation status.
Large-Scale Turnover of Functional Transcription Factor Binding Sites in Drosophila
Alan M Moses, Daniel A Pollard, David A Nix, Venky N Iyer, Xiao-Yong Li, Mark D Biggin, Michael B Eisen
PLOS Computational Biology , 2006, DOI: 10.1371/journal.pcbi.0020130
Abstract: The gain and loss of functional transcription factor binding sites has been proposed as a major source of evolutionary change in cis-regulatory DNA and gene expression. We have developed an evolutionary model to study binding-site turnover that uses multiple sequence alignments to assess the evolutionary constraint on individual binding sites, and to map gain and loss events along a phylogenetic tree. We apply this model to study the evolutionary dynamics of binding sites of the Drosophila melanogaster transcription factor Zeste, using genome-wide in vivo (ChIP–chip) binding data to identify functional Zeste binding sites, and the genome sequences of D. melanogaster, D. simulans, D. erecta, and D. yakuba to study their evolution. We estimate that more than 5% of functional Zeste binding sites in D. melanogaster were gained along the D. melanogaster lineage or lost along one of the other lineages. We find that Zeste-bound regions have a reduced rate of binding-site loss and an increased rate of binding-site gain relative to flanking sequences. Finally, we show that binding-site gains and losses are asymmetrically distributed with respect to D. melanogaster, consistent with lineage-specific acquisition and loss of Zeste-responsive regulatory elements.
Next generation tools for genomic data generation, distribution, and visualization
David A Nix, Tonya L Di Sera, Brian K Dalley, Brett A Milash, Robert M Cundick, Kevin S Quinn, Samir J Courdy
BMC Bioinformatics , 2010, DOI: 10.1186/1471-2105-11-455
Abstract: Here we present three open-source, platform independent, software tools for generating, analyzing, distributing, and visualizing genomic data. These include a next generation sequencing/microarray LIMS and analysis project center (GNomEx); an application for annotating and programmatically distributing genomic data using the community vetted DAS/2 data exchange protocol (GenoPub); and a standalone Java Swing application (GWrap) that makes cutting edge command line analysis tools available to those who prefer graphical user interfaces. Both GNomEx and GenoPub use the rich client Flex/Flash web browser interface to interact with Java classes and a relational database on a remote server. Both employ a public-private user-group security model enabling controlled distribution of patient and unpublished data alongside public resources. As such, they function as genomic data repositories that can be accessed manually or programmatically through DAS/2-enabled client applications such as the Integrated Genome Browser. These tools have gained wide use in our core facilities, research laboratories and clinics and are freely available for non-profit use. See http://sourceforge.net/projects/gnomex/, http://sourceforge.net/projects/genoviz/, and http://sourceforge.net/projects/useq. The post-genomic era holds many promises for addressing fundamental questions regarding biology and improving patient outcome through personalized medicine. It also presents several unique challenges that need to be addressed to maximize the effectiveness of using genomic data in the laboratory and clinic. One key issue is the exponential growth in the number, size, and complexity of datasets generated from genomic experiments. The bottleneck is less the cost and difficulty of generating the data but, more so, efficiently managing, analyzing, and distributing it. Here, we present three, open source, platform independent, software tools that we have developed to address each of th
Effects of Syngas Particulate Fly Ash Deposition on the Mechanical Properties of Thermal Barrier Coatings on Simulated Film-Cooled Turbine Vane Components [PDF]
Kevin Luo, Andrew C. Nix, Bruce S. Kang, Dumbi A. Otunyo
International Journal of Clean Coal and Energy (IJCCE) , 2014, DOI: 10.4236/ijcce.2014.34006
Abstract: Research is being conducted to study the effects of particulate deposition from contaminants in coal synthesis gas (syngas) on the mechanical properties of thermal barrier coatings (TBC) employed on integrated gasification combined cycle (IGCC) turbine hot section airfoils. West Virginia University (WVU) had been working with US Department of Energy, National Energy Technology Laboratory (NETL) to simulate deposition on the pressure side of an IGCC turbine first stage vane. To model the deposition, coal fly ash was injected into the flow of a combustor facility and deposited onto TBC coated, angled film-cooled test articles in a high pressure (approximately 4 atm) and a high temperature (1560 K) environment. To investigate the interaction between the deposition and the TBC, a load-based multiple-partial unloading micro-indentation technique was used to quantitatively evaluate the mechanical properties of materials. The indentation results showed the Young's Modulus of the ceramic top coat was higher in areas with deposition formation due to the penetration of the fly ash. This corresponds with the reduction of strain tolerance of the 7% yttria-stabilized zirconia (7YSZ) coatings.
First search for $K_{L}\to\pi^{0}\pi^{0}\nu\bar{\nu}$
J. Nix, for the E391a Collaboration
Physics , 2007, DOI: 10.1103/PhysRevD.76.011101
Abstract: The first search for the rare kaon decay $K_{L}\to\pi^{0}\pi^{0}\nu\bar{\nu}$ has been performed by the E391a collaboration at the KEK 12-GeV proton synchrotron. An upper limit of $4.7\times10^{-5}$ at the 90% confidence level was set for the branching ratio of the decay $K_{L}\to\pi^{0}\pi^{0}\nu\bar{\nu}$ using about 10% of the data collected during the first period of data taking. First limits for the decay mode $K_{L}\to\pi^{0}\pi^{0}P$, where $P$ is a pseudoscalar particle, were also set.
ARQ-Aware Scheduling and Link Adaptation for Video Transmission over Mobile Broadband Networks
Victoria Sgardoni, David R. Bull, Andrew R. Nix
Journal of Computer Networks and Communications , 2012, DOI: 10.1155/2012/369803
Abstract: This paper studies the effect of ARQ retransmissions on packet error rate, delay, and jitter at the application layer for a real-time video transmission at 1.03 Mbps over a mobile broadband network. The effect of time-correlated channel errors for various Mobile Station (MS) velocities is evaluated. In the context of mobile WiMAX, the role of the ARQ Retry Timeout parameter and the maximum number of ARQ retransmissions is taken into account. ARQ-aware and channel-aware scheduling is assumed in order to allocate adequate resources according to the level of packet error rate and the number of ARQ retransmissions required. A novel metric, namely, goodput per frame, is proposed as a measure of transmission efficiency. Results show that to attain quasi error free transmission and low jitter (for real-time video QoS), only QPSK 1/2 can be used at mean channel SNR values between 12 dB and 16 dB, while 16QAM 1/2 can be used below 20 dB at walking speeds. However, these modes are shown to result in low transmission efficiency, attaining, for example, a total goodput of 3 Mbps at an SNR of 14 dB, for a block lifetime of 90 ms. It is shown that ARQ retransmissions are more effective at higher MS speeds. 1. Introduction Mobile WiMAX (IEEE 802.16e) [1] and 3GPP LTE (Long-Term Evolution) [2] represent mobile broadband standards that offer high user data rates and support for bandwidth hungry video applications. Both standards use very similar PHY and MAC layer techniques, especially for downlink (DL) transmission. In order to provide strong QoS, cross-layer adaptive strategies must be implemented in the wireless network [3, 4]. Video applications demand a low Packet Error Rate (PER), which may be achieved via the use of MAC layer Automatic Repeat ReQuest (ARQ) and the choice of suitable Modulation and Coding Schemes (MCS). However, ARQ consumes additional bandwidth and causes increased end-to-end latency and jitter. ARQ is controlled in the MAC layer by the block lifetime and ARQ Retry Timer parameters, which define how many and how frequently retransmissions may occur. Link adaptation is used in mobile broadband networks to improve the PER by matching the QAM constellation and forward error correction coding rate to the time varying channel quality. The impact of specific ARQ parameters and mechanisms has been extensively studied in the literature, for example, [5–9]. In [8], the authors analyze delay and throughput using probabilistic PHY layer error modelling. In [9], packet errors were modelled as an uncorrelated process in time. Often packet errors are modelled
Distortion-Based Link Adaptation for Wireless Video Transmission
Pierre Ferré,James Chung-How,David Bull,Andrew Nix
EURASIP Journal on Advances in Signal Processing , 2008, DOI: 10.1155/2008/253706
Abstract: Wireless local area networks (WLANs) such as IEEE 802.11a/g utilise numerous transmission modes, each providing different throughputs and reliability levels. Most link adaptation algorithms proposed in the literature (i) maximise the error-free data throughput, (ii) do not take into account the content of the data stream, and (iii) rely strongly on the use of ARQ. Low-latency applications, such as real-time video transmission, do not permit large numbers of retransmissions. In this paper, a novel link adaptation scheme is presented that improves the quality of service (QoS) for video transmission. Rather than maximising the error-free throughput, our scheme minimises the video distortion of the received sequence. With the use of simple and local rate distortion measures and end-to-end distortion models at the video encoder, the proposed scheme estimates the received video distortion at the current transmission rate, as well as on the adjacent lower and higher rates. This allows the system to select the link-speed which offers the lowest distortion and to adapt to the channel conditions. Simulation results are presented using the MPEG-4/AVC H.264 video compression standard over IEEE 802.11g. The results show that the proposed system closely follows the optimum theoretic solution.
Derivation of Nyquist Frequency and Sampling Theorem [closed]
I have been looking through different sites and questions over the internet about sampling theory, but couldn't find a clear explanation of how the Nyquist frequency condition is derived. It would be great if someone could direct me to a derivation of the condition that the sampling frequency should be at least twice the maximum frequency in the signal.
discrete-signals fourier-transform sampling nyquist theory
Royi
Kamran Abbasov
The basic Wikipedia article en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem is fairly well written and shows multiple ways to derive it. If you need further help please indicate which parts of the standard derivations are problematic for you – Hilmar Oct 11 '19 at 0:54
I added a derivation from a different point of view (two sides of the same coin, of course) in comparison to Wikipedia. – Royi Oct 13 '19 at 0:14
Approaching The Sampling Theorem as Inner Product Space
There are many ways to derive the Nyquist-Shannon Sampling Theorem with the constraint that the sampling frequency be at least 2 times the highest frequency of the signal.
The classic derivation uses the summation of the sampled series with the Poisson Summation Formula.
Let's introduce a different approach which is closer in spirit to functional analysis - building an orthonormal basis and using projections for analysis and synthesis.
Forming Orthonormal Basis
In this section we'll define an orthonormal basis and derive the decomposition and composition process.
First, one should define the space of Band Limited Functions. This space is defined by:
$$ \mathcal{B}_{ {W}_{s} } = \left\{ f \left( x \right) \mid F \left( w \right) = \mathcal{F} \left\{ f \left( x \right) \right\}, \; F \left( w \right) = 0, \; \forall \, \left| w \right| > {W}_{s} \right\} $$
In words, it means that for each function $ f \left( x \right) \in \mathcal{B}_{ {W}_{s} }$, its Fourier transform $ F \left( w \right) = \mathcal{F} \left\{ f \left( x \right) \right\} $ vanishes for frequencies $ \left| w \right| > {W}_{s} $.
The inner product in this space is given by:
$$ \langle f \left( x \right), g \left( x \right) \rangle = \frac{1}{T} \int_{- \infty}^{\infty} f \left( x \right) g \left( x \right) dx, \; T = \frac{2 \pi}{ {W}_{s} } $$
One could easily show that this is indeed an inner product space with a valid inner product.
The main claim is that the orthonormal basis of this space is given by:
$$ {g}_{n} \left( x \right) = \operatorname{sinc} \left( \frac{ x - n T }{T} \right) $$
Where $ \operatorname{sinc} \left( x \right) $ is the normalized sinc function given by $ \operatorname{sinc} \left( x \right) = \frac{ \sin \left( \pi x \right) }{ \pi x } $.
The basis functions are parameterized by the parameter $ n $. Basically, the basis consists of shifted (and scaled) copies of the sinc function.
Proof of The Orthonormal Property
One must show the orthonormal property of the basis under the defined inner product:
$$ \begin{aligned} \langle f \left( x \right), g \left( x \right) \rangle & = \frac{1}{T} \int_{- \infty}^{\infty} f \left( x \right) g \left( x \right) dx = \frac{1}{T} \int_{- \infty}^{\infty} \operatorname{sinc} \left( \frac{ x - n T }{T} \right) \operatorname{sinc} \left( \frac{ x - m T }{T} \right) dx && \text{} \\ & \overset{1}{=} \frac{1}{T} \int_{- \infty}^{\infty} \left( \frac{1}{2 \pi} \int_{- \infty}^{\infty} T \Pi \left( \frac{ w }{ {W}_{s} } \right) {e}^{-j w n T} {e}^{j w t} dw \right) \operatorname{sinc} \left( \frac{ x - m T }{T} \right) dx && \text{} \\ & \overset{2}{=} \frac{1}{T} \int_{-\infty}^{\infty} \frac{1}{2 \pi} T \Pi \left( \frac{w}{ {W}_{s} } \right) {e}^{-j w n T} \left( \int_{- \infty}^{\infty} {e}^{j w x} \operatorname{sinc} \left( \frac{x - m T}{T} \right) dx \right) dw && \text{} \\ & \overset{3}{=} \frac{1}{T} \int_{-\infty}^{\infty} \frac{1}{2 \pi} T \Pi \left( \frac{w}{ {W}_{s} } \right) {e}^{-j w n T} T \Pi \left( \frac{-w}{ {W}_{s} } \right) {e}^{j w m T} dw && \text{} \\ & \overset{4}{=} \frac{T}{2 \pi} \int_{ - \frac{ {W}_{s} }{2} }^{ \frac{ {W}_{s} }{2} } {e}^{j w \left( m - n \right) T} dw && \text{} \\ & \overset{5}{=} \begin{cases} 1 & \text{ if } m = n \\ 0 & \text{ if } m \neq n \end{cases} \end{aligned} $$
1. Since $ \mathcal{F} \left\{ \operatorname{sinc} \left( \frac{ x - n T }{T} \right) \right\} = T \Pi \left( \frac{ w }{ {W}_{s} } \right) {e}^{-j w n T} $.
2. Changing order of integration for converging integrals.
3. Applying $ \mathcal{F} \left\{ \operatorname{sinc} \left( \frac{ x - m T }{T} \right) \right\} \left( - w \right) $.
4. Integration boundaries according to the Rect function (Multiplication).
5. Integration over a cycle or over a constant.
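A quick numerical sanity check of this orthonormality (not part of the derivation above; it uses numpy's normalized sinc, and the integration grid and range are arbitrary truncations of the improper integral):

```python
import numpy as np

T = 1.0                                   # sampling interval (arbitrary choice)
dx = 0.01
x = np.arange(-200.0, 200.0, dx)          # wide, fine grid to approximate the improper integral

def g(n):
    # basis function g_n(x) = sinc((x - nT)/T); np.sinc is the normalized sinc sin(pi x)/(pi x)
    return np.sinc((x - n * T) / T)

def inner(f1, f2):
    # inner product <f1, f2> = (1/T) * integral f1(x) f2(x) dx, approximated by a Riemann sum
    return np.sum(f1 * f2) * dx / T

for n, m in [(0, 0), (0, 1), (2, 2), (1, 3)]:
    print(f"<g_{n}, g_{m}> ~= {inner(g(n), g(m)):.3f} (expected {1.0 if n == m else 0.0})")
```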
Since we proved that the suggested basis is indeed an orthonormal basis of the space, the following holds:
$$ \forall f \left( x \right) \in \mathcal{B}_{{W}_{s}} , \; f \left( x \right) = \sum_{n = -\infty}^{\infty} \langle f \left( x \right), {g}_{n} \left( x \right) \rangle {g}_{n} \left( x \right) $$
Where $ {g}_{n} \left( x \right) = \operatorname{sinc} \left( \frac{ x - n T }{T} \right) $ and $ \langle f \left( x \right), {g}_{n} \left( x \right) \rangle $ is the projection of $ f \left( x \right) $ on $ {g}_{n} \left( x \right) $.
Projection Process
As written above, using the projections of a function in the space onto the basis, one can reconstruct it as:
$$ f \left( x \right) = \sum_{n = -\infty}^{\infty} \langle f \left( x \right), {g}_{n} \left( x \right) \rangle {g}_{n} \left( x \right) $$
The question is, what is the projection of a general function in this space? Well, it turns out it can be shown in a closed form way:
$$ \begin{aligned} \langle f \left( x \right), {g}_{n} \left( x \right) \rangle & = \frac{1}{T} \int_{- \infty}^{\infty} f \left( x \right) {g}_{n} \left( x \right) dx = \frac{1}{T} \int_{- \infty}^{\infty} f \left( x \right) \operatorname{sinc} \left( \frac{ x - n T }{T} \right) dx && \text{} \\ & \overset{1}{=} \frac{1}{T} \int_{- \infty}^{\infty} \left( \frac{1}{2 \pi} \int_{-\infty}^{\infty} F \left( w \right) {e}^{j w x} dw \right) \operatorname{sinc} \left( \frac{ x - n T }{T} \right) dx && \text{} \\ & \overset{2}{=} \frac{1}{2 \pi T} \int_{- \infty}^{\infty} \left( \int_{- \infty}^{\infty} \operatorname{sinc} \left( \frac{ x - n T }{T} \right) {e}^{j w x} dx \right) F \left( w \right) dw && \text{} \\ & \overset{3}{=} \frac{1}{2 \pi T} \int_{- \infty}^{\infty} T \Pi \left( \frac{-w}{ {W}_{s} } \right) {e}^{j w n T} {F} \left( w \right) dw && \text{} \\ & \overset{4}{=} \frac{1}{2 \pi} \int_{ - \frac{ {W}_{s} }{2} }^{ \frac{ {W}_{s} }{2} } F \left( w \right) {e}^{j w n T} dw && \text{} \\ & \overset{5}{=} f \left( n T \right) \end{aligned} $$
1. Since $ \mathcal{F} \left\{ f \left( x \right) \right\} = F \left( w \right) $.
2. Changing order of integration for converging integrals.
3. Applying $ \mathcal{F} \left\{ \operatorname{sinc} \left( \frac{ x - n T }{T} \right) \right\} \left( - w \right) $.
4. Integration boundaries according to the Rect function.
5. Applying the Inverse Fourier Transform at $ x = n T $.
Wrapping it up yields:
$$ f \left( x \right) = \sum_{n = -\infty}^{\infty} \langle f \left( x \right), {g}_{n} \left( x \right) \rangle {g}_{n} \left( x \right) = \sum_{n = - \infty}^{\infty} f \left( n T \right) \operatorname{sinc} \left( \frac{ x - n T }{T} \right) $$
Which is known as the Whittaker Shannon Interpolation Formula.
In the process above, the analysis and synthesis of Band Limited Functions is shown using an orthonormal basis. If one sets $ T $ to be the Sampling Interval, usually denoted by $ {T}_{s} $, then the Sampling Frequency is given by $ {F}_{s} = \frac{1}{ {T}_{s} } $.
Now, since the function in the frequency domain stretches over the range $ \left[ -\frac{{F}_{s}}{2}, \frac{{F}_{s}}{2} \right] $, we indeed have the known relation that the sampling frequency has to be at least twice the one-sided support of the function in frequency.
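A small numerical illustration of the interpolation formula (the test signal and the sampling rate are arbitrary choices, and the infinite sum over samples is truncated):

```python
import numpy as np

Fs = 10.0                      # sampling frequency (arbitrary), T = 1/Fs
T = 1.0 / Fs

def f(x):
    # test signal band-limited to 3 Hz < Fs/2 = 5 Hz
    return np.sin(2 * np.pi * 1.0 * x) + 0.5 * np.cos(2 * np.pi * 3.0 * x)

n = np.arange(-500, 501)       # truncation of the (infinite) sum over samples
samples = f(n * T)

def reconstruct(x):
    # f_hat(x) = sum_n f(nT) sinc((x - nT)/T)
    return np.sum(samples * np.sinc((x[:, None] - n[None, :] * T) / T), axis=1)

x = np.linspace(-1.0, 1.0, 200)
print("max reconstruction error on [-1, 1]:", np.max(np.abs(reconstruct(x) - f(x))))
```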
Royi
It really boils down to aliasing. In continuous-time, if you have any two signals $x_1(t) = \sin(2 \pi F_1 t)$ and $x_2(t) = \sin(2 \pi F_2 t)$, then as long as $F_1$ and $F_2$ are distinct, the signals are, too.
But consider sampling at some time interval $T_s$, so that the sampled signals are $x_1(k) = \sin(2 \pi F_1 T_s k)$ and $x_2(k) = \sin(2 \pi F_2 T_s k)$. If you have a condition where $2 \pi F_1 T_s = 2 \pi F_2 T_s \pm 2\pi n$ for some integer $n$, then the signals are clearly not distinguishable, by the simple trigonometric identity $\sin \theta = \sin (\theta + 2\pi n)$. Do the math, and you find that two frequencies that differ by a multiple of the sampling rate cannot be told apart once sampled.
It's worse, though, because you also have the trigonometric identity $\sin \theta = -\sin(-\theta)$, so that $\sin(2 \pi (\tfrac{1}{T_s} - F_1) T_s k) = \sin(2 \pi k - 2 \pi F_1 T_s k) = -\sin(2 \pi F_1 T_s k)$. When you consider that the only thing that allows you to distinguish one signal from another is the frequency, that means that a frequency mirrored about $\frac{1}{2 T_s}$ (i.e., $F_2 = \frac{1}{T_s} - F_1$) cannot be distinguished from $F_1$ after sampling.
This phenomenon of two distinct signals appearing as one after sampling is called aliasing. It's pretty much the basis of the Shannon-Nyquist sampling theorem, and the Nyquist rate. Because if you're sampling at $\frac{1}{T_s}$ and trying to distinguish frequency content above $\frac{1}{2T_s}$, you just can't.
Hence, the Nyquist rate.
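A quick numerical illustration (arbitrary example frequencies): two sinusoids whose frequencies differ by the sampling rate, or mirror each other about half the sampling rate, give identical samples (up to sign):

```python
import numpy as np

Fs = 100.0                       # sampling rate, Ts = 1/Fs
Ts = 1.0 / Fs
k = np.arange(64)                # sample indices

F1 = 7.0
x1 = np.sin(2 * np.pi * F1 * Ts * k)

F2 = F1 + Fs                     # differs from F1 by exactly the sampling rate
x2 = np.sin(2 * np.pi * F2 * Ts * k)
print("max |x1 - x2| =", np.max(np.abs(x1 - x2)))   # numerically zero: identical samples

F3 = Fs - F1                     # mirrored about Fs/2
x3 = np.sin(2 * np.pi * F3 * Ts * k)
print("max |x1 + x3| =", np.max(np.abs(x1 + x3)))   # numerically zero: same samples up to sign
```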
TimWescott
RL Theory
The work you do
Planning in MDPs
1. Introductions
2. The Fundamental Theorem
3. Value Iteration and Our First Lower Bound
4. Policy Iteration
5. Local Planning - Part I.
6. Local Planning - Part II.
7. Function Approximation
8. Approximate Policy Iteration
9. Limits of query-efficient planning
10. Planning under $q^*$ realizability
11. Planning under $v^*$ realizability (TensorPlan I.)
12. TensorPlan and eluder sequences
13. From API to Politex
14. Politex
15. From policy search to policy gradients
16. Policy gradients
Batch RL
17. Introduction
18. Sample complexity in finite MDPs
19. Scaling with value function approximation
Online RL
Winter 2021 Lecture Notes
Website of the course CMPUT 653: Theoretical Foundations of Reinforcement Learning.
Note: On March 13, these notes were updated as follows:
Tighter bounds are derived; the old analysis was based on bounding \(\| q^*-q^{\pi_k} \|_\infty\); the new analysis directly bounds \(\| v^* - v^{\pi_k} \|_\infty\), which leads to a better dependence on the approximation error;
Unbiased return estimates are introduced that use rollouts of random length.
One simple idea for using function approximation in MDP planning is to take a planning method that internally uses value functions and add a constraint that restricts the value functions to have a compressed representation.
As usual, two questions arise:
Does this lead to an efficient planner? That is, can the computation be carried out in time polynomial in the relevant quantities, but not the size of the state space? In the case of linear function approximation, the question is whether we can calculate the coefficients efficiently.
Does this lead to an effective planner? In particular, how good a policy can we arrive at with a limited compute effort?
In this lecture, as a start into exploring the use of value function approximation in planning, we look at modifying policy iteration in the above described way. The resulting algorithm belongs to the family of approximate policy iteration algorithms, which consists of all algorithms derived from policy iteration by adding approximation to it.
We will work with linear function approximation. In particular, we will assume that the planner is given as a hint a feature-map $\varphi: \mathcal{S}\times \mathcal{A}\to \mathbb{R}^d$. In this setting, since policy iteration hinges upon evaluating the policies obtained, the hint given to the planner is considered to be "good" if the (action-)value functions of all policies are well-represented with the features.
This means that we will work under assumption B2$_\varepsilon$ from the previous lecture, which we copy here for convenience. In what follows we fix $\varepsilon>0$.
Assumption B2$\mbox{}_{\varepsilon}$ (approximate universal value function realizability) The MDP $M$ and the feature-map $\varphi$ are such that for any memoryless policy $\pi$ of the MDP, \(q^\pi \in_{\varepsilon} \mathcal{F}_\varphi\).
Recall that here the notation $q^\pi \in_{\varepsilon} \mathcal{F}_\varphi$ means that $q^\pi$ can be approximated up to a uniform error of $\varepsilon$ using linear combinations of the basis functions underlying the feature-map $\varphi$:
For any policy $\pi$,
\[\begin{align*} \inf_{\theta\in \mathbb{R}^d} \max_{(s,a)} | q^\pi(s,a) - \langle \theta, \varphi(s,a) \rangle | \left(= \inf_{\theta\in \mathbb{R}^d} \| q^\pi - \Phi\theta \|_\infty\right) \le \varepsilon\,. \end{align*}\]
One may question whether it is reasonable to expect that the value functions of all policies can be compressed. We will come back to this question later.
Approximate Policy Evaluation: Done Well
Recall that in phase $k$ of policy iteration, given a policy $\pi_k$, the next policy $\pi_{k+1}$ is obtained as the policy that is greedy with respect to $q^{\pi_k}$. If we found some coefficients $\theta_k\in \mathbb{R}^d$ such that
\[\begin{align*} q^{\pi_k} \approx \Phi \theta_k\,, \end{align*}\]
then when it comes to "using" policy $\pi_{k+1}$, we could just use $\arg\max_{a} \langle \theta_k,\varphi(s,a)\rangle$ when an action is needed at state $s$. Note that this action can be obtained at the cost of $O(d)$ elementary operations, a small overhead compared to a table lookup (with idealized $O(1)$ access times).
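As an illustration (not part of the notes), acting greedily with respect to $\Phi\theta_k$ can be sketched as follows; here `phi` is a stand-in for a feature-map callable:

```python
import numpy as np

def greedy_action(theta, phi, s, num_actions):
    # argmax_a <theta, phi(s, a)>; phi(s, a) returns a length-d feature vector
    return int(np.argmax([np.dot(theta, phi(s, a)) for a in range(num_actions)]))
```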
Hence, the main question is how to obtain this parameter in an efficient manner. To be more precise, here we want to control the uniform error committed in approximating $q^{\pi_k}$.
To simplify the notation, let $\pi = \pi_k$. A simple idea is rolling out with the policy $\pi$ from a fixed set $\mathcal{C}\subset \mathcal{S}\times \mathcal{A}$ to "approximately" measure the value of $\pi$ at the pairs in $\mathcal{C}$. For concreteness, let $(s,a)\in \mathcal{C}$. Rolling out with policy $\pi$ from this pair means using the simulator to simulate what would happen if the initial state is $s$, the first action is $a$, and for subsequent time steps the actions are chosen using policy $\pi$ for whatever states are encountered. Each simulation goes on for $H$ steps; repeating it $m$ times, we get \(m\) trajectories starting at \(z = (s, a)\). For $1\le j \le m$ let the $j$th trajectory obtained be \(\tau_\pi^{(j)}(s, a)\). Thus,
\(\begin{align*} \tau_\pi^{(j)}(s, a) = \left( S_0^{(j)}, A_0^{(j)}, S_1^{(j)}, A_1^{(j)}, \ldots, S_{H-1}^{(j)}, A_{H-1}^{(j)} \right)\, \end{align*}\),
where \(S_0^{(j)}=s\), \(A_0^{(j)}=a\), and for $1\le t \le H-1$, \(S_{t}^{(j)} \sim P_{A_{t-1}^{(j)}} ( S_{t-1}^{(j)} )\), and \(A_t^{(j)} \sim \pi ( \cdot | S_{t}^{(j)} )\).
Given these trajectories, the empirical mean of the discounted sum of rewards along these trajectories is used for approximating $q^\pi(z)$:
\[\begin{align} \hat R_m(z) = \frac{1}{m} \sum_{j=1}^m \sum_{t=0}^{H-1} \gamma^t r_{A_t^{(j)}}(S_t^{(j)}). \label{eq:petargetsbiased} \end{align}\]
Under the usual condition that the rewards are in the $[0,1]$ interval, the expected value of $\hat R_m(z)$ is within $\gamma^H/(1-\gamma)$ of $q^\pi(z)$, and by averaging a large number of independent trajectories we also ensure that the empirical means are tightly concentrated around their expectation.
Using a randomization device, it is possible to remove the error ("bias") introduced by truncating the trajectories at a fixed time. For this, just let $(H^{(j)})_{j}$ be independent geometrically distributed random variables with parameter $1-\gamma$, chosen independently of the trajectories. By definition \(H^{(j)}\) is the number of $1-\gamma$-parameter Bernoulli trials needed to get one success. With the help of these variables, define now $\hat R_m(z)$ by
\[\begin{align} \hat R_m(z) = \frac{1}{m} \sum_{j=1}^m \sum_{t=0}^{H^{(j)}-1} r_{A_t^{(j)}}(S_t^{(j)})\,. \label{eq:petargetsunbiased} \end{align}\]
Note that in the expression of \(\hat R_m(z)\) the discount factor is eliminated. To calculate \(\hat R_m(z)\) one can just perform a rollout with policy $\pi$ as before, except that in each time step $t=0,1,\dots$, after obtaining $r_{A_t^{(j)}}(S_t^{(j)})$, one draws a Bernoulli variable with parameter $(1-\gamma)$ to decide whether the rollout should continue.
To see why the above definition works, fix $j$ and note that by definition, for $h\ge 1$, \(\mathbb{P}(H^{(j)}=h) = \gamma^{h-1}(1-\gamma)\) and thus \(\mathbb{P}(H^{(j)}\ge t+1) = \gamma^t\). Therefore,
\[\begin{align*} \mathbb{E}[ \sum_{t=0}^{H^{(j)}-1} r_{A_t^{(j)}}(S_t^{(j)}) ] & = \sum_{t=0}^\infty \mathbb{E}[ \mathbb{I}\{ t \le H^{(j)}-1\} r_{A_t^{(j)}}(S_t^{(j)}) ] \\ & = \sum_{t=0}^\infty \mathbb{E}[ \mathbb{I}\{ t \le H^{(j)}-1\} ]\, \mathbb{E}[ r_{A_t^{(j)}}(S_t^{(j)}) ] \\ & = \sum_{t=0}^\infty \mathbb{P}( t+1 \le H^{(j)} )\, \mathbb{E}[ r_{A_t^{(j)}}(S_t^{(j)}) ] \\ & = \sum_{t=0}^\infty \gamma^t \mathbb{E}[ r_{A_t^{(j)}}(S_t^{(j)}) ] \\ & = q^\pi(z)\,. \end{align*}\]
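For concreteness, here is a sketch of both return estimates (my addition; the `simulator.reset`/`simulator.step` and `policy` interfaces are assumptions, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_truncated(simulator, policy, s, a, gamma, H):
    # one H-step rollout from z = (s, a): returns sum_{t<H} gamma^t r_{A_t}(S_t)
    simulator.reset(s)                      # hypothetical simulator API
    G, action = 0.0, a
    for t in range(H):
        r, next_s = simulator.step(action)  # reward for the current (state, action)
        G += (gamma ** t) * r
        action = policy(next_s)
    return G

def rollout_geometric(simulator, policy, s, a, gamma):
    # unbiased variant: undiscounted sum over a Geometric(1 - gamma) horizon H >= 1
    simulator.reset(s)
    G, action = 0.0, a
    while True:
        r, next_s = simulator.step(action)
        G += r
        if rng.random() < 1.0 - gamma:      # stop with probability 1 - gamma after each step
            return G
        action = policy(next_s)

def R_hat(simulator, policy, s, a, gamma, m, H=None):
    # average of m independent rollouts from z = (s, a); H=None selects the unbiased variant
    rollouts = [rollout_truncated(simulator, policy, s, a, gamma, H) if H is not None
                else rollout_geometric(simulator, policy, s, a, gamma) for _ in range(m)]
    return float(np.mean(rollouts))
```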
All in all, this means that if we solve the least-squares problem
\[\begin{align} \hat\theta = \arg\min_{\theta\in \mathbb{R}^d} \sum_{z\in \mathcal{C}} \left( \langle \theta,\varphi(z) \rangle - \hat R_m(z)\right)^2\,, \label{eq:lse} \end{align}\]
we expect $\Phi \hat\theta$ to be a good approximation to $q^\pi$. Or at least, we can expect this to hold at the points of $\mathcal{C}$, where we are taking our measurements. The question is what happens outside of $\mathcal{C}$: That is, what guarantees can we get for extrapolating to points of $\mathcal{Z}:= \mathcal{S}\times \mathcal{A}$? The first thing to observe is that unless we choose $\mathcal{C}$ carefully, there is no guarantee that the extrapolation error will be kept under control. In fact, if the choice of $\mathcal{C}$ is so unfortunate that all the feature vectors for points in $\mathcal{C}$ are identical, the least-squares problem will have many solutions.
Our next lemma gives an explicit error bound on the extrapolation error. For the coming results we slightly generalize least-squares by introducing a weighting of the various errors in \(\eqref{eq:lse}\). For this, let $\varrho: \mathcal{C} \to (0,\infty)$ be a weighting function assigning a positive weight to the various error terms and let
\[\begin{align} \hat\theta = \arg\min_{\theta\in \mathbb{R}^d} \sum_{z\in \mathcal{C}} \varrho(z) \left( \langle \theta,\varphi(z) \rangle - \hat R_m(z)\right)^2 \label{eq:wlse} \end{align}\]
be the minimizer of the resulting weighted squared-loss. A simple calculation gives that provided the (weighted) moment matrix
\[\begin{align} G_\varrho = \sum_{z\in \mathcal{C}} \varrho(z) \varphi(z) \varphi(z)^\top \label{eq:mommx} \end{align}\]
is nonsingular, the solution to the above weighted least-squares problem is unique and is equal to
\[\hat{\theta} = G_\varrho^{-1} \sum_{z' \in C} \varrho(z') \hat R_m(z') \varphi(z')\,,\]
From this expression we see that there is no loss of generality in assuming that the weights in the weighting function sum to one: \(\sum_{z\in \mathcal{C}} \varrho(z) = 1\). We will denote this by writing $\varrho \in \Delta_1(\mathcal{C})$ (here, $\Delta_1$ refers to the fact that we can see $\varrho$ as an element of a $|\mathcal{C}|-1$ simplex). To state the lemma recall the notation that for a positive definite, $d\times d$ matrix $Q$ and vector $x\in \mathbb{R}^d$,
\[\|x\|_Q^2 = x^\top Q x\,.\]
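As a concrete illustration (not part of the notes), a minimal numpy sketch of the weighted least-squares step \eqref{eq:wlse} and of the quantity $\|\varphi(z)\|_{G_\varrho^{-1}}$ appearing in the lemma below; the array names are placeholders:

```python
import numpy as np

def weighted_lse(Phi_C, rho, targets):
    # Phi_C: (|C|, d) features at the design points; rho: weights summing to one; targets: hat R_m(z)
    # G_rho = sum_z rho(z) phi(z) phi(z)^T,  theta_hat = G_rho^{-1} sum_z rho(z) targets(z) phi(z)
    G = Phi_C.T @ (rho[:, None] * Phi_C)
    theta_hat = np.linalg.solve(G, Phi_C.T @ (rho * targets))
    return theta_hat, G

def extrapolation_factor(phi_z, G):
    # ||phi(z)||_{G^{-1}} = sqrt(phi(z)^T G^{-1} phi(z)), the amplification factor in the lemma below
    return float(np.sqrt(phi_z @ np.linalg.solve(G, phi_z)))
```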
Lemma (extrapolation error control in least-squares): Fix any \(\theta \in \mathbb{R}^d\), \(\varepsilon: \mathcal{Z} \rightarrow \mathbb{R}\), $\mathcal{C}\subset \mathcal{Z}$ and \(\varrho\in \Delta_1(\mathcal{C})\) such that the moment matrix $G_\varrho$ is nonsingular. Define
\[\begin{align*} \hat{\theta} = G_\varrho^{-1} \sum_{z' \in C} \varrho(z') \Big(\varphi(z')^\top \theta + \varepsilon(z') \Big) \varphi(z')\,. \end{align*}\]
Then, for any \(z\in \mathcal{Z}\) we have
\[\left| \varphi(z)^\top \hat{\theta} - \varphi(z)^\top \theta \right| \leq \| \varphi(z) \|_{G_{\varrho}^{-1}}\, \max_{z' \in C} \left| \varepsilon(z') \right|\,.\]
Before the proof, note that what this lemma tells us is that as long as we guarantee that the moment matrix is full rank, the extrapolation errors relative to predicting with some $\theta\in \mathbb{R}^d$ can be controlled by controlling
the value of \(g(\varrho):= \max_{z\in \mathcal{Z}} \| \varphi(z) \|_{G_{\varrho}^{-1}}\); and
the maximum deviation of the targets used in the weighted least-squares problem and the predictions with $\theta$.
Proof: First, we relate $\hat\theta$ to $\theta$:
\[\begin{align*} \hat{\theta} &= G_\varrho^{-1} \sum_{z' \in C} \varrho(z') \Big(\varphi(z')^\top \theta + \varepsilon(z') \Big) \varphi(z') \\ &= G_\varrho^{-1} \left( \sum_{z' \in C} \varrho(z') \varphi(z') \varphi(z')^\top \right) \theta + G_\varrho^{-1} \sum_{z' \in C} \varrho(z') \varepsilon(z') \varphi(z') \\ &= \theta + G_\varrho^{-1} \sum_{z' \in C} \varrho(z') \varepsilon(z') \varphi(z'). \end{align*}\]
Then for a fixed \(z \in \mathcal{Z}\),
\[\begin{align*} \left| \varphi(z)^\top \hat{\theta} - \varphi(z)^\top \theta \right| &= \left| \sum_{z' \in C} \varrho(z') \varepsilon(z') \varphi(z)^\top G_\varrho^{-1} \varphi(z') \right| \\ &\leq \sum_{z' \in C} \varrho(z') | \varepsilon(z') | \cdot | \varphi(z)^\top G_\varrho^{-1} \varphi(z') | \\ &\leq \Big( \max_{z' \in C} |\varepsilon(z')| \Big) \sum_{z' \in C} \varrho(z') | \varphi(z)^\top G_\varrho^{-1} \varphi(z') |\,. \end{align*}\]
To get a sense of how to control the sum notice that if $\varphi(z)$ in the last sum was somehow replaced by $\varphi(z')$, using the definition of $G_\varrho$ could greatly simplify the last expression. To get here, one may further notice that having the term in absolute value squared would help. Now, to get the squares, recall Jensen's inequality, which states that for any convex function \(f\) and probability distribution \(\mu\), \(f \left(\int u \mu(du) \right) \leq \int f(u) \mu(du)\). Of course, this also works when $\mu$ is finitely supported, which is the case here. Applying Jensen's inequality with \(f(x) = x^2\), we thus get
\[\begin{align*} \left(\sum_{z' \in C} \varrho(z') | \varphi(z)^\top G_\varrho^{-1} \varphi(z') |\right)^2 & \le \sum_{z' \in C} \varrho(z') | \varphi(z)^\top G_\varrho^{-1} \varphi(z') |^2 \\ &= \sum_{z' \in C} \varrho(z') \varphi(z)^\top G_\varrho^{-1} \varphi(z') \varphi(z')^\top G_\varrho^{-1} \varphi(z) \\ &= \varphi(z)^\top G_\varrho^{-1} \left( \sum_{z' \in C} \varrho(z') \varphi(z') \varphi(z')^\top \right) G_\varrho^{-1} \varphi(z) \\ &= \varphi(z)^\top G_\varrho^{-1} \varphi(z) = \|\varphi(z)\|_{G_\varrho^{-1}}^2 \end{align*}\]
Plugging this back into the previous inequality gives the desired result. \(\qquad \blacksquare\)
It remains to be seen whether \(g(\varrho)=\max_z \|\varphi(z)\|_{G_\varrho^{-1}}\) can be kept under control. This is the subject of a classic result of Kiefer and Wolfowitz:
Theorem (Kiefer-Wolfowitz): Let $\mathcal{Z}$ be finite. Let $\varphi: \mathcal{Z} \to \mathbb{R}^d$ be such that the underlying feature matrix $\Phi$ is rank $d$. There exists a set \(\mathcal{C} \subseteq \mathcal{Z}\) and a distribution \(\varrho: C \rightarrow [0, 1]\) over this set, i.e. \(\sum_{z' \in \mathcal{C}} \varrho(z') = 1\), such that
\(\vert \mathcal{C} \vert \leq d(d+1)/2\);
\(\sup_{z \in \mathcal{Z}} \|\varphi(z)\|_{G_\varrho^{-1}} \leq \sqrt{d}\);
In the previous line, the inequality is achieved with equality and the value of $\sqrt{d}$ is best possible under all possible choices of $\mathcal{C}$ and $\rho$.
We will not give a proof of the theorem, but we give references at the end where the reader can look up the proof. When $\varphi$ is not full rank (i.e., $\Phi$ is not rank $d$), one may reduce the dimensionality (and the cardinality of $C$ reduces accordingly). The problem of choosing $\mathcal{C}$ and $\rho$ such that $g(\rho)$ is minimized is called the $G$-optimal design problem in statistics. This is a specific instance of optimal experimental design.
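Although the proof is omitted, the following sketch (my addition, not from the lecture) shows one standard way to approximate such a design numerically: the Frank–Wolfe (Fedorov–Wynn) iteration for D-optimal design, which by the Kiefer–Wolfowitz equivalence also drives $g(\varrho)$ toward $\sqrt{d}$. The support of the returned weights may be larger than the $d(d+1)/2$ guaranteed by the theorem:

```python
import numpy as np

def approx_g_optimal_design(Phi, iters=1000, tol=1e-6):
    # Phi: (n, d) matrix whose rows are phi(z) for all z in Z (assumed full rank).
    # Returns weights rho over the rows with max_z ||phi(z)||^2_{G_rho^{-1}} close to d.
    n, d = Phi.shape
    rho = np.full(n, 1.0 / n)                                   # start from the uniform design
    for _ in range(iters):
        G = Phi.T @ (rho[:, None] * Phi)
        w = np.einsum('ij,ij->i', Phi @ np.linalg.inv(G), Phi)  # ||phi(z)||^2_{G^{-1}} for every z
        z_star = int(np.argmax(w))
        g = w[z_star]
        if g <= d * (1.0 + tol):                                # Kiefer-Wolfowitz: optimum has max = d
            break
        alpha = (g / d - 1.0) / (g - 1.0)                       # exact line-search step for D-optimality
        rho = (1.0 - alpha) * rho
        rho[z_star] += alpha
    return rho
```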
Combining the Kiefer-Wolfowitz theorem with the previous lemma shows that least-squares amplifies the "measurement errors" by at most a factor of \(\sqrt{d}\):
Corollary (extrapolation error control in least-squares via optimal design): Fix any $\varphi:\mathcal{Z} \to \mathbb{R}^d$ full rank. Then, there exists a set $\mathcal{C} \subset \mathcal{Z}$ with at most $d(d+1)/2$ elements and a weighting function \(\varrho\in \Delta_1(\mathcal{C})\) such that for any \(\theta \in \mathbb{R}^d\) and any \(\varepsilon: \mathcal{C} \rightarrow \mathbb{R}\),
\[\max_{z\in \mathcal{Z}}\left| \varphi(z)^\top \hat{\theta} - \varphi(z)^\top \theta \right| \leq \sqrt{d}\, \max_{z' \in C} \left| \varepsilon(z') \right|\,.\]
where $\hat\theta$ is given by \(\hat{\theta} = G_\varrho^{-1} \sum_{z' \in C} \varrho(z') \left(\varphi(z')^\top \theta + \varepsilon(z') \right) \varphi(z')\), as in the lemma above.
Importantly, note that $\mathcal{C}$ and $\varrho$ are chosen independently of $\theta$ and $\epsilon$, that is, they are independent of the target. This suggests that in approximate policy evaluation, one should choose $(\mathcal{C},\rho)$ as in the Kiefer-Wolfowitz theorem and use the $\rho$ weighted moment matrix. This leads to \(\begin{align} \hat{\theta} = G_\varrho^{-1} \sum_{z' \in C} \varrho(z') \hat R_m(z') \varphi(z')\,. \label{eq:lspeg} \end{align}\) where $\hat R_m(z)$ is defined by Eq. \(\eqref{eq:petargetsbiased}\) and $G_\varrho$ is defined by Eq. \(\eqref{eq:mommx}\). We call this procedure least-squares policy evaluation based on rollouts from $G$-optimal design points, or LSPE-$G$, for short. Note that we stick to the truncated rollouts, because this allows a simpler probabilistic analysis. That this properly controls the extrapolation error is attested by the next result:
Lemma (LSPE-$G$ extrapolation error control): Fix any full-rank feature-map $\varphi:\mathcal{Z} \to \mathbb{R}^d$ and take the set $\mathcal{C} \subset \mathcal{Z}$ and the weighting function \(\varrho\in \Delta_1(\mathcal{C})\) as in the Kiefer-Wolfowitz theorem. Fix an arbitrary policy $\pi$, let $\theta$ and $\varepsilon_\pi$ be such that $q^\pi = \Phi \theta + \varepsilon_\pi$, and assume that the immediate rewards belong to the interval $[0,1]$. Let $\hat{\theta}$ be as in Eq. \eqref{eq:lspeg}. Then, for any $0\le \delta \le 1$, with probability $1-\delta$,
\[\begin{align} \left\| q^\pi - \Phi \hat{\theta} \right\|_\infty &\leq \|\varepsilon_\pi\|_\infty (1 + \sqrt{d}) + \sqrt{d} \left(\frac{\gamma^H}{1 - \gamma} + \frac{1}{1 - \gamma} \sqrt{\frac{\log(2 \vert C \vert / \delta)}{2m}}\right). \label{eq:lspeee} \end{align}\]
Notice that from the Kiefer-Wolfowitz theorem, \(\vert C \vert = O(d^2)\) and therefore nothing in the above expression depends on the size of the state space. Now, say we want to make the above error bound at most \(\|\varepsilon_\pi\|_\infty (1 + \sqrt{d}) + 2\varepsilon\) with some value of $\varepsilon>0$. From the above we see that it suffices to choose $H$ and $m$ so that
\[\begin{align*} \frac{\gamma^H}{1 - \gamma} \leq \varepsilon/\sqrt{d} \qquad \text{and} \qquad \frac{1}{1 - \gamma} \sqrt{\frac{\log(2 \vert C \vert / \delta)}{2m}} \leq \varepsilon/\sqrt{d}. \end{align*}\]
This, together with \(\vert\mathcal{C}\vert\le d(d+1)/2\) gives
\[\begin{align*} H \geq H_{\gamma, \varepsilon/\sqrt{d}} \qquad \text{and} \qquad m \geq \frac{d}{(1 - \gamma)^2 \varepsilon^2} \, \log \frac{d(d+1)}{\delta}\,. \end{align*}\]
Proof: In a nutshell, we use the previous corollary, together with Hoeffding's inequality and using that $\|q^\pi-T_\pi^H \boldsymbol{0}\|_\infty \le \gamma^H/(1-\gamma)$, which follows since the rewards are bounded in $[0,1]$.
Full proof: Fix $z\in \mathcal{C}$. Let us write $\hat{R}_m(z) = q^\pi(z) + \hat{R}_m(z) - q^\pi(z) = \varphi(z)^\top \theta + \varepsilon(z)$ where we define $\varepsilon(z) = \hat{R}_m(z) - q^\pi(z) + \varepsilon_\pi(z)$. Then $$ \hat{\theta} = G_\varrho^{-1} \sum_{z' \in C} \varrho(z') \Big( \varphi(z')^\top \theta + \varepsilon(z') \Big) \varphi(z'). $$ Now we will bound the difference between our action-value function estimate and the true action-value function: $$ \begin{align} \| q^\pi - \Phi \hat\theta \|_\infty & \le \| \Phi \theta - \Phi \hat\theta\|_\infty + \| \varepsilon_\pi \|_\infty \le \sqrt{d}\, \max_{z\in \mathcal{C}} |\varepsilon(z)|\, + \| \varepsilon_\pi \|_\infty \label{eq:bound_q_values} \end{align} $$ where the last line follows from the Corollary above. For bounding the first term above, first note that $\mathbb{E} \left[ \hat{R}_m(z) \right] = (T_\pi^H \mathbf{0})(z)$. Then, $$ \begin{align*} \varepsilon(z) &= \hat{R}_m(z) - q^\pi(z) + \varepsilon_\pi(z) \nonumber \\ &= \underbrace{\hat{R}_m(z) - (T_\pi^H \mathbf{0})(z)}_{\text{sampling error}} + \underbrace{(T_\pi^H \mathbf{0})(z) - q^\pi(z)}_{\text{truncation error}} + \underbrace{\varepsilon_\pi(z)}_{\text{fn. approx. error}}. \end{align*} $$ Since the rewards are assumed to belong to the unit interval, the truncation error is at most $\frac{\gamma^H}{1 - \gamma}$. Concerning the sampling error (first term), Hoeffding's inequality gives that for any given $z\in \mathcal{C}$, $ \left \vert \hat{R}_m(z) - (T_\pi^H \mathbf{0})(z) \right \vert \leq \frac{1}{1 - \gamma} \sqrt{\frac{\log(2 / \delta)}{2m}}$ with at least $1 - \delta$ probability. Applying a union bound, we get that with probability at least $1 - \delta$, for all $z \in \mathcal{C}$, $ \left \vert \hat{R}_m(z) - (T_\pi^H \mathbf{0})(z) \right \vert \leq \frac{1}{1 - \gamma} \sqrt{\frac{\log(2 \vert C \vert / \delta)}{2m}}$. Putting things together, we get that with probability at least $1 - \delta$, $$ \begin{equation} \max_{z \in \mathcal{C}} | \varepsilon(z) | \leq \frac{\gamma^H}{1 - \gamma} + \frac{1}{1 - \gamma} \sqrt{\frac{\log(2 \vert C \vert / \delta)}{2m}} + \|\varepsilon_\pi\|_\infty\,. \label{eq:bound_varepsilon_z} \end{equation} $$ Plugging this into Eq. \eqref{eq:bound_q_values} and algebra gives the desired result.
\(\blacksquare\)
In summary, what we have shown so far is that if the features can approximate well the action-value function of a policy, then there is a simple procedure (Monte-Carlo rollouts and least-squares estimation based on an optimal experimental design) to produce a reliable estimate of the action-value function of the policy. The question remains whether, if we use these estimates in policy iteration, the whole procedure will still give good policies after a sufficiently large number of iterations.
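Putting the pieces together, here is a compact sketch of LSPE-$G$ (an illustration only; the simulator and policy interfaces are the same hypothetical ones as in the rollout sketch above, and the design is assumed to be given, e.g., by the Frank–Wolfe sketch):

```python
import numpy as np

def lspe_g(simulator, policy, design, gamma, m, H):
    # design: list of ((s, a), rho(z), phi(z)) triples, with the weights rho summing to one
    # returns theta_hat such that Phi theta_hat approximates q^pi
    feats, weights, targets = [], [], []
    for (s, a), w, phi_z in design:
        returns = []
        for _ in range(m):                          # m truncated rollouts from z = (s, a)
            simulator.reset(s)
            G_ret, action = 0.0, a
            for t in range(H):
                r, next_s = simulator.step(action)
                G_ret += (gamma ** t) * r
                action = policy(next_s)
            returns.append(G_ret)
        feats.append(phi_z); weights.append(w); targets.append(np.mean(returns))
    Phi_C, rho, R = np.array(feats), np.array(weights), np.array(targets)
    G = Phi_C.T @ (rho[:, None] * Phi_C)            # weighted moment matrix G_rho
    return np.linalg.solve(G, Phi_C.T @ (rho * R))  # theta_hat = G_rho^{-1} sum_z rho(z) R(z) phi(z)
```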
Progress Lemma with Approximation Errors
Here we give a refinement of the geometric progress lemma of policy iteration that allows for "approximate" policy improvement steps. That lemma stated that the value function of the improved policy $\pi'$ is at least as large as the Bellman operator applied to the value function of the policy $\pi$ to be improved. Our new lemma is as follows:
Lemma (Geometric progress lemma with approximate policy improvement): Consider a memoryless policy \(\pi\) and its corresponding value function \(v^\pi\). Let \(\pi'\) be any policy and define $\varepsilon:\mathcal{S} \to \mathbb{R}$ via
\[T v^\pi = T_{\pi'} v^{\pi} + \varepsilon\,.\]
Then,
\[\|v^* - v^{\pi'}\|_\infty \leq \gamma \|v^* - v^{\pi}\|_\infty + \frac{1}{1 - \gamma} \, \|\varepsilon\|_\infty.\]
Proof: First note that for the optimal policy \(\pi^*\), \(T_{\pi^*} v^* = v^*\). We have
\[\begin{align} v^* - v^{\pi'} & = T_{\pi^*}v^* - T_{\pi^*} v^{\pi} + \overbrace{T_{\pi^*} v^\pi}^{\le T v^\pi} - T_{\pi'} v^\pi + T_{\pi'} v^{\pi} - T_{\pi'} v^{\pi'} \nonumber \\ &\le \gamma P_{\pi^*} (v^*-v^\pi) + \varepsilon + \gamma P_{\pi'} (v^\pi-v^{\pi'})\,. \label{eq:vstar_vpiprime} \end{align}\]
Using the value difference identity and that $v^\pi =T_\pi v^\pi\le T v^\pi$, we calculate
\[\begin{align*} v^\pi - v^{\pi'} = (I-\gamma P_{\pi'})^{-1} [ v^\pi - T_{\pi'}v^\pi] \le (I-\gamma P_{\pi'})^{-1} [ T v^\pi - (T v^\pi -\varepsilon) ] = (I-\gamma P_{\pi'})^{-1} \varepsilon\,, \end{align*}\]
where the inequality follows because $(I-\gamma P_{\pi'})^{-1}= \sum_{k\ge 0} (\gamma P_{\pi'})^k$, the sum of positive linear operators, is a positive linear operator itself and hence is also monotone. Plugging the inequality obtained into \eqref{eq:vstar_vpiprime} gives
\[\begin{align*} v^* - v^{\pi'} \le \gamma P_{\pi^*} (v^*-v^\pi) + (I-\gamma P_{\pi'})^{-1} \varepsilon. \end{align*}\]
Taking the maximum norm of both sides and using the triangle inequality and that \(\| (I-\gamma P_{\pi'})^{-1} \|_\infty \le 1/(1-\gamma)\) gives the desired result. \(\qquad \blacksquare\)
Approximate Policy Iteration
Notice that the progress lemma makes no assumptions about the origin of the errors. This motivates considering a generic version of approximate policy iteration where, for $k\ge 0$, in the $k$th update step the new policy $\pi_{k+1}$ is approximately greedy with respect to $v^{\pi_k}$ in the sense that
\[\begin{align} T v^{\pi_k} = T_{\pi_{k+1}} v^{\pi_k} + \varepsilon_k\,. \label{eq:apidef} \end{align}\]
The progress lemma implies that the resulting sequence of policies will have value functions that converge to a neighborhood of $v^*$ where the size of the neighborhood is governed by the magnitude of the error terms \((\varepsilon_k)_k\).
Theorem (Approximate Policy Iteration): Let \((\pi_k)_{k\ge 0}\), \((\varepsilon_k)_k\) be such that \eqref{eq:apidef} holds for all \(k\ge 0\). Then, for any \(k\ge 1\),
\[\begin{align} \|v^* - v^{\pi_k}\|_\infty \leq \frac{\gamma^k}{1-\gamma} + \frac{1}{(1-\gamma)^2} \max_{0\le s \le k-1} \|\varepsilon_{s}\|_\infty\,. \label{eq:apieb} \end{align}\]
Proof: Left as an exercise. \(\qquad \blacksquare\)
Consider now a version of approximate policy iteration where the sequence of policies \((\pi_k)_{k\ge 0}\) is defined as follows:
\[\begin{align} q_k = q^{\pi_k} + \varepsilon_k', \qquad M_{\pi_{k+1}} q_k = M q_k\,, \quad k=0,1,\dots\,. \label{eq:apiavf} \end{align}\]
That is, for each \(k=0,1,\dots\), \(\pi_{k+1}\) is greedy with respect to \(q_k\).
Corollary (Approximate Policy Iteration with Approximate Action-value Functions): The sequence defined in \eqref{eq:apiavf} is such that
\[\| v^* - v^{\pi_k} \|_\infty \leq \frac{\gamma^k}{1-\gamma} + \frac{2}{(1-\gamma)^2} \max_{0\le s \le k-1} \|\varepsilon_{s}'\|_\infty\,.\]
Proof: To simplify the notation consider policies \(\pi,\pi'\) and functions \(q,\varepsilon'\) over the state-action space such that \(M_{\pi'} q = M q\) and \(q=q^\pi+\varepsilon'\). We have
\[\begin{align*} T v^\pi & \ge T_{\pi'} v^\pi = M_{\pi'} (r+\gamma P v^\pi) = M_{\pi'} q^\pi = M_{\pi'} q - M_{\pi'} \varepsilon' = M q - M_{\pi'} \varepsilon'\\ & \ge M (q^\pi - \|\varepsilon'\|_\infty \boldsymbol{1}) - M_{\pi'} \varepsilon' \ge M q^\pi - 2 \|\varepsilon'\|_\infty \boldsymbol{1} = T v^\pi - 2 \|\varepsilon'\|_\infty \boldsymbol{1}\,, \end{align*}\]
where we used that \(M_{\pi'}\) is linear, monotone, and that $M$ is monotone, and both are nonexpansions in the maximum norm.
Hence, if $\varepsilon_k$ is defined by \eqref{eq:apidef} then \(\|\varepsilon_k\|_\infty \le 2 \|\varepsilon_k'\|_\infty\) and the result follows from the previous theorem. \(\qquad \blacksquare\)
Global planning with least-squares policy iteration
Putting things together gives the following planning method:
Given the feature map $\varphi$, find \(\mathcal{C}\) and \(\rho\) as in the Kiefer-Wolfowitz theorem
Let \(\theta_{-1}=0\)
For \(k=0,1,2,\dots,K-1\) do
\(\qquad\) Roll out with policy \(\pi:=\pi_k\) for $H$ steps to get the targets \(\hat R_m(z)\) where \(z\in \mathcal{C}\)
\(\qquad\) and \(\pi_k(s) = \arg\max_a \langle \theta_{k-1}, \varphi(s,a) \rangle\)
\(\qquad\) Solve the weighted least-squares problem given by Eq. \(\eqref{eq:wlse}\) to get \(\theta_k\).
Return \(\theta_{K-1}\)
We call this method least-squares policy iteration (LSPI) for obvious reasons. Note that this is a global planning method: The method makes no use of an input state and the parameter vector returned can be used to get the policy $\pi_{K}$ (as in the method above).
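For concreteness, a minimal sketch of the loop above (again with the hypothetical interfaces used earlier; `lspe_g` is the policy-evaluation sketch from before):

```python
import numpy as np

def lspi(simulator, phi, num_actions, design, gamma, K, m, H, d):
    theta = np.zeros(d)                              # theta_{-1} = 0
    for k in range(K):
        # pi_k is greedy with respect to Phi theta_{k-1}
        def policy(s, th=theta):
            return int(np.argmax([np.dot(th, phi(s, a)) for a in range(num_actions)]))
        theta = lspe_g(simulator, policy, design, gamma, m, H)   # approximate q^{pi_k}
    return theta                                     # theta_{K-1}; pi_K is greedy w.r.t. Phi theta_{K-1}
```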
Theorem (LSPI performance): Fix an arbitrary full rank feature-map $\varphi: \mathcal{S}\times \mathcal{A} \to \mathbb{R}^d$ and let $K,m,H\ge 1$. Assume that B2\(_{\varepsilon}\) holds. Then, for any $0\le \zeta \le 1$, with probability at least $1-\zeta$, the policy $\pi_{K}$ which is greedy with respect to $\Phi \theta_{K-1}$ is $\delta$-suboptimal with
\[\begin{align*} \delta \le \underbrace{\frac{2(1 + \sqrt{d})}{(1-\gamma)^2}\, \varepsilon}_{\text{approx. error}} + \underbrace{\frac{\gamma^{K-1}}{1-\gamma}}_{\text{iter. error}} + \underbrace{\frac{2\sqrt{d}}{(1-\gamma)^2} \left(\gamma^H + \sqrt{\frac{\log( d(d+1)K / \zeta)}{2m}}\right)}_{\text{pol.eval. error}} \,. \end{align*}\]
In particular, for any $\varepsilon'>0$, choosing $K,H,m$ so that
\[\begin{align*} K & \ge H_{\gamma,\gamma\varepsilon'/2} \\ H & \ge H_{\gamma,(1-\gamma)^2\varepsilon'/(8\sqrt{d})} \qquad \text{and} \\ m & \ge \frac{32 d}{(1-\gamma)^6 (\varepsilon')^2} \log( (d+1)^2 K /\zeta ) \end{align*}\]
policy $\pi_K$ is $\delta$-optimal with
\[\begin{align*} \delta \le \frac{2(1 + \sqrt{d})}{(1-\gamma)^3}\, \varepsilon + \varepsilon'\,, \end{align*}\]
while the total computation cost is $\text{poly}(\frac{1}{1-\gamma},d,\mathrm{A},\frac{1}{(\varepsilon')^2},\log(1/\zeta))$.
Thus, with the specific configuration above, LSPI can produce a good policy at a polynomial computation cost that is, importantly, independent of the size of the state space, as long as $\varepsilon$, the worst-case error of approximating action-value functions of policies using the features provided, is sufficiently small.
Proof: Note that B2\(_\varepsilon\) and that $\Phi$ is full rank implies that for any memoryless policy $\pi$ there exists a parameter vector $\theta\in \mathbb{R}^d$ such that \(\| \Phi \theta - q^\pi \|_\infty \le \varepsilon\) (cf. Part 2 of Question 3 of Assignment 2). Hence, we can use the "LSPE extrapolation error bound" (cf. \(\eqref{eq:lspeee}\)). By this result, a union bound and of course by B2$_\varepsilon$, we get that for any $0\le \zeta \le 1$, with probability at least $1-\zeta$, for any $0 \le k \le K-1$,
\[\begin{align*} \| q^{\pi_k} - \Phi \theta_k \|_\infty &\leq \varepsilon (1 + \sqrt{d}) + \sqrt{d} \left(\frac{\gamma^H}{1 - \gamma} + \frac{1}{1 - \gamma} \sqrt{\frac{\log( d(d+1)K / \zeta)}{2m}}\right)\,, \end{align*}\]
where we also used that \(\vert \mathcal{C} \vert \le d(d+1)\). Call the quantity on the right-hand side in the above inequality $\kappa$.
Take the event when the above inequalities hold and for now assume this event holds. By the previous theorem, $\pi_K$ is $\delta$-optimal with
\[\delta \le \frac{\gamma^{K-1}}{1-\gamma} + \frac{2}{(1-\gamma)^2} \kappa \,.\]
To obtain the second part of the result, we split $\varepsilon'$ into two equal parts: $K$ is set to force the iteration error to be at most $\varepsilon'/2$, while $H$ and $m$ are chosen to force the policy evaluation error to be at most $\varepsilon'/2$. Here, to choose $H$ and $m$, $\varepsilon'/2$ is again split into two equal parts. The details of this calculation are left to the reader. \(\qquad \blacksquare\)
Approximate Dynamic Programming (ADP)
Value iteration and policy iteration are specific instances of dynamic programming methods. In general, dynamic programming refers to methods that use value functions to calculate good policies. In approximate dynamic programming the methods are modified by introducing "errors" when calculating the values. The idea is that the origin of the errors does not matter (e.g., whether they come from imperfect function approximation, linear or nonlinear, or from sampling): The analysis is done in a general form. While here we met approximate policy iteration, one can also use the same ideas as shown here to study an approximate version of value iteration. A homework in problem set 2 asks you to study this method, which is usually called approximate value iteration. In an earlier homework you were asked to study how linear programming can also be used to compute optimal value functions. Adding approximations we then get approximate linear programming.
What function approximation technique to use?
We note in passing that fans of neural networks should like that the general, ADP-style results, like the theorem in the middle of this lecture, can be also applied to the case when neural networks are used as the function approximation technique. However, one main lesson of the lecture is that to control extrapolation errors, one should be quite careful in how the training data is chosen. For linear prediction and least-squares fitting, optimal design gives a complete answer, but the analog questions are completely open in the case of nonlinear function approximation, such as neural networks. There is also a sizable literature that connects nonparametric techniques (an analysis friendly relative of neural networks) to ADP methods.
Concentrability coefficients and all that jazz
The idea of introducing approximate calculations appeared around the same time people got interested in Markov Decision Processes, in the 1960s. Hence, the literature is quite enormous. However, the approach taken here is quite recent: it asks for error bounds where the algorithmic (not approximation) error is uniformly controlled regardless of the MDP, and where the term that involves the approximation error is also uniformly bounded (for a fixed dimension and discount factor).
Earlier literature often presented bounds where the magnification factor of the approximation and the algorithmic error involved terms which depended on the MDP. Often these came in the form of "concentrability coefficients" (and yours truly was quite busy with working on these results a while ago). The main conclusion of this earlier analysis is that more stochasticity in the transitions means less control, less concentrability, which is advantageous for the ADP algorithms. While this makes sense and this indicates that these earlier results are complementary to the results presented here, the issue is that these results are quite pessimistic for example when the MDP is deterministic (as in this case the concentrability coefficients can be as large as the size of the state space).
While here we emphasized the importance of using a good design to control the extrapolation errors, in these earlier results, no optimal design was used. The upshot is that this saves the effort of coming up with a good design, but the obvious downside is that the extrapolation error may become uncontrolled. In the batch setting (which we will come back to later), of course, there is no way to control the sample collection, and this is in fact the setting where this earlier analysis was done.
The strength of hints
A critical assumption in the analysis of API was that the approximation error is controlled uniformly for all policies. This feels limiting. Yet, there are some interesting sufficient conditions under which this assumption is clearly satisfied. In general, these require that the transition dynamics and the reward are both "compressible". For example, if the MDP is such that $r$, the immediate reward as a function of the state-action pairs, satisfies \(r = \Phi \theta_r\) and the transition matrix, \(P\in [0,1]^{\mathrm{S}\mathrm{A} \times \mathrm{S}}\), satisfies \(P = \Phi H\) with some matrix \(H\in \mathbb{R}^{d\times \mathrm{S}}\), then for any memoryless policy \(\pi\), \(T_\pi q = r+ \gamma P M_\pi q\) has a range which is a subset of \(\text{span}(\Phi)=\mathcal{F}_{\varphi}\). Since \(q^\pi\) is the fixed-point of \(T_\pi\), i.e., \(q^\pi = T_\pi q^\pi\), it follows that \(q^\pi\) is also necessarily in the range space of \(T_\pi\). As such, \(q^\pi \in \mathcal{F}_{\varphi}\) and \(\varepsilon_{\text{apx}}=0\). MDPs that satisfy the above two constraints are called linear in \(\Phi\) (or sometimes, just "linear MDPs"). Exact linearity can be relaxed: If \(r = \Phi \theta_r + \varepsilon_r\) and \(P = \Phi H +E\), then for any policy \(\pi\), \(q^\pi\in_{\varepsilon} \mathcal{F}_{\varphi}\) with \(\varepsilon \le \|\varepsilon_r\|_\infty+\frac{\gamma}{1-\gamma}\|E\|_\infty\). Nevertheless, later we will investigate whether this assumption can be relaxed.
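To illustrate the first claim numerically (a synthetic example of my own, not from the notes), one can build a small random linear MDP and check that $q^\pi$ of a random policy indeed lies in $\text{span}(\Phi)$; the construction via Dirichlet-distributed features and anchor distributions is just one convenient way to guarantee stochasticity:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, d, gamma = 30, 4, 5, 0.9
n = S * A                                         # state-action pairs, indexed as s*A + a

# Features phi(s,a) are probability vectors over d "anchors" and W's rows are distributions
# over states, so P = Phi @ W is a valid stochastic matrix and r = Phi @ theta_r lies in [0, 1].
Phi = rng.dirichlet(np.ones(d), size=n)           # (S*A, d)
W = rng.dirichlet(np.ones(S), size=d)             # (d, S)
P = Phi @ W                                       # (S*A, S)
theta_r = rng.uniform(0.0, 1.0, size=d)
r = Phi @ theta_r

# A random deterministic policy pi; M_pi picks the entry (s, pi(s)) out of a q-vector
pi = rng.integers(0, A, size=S)
M_pi = np.zeros((S, n))
M_pi[np.arange(S), np.arange(S) * A + pi] = 1.0

# q^pi is the fixed point of T_pi: q^pi = r + gamma * P M_pi q^pi
q_pi = np.linalg.solve(np.eye(n) - gamma * P @ M_pi, r)

# residual of projecting q^pi onto span(Phi) -- should be numerically zero
theta_ls, *_ = np.linalg.lstsq(Phi, q_pi, rcond=None)
print("max |q^pi - Phi theta|:", np.max(np.abs(q_pi - Phi @ theta_ls)))
```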
The tightness of the bounds
It is not known whether the bound presented in the final result is tight. In fact, the dependence of $m$ on $1/(1-\gamma)$ is almost certainly not tight; in similar scenarios it has been shown in the past that replacing Hoeffding's inequality with Bernstein's inequality allows this factor to be reduced. A more interesting question is whether the amplification factor of the approximation error, $\sqrt{d}/(1-\gamma)^2$, is the best possible. In the next lecture we will show that the $\sqrt{d}$ approximation error amplification factor cannot be removed while keeping the runtime under control. In a later lecture, we will show that the dependence on $1/(1-\gamma)$ cannot be improved either, at least for this algorithm. However, we will see that if the main concern is the amplification of the approximation error, while keeping the runtime polynomial (perhaps with a higher order), then under B2\(_{\varepsilon}\) better algorithms exist.
The cost of optimal experimental design
The careful reader will not miss that to run the proposed method one needs to find the set $\mathcal{C}$ and the weighting function $\rho$. The first observation here is that it is not crucial to find the best possible $(\mathcal{C},\rho)$ pair. The Kiefer-Wolfowitz theorem showed that with this best possible choice, $g(\rho) = \sqrt{d}$. However, if one finds a pair such that $g(\rho)=2\sqrt{d}$, the price is that wherever $\sqrt{d}$ appears in the final performance bound, a multiplicative factor of $2$ will also need to be introduced. This should be acceptable. In relation to this, note that by relaxing the optimality requirement, the cardinality of $\mathcal{C}$ can be reduced. For example, allowing the factor of $2$ as suggested above makes it possible to reduce the cardinality to $O(d \log \log d)$, which may actually be a good tradeoff as this can save much on the runtime.
However, the question remains of who computes these (approximately) optimal designs and at what cost. While this calculation only needs to be done once and is independent of the MDP (it depends only on the feature map), the value of the overall method remains unclear because of this computational cost. General methods to compute the approximately optimal designs needed here are known, but their runtime in our case will be proportional to the number of state-action pairs. In the very rare cases when simulating transitions is very costly but the number of state-action pairs is not too high, this may be a viable option. For special choices of the feature map, optimal designs may be known, but relying on this reduces the general applicability of the method presented here. Thus, a major question is whether the optimal experimental design can be avoided. What is known is that for linear prediction with least-squares it cannot be avoided, and one suspects that this is true more generally.
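To make the computational cost concrete, below is a minimal sketch (not from the lecture; the function name, tolerance, and stopping rule are illustrative) of the classical Wynn-Fedorov, Frank-Wolfe-style iteration for computing a near-optimal design over a finite set of candidate feature vectors. Note that each iteration scans all candidates, which is exactly why the runtime scales with the number of state-action pairs.

```python
import numpy as np

def approx_g_optimal_design(Phi, n_iters=2000, tol=1e-3, reg=1e-10):
    """Wynn-Fedorov (Frank-Wolfe-style) iteration for an approximate
    G-optimal design over the rows of Phi (one candidate feature vector
    per row). Returns the design weights rho and the value
    max_z ||phi(z)||^2_{M(rho)^{-1}}, which approaches d at the optimum."""
    n, d = Phi.shape
    rho = np.full(n, 1.0 / n)                        # start from the uniform design
    for _ in range(n_iters):
        M = Phi.T @ (rho[:, None] * Phi) + reg * np.eye(d)   # moment matrix of rho
        Minv = np.linalg.inv(M)
        g = np.einsum('ij,jk,ik->i', Phi, Minv, Phi)  # ||phi(z)||^2_{M^{-1}} for all z
        z = int(np.argmax(g))                         # most "uncovered" candidate
        gz = g[z]
        if gz <= d * (1.0 + tol):                     # Kiefer-Wolfowitz optimality check
            break
        alpha = (gz - d) / (d * (gz - 1.0))           # exact line-search step for D-optimality
        rho *= 1.0 - alpha
        rho[z] += alpha
    return rho, g.max()

# Usage: 1000 candidate state-action feature vectors in d = 5 dimensions.
rng = np.random.default_rng(0)
rho, g_sq = approx_g_optimal_design(rng.normal(size=(1000, 5)))
print(g_sq)   # should approach d = 5, i.e., g(rho) close to sqrt(d)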
Can optimal designs be avoided while keeping the results essentially unchanged? Of particular interest would be the case where the feature map is also only "locally explored" as the planner interacts with the simulator. Altogether, one suspects that two factors contributed to the appearance of optimal experimental design here. One factor is that the planner is global: it comes up with a parameter vector that leads to a policy that can be used regardless of the state. The other (perhaps) factor is that the approach was based on simply "patching up" a dynamic programming algorithm with a function approximator. While this is a common approach, controlling the extrapolation errors in it is critical and is likely only possible with something like an optimal experimental design. As we shall see soon, there are indeed approaches that avoid the optimal experimental design step; they are based on local planning and also deviate from the ADP approach.
Policy evaluation alternatives
The policy evaluation method presented here feels unsophisticated. It uses simple Monte-Carlo rollouts with truncation, averaging, and least-squares regression. The reinforcement learning literature offers many alternatives, such as the "temporal difference" learning type methods that are based on solving the fixed-point equation $q^\pi = T_\pi q^\pi$. One can indeed try to use this equation to avoid the crude Monte-Carlo approach presented here, in the hope of reducing the variance (which is currently rather crudely upper bounded using the $1/(1-\gamma)$ term in the Hoeffding bound). Rewriting the fixed point as $(I-\gamma P_\pi) q^\pi = r$, and then plugging in $q^\pi = \Phi \theta + \varepsilon$, we see that the trouble is that to control the extrapolation errors, the optimal design must likely depend on the policy to be evaluated (because of the appearance of $(I-\gamma P_\pi)\Phi$).
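To make the last point explicit, substituting $q^\pi = \Phi\theta + \varepsilon$ into the rewritten fixed-point equation gives

\[(I-\gamma P_\pi)\Phi\,\theta = r - (I-\gamma P_\pi)\varepsilon\,,\]

so a least-squares approach based on this equation effectively works with the policy-dependent features $(I-\gamma P_\pi)\Phi$, and a design that controls extrapolation for these features would itself have to depend on $\pi$.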
Alternative error control: Bellman residuals
Let \((\pi_k)_{k\ge 0}\) and \((q_k,\varepsilon_k)_{k\ge 0}\) be so that
\[\varepsilon_k = q_k - T_{\pi_k} q_k\]
Here, \(\varepsilon_k\) is called the "Bellman residual" of \(q_k\). The policy evaluation alternatives above aim at controlling these residuals. The reader is invited to derive the analogue of the "approximate policy iteration" error bound in \eqref{eq:apieb} for this scenario.
The role of $\rho$ in the Kiefer-Wolfowitz result
One may wonder how critical the presence of $\rho$ is in the results presented. It is not critical: unweighted least-squares does not perform much worse.
Least-squares error bound
The error bound presented for least-squares does not use the full power of randomness. When part of the errors $\varepsilon(z)$ with $z\in \mathcal{C}$ are random, some helpful averaging effects can appear, which we ignored for now, but which could be used in a more refined analysis.
Optimal experimental design – a field on its own
Optimal experimental design is a subfield of statistics. The design considered here is just one possibility: it is called the G-optimal design (G stands, uninspiringly, for the word "general"). The Kiefer-Wolfowitz theorem actually also states that this design is equivalent to the D-optimal design.
Lack of convergence
The results presented show convergence to a ball around the optimal target. Some people think this is a major concern. While having a convergent method may look more appealing, as long as one controls the size of the ball, I will not be too concerned.
Approximate value iteration (AVI)
Similarly to what is done here, one can introduce an approximate version of value-iteration. This is the subject of Question 3 of homework 2. While the conditions are different, the qualitative behavior of AVI is similar to that of approximate policy iteration.
In particular, as for approximate policy iteration, there are two steps to this proof: One is to show that the residuals $\varepsilon_k = q_k - T q_{k-1}$ can be controlled and the second is that if they are controlled then the policy that is greedy with respect to (say) $q_K$ is $\delta$-optimal with $\delta$ controlled by \(\varepsilon_{1:K}:=\max_{1\le k \le K} \| \varepsilon_k \|_\infty\). For this second part, we have the following bound:
\[\begin{align} \delta \le 2 H^2 (\gamma^K + \varepsilon_{1:K})\,. \label{eq:lsvibound} \end{align}\]
where $H=1/(1-\gamma)$. The procedure that uses least-squares fitting to get the iterates $(q_k)_k$ is known under various names, such as least-squares value iteration (LSVI), fitted Q-iteration (FQI), least-squares Q iteration (LSQI). This proliferation of abbreviations and names is unfortunate, but there is not much that can be done at this stage. To add insult to injury, when neural networks are used to represent the iterates and an incremental stochastic gradient descent algorithm is used for "fitting" the weights of these networks by resampling old data from a "replay buffer", the resulting procedure is coined "Deep Q-Networks" (training), or DQN for short.
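For concreteness, here is a minimal sketch (my own, not from the lecture) of least-squares value iteration with a simulator. The names `phi`, `step`, `design`, and the regularization constant are illustrative; in line with the lecture, the design set would ideally come from a (near) G-optimal design.

```python
import numpy as np

def lsvi(phi, step, design, n_actions, gamma, K=50, m=20):
    """Least-squares value iteration (a.k.a. fitted Q-iteration) sketch.

    phi(s, a)  -> feature vector in R^d (as a numpy array)
    step(s, a) -> one sampled (reward, next_state) pair from the simulator
    design     -> list of (state, action) pairs used as the fitting points
    Returns the parameter vector theta_K of the last iterate q_K = Phi theta_K.
    """
    Phi = np.array([phi(s, a) for s, a in design])        # |C| x d feature matrix
    d = Phi.shape[1]
    G = Phi.T @ Phi + 1e-8 * np.eye(d)                    # (slightly regularized) Gram matrix
    theta = np.zeros(d)
    for _ in range(K):
        targets = []
        for s, a in design:
            total = 0.0
            for _ in range(m):                            # Monte-Carlo estimate of (T q_{k-1})(s, a)
                r, s_next = step(s, a)
                total += r + gamma * max(phi(s_next, b) @ theta
                                         for b in range(n_actions))
            targets.append(total / m)
        theta = np.linalg.solve(G, Phi.T @ np.array(targets))   # least-squares fit
    return theta
```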
Bounds on the parameter vector
The Kiefer-Wolfowitz theorem implies the following:
Proposition: Let $\phi:\mathcal{Z}\to\mathbb{R}^d$ and $\theta\in \mathbb{R}^d$ be such that $\sup_{z\in \mathcal{Z}}|\langle \phi(z),\theta \rangle|\le 1$ and \(\sup_{z\in \mathcal{Z}} \|\phi(z)\|_2 <+\infty\). Then, there exists a matrix $S\in \mathbb{R}^{d\times d}$ such that for $\tilde \phi$ defined by
\[\begin{align*} \tilde\phi(z) & = S\phi(z)\,, \qquad z\in \mathcal{Z} \end{align*}\]
there exists \(\tilde \theta\in \mathbb{R}^d\) such that the following hold:
\(\langle \phi(z),\theta \rangle = \langle \tilde \phi(z),\tilde \theta \rangle\), \(z\in \mathcal{Z}\);
\(\sup_{z\in \mathcal{Z}} \| \tilde \phi(z) \|_2 \le 1\);
\(\|\tilde \theta \|_2 \le \sqrt{d}\).
Proof: Let $\rho:\mathcal{Z} \to [0,1]$ be the $G$-optimal design whose existence is guaranteed by the Kiefer-Wolfowitz theorem. Let \(M = \sum_{z\in \mathrm{supp}(\rho)} \rho(z) \phi(z)\phi(z)^\top\) be the underlying moment matrix. Then, by the definition of $\rho$, \(\sup_{z\in \mathcal{Z}}\|\phi(z)\|_{M^{-1}}^2 \le d\).
Define \(S= (dM)^{-1/2}\) and \(\tilde \theta = S^{-1} \theta\). The first property is clearly satisfied. As to the second property,
\[\|\tilde \phi(z)\|_2^2 = \| (dM)^{-1/2}\phi(z)\|_2^2 = \phi(z)^\top (dM)^{-1} \phi(z) \le 1\,.\]
Finally, for the third property,
\[\| \tilde \theta \|_2^2 = d \theta^\top \left( \sum_{z\in \mathrm{supp}(\rho)} \rho(z) \phi(z) \phi(z)^\top \right) \theta = d \sum_{z\in \mathrm{supp}(\rho)} \rho(z) \underbrace{(\theta^\top \phi(z))^2}_{\le 1} \le d\,,\]
finishing the proof. \(\qquad \blacksquare\)
Thus, if one has access to the full feature map and knows that the function realized is bounded, one may as well assume that the feature map is bounded and that the parameter vector is bounded by $\sqrt{d}$.
Regularized least-squares
The linear least-squares predictor given by a feature-map $\phi$ and data $(z_1,y_1),\dots,(z_n,y_n)$ predicts a response at $z$ via $\langle \phi(z),\hat\theta \rangle$ where
\[\begin{align} \hat\theta = G^{-1}\sum_{i=1}^n \phi_i y_i\,, \label{eq:ridgesol} \end{align}\]
\[G = \sum_{i=1}^n \phi_i \phi_i^\top\,.\]
Here, by abusing notation for the sake of minimizing clutter, we use $\phi_i=\phi(z_i)$, $i=1,\dots,n$. The problem is that $G$ may not be invertible (i.e., $\hat \theta$ may not be defined as written above). "By continuity", it is nearly equally problematic when $G$ is ill-conditioned (i.e., its minimum eigenvalue is "much smaller" than its maximum eigenvalue). In fact, this leads to poor "generalization". One remedy, often used, is to modify $G$ by shifting it with a small constant multiple of the identity matrix:
\[G = \lambda I + \sum_{i=1}^n \phi_i \phi_i^\top\,.\]
Here, $\lambda>0$ is a tuning parameter, whose value is often chosen by cross-validation or a similar process. The modification guarantees that $G$ is invertible, and it generally improves the quality of predictions, especially when $\lambda$ is tuned based on data.
Above, the choice of the identity matrix, while common in the literature, is completely arbitrary. In particular, invertibility will be guaranteed if $I$ is replaced with any other positive definite matrix $P$. In fact, the matrix to use here should be one that makes $\|\theta\|_P^2$ small (while, say, keeping the minimum eigenvalue of $P$ bounded away from zero). That this choice makes sense can be argued by noting that with
\[G = \lambda P + \sum_{i=1}^n \phi_i \phi_i^\top\,.\]
the $\hat\theta$ vector defined in \eqref{eq:ridgesol} is the minimizer of
\[L_n(\theta) = \sum_{i=1}^n ( \langle \phi_i,\theta \rangle - y_i)^2 \,\,+ \lambda \| \theta\|_P^2\,,\]
and thus, the extra penalty has the least impact for the choice of $P$ that makes the norm of $\theta$ the smallest. If we only know that $\sup_{z} |\langle \phi(z),\theta \rangle|\le 1$, then by our previous note a good choice is $P=d M$, where \(M = \sum_{z\in \mathrm{supp}(\rho)} \rho(z) \phi(z)\phi(z)^\top\) and \(\rho\) is a $G$-optimal design. Indeed, with this choice, \(\|\theta\|_P^2 = d \|\theta \|_M^2 \le d\). Note also that if we apply the feature-standardization transformation of the previous note, we have
\[(dM)^{-1/2} (\sum_i \phi_i \phi_i^\top + \lambda d M ) (dM)^{-1/2} = \sum_i \tilde \phi_i \tilde \phi_i^\top + \lambda I\,,\]
showing that the choice of using the identity matrix is justified when the features are standardized as in the proposition of the previous note.
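The following minimal numerical sketch (the names and the toy data are mine) illustrates both the closed-form solution with a general penalty matrix $P$ and the equivalence noted in the last display: solving the ridge problem after standardizing the features and using the identity penalty gives back the same predictor.

```python
import numpy as np

def ridge(Phi, y, lam, P):
    """Minimizer of sum_i (<phi_i, theta> - y_i)^2 + lam * ||theta||_P^2."""
    G = lam * P + Phi.T @ Phi
    return np.linalg.solve(G, Phi.T @ y)

rng = np.random.default_rng(1)
n, d, lam = 50, 4, 0.3
Phi = rng.normal(size=(n, d))                     # rows are the phi_i
y = rng.normal(size=n)
A = rng.normal(size=(d, d))
P = A @ A.T + np.eye(d)                           # stands in for d*M (any positive definite matrix)

theta_P = ridge(Phi, y, lam, P)                   # penalty ||theta||_P^2

# Standardize: tilde(phi) = P^{-1/2} phi, solve with the identity penalty, map back.
evals, evecs = np.linalg.eigh(P)
P_inv_half = evecs @ np.diag(evals ** -0.5) @ evecs.T   # symmetric inverse square root
theta_I = P_inv_half @ ridge(Phi @ P_inv_half, y, lam, np.eye(d))

print(np.allclose(theta_P, theta_I))              # True: the two routes agree
```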
We will only scratch the surface now; expect more references to be added later.
The bulk of this lecture is based on
Tor Lattimore, Csaba Szepesvári, and Gellért Weisz. 2020. "Learning with Good Feature Representations in Bandits and in RL with a Generative Model." ICML and arXiv:1911.07676,
who introduced the idea of using \(G\)-optimal designs for controlling the extrapolation errors. A very early reference on error bounds in "approximate dynamic programming" is the following:
Whitt, Ward. 1979. "Approximations of Dynamic Programs, II." Mathematics of Operations Research 4 (2): 179–85.
The analysis of the generic form of approximate policy iteration is a refinement of Proposition 6.2 from the book of Bertsekas and Tsitsiklis:
Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, Massachusetts, 1996.
However, there are some differences between the "API" theorem presented here and Proposition 6.2. In particular, the theorem presented here appears to capture all sources of errors in a general way, while Proposition 6.2 is concerned with value function approximation errors and errors introduced in the "greedification step". The form adopted here appears, for example, in Theorem 1 of a technical report of Scherrer, who also gives earlier references:
Scherrer, Bruno. 2013. "On the Performance Bounds of Some Policy Search Dynamic Programming Algorithms." arxiv.
The earliest of these references is perhaps
Munos, R. 2003. "Error Bounds for Approximate Policy Iteration." ICML.
Least-squares policy iteration appears in
Lagoudakis, M. G. and Parr, R. Least-squares policy iteration. The Journal of Machine Learning Research, 4:1107–1149, 2003.
The particular form presented in this work though uses value function approximation based on minimizing the Bellman residuals (using the so-called LSTD method).
Two books that advocate the ADP approach:
Powell, Warren B. 2011. Approximate Dynamic Programming. Solving the Curses of Dimensionality. Hoboken, NJ, USA: John Wiley & Sons, Inc.
Lewis, Frank L., and Derong Liu. 2013. Reinforcement Learning and Approximate Dynamic Programming for Feedback Control. Hoboken, NJ, USA: John Wiley & Sons, Inc.
And a chapter:
Bertsekas, Dimitri P. 2009. "Chapter 6: Approximate Dynamic Programming," January, 1–118.
A paper that is concerned with API and least-squares methods, but uses concentrability is:
Antos, Andras, Csaba Szepesvári, and Rémi Munos. 2007. "Learning near-Optimal Policies with Bellman-Residual Minimization Based Fitted Policy Iteration and a Single Sample Path." Machine Learning 71 (1): 89–129.
Optimal experimental design has a large literature. A nice book concerned with computation is this:
M. J. Todd. Minimum-volume ellipsoids: Theory and algorithms. SIAM, 2016.
The Kiefer-Wolfowitz theorem is from:
J. Kiefer and J. Wolfowitz. The equivalence of two extremum problems. Canadian Journal of Mathematics, 12(5):363–365, 1960.
More on computation here:
E. Hazan, Z. Karnin, and R. Meka. Volumetric spanners: an efficient exploration basis for learning. Journal of Machine Learning Research, 17(119):1–34, 2016.
M. Grötschel, L. Lovász, and A. Schrijver. Geometric algorithms and combinatorial optimization, volume 2. Springer Science & Business Media, 2012.
The latter book is a very good general starting point for convex optimization.
That the features are standardized as shown in the notes is assumed (and discussed), e.g., in
Wang, Ruosong, Dean P. Foster, and Sham M. Kakade. 2020. "What Are the Statistical Limits of Offline RL with Linear Function Approximation?" arXiv [cs.LG].
which we will meet later.
The bacterial community significantly promotes cast iron corrosion in reclaimed wastewater distribution systems
Guijuan Zhang, Bing Li, Jie Liu, Mingqiang Luan, Long Yue, Xiao-Tao Jiang, Ke Yu & Yuntao Guan
Microbiome volume 6, Article number: 222 (2018)
Currently, there is no consensus on the effect of the bacterial community on the cast iron corrosion process. Moreover, some studies have produced contrasting results, suggesting that bacteria can either accelerate or inhibit corrosion.
The long-term effects of the bacterial community on cast iron corrosion in reclaimed wastewater distribution systems were investigated from both spatial (yellow layer vs. black layer) and temporal (1-year dynamic process) dimensions of the iron coupon-reclaimed wastewater microcosm using high-throughput sequencing and flow cytometry approaches. Cast iron coupons in the NONdisinfection and UVdisinfection reactors suffered more severe corrosion than did those in the NaClOdisinfection reactor. The bacterial community significantly promoted cast iron corrosion, which was quantified for the first time in the practical reclaimed wastewater and found to account for at least 30.5% ± 9.7% of the total weight loss. The partition of yellow and black layers of cast iron corrosion provided more accurate information on morphology and crystal structures for corrosion scales. The black layer was dense, and the particles looked fusiform, while the yellow layer was loose, and the particles were ellipse or spherical. Goethite was the predominant crystalline phase in black layers, while corrosion products mainly existed as an amorphous phase in yellow layers. The bacterial community compositions of black layers were distinctly separated from yellow layers regardless of disinfection methods. The NONdisinfection and UVdisinfection reactors had a more similar microbial composition and variation tendency for the same layer type than did the NaClOdisinfection reactor. Biofilm development can be divided into the initial start-up stage, mid-term development stage, and terminal stable stage. In total, 12 potential functional genera were selected to establish a cycle model for Fe, N, and S metabolism. Desulfovibrio was considered to accelerate the transfer of Fe0 to Fe2+ and speed up weight loss.
The long-term effect of disinfection processes on corrosion behaviors of cast iron in reclaimed wastewater distribution systems and the hidden mechanisms were deciphered for the first time. This study established a cycle model for Fe, N, and S metabolism that involved 12 functional genera and discovered the significant contribution of Desulfovibrio in promoting corrosion.
Wastewater reclamation and reuse is an effective way to relieve the dilemma of water resource shortages. Reclaimed wastewater can be used for irrigation, industrial consumption, and supplementation of ecological water, i.e., artificial wetlands and rivers. Cast iron pipes have been widely used in water distribution systems for more than 150 years because of their high mechanical strength and cost effectiveness [1]. In contrast to the corrosion of drinking water distribution systems (DWDS), which has attracted the attention of many researchers due to its serious effect of "red water" or "colored water" on people's daily life [2,3,4,5,6,7], only a few studies have investigated corrosion in reclaimed wastewater distribution systems (RWDS). Reclaimed wastewater contains a much higher concentration of organic matter than does drinking water, which could result in the consumption of more disinfectants and promote the regrowth of more abundant and diverse bacteria [8]. The above features of reclaimed wastewater might lead to more severe corrosion of cast iron pipes and to consequent pipeline burst, resulting in loss of reclaimed wastewater. Additionally, considering scientific issues, the corrosion mechanism in RWDS may be strikingly different from that in DWDS. Corrosion is a synergistic interaction among the metal surface, abiotic corrosion products, bacterial cells, and their metabolites [9]. It can be affected by various factors, such as water quality, disinfection method, and microbial community structure [10]. Alkalinity and calcium hardness have been found to inhibit corrosion [7]. Among disinfectants, it is generally accepted that sodium hypochlorite and its residuals increase corrosion rates [11, 12].
Currently, the effect of microorganisms on the cast iron corrosion process has not reached consensus. Some studies have produced contrasting results, suggesting that microorganisms can either accelerate or inhibit corrosion [3]. In general, the main bacterial species related to metal transformation in terrestrial and aquatic habitats are sulfate-reducing bacteria (SRBs), sulfur-oxidizing bacteria (SOBs), iron-oxidizing bacteria (IOBs), and iron-reducing bacteria (IRBs) [13]. Numerous studies have investigated the impact of pure or artificially mixed culture bacteria on cast iron corrosion in water distribution pipelines. Sulfate-reducing bacteria are usually related to anaerobic iron corrosion [14], and artificially mixed cultures verified that the promotion of corrosion by SRBs can be diminished in the presence of Pseudomonas aeruginosa (denitrifying bacterium) [15]. Sulfur-oxidizing bacteria are believed to accelerate corrosion because of their ability to produce acid [16]. The presence of IOBs rapidly inhibited corrosion on cast iron coupons due to the formation of a passive layer in the early stage (approximately the first 20 days) and accelerated corrosion with the decrease in passive layer adhesion [4]. Iron-reducing bacteria can enhance corrosion by the reduction of Fe3+ corrosion products, which are easily dissolved to expose the metal surface to the corrosive medium again. However, IRBs can also inhibit corrosion by developing biofilms at the metal surface and producing the extracellular polymeric substance (EPS) as a protective layer [17]. It should be noted that in natural environments and engineered systems, microbial biofilms are always composed of multifarious bacteria and not merely a single bacterium or several types of bacteria. Therefore, the effect of the microbial community on cast iron corrosion has attracted increasing attention. Some researchers believe that the effect of biofilm on cast iron corrosion in RWDS changes over time. In a 30-day experiment, Teng et al. [18] verified that biofilm accelerated corrosion within 7 days but inhibited corrosion after 7 days. The major reason for this result was the abundance transition of IOBs and IRBs. Wang et al. [12] considered that corrosion-inducing bacteria, including the IRB Shewanella sp., the IOB Sediminibacterium sp., and the SOB Limnobacter thiooxidans, promoted iron corrosion by synergistic interactions in the primary period. Nevertheless, when IRBs became the dominant bacteria, they could prevent further corrosion via the formation of protective layers. Some studies have demonstrated that the existence of biofilm in reclaimed wastewater significantly promoted corrosion [19]. However, it was suggested that biofilm could protect metal from corrosion by preventing the diffusion of oxygen [20]. Other researchers did not reach a clear conclusion but rather speculated that bacterial communities could at least promote the layering process and the formation of corrosion tubercles [8].
Whether microbes inhibit or promote corrosion, their remarkable effect on corrosion has been affirmed, especially the impact of anaerobic bacteria existing close to the base of cast iron. Nevertheless, previous studies did not distinguish the anaerobic layer from the aerobic layer when investigating the effect of the microbial community on cast iron corrosion in RWDS. Furthermore, no information is available on the relationship between the dynamics of the bacterial community composition of different layers and the corrosion behaviors in RWDS over time, especially in the long term. With the rapid development of high-throughput sequencing technology, the dynamics of microbial community structure and its effect on corrosion could be disclosed with high resolution and high accuracy [21].
Considering the above research gaps, the questions that we wish to address in this study are summarized as follows. (i) Do different disinfection processes (NaClOdisinfection and UVdisinfection) affect the corrosion behaviors of cast iron in reclaimed wastewater in the long term compared to nondisinfection process (NONdisinfection)? (ii) Does the bacterial community promote or inhibit corrosion of cast iron? (iii) Are there some key bacterial species contributing to corrosion inhibition or promotion? (iv) How do functional microorganisms drive the iron element cycle in the interface microcosm of cast iron-reclaimed wastewater?
Laboratory-scale reactor setup
Three laboratory-scale reactors were set up to simulate RWDS (Additional file 1: Figure S1) and were placed in the dark to prevent the growth of phototrophic microorganisms at the Xili reclaimed wastewater plant (RWP) in Shenzhen, Guangdong Province, China. Xili RWP uses a BIOSTYR® biological active filter and an ACTIFLO® high-density settling basin (Veolia Water, France) as the main treatment process with a treatment capacity of 50,000 m3/d. NaClOdisinfection, UVdisinfection and NONdisinfection reclaimed wastewaters were pumped into the three reactors to compare the effect of disinfecting methods on the corrosion of cast iron coupons. Both NONdisinfection and NaClOdisinfection reclaimed wastewaters were collected from the secondary sedimentation tank effluent, and the latter was obtained by adding sodium hypochlorite (NaClO) with 5 mg/L free chlorine. The UVdisinfection reclaimed wastewater was collected from the UV disinfection tank directly. Three types of reclaimed wastewater were pumped into three 1000 L storage tanks and then discharged horizontally through the pipeline system at a rate of 0.2 m/s [8]. Ductile cast iron coupons (QT450), with C (3.4 ~ 3.9%), Si (2.2 ~ 2.8%), Mn (< 0.5%), P (< 0.07%), S (< 0.03%), Mg (0.03 ~ 0.06%), and Re (0.02 ~ 0.04%) were used in this study. Prior to the experiment, the coupons were first rinsed with deionized water thrice, degreased with acetone, sterilized by immersion in 70% ethanol for 8 h, and then dried aseptically in a laminar flow cabinet. Finally, the coupons were exposed to UV light for 30 min before they were weighed [22]. All water quality parameters were measured according to the standard methods [23] (see Additional file 1: Text S1). The detailed water quality of NaClOdisinfection, NONdisinfection, and UVdisinfection reclaimed wastewaters is summarized in Additional file 1: Table S1.
Sample collection and preparation
To investigate the diversity and dynamics of the bacterial community on cast iron coupons, samples were collected from three reactors weekly for the first month and then every 3 weeks for the next 11 months, with a total of 20 sampling times during the entire experimental period. Sampling time points are shown in Additional file 1: Table S2. All samples were transported to the laboratory within 2 h for subsequent pretreatment and analysis. All analyses were conducted within 24 h. To distinguish the effect of metabolic bacterial activity on corrosion under aerobic and anaerobic conditions, biofilms in cast iron coupons were divided into two layers according to their color (Additional file 1: Figure S2). The surface layer was aerobic and yellow, while the inner layer was anaerobic and black. To obtain sufficient biomass on different layers, four pieces of cast iron coupons were collected each time, the surface layer and inner layer of each piece were separately sampled, and the same layers were mixed together. The surface layer, namely, the yellow layer, was flushed slightly with ultrapure water and finally collected a total of 500 mL of suspension liquid. To detach bacteria from the inner layer, i.e., the black layer, the cast iron coupons on which the yellow layer had already been removed were treated by ultrasonic processing (42 kHz) three times for 5 min each, and a total of 500 mL of suspension liquid was obtained [24, 25]. The potential biases of bacterial viability and adenosine triphosphate (ATP) measurement resulting from ultrasonic processing were excluded based on the preliminary experiment (see Additional file 1: Text S2). Four milliliters of obtained suspension liquid was used for further adenosine triphosphate (ATP) measurement and flow cytometry cell counting, and the other 496 mL was used for DNA extraction. The corrosion rate was determined by the weight loss method [10, 22]. The corrosive cast iron coupons were lyophilized for 24 h and gently divided into yellow layer and black layer by a sterile metal spatula. The crystalline phase of the yellow layer and black layer was characterized using an X-ray powder diffractometer (XRD; RIGAKU D/max2500/PC, Japan). The micrograph of the cast iron corrosion scale was examined by scanning electron microscopy operating at 15.0 kV (SU8010, HITACHI, Japan). In addition, polarization curves were also measured by an electrochemical workstation (CHI750e, Chenhua, Shanghai, China).
Adenosine triphosphate measurement
To allow the bacteria to be adequately released from the iron rust, 0.25 mL of 0.5 mm glass beads was added into the suspension liquid obtained above. After a 60 s × 3 vortex pretreatment, supernatant was collected via centrifugation for 2 min at 600g [26] for ATP measurement using the BacTiter-Glo™ reagent (Promega Corporation, Madison, USA) and a luminometer (SpectraMax i3, Molecular Devices, USA) [27]. The data were collected as relative light units and converted to ATP (nM) by a calibration curve established with a known rATP standard (Promega).
Flow cytometry measurement
To count viable/dead bacteria simultaneously, bacterial suspensions (1 mL) were stained with 10 μL/mL SYBR Green I (1:100 dilution in DMSO; Invitrogen) and 6 μM propidium iodide, which only stains damaged bacteria, and incubated in the dark for 25 min at room temperature before measurement. If necessary, samples were diluted to lower than 2 × 105 cells/mL by cell-free Milli-Q water before measurement. Flow cytometry measurement was performed using FACSCalibur (BD, USA), emitting at a fixed wavelength of 488 nm and volumetric counting hardware. The signals of SYBR Green I and propidium iodide were respectively collected in the FL1 channel (520 nm) and the FL3 channel (615 nm), all data were processed with BD CellQuest™ Pro, and electronic gating with the software was used to separate positive signals from noise [27, 28].
DNA extraction, PCR amplification, and Illumina sequencing
Microbial biomass was harvested from suspension liquid using 0.22 μm nitrocellulose membrane filters (47 mm diameter, Millipore, Billerica, MA, USA) [29]. Genomic DNA from the biomass in the black layer and yellow layer was separately extracted using a FastDNA® SPIN Kit for soil (MP Biomedicals, France) following the manufacturer's instructions. The concentration and purity of DNA were determined using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, USA). The extracted DNA was stored at − 20 °C for subsequent use. For PCR amplification, the hypervariable V4 region of the bacterial 16S rRNA gene was amplified using a forward primer (5′-TATGGTAATTGTGTGCCAGCMGCCGCGGTAA-3′) and reverse primer (5′-AGTCAGTCAGCCGGACTACHVGGGTWTCTAAT-3′). Barcode was added at the 5′ end of the forward and reverse primers to allow for sample multiplexing during sequencing [30], resulting in a fragment size of 333 bp that was sequenced in a paired ends fashion, with read length of 250 bp per read-mate. PCR solutions contained 25 μL of ExTaq™ premix (Takara, China), 2 μL of 10 μM forward and reverse primers, 1 μL of 20 ng/μL DNA, and 22 μL of RNA-free H2O. The thermocycling steps for PCR were set as follows: initial denaturation at 95 °C for 5 min; 28 cycles at 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 1 min; and a final extension step at 72 °C for 5 min. PCR products were purified using the MiniBEST DNA Fragment Purification Kit Ver. 4.0 (Takara, Japan) and then visualized on an agarose gel. Purified PCR amplicons were quantified by NanoDrop 2000 and mixed to achieve equal mass concentrations for paired-end 250 bp sequencing on a HiSeq 2500 platform.
Bioinformatics analyses
All the raw sequencing data of the 16S rRNA amplicons were processed in Mothur v. 1.39.5 [31]. Briefly, sequences were first demultiplexed, quality trimmed, aligned, and finally checked with chimera.uchime to remove chimeric sequences, following the standard pipeline in the Mothur manual. Then, the clear sequences were normalized by randomly extracting 40,000 clean sequences from each sample dataset to fairly compare all samples at the same sequencing depth [32]. Next, the normalized sequences from all samples were clustered into operational taxonomic units (OTUs) at an identity threshold of 97%, which approximately corresponds to the taxonomic levels of species for bacteria. OTUs with an abundance of less than 10 sequences were removed from the OTU table. Representative sequences of OTUs were extracted and submitted to the Ribosomal Database Project (RDP) Classifier for taxonomy annotation at an 80% threshold. The diversity index and evenness were calculated using PAST 3 [33]. The taxonomic dendrogram was visualized by Cytoscape 3.6.0 [34] to obtain an overall view of the bacterial community structure. All sample similarities and pairwise comparisons between different layers were computed as weighted UniFrac distances [35] and were visualized by principal coordinate analysis (PCoA) using "vegan" and "ggplot2" packages in R studio.
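As an aside, the normalization step (randomly drawing 40,000 clean sequences per sample so that all samples are compared at the same depth) amounts to subsampling without replacement from each sample's OTU count vector. The study itself performed this in Mothur; the sketch below only illustrates the idea in Python, and the variable names and toy counts are ours.

```python
import numpy as np

def subsample_counts(counts, depth=40000, seed=0):
    """Rarefy one sample: draw `depth` reads without replacement from an
    OTU count vector, mimicking the per-sample normalization step."""
    counts = np.asarray(counts, dtype=np.int64)
    if counts.sum() < depth:
        raise ValueError("sample has fewer reads than the requested depth")
    rng = np.random.default_rng(seed)
    # Drawing reads without replacement follows a multivariate hypergeometric law.
    return rng.multivariate_hypergeometric(counts, depth)

# Example: a toy sample with 3 OTUs and ~62,000 reads in total.
otu_counts = [41000, 15000, 6000]
print(subsample_counts(otu_counts))   # subsampled counts, summing to 40000
```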
Corrosion process and corrosion scale characterization under different disinfection conditions
Corrosion process monitoring
The weight loss results indicated that the cast iron coupons in the NONdisinfection and UVdisinfection reactors suffered more severe corrosion than did those in the NaClOdisinfection reactor (Fig. 1), which seemed contrary to expectation because NaClO was thought to promote corrosion [11, 12]. The detailed explanation will be given in the subsection "Morphology and crystal structures of the corrosion scale." Before the 19th week, the weight loss of the cast iron coupons in the three reactors did not significantly differ (P > 0.05, paired t test). After the 19th week, the weight loss of the cast iron coupons in both the NONdisinfection and UVdisinfection reactors became significantly greater than that in the NaClOdisinfection reactor (P < 0.01, paired t test). At the end of the 1-year experimental period, the weight loss of the coupons in the NONdisinfection and UVdisinfection reactors reached 3.53 g ± 0.14 g and 3.57 g ± 0.08 g, which accounted for 19.4% ± 1.1% and 19.7% ± 0.1% of the initial coupon weight, respectively. For the NaClOdisinfection reactor, the weight loss was only 2.49 g ± 0.19 g, accounting for 13.5% ± 1.5% of the initial coupon weight.
The weight loss of the cast iron coupons in the NaClOdisinfection, NONdisinfection, and UVdisinfection reactors during a 1-year period. Each data point represents the average weight loss of four pieces of cast iron coupons (n = 4). Error bars represent the standard deviation
Polarization curves are frequently used to characterize electrochemical reactions at the metal/biofilm interface and the formation of corrosion and biofilms [15, 36]. In this study, the polarization curves (Additional file 1: Figure S3) of the corroded coupons in the corresponding water were measured to analyze the change in the corrosion current density. Corrosion current density did not exhibit significant discrepancy before the 19th week in the three reactors (P > 0.05, paired t test), while it was significantly higher in the NONdisinfection and UVdisinfection reactors than in the NaClOdisinfection reactor after the 19th week. The electrochemical results agreed with the weight loss results and confirmed that cast iron coupons suffered much more serious corrosion in the NONdisinfection and UVdisinfection reactors than did those in the NaClOdisinfection reactor in the mid-late experiment period.
Morphology and crystal structures of the corrosion scale
Additional file 1: Figure S2 shows examples of the partitioning of the yellow layer and black layer in the NaClOdisinfection and NONdisinfection reactors, respectively. Compared to that in the NONdisinfection reactor, the corrosion scale of cast iron in the NaClOdisinfection reactor was flatter, thinner, and more close-grained. Scanning electron microscopy (SEM) showed that in the corrosion scales of the black layer (Fig. 2a, c, e; Additional file 1: Figure S4a, c, e), the fusiform-shaped nanoparticles were agglomerated into larger spheres with a size of 2 μm. The image of the black layer corrosion scale is very similar to the corrosion scale disinfected using chloramine in drinking groundwater distribution systems [37]. For the yellow layers (Fig. 2b, d, f; Additional file 1: Figure S4b, d, f), the corrosion scales were composed of loose sphere-shaped nanoparticles. Furthermore, the elemental composition of the corrosion scales was detected by SEM and energy-dispersive spectrometer. C, O, Si, Al, and Fe were the predominant elements among the six samples (Additional file 1: Figure S5).
SEM micrograph of the cast iron corrosion scale at the 52nd week, magnification = ×20,000. a Black layer in the NaClOdisinfection reactor; b yellow layer in the NaClOdisinfection reactor; c black layer in the NONdisinfection reactor; d yellow layer in the NONdisinfection reactor; e black layer in the UVdisinfection reactor; f yellow layer in the UVdisinfection reactor
An X-ray diffractometer was used to characterize the crystal structure of corrosion scales in the yellow and black layers on cast iron coupons at the 4th week, 34th week, and 52nd week (Additional file 1: Figures S6 and S7, Fig. 3). Goethite (FeOOH) was identified as the predominant crystalline phase in the corrosion scale of the black layers during the entire experimental period regardless of disinfection methods. This was consistent with previous studies, i.e., goethite was the dominant crystalline phase of the cast iron corrosion scale in both drinking water and reclaimed wastewater distribution pipelines [3, 5]. However, in addition to goethite, magnetite, siderite, lepidocrocite, and calcite were also detected in the cast iron corrosion scales, reported by Wang et al. [3] and Zhu et al. [10], but these crystalline phases did not appear in our study. For the yellow layers, corrosion products mainly exhibited amorphous structures during the entire experiment period. It should be noted that cast iron corrosion scales were not divided into yellow layers and black layers for subsequent morphology and crystal structure characterization in previous studies. Pretreatment of the partition of the yellow layer and black layer could provide more specific and accurate information on the morphology and crystal structures for cast iron corrosion scales.
XRD spectrograms of the cast iron corrosion products at the 52nd week. a Black layer in the NaClOdisinfection reactor; b yellow layer in the NaClOdisinfection reactor; c black layer in the NONdisinfection reactor; d yellow layer in the NONdisinfection reactor; e black layer in the UVdisinfection reactor; f yellow layer in the UVdisinfection reactor
To decipher why the cast iron coupons in the UVdisinfection and NONdisinfection reactors suffered more serious corrosion than did those in the NaClOdisinfection reactor, water quality parameters, microbial quantity, microbial activity, and community composition were comprehensively analyzed. Among all the water quality parameters detected, the concentrations (or values) of TN, TP, TOC, hardness, soluble iron, and pH in the NaClOdisinfection, UVdisinfection, and NONdisinfection reactors were highly similar (Additional file 1: Table S1). However, the ORP, free chlorine and total chlorine concentrations in the NaClOdisinfection reactor were much higher than those in the UVdisinfection and NONdisinfection reactors due to the addition of NaClO (Additional file 1: Table S1). Oxidation-reduction potential, representing the oxidizing ability of water, was up to 473.2 ± 62.9 mv in the NaClOdisinfection reactor, which was much higher than that in the NONdisinfection (290.8 ± 87.1 mv) and UVdisinfection (282.0 ± 80.4 mv) reactors, respectively. Theoretical Eh (volts) values for Fe2+–γ-goethite couples, Fe2+–α-goethite couples, Fe2+–Fe3O4 couples, and Fe–Fe2+ couples were 88 mv, 274 mv, 314 mv, and 440 mv at circumneutral pH, respectively [38]. These values were all less than the ORP of the NaClOdisinfection water. The existence of ClO− can promote the transition from Fe2+ to Fe3+ [12]. In summary, greater ORP and the existence of ClO− were beneficial for oxidizing Fe2+ to Fe3+ and Fe–Fe2+, which could have caused more severe corrosion. Nevertheless, cast iron coupons in the NaClOdisinfection reactor after the 19th week exhibited weaker corrosion than did the coupons placed in the NONdisinfection and UVdisinfection reactors. Therefore, it seems that water parameters are not the main factors resulting in differences in corrosion behavior. Instead, microbial quantity, microbial activity, or community composition might be responsible for the corrosion difference.
Microbial quantity and activity under different disinfection conditions
A flow cytometer was used extensively to count the cell numbers due to its high accuracy [27, 28]. The amounts of live, dead, and total bacteria in the biofilm in black and yellow layers over time were measured separately in the present study. Dead bacteria in the yellow layers of the NaClOdisinfection, NONdisinfection and UVdisinfection reactors were always at a low quantity (1.33 × 105 ~ 1.29 × 107 cells/cm2) throughout the entire experimental period regardless of different disinfection method. Interestingly, the quantity of dead bacteria in the black layers of these three reactors was much higher than that in the corresponding yellow layers (Additional file 1: Figure S8a and d). At the beginning of the experiment (i.e., the first 4–7 weeks for the black layer and the first 10–13 weeks for the yellow layer), the live bacteria in the cast iron biofilm in the NONdisinfection and UVdisinfection reactors maintained a much more rapid growth rate than did those in the NaClOdisinfection reactor. During the initial 22 weeks, the quantity of live microbes in the black layer in the NONdisinfection and UVdisinfection reactors was also significantly higher than that in the NaClOdisinfection reactor (P < 0.01, paired t test). After the 22nd week, the quantity of viable microbes in the black layer in these three reactors reached the same level and maintained a relatively steady state throughout the whole experimental period (Additional file 1: Figure S8b). For the yellow layers, the quantity of live microbes was higher in the NONdisinfection and UVdisinfection reactors than in the NaClOdisinfection reactor for the initial 10 weeks instead of 22 weeks (P < 0.05, paired t test, Additional file 1: Figure S8e). The variation in the quantity of total microbial cells over time was similar to that of live microbial cells (Additional file 1: Figure S8c and f).
As shown in Additional file 1: Figure S9, the variation in ATP, which represents microbial activity [24, 27, 39], was highly consistent with the variation in the quantity of live bacteria (Additional file 1: Figure S8b, e). This phenomenon was quite reasonable because ATP reflected the microbial activity of the live cells.
Diversity and microbial community compositions
α-Diversity and β-diversity analyses
In total, 5719 OTUs remained after removing OTUs with an abundance of less than 10 sequences for all 120 samples. The OTU diversity index was expressed by the Chao1 index, Shannon index, and Simpson index (Additional file 1: Figure S10). Chao 1 indexes indicated that, compared to the corresponding black layers in three different reactors, yellow layers had significantly higher OTU richness throughout the entire experimental period (P < 0.01, paired t test, Additional file 1: Figure S10a). Similar to the Chao1 indexes, the Shannon indexes of the yellow layers were always significantly higher (P < 0.01, paired t test) than those of the corresponding black layers in the UVdisinfection and NONdisinfection reactors (Additional file 1: Figure S10b). However, for the NaClOdisinfection reactor, black layer samples underwent a drastic diversity increase from 1.78 to 4.37, and the Shannon indexes of the black layer samples began to surpass those of the yellow layer samples after the 10th week. In addition, the Simpson index (Additional file 1: Figure S10c) and Evenness index (Additional file 1: Figure S10d) shared the same trends except, for in the black layer in the NaClOdisinfection reactor, where these values suffered a drastic increase during the entire experiment.
It should be noted that both the Shannon and Simpson indices of the NaClOdisinfection-B samples, especially those collected after weeks 7–10, exhibited higher values compared to those of the UVdisinfection-B and NONdisinfection-B samples. It is possible that the chlorination pressure exerted on bacteria in the NaClOdisinfection-B samples decreased with the increase in corrosion layer thickness because of reduced contact between bacteria and hypochlorite. This suggests that the NaClOdisinfection-B samples collected after weeks 7–10 were under an intermediate pressure level caused by the attenuated chlorination. According to the classical intermediate disturbance hypothesis, diversity might reach a maximum at intermediate levels of disturbance or pressure [40]. In the present study, the attenuated NaClO disinfection of the NaClOdisinfection-B samples collected after weeks 7–10 may act as such an intermediate disturbance.
A two-dimensional PCoA plot showed the bacterial community differences among the samples in the yellow layers and black layers under different disinfection methods (Additional file 1: Figure S11). Samples of the black layer were distinctly separated from those of the yellow layer regardless of the disinfection method. One possible reason could be related to oxygen availability; that is, the yellow layer belongs to an aerobic environment while the black layer might be anaerobic. In addition, compared to the NaClOdisinfection reactor, the NONdisinfection and UVdisinfection reactors had much more similar microbial compositions and variation tendencies over time for the same layer type. The dynamic change in the microbial community also followed a regular tendency and will be discussed in detail in subsection "Community temporal trajectories and identification of differential bacterial genera during biofilm development."
Characterization of microbial community compositions
As shown in Additional file 1: Figure S12, Proteobacteria was the most abundant phylum in all the samples collected from both the yellow layers and black layers, accounting for 53.8 ~ 94.2% of the total bacterial community. This is consistent with the analytical results of the bacterial community in the cast iron corrosion scale of RWDS [8] and DWDS [5], in which Proteobacteria accounted for 56.7% and 64.0% on average. Another interesting phenomenon is that the relative abundance of Proteobacteria in black layers was significantly higher than that in the corresponding yellow layers under different disinfection conditions. In contrast to Proteobacteria, Acidobacteria is a dominant phylum without significant differences among NaClOdisinfection (black layer 4.55% ± 1.80%, yellow layer 5.16% ± 2.17%), NONdisinfection (black layer 4.84% ± 2.08%, yellow layer 5.98% ± 1.46%), and UVdisinfection (black layer 4.59% ± 1.96%, yellow layer 5.67% ± 1.21%) reactors (P > 0.05, paired t test). The relative abundance of Bacteroidetes in the NaClOdisinfection reactor (black layer 5.87% ± 2.26%, yellow layer 8.22% ± 4.92%) was significantly (P < 0.01, paired t test) higher than that in the NONdisinfection (black layer 3.27% ± 2.11%, yellow layer 4.60% ± 2.22%) and UVdisinfection (black layer 3.27% ± 2.02%, yellow layer 4.65% ± 2.64%) reactors. Additionally, the relative abundance of Bacteroidetes decreased with time, especially in the NaClOdisinfection reactor. Bacteroidetes are known to produce EPS [41], which can act as a protective mechanism for bacteria in an adverse or stressful environment and contribute to the formation of biofilms. This should be the possible reason for the abundant Bacteroidetes in the NaClOdisinfection reactor due to the existence of chlorination oxidation stress. Nevertheless, according to the two-month preliminary experiment, we found that it was impracticable to measure the EPS content in the present study because the amount of EPS was not sufficient for the subsequent measurement of protein and polysaccharide, and thus, the contribution of EPS for biofilm formation is difficult to determine. Nitrospirae was much more abundant in the NONdisinfection yellow layer (5.75% ± 3.78%) and UVdisinfection yellow layer (5.93% ± 3.57%) than in the corresponding black layers (0.49% ± 0.31%; 0.58% ± 0.38%), which may be related to its aerobic property. Moreover, it seems that Nitrospirae should be very sensitive to chlorination disinfection because its relative abundance in the NaClOdisinfection yellow layer was much lower than that in the yellow layers of the NONdisinfection and UVdisinfection reactors. Similar to Nitrospirae, Actinobacteria was also much more abundant in the NONdisinfection yellow layer (2.15% ± 1.04%) and UVdisinfection yellow layer (2.32% ± 1.44%) than in the corresponding black layers (0.76% ± 0.38%; 0.78% ± 0.51%).
At the genus level, Additional file 1: Figure S13 shows the relative abundance distribution of the top 50 genera in all black and yellow samples. Azospira, Sediminibacterium, Geothrix, and Nitrospira presented notable differences between the yellow and black layers under different disinfection methods. These potential functional bacteria responsible for corrosion will be discussed in detail in subsection "Identification of crucial genera responsible for promoting cast iron corrosion and establishment of a cycle model for Fe, N and S metabolism."
It should be noted that the diversity of the NaClOdisinfection black layer illustrated in Additional file 1: Figure S12 seems lower than that of the yellow layer, which is in contrast to the trend of the Shannon and Simpson indices (Additional file 1: Figure S10). This apparent contradiction is explicable and reasonable because the Shannon and Simpson indices were calculated from OTU results (97% identity threshold), while Additional file 1: Figure S12 is presented at the phylum level.
Abundant and persistent bacteria during the biofilm development process
Figure 4 shows the taxonomic identity of all OTUs with an abundance sum from 60 samples of more than 0.5% in the black and yellow layers, respectively. We followed the taxonomic composition of bacterial populations from 60 samples retrieved over a 1-year period. To provide a more detailed characterization of the identified OTUs, we classified them into abundant or rare and into persistent, intermediate, and transient types, assuming that abundant and persistent bacterial groups play the most significant roles in biological processes related to corrosion. Abundant OTUs were defined as those that contributed ≥ 1% of the total abundance at least once in all the sampling times, while rare OTUs contributed < 1% of the total abundance in all samples. Persistent OTUs were defined as those detected in ≥ 75% of the samples; intermediate OTUs were detected in 25–75% of the samples, and transient OTUs were detected in < 25% of the samples [42, 43]. We can obtain six types of OTUs: abundant-persistent (AP), rare-persistent (RP), abundant-intermediate (AI), rare-intermediate (RI), abundant-transient (AT), and rare-transient (RT) OTUs. In total, 390 OTUs (accounting for 95.5% ± 2.1% of the total bacterial abundance) for the black layer (Fig. 4a) and 742 OTUs (accounting for 92.4% ± 3.8% of the total bacterial abundance) for the yellow layer (Fig. 4b) were selected for the taxonomic dendrogram analysis.
Taxonomic dendrograms of the bacterial community detected over a 1-year period in the a black layer and b yellow layer of three reactors. Different taxonomic branches are labeled according to phylum, except Proteobacteria, which was labeled by class. The edges represent the taxonomic path from the root bacteria down to the OTU level (similarity cutoff: 97%). OTUs were located at the lowest possible assignment level, and the node sizes indicated their relative abundance. The nodes are colored according to both their abundance and frequency of occurrence. Red nodes: abundant-persistent (AP) OTUs; blue nodes: abundant-intermediate (AI) OTUs; orange nodes: rare-persistent (RP) OTUs; violet nodes: rare-intermediate (RI) OTUs. The definition of OTUs types was described in subsection "Abundant and persistent bacteria during the biofilm development process"
Among the 390 OTUs in the black layer, 45 AP-type OTUs accounted for 85.0% ± 5.6% of the total bacterial abundance. One OTU, 52 OTUs, and 292 OTUs were classified as AI, RI, and RP, respectively. These 45 AP-type OTUs belonged mainly to the six phyla of Acidobacteria, Actinobacteria, Bacteroidetes, Nitrospirae, Ignavibacteriae, and Proteobacteria. Among the 742 OTUs screened in the yellow layer, 74 AP-type OTUs belonging to eight phyla accounted for 72.1% ± 11.0% of the total bacterial abundance. These AP-type OTUs were classified to 8 phyla, including the 6 phyla mentioned above and Planctomycetes and Verrucomicrobia. Except for AP-type OTUs, 6 OTUs, 93 OTUs, and 569 OTUs were classified into AI, RI, and RP types, respectively. It should be noted that there were no AT-type or RT-type OTUs in either the black or yellow layer samples.
Among the four classes of Proteobacteria, the β-Proteobacteria branch contained 51% and 51.4% of the AP-type OTUs in the black and yellow layer samples, respectively (red dots in Fig. 4). It is worth mentioning that six out of eight OTUs belonging to Desulfovibrio, a well-known SRB genus [44], were AP-type OTUs in the black layer. In the yellow layer, four OTUs derived from Nitrospira, which is responsible for nitrite oxidation, were all AP-type OTUs. Other functional genera will be discussed in detail in subsection "Identification of crucial genera responsible for promoting cast iron corrosion and establishment of a cycle model for Fe, N and S metabolism."
Community temporal trajectories and identification of differential bacterial genera during biofilm development
Community temporal trajectories and biofilm development stage divide
To explore the dynamic trend of microbial composition in both black and yellow layers under three disinfection conditions over a 1-year period, three trajectory graphs were presented in the ordination space of PCoA based on weighted UniFrac distance [35]. Trajectories were presented by lines sequentially connecting sampling points. Pairwise comparisons of community composition shifting through time indicated that bacterial communities exhibited similar trajectories in both black and yellow layers under the three disinfection conditions. In the early stage of the experiment, samples fluctuated and moved slightly along with the two principal coordinate axes, then shifted drastically in the mid-term stage, and finally became relatively stable with negligible fluctuation in the terminal stage. According to the three trajectory graphs shown in Fig. 5, biofilm development can be divided into three stages: initial start-up stage (stage I), mid-term development stage (stage II), and terminal stable stage (stage III). Stage III suggested that the bacterial community compositions reached a final steady phase during the entire experimental period. To conduct downstream comparison analysis, stages I and III are highlighted by gray-dotted ellipses in Fig. 5.
Temporal trajectories in the community composition of the yellow and black layers are presented in the ordination space of principal coordinate analysis (PCoA) based on weighted UniFrac distance for the a NaClOdisinfection reactor, b NONdisinfection reactor, and c UVdisinfection reactor. Trajectories were presented by lines that sequentially connect sampling points. Circles highlighted initial attachment stage I and terminal stable stage III. Other plots were classified as mid-term development stage II
Identification of differential bacterial genera during biofilm development
STAMP (statistical analysis of taxonomic and functional profiles) is a software tool for testing differences between two groups using mean proportion effect size measures together with Welch's confidence intervals [45]. As shown in Fig. 6 and Additional file 1: Figure S14, two-group Welch t tests were conducted among the different disinfection reactors for both the black and yellow layers, based on the three stages, with an effect size ≥ 0.75 and P value < 0.05 [45]. Considering the difference in weight loss starting from the 19th week in the three reactors (Fig. 1) and the division of the biofilm development stages (Fig. 5), the differences in bacterial composition in stages II and III between the NaClOdisinfection reactor and the other two reactors were of particular interest.
Extended error bar plots showing the abundance of genera differing significantly between the NaClOdisinfection and NONdisinfection reactors with an effect size ≥ 0.75. a Genera in the black layer of stage II; b genera in the black layer of stage III; c genera in the yellow layer of stage II; d genera in the yellow layer of stage III. The numbers in parentheses represent the numbers of OTUs belonging to each genus, corresponding to Fig. 4. The red numbers represent the AP-type OTUs, the orange numbers represent the RP-type OTUs, and the light purple numbers represent the RI-type OTUs
During stage II, seven genera, i.e., Bradyrhizobium, Azospira, Sediminibacterium, Myxococcaceae, Desulfovibrio, Thermomonas, and Dechloromonas, differed significantly between the black layers of the NaClOdisinfection and NONdisinfection reactors (Fig. 6a). This was very similar to the comparison between the black layers of the NaClOdisinfection and UVdisinfection reactors, apart from the addition of Micrococcineae (Additional file 1: Figure S14a). For the yellow layer, 11 genera, including Bradyrhizobium, Sphingomonas, Novosphingobium, Dongia, Micrococcineae, Nitrospira, Sediminibacterium, Hyphomicrobium, Nitrosomonas, Undibacterium, and Geothrix, showed significant differences between the NaClOdisinfection and NONdisinfection reactors (Fig. 6c). In addition to the above genera, Opitutus and Aquabacterium also exhibited significant differences between the NaClOdisinfection and UVdisinfection reactors (Additional file 1: Figure S14c). With the effect size ≥ 0.75 recommended by Parks et al. [45], no genus differed significantly between the NONdisinfection and UVdisinfection reactors in either the black or yellow layers.
During stage III, ten genera, including Desulfovibrio, Dechloromonas, Nitrospira, Sediminibacterium, Terrimonas, Bradyrhizobium, Aquabacterium, Myxococcaceae, Geothrix, and Micrococcineae, differed significantly between the black layers of the NaClOdisinfection and NONdisinfection reactors (Fig. 6b). Except for Micrococcineae, the other nine genera also differed significantly between the black layers of the NaClOdisinfection and UVdisinfection reactors (Additional file 1: Figure S14b). With regard to the comparison of the yellow layers of the NaClOdisinfection and NONdisinfection reactors (Fig. 6d), 11 genera, i.e., Bradyrhizobium, Hyphomicrobium, Corynebacterineae, Frankineae, Burkholderia, Pseudolabrys, Pirellula, Terrimonas, Sphingomonas, Undibacterium, and Sediminibacterium, showed distinct differences, whereas 13 genera, including Bradyrhizobium, Nitrospira, Pirellula, Hyphomicrobium, Corynebacterineae, Gaiellaceae, Sediminibacterium, Sphingomonas, Melioribacter, Burkholderia, Frankineae, Terrimonas, and Undibacterium, displayed statistically significant differences between the NaClOdisinfection and UVdisinfection reactors (Additional file 1: Figure S14d). Two-group tests between the NONdisinfection and UVdisinfection reactors indicated that only one genus per layer displayed a significant difference: Azospira in the black layer and Frankineae in the yellow layer.
Identification of crucial genera responsible for promoting cast iron corrosion and establishment of a cycle model for Fe, N and S metabolism
In total, 26 genera showed significant differences between the NaClOdisinfection reactor and the other two reactors (Fig. 6 and Additional file 1: Figure S14). According to the literature, 12 of these 26 genera are potential functional taxa playing roles in the cast iron corrosion process in RWDS. These 12 genera comprised four nitrate-dependent IOBs: Aquabacterium [46, 47], Sediminibacterium [48], Azospira [49], and Geobacter [50]; one IRB: Geothrix [51, 52]; five nitrate-reducing bacteria (NRBs): Thermomonas [53], Rhodoferax [53], Sulfuritalea [53], Dechloromonas [53], and Hyphomicrobium [54]; one nitrite-oxidizing bacterium (NOB): Nitrospira [55]; and one SRB: Desulfovibrio [56]. To obtain a clearer picture of the variability of these functional genera in the different layers and reactor systems, their relative abundance variation over time was plotted for the IOBs, IRBs, NOBs, and NRBs (Additional file 1: Figure S15) and for the SRBs (Fig. 7).
a The abundance variation in Desulfovibrio during a 1-year period. b Model of the redox transition between NO3− and NO2−, Fe, Fe2+, and Fe3+. The electrochemical corrosion process is represented by black dotted lines, and the microbial-induced corrosion process is represented by black solid lines
IOBs and IRBs
The total relative abundance of IOBs, i.e., Azospira, Aquabacterium, Geobacter, and Sediminibacterium, increased distinctly in all black layers over time and decreased in the yellow layers (Additional file 1: Figure S15a). Azospira is able to oxidize iron(II) using nitrate as an electron acceptor instead of oxygen [49]. The decrease in the total relative abundance of IOBs in the yellow layer was caused by Sediminibacterium, which underwent a gradual reduction in all yellow layer samples regardless of the disinfection method (Additional file 1: Figure S15e). Sediminibacterium is frequently isolated from sediment and activated sludge [57]. It tends to grow under aggregation conditions that protect the bacteria against oxidative stress [58], and this aggregation property, by contributing to biofilm formation, may be responsible for its high relative abundance in the initial stage. The decrease in the relative abundance of Sediminibacterium over time may be caused by an increasingly complex microbial community structure, which leads to fiercer interspecific competition. Geobacter was found to be the first microorganism with the ability to oxidize organic compounds and metals, including iron. One Geobacter species, Geobacter metallireducens, can oxidize Fe(II) using nitrate as the electron acceptor and generate ammonium [59], which demonstrated that nitrite (NO2−) and nitrogen gas (N2) are not the sole end products of nitrate reduction. In the present study, the relative abundance of Geobacter remained below 0.2% before the 25th week and then increased with fluctuations (Additional file 1: Figure S15f). Geothrix, an IRB [51, 52], was widespread in all layers and fluctuated only slightly during the entire experimental period (Additional file 1: Figure S15b).
NOBs and NRBs
Nitrite-oxidizing bacteria, represented here by Nitrospira, exhibited two distinct phases of increase in the yellow layers of the NONdisinfection and UVdisinfection reactors (Additional file 1: Figure S15c), whereas the relative abundance of NOBs in the yellow layer of the NaClOdisinfection reactor remained relatively stable before the 31st week and then increased continuously. Nitrate-reducing bacteria are spread across multiple prokaryotic phyla with diverse physiologies [60]. Five NRB genera with significant relative abundance differences were identified by the STAMP analysis. Their total relative abundance did not differ significantly among the three reactors in either the black or yellow layers (Additional file 1: Figure S15d).
SRBs
The relative abundance of SRBs, i.e., Desulfovibrio, increased from the 22nd week onward in the black layers of the NONdisinfection and UVdisinfection reactors (Fig. 7a), which coincided with the variation in the weight loss results (Fig. 1). Although both electrochemical corrosion and microbiologically induced corrosion existed in the NaClOdisinfection and NONdisinfection reactors, NaClO disinfection could have influenced the overall microbial community composition to the extent that the contribution of microbiologically induced corrosion was reduced. Additionally, owing to the higher oxidation-reduction potential in the NaClOdisinfection reactor, the cast iron there should suffer more severe electrochemical corrosion than that in the NONdisinfection reactor. Combining these two factors, the cast iron in the NONdisinfection reactor exhibited greater weight loss after the 19th week. Therefore, by neglecting the difference in electrochemical corrosion between the NONdisinfection and NaClOdisinfection reactors, the minimum contribution of the bacterial community to cast iron corrosion in the NONdisinfection reactor can be estimated according to Eq. (1). From the 22nd to the 52nd week, bacterial community-induced corrosion accounted for at least 30.5% ± 9.7% of the total weight loss in the NONdisinfection reactor. Sulfate-reducing bacteria can notably influence iron (Fe0) corrosion in anaerobic environments; the mechanism is usually explained by the corrosiveness of the H2S formed and the scavenging of "cathodic" H2 produced by the chemical reaction of Fe0 with H2O [61]. Among SRBs, Desulfovibrio species are conventionally regarded as the main culprits of anaerobic corrosion because of their capability to consume hydrogen effectively [62]. It has been verified that Desulfovibrio indeed possesses a derepressed hydrogenase that consumes cathodic hydrogen on the metal surface and thus accelerates the dissolution of Fe2+ from the anode, ensuring a ready supply of Fe2+ for the cells' iron proteins under stressful conditions [56]. In the present study, the concentration of SO42− in the effluent (data not shown) was lower in the NONdisinfection and UVdisinfection reactors than in the NaClOdisinfection reactor (P < 0.01, paired t test), which provides evidence of higher SRB activity in the NONdisinfection and UVdisinfection reactors. Therefore, the presence of Desulfovibrio was suggested to accelerate the transfer of Fe0 to Fe2+, speed up the weight loss, and result in more serious corrosion.
$$ \text{Contribution of bacteria-induced corrosion}=\frac{\frac{\mathrm{WL}_{\mathrm{NON}}}{\mathrm{W}_{\mathrm{O}}}-\frac{\mathrm{WL}_{\mathrm{NaClO}}}{\mathrm{W}_{\mathrm{O}}}}{\frac{\mathrm{WL}_{\mathrm{NON}}}{\mathrm{W}_{\mathrm{O}}}}\times 100\% \tag{1} $$
where WLNON represents the weight loss of the cast iron coupon in the NONdisinfection reactor, WLNaClO represents the weight loss of the cast iron coupon in the NaClOdisinfection reactor, and WO represents the original weight of the cast iron coupon.
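For concreteness, the following minimal Python sketch evaluates Eq. (1); the weight-loss values used are hypothetical placeholders, not measurements reported in this study.

```python
# Minimal illustration of Eq. (1); the numbers below are hypothetical placeholders.
def bacteria_induced_contribution(wl_non, wl_naclo):
    """Share of weight loss attributable to the bacterial community, Eq. (1).

    The original coupon weight WO cancels out, provided both weight losses
    refer to coupons of identical original weight.
    """
    return (wl_non - wl_naclo) / wl_non * 100.0

print(bacteria_induced_contribution(wl_non=0.40, wl_naclo=0.28))  # 30.0 (%)
```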
Cycle model for Fe, N, and S metabolism
Nitrate-dependent IOBs, such as Geobacter, Sediminibacterium, and Azospira, can oxidize Fe2+ to Fe3+ accompanied by the reduction of nitrate to nitrite [38, 49], and the nitrite is then oxidized by NOBs to regenerate nitrate. Iron-reducing bacteria, such as Geothrix, can reduce Fe3+ back to Fe2+. Together, the IOBs/NRBs, IRBs, and NOBs constituted a cycle model for Fe, N, and S metabolism (Fig. 7b) that drives the transitions between NO3− and NO2− and between Fe2+ and Fe3+. The redox transition between Fe2+ and Fe3+ did not contribute to the weight loss; only the dissolution of Fe to Fe2+ increased the weight loss. Oxygen can serve as an electron acceptor to oxidize Fe to Fe2+ via the galvanic effect, which was the primary pathway dissolving Fe in the initial experimental stage in all three reactors. However, oxygen penetration became limited as the corrosion layer thickened. Our previous study found that oxygen was blocked effectively and anaerobic conditions could be created when the corrosion layer of cast iron was thicker than ~ 8 mm [8]. When the corrosion layer blocked oxygen penetration, it also created an appropriate environment for anaerobic bacteria. Sulfate-reducing bacteria are anaerobes [62], and their survival depends on the corrosion layer, which prevents oxygen from entering. The corrosion layer was thinner in the NaClOdisinfection reactor (Additional file 1: Figure S2a) than in the NONdisinfection reactor (Additional file 1: Figure S2b). The thinner corrosion layer could not block oxygen completely; thus, only a few Desulfovibrio could survive in the black layer, which resulted in slight or negligible microbiologically induced corrosion. NaClO could also oxidize Fe2+ to Fe3+ in the NaClOdisinfection reactor. Even though the relative abundance of IOBs in the NaClOdisinfection reactor was lower than that in the NONdisinfection and UVdisinfection reactors in the initial stage, the oxidation of Fe2+ to Fe3+ still occurred in the corrosion scale of the NaClOdisinfection reactor (Fig. 3 and Additional file 1: Figure S6). Overall, although NaClO disinfection theoretically enhanced the electrochemical corrosion of cast iron, it simultaneously inhibited microbiologically induced corrosion by influencing the thickness of the corrosion layer and the microbial community composition.
Nevertheless, this cycle model mainly takes into account bacteria with known functions related to Fe, N, and S metabolism that showed significant relative abundance differences between the NaClOdisinfection and NONdisinfection reactors. In addition, a large portion of the effective bacterial sequences in this study could not be assigned at the genus level. Overall, it is not sufficient to construct the above conceptual model and predict the effect of SRBs based on 16S rRNA gene annotation results alone. To establish a more accurate mechanistic hypothesis, it is suggested that the variation in the abundance and expression levels of genes involved in Fe, N, and S metabolism be confirmed using metagenomic and metatranscriptomic approaches. Moreover, the protective function of EPS, which might play a role in inhibiting corrosion, should also be considered in future studies.
The long-term effects of disinfection processes on the corrosion behavior of cast iron in RWDS and the underlying mechanisms were deciphered in the present study. The cast iron coupons in the NONdisinfection and UVdisinfection reactors suffered more serious corrosion than those in the NaClOdisinfection reactor, while there was no significant difference in corrosion behavior between the NONdisinfection and UVdisinfection reactors. The bacterial community composition was considered the principal factor underlying the different corrosion behaviors, and the corrosion induced by the bacterial community accounted for at least 30.5% ± 9.7% of the total weight loss in the NONdisinfection reactor. Partitioning the corrosion scales of cast iron into a yellow layer and a black layer provided more specific and accurate information on the morphology, crystal structures, and bacterial community compositions of the corrosion scales. Proteobacteria was the most abundant phylum, accounting for 53.8–94.2% of the total bacterial community in the corrosion scale samples, followed by Acidobacteria, Bacteroidetes, and Nitrospirae. A core bacterial community, i.e., the AP-type OTUs, persisted throughout the 1-year dynamic period, with relative abundances accounting for 85.0% ± 5.6% and 72.1% ± 11.0% of the total bacterial relative abundance in the black and yellow layers, respectively. Twelve functional genera, including four IOBs, one IRB, five NRBs, one NOB, and one SRB, were selected to establish a cycle model for Fe, N, and S metabolism. The IOBs, NRBs, IRBs, and NOBs drove the transitions between NO3− and NO2− and between Fe2+ and Fe3+. Oxygen acted as an electron acceptor to oxidize Fe to Fe2+ via the galvanic effect, which was the primary pathway dissolving Fe in all three reactors. Beyond this electrochemical corrosion process, Desulfovibrio was considered to accelerate the transfer of Fe0 to Fe2+ and thus cause the more serious corrosion observed in the NONdisinfection and UVdisinfection reactors.
AI:
Abundant-intermediate
AP:
Abundant-persistent
AT:
Abundant-transient
ATP:
Adenosine triphosphate
DWDS:
Drinking water distribution system
EPS:
Extracellular polymeric substance
IOBs:
Iron-oxidizing bacteria
IRBs:
Iron-reducing bacteria
NaClOdisinfection :
Sodium hypochlorite treated
NOBs:
Nitrite-oxidizing bacteria
NONdisinfection :
Without disinfection treatment
NRBs:
Nitrate-reducing bacteria
RI:
Rare-intermediate
RP:
Rare-persistent
RT:
Rare-transient
RWDS:
Reclaimed wastewater distribution system
SOBs:
Sulfur-oxidizing bacteria
SRBs:
Sulfate-reducing bacteria
UVdisinfection :
Ultraviolet irradiation treated
Mohebbi H, Li CQ. Experimental investigation on corrosion of cast iron pipes. Int J Corros. 2011;1:383–9.
Benson AS, Dietrich AM, Gallagher DL. Evaluation of iron release models for water distribution system. Crit Rev Env Sci Technol. 2011;42(1):44–97.
Wang H, Hu C, Zhang L, Li X, Zhang Y, Yang M. Effects of microbial redox cycling of iron on cast iron pipe corrosion in drinking water distribution systems. Water Res. 2014;65:362–70.
Qi B, Cui C, Yuan Y. Effects of iron Bacteria on cast iron pipe corrosion and water quality in water distribution systems. Int J Electrochem Sci. 2015;10:545–58.
Li X, Wang H, Hu X, Hu C, Liao L. Characteristics of corrosion scales and biofilm in aged pipe distribution systems with switching water source. Eng Fail Anal. 2016;60:166–75.
Sun H, Shi B, Yang F, Wang D. Effects of sulfate on heavy metal release from iron corrosion scales in drinking water distribution system. Water Res. 2017;114:69–77.
Hu J, Dong H, Xu Q, Ling W, Qu J, Qiang Z. Impacts of water quality on the corrosion of cast iron pipes for water distribution and proposed source water switch strategy. Water Res. 2018;129:428–35.
Jin J, Wu G, Guan Y. Effect of bacterial communities on the formation of cast iron corrosion tubercles in reclaimed water. Water Res. 2015;71:207–18.
Yang F, Shi B, Bai Y, Sun H, Lytle DA, Wang D. Effect of sulfate on the transformation of corrosion scale composition and bacterial community in cast iron water distribution pipes. Water Res. 2014;59:46–57.
Zhu Y, Wang H, Li X, Hu C, Yang M, Qu J. Characterization of biofilm and corrosion of cast iron pipes in drinking water distribution system with UV/Cl2 disinfection. Water Res. 2014;60:174–81.
Frateur I, Deslouis C, Kiene L, Levi Y, Tribollet B. Free chlorine consumption induced by cast iron corrosion in drinking water distribution systems. Water Res. 1999;33(8):1781–90.
Wang H, Hu C, Hu X, Yang M, Qu J. Effects of disinfectant and biofilm on the corrosion of cast iron pipes in a reclaimed water distribution system. Water Res. 2012;46(4):1070–8.
Sun H, Shi B, Bai Y, Wang D. Bacterial community of biofilms developed under different water supply conditions in a distribution system. Sci Total Environ. 2013;472:99–107.
Xu C, Zhang Y, Cheng G, Zhu W. Localized corrosion behavior of 316L stainless steel in the presence of sulfate-reducing and iron-oxidizing bacteria. Mater Sci Eng A. 2007;443:235–41.
Batmanghelich F, Li L, Seo Y. Influence of multispecies biofilms of Pseudomonas aeruginosa and Desulfovibrio vulgaris on the corrosion of cast iron. Corros Sci. 2017;121:94–104.
Olli HT, Tariq MB, Jerry MB, et al. Oxidative dissolution of arsenopyrite by mesophilic and moderately thermophilic acidophiles. Appl Environ Microbiol. 1994;60(9):3268–74.
Liz KH, Hector AV. Role of iron-reducing bacteria in corrosion and protection of carbon steel. Int Biodeterior Biodegrad. 2009;63:891–5.
Teng F, Guan YT, Zhu WP. Effect of biofilm on cast iron pipe corrosion in drinking water distribution system: corrosion scales characterization and microbial community structure investigation. Corros Sci. 2008;50(10):2816–23.
Zhang H, Tian Y, Wan J, Zhao P. Study of biofilm influenced corrosion on cast iron pipes in reclaimed water. Appl Surf Sci. 2015;357:236–47.
Zuo R, Kus E, Mansfeld F, Wood TK. The importance of live biofilms in corrosion protection. Corros Sci. 2005;47:279–87.
Cayford BI, Dennis PG, Keller J, Tyson GW, Bond PL. High-throughput amplicon sequencing reveals distinct communities within a corroding concrete sewer system. Appl Environ Microbiol. 2012;78(19):7160–2.
Wang H, Hu C, Li X. Characterization of biofilm bacterial communities and cast iron corrosion in bench-scale reactors with chloraminated drinking water. Eng Fail Anal. 2015;57:423–33.
Ministry of Environmental Protection, China. Analysis method for water and wastewater. 4th ed. Beijing: Press of Chinese Environmental Science; 2002.
Liu G, Bakker GL, Vreeburg JHG, et al. Pyrosequencing reveals bacterial communities in unchlorinated drinking water distribution system: an integral study of bulk water, suspended solids, loose deposits, and pipe wall biofilm. Environ Sci Technol. 2014;48:5467–76.
Chao Y, Mao Y, Wang Z, Zhang T. Diversity and functions of bacterial community in drinking water biofilms revealed by high-throughput sequencing. Sci Rep-UK. 2015;5:10044.
Cerca F, Trigo G, Correia A, Cerca N, Azeredo J, Vilanova M. SYBR green as a fluorescent probe to evaluate the biofilm physiological state of the Staphylococcus epidermidis, using flow cytometry. Can J Microbiol. 2011;57(10):850–6.
Hammes F, Berney M, Wang Y, Vital M, Koster, Egli T. Flow-cytometric total bacterial cell counts as a descriptive microbiological parameter for drinking water treatment processes. Water Res. 2008;42:269–77.
Khan MMT, Pyle BH, Camper AK. Specific and rapid enumeration of viable but nonculturable and viable-culturable gram-negative bacteria by using flow cytometry. Appl Environ Microbiol. 2010;76(15):5088–96.
Ling F, Hwang C, LeChevallier MW, Andersen GL, Liu WT. Core-satellite populations and seasonality of water meter biofilms in a metropolitan drinking water distribution system. ISME J. 2016;10(3):582–95.
Kozich JJ, Westcott SL, Baxter NT, Highlander SK, Schloss PD. Development of a dual-index sequencing strategy and curation pipeline for analyzing amplicon sequence data on the MiSeq Illumina sequencing platform. Appl Environ Microbiol. 2013;79(17):5112–20.
Schloss PD, Westcott SL, Ryabin T, Hall JR, Hartmann M, Hollister EB, Lesniewski RA, Oakley BB, Parks DH, Robinson CJ, Sahl JW, Stres B, Thallinger GG, Van Horn DJ, Weber CF. Introducing Mothur: open-source, platform-independent, community-supported software for describing and comparing microbial communities. Appl Environ Microbiol. 2009;75(23):7537–41.
Zhang T, Shao MF, Ye L. 454 pyrosequencing reveals bacterial diversity of activated sludge from 14 sewage treatment plants. ISME J. 2012;6(6):1137–47.
Hammer Ø, Harper DAT, Ryan PD. PAST: Paleontological Statistics Software Package for education and data analysis. Palaeontol Electron. 2001;4(1):1–9.
Shannon P, Markiel A, Ozier O, Baliga NS, Wang JT, Ramage D, Amin N, Schwikowski B, Ideker T. Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Res. 2003;13(11):2498–504.
Lozupone C, Knight R. UniFrac: a new phylogenetic method for comparing microbial communities. Appl Environ Microbiol. 2005;71:8228–35.
Wang H, Hu C, Hu X. Effects of combined UV and chlorine disinfection on corrosion and water quality within reclaimed water distribution systems. Eng Fail Anal. 2014;39:12–20.
Li X, Wang H, Zhang Y, Hu C. Characterization of the bacterial communities and iron corrosion scales in drinking groundwater distribution systems with chlorine/chloramine. Int Biodeterior Biodegrad. 2014;96:71–9.
Weber KA, Achenbach LA, Coates JD. Microorganisms pumping iron: anaerobic microbial iron oxidation and reduction. Nat Rev Microbiol. 2006;4(10):752–64.
Magic-Knezev A, Kooij D. Optimisation and significance of ATP analysis for measuring active biomass in granular activated carbon filters used in water treatment. Water Res. 2004;38:3971–9.
Connell JH. Intermediate-disturbance hypothesis. Science. 1979;204(4399):1344–5.
Fernandez-Gomez B, Richter M, Schuler M, Pinhassi J, Acinas SG, Gonzalez JM, Pedros-Alio C. Ecology of marine Bacteroidetes: a comparative genomics approach. ISME J. 2013;7(5):1026–37.
Werner JJ, Knights D, Garcia ML, Scalfone NB, Smith S. Bacterial community structures are unique and resilient in full-scale bioenergy systems. PNAS. 2011;108(10):4158–63.
Alonso-Sáez L, Díaz-Pérez L, Morán XA. The hidden seasonality of the rare biosphere in coastal marine bacterioplankton. Environ Microbiol. 2015;17(10):3766–80.
Zhou J, He Q, Hemme CL, Mukhopadhyay A, Zhou A, He Z, Van Nostrand J, Hazen TC, Stahl DA, Wall JD, Arkin AP. How sulphate-reducing microorganisms cope with stress: lessons from systems biology. Nat Rev Microbiol. 2011;9:452–66.
Parks DH, Tyson GW, Hugenholtz P, Beiko RG. STAMP: statistical analysis of taxonomic and functional profiles. Bioinformatics. 2014;30(21):3123–4.
Delamuta JRMO, Ribeiro RA, Gomes DF, Souza RC, Chueire LMO, Hungria M. Genome sequence of Bradyrhizobium pachyrhizi strain PAC48T, a nitrogen-fixing symbiont of Pachyrhizus erosus (L.) Urb. Genome Announc. 2015;3(5):e01074–15.
Zhang X, Li A, Szewzyk U, Ma F. Improvement of biological nitrogen removal with nitrate-dependent Fe (II) oxidation bacterium Aquabacterium parvum B6 in an up-flow bioreactor for wastewater treatment. Bioresour Technol. 2016;219:624–31.
Medihala PG, Lawrence JR, Swerhone G, Korber DR. Transient response of microbial communities in a water well field to application of an impressed current. Water Res. 2012;47:672–82.
Li X, Zhang W, Liu T, Chen L, Chen P, Li F. Changes in the composition and diversity of microbial communities during anaerobic nitrate reduction and Fe(II) oxidation at circumneutral pH in paddy soil. Soil Biol Biochem. 2016;94:70–9.
Childers S, Ciufo S, Lovley DR. Geobacter metallireducens accesses insoluble Fe (III) oxide by chemotaxis. Nature. 2002;416:767–9.
Bond DR, Lovley DR. Evidence for involvement of an electron shuttle in electricity generation by Geothrix fermentans. Appl Environ Microbiol. 2005;71(4):2186–9.
Mehta-Kolte MG, Bond DR. Geothrix fermentans secretes two different redox-active compounds to utilize electron acceptors across a wide range of redox potentials. Appl Environ Microbiol. 2012;78(19):6987–95.
McIlroy SJ, Starnawska A, Starnawski P, Saunders AM, Nierychlo M, Nielsen PH, Nielsen JL. Identification of active denitrifiers in full-scale nutrient removal wastewater treatment systems. Environ Microbiol. 2016;18(1):50–64.
Auclair J, Parent S, Villemur R. Functional diversity in the denitrifying biofilm of the methanol-fed marine denitrification system at the Montreal Biodome. Microb Ecol. 2012;63(4):726–35.
Spieck E, Hartwig C, McCormack I, Maixner F, Wagner M, Lipski A, Daims H. Selective enrichment and molecular characterization of a previously uncultured Nitrospira-like bacterium from activated sludge. Environ Microbiol. 2006;8(3):405–15.
Bryant RD, Van Ommen KF, Laishley EJ. Regulation of the periplasmic [Fe] hydrogenase by ferrous iron in Desulfovibrio vulgaris (Hildenborough). Appl Environ Microbiol. 1993;59(2):491–5.
Besemer K, Peter H, Logue JB, Langenheder S, Lindström ES, Tranvik LJ, Battin TJ. Unraveling assembly of stream biofilm communities. ISME J. 2012;6(8):1459–68.
Ayarza JM, Mazzella MA, Erijman L. Expression of stress-related proteins in Sediminibacterium sp. growing under planktonic conditions. J Basic Microbiol. 2015;55:1134–40.
Weber KA, Urrutia MM, Churchill PF, Kukkadapu RK, Roden EE. Anaerobic redox cycling of iron by freshwater sediment microorganisms. Environ Microbiol. 2006;8(1):100–13.
Zumft WG. Cell biology and molecular basis of denitrification. Microbiol Mol Biol Rev. 1997;61(4):533–616.
Enning D, Venzlaff H, Garrelfs J, Dinh HT, Meyer V, Mayrhofer K, Hassel AW, Stratmann M, Widdel F. Marine sulfate-reducing bacteria cause serious corrosion of iron under electroconductive biogenic mineral crust. Environ Microbiol. 2012;14(7):1772–87.
Dinh HT, Kuever J, Mußmann M, Hassel AW, Stratmann M, Widdel F. Iron corrosion by novel anaerobic microorganisms. Nature. 2004;427:829–32.
The authors wish to thank Mr. Kexin Guo from Xili RWP in Shenzhen for providing constant support with the experimental space and reactor setup.
This work was financially supported by the National Natural Science Foundation of China (No. 51478238) and the Development and Reform Commission of Shenzhen Municipality (Urban Water Recycling and Environment Safety Program).
The 16S rRNA gene sequences have been submitted to the National Center for Biotechnology Information (NCBI) Sequence Read Archive (SRA) under the accession number PRJNA486058.
Guangdong Provincial Engineering Research Center for Urban Water Recycling and Environmental Safety, Graduate School at Shenzhen, Tsinghua University, Shenzhen, China
Guijuan Zhang, Bing Li, Jie Liu, Mingqiang Luan, Long Yue & Yuntao Guan
State Environmental Protection Key Laboratory of Microorganism Application and Risk Control, School of Environment, Tsinghua University, Beijing, China
Microbiome Research Centre, St George and Sutherland Clinical School, Department of Medicine, University of New South Wales, Sydney, Australia
Xiao-Tao Jiang
School of Environment and Energy, Shenzhen Graduate School, Peking University, Shenzhen, China
Ke Yu
GJZ, BL, and YTG designed this study. GJZ, JL, and MQL conducted the experiments. GJZ, JL, LY, KY, and XTJ analyzed the data. GJZ, BL, and YTG contributed to drafting the initial manuscript, and all co-authors revised, read, and approved the final manuscript.
Correspondence to Bing Li or Yuntao Guan.
The manuscript does not report data collected from humans or animals.
The manuscript does not contain any individual person's data in any form.
The additional file accompanying this article contains Figures S1–S15 and Tables S1–S2. (DOCX 23899 kb)
Reclaimed wastewater
High-throughput sequencing
Bacterial community
Desulfovibrio
Renormalization and blow up for wave maps from $S^2\times \RR$ to $S^2$
Sohrab Shahshahani
Abstract: We construct a one parameter family of finite time blow ups to the co-rotational wave maps problem from $S^2\times \RR$ to $S^2,$ parameterized by $\nu\in(1/2,1].$ The longitudinal function $u(t,\alpha)$ which is the main object of study will be obtained as a perturbation of a rescaled harmonic map of rotation index one from $\RR^2$ to $S^2.$ The domain of this harmonic map is identified with a neighborhood of the north pole in the domain $S^2$ via the exponential coordinates $(\alpha,\theta).$ In these coordinates $u(t,\alpha)=Q(\lambda(t)\alpha)+\mathcal{R}(t,\alpha),$ where $Q(r)=2\arctan{r},$ is the standard co-rotational harmonic map to the sphere, $\lambda(t)=t^{-1-\nu},$ and $\mathcal{R}(t,\alpha)$ is the error with local energy going to zero as $t\rightarrow 0.$ Blow up will occur at $(t,\alpha)=(0,0)$ due to energy concentration, and up to this point the solution will have regularity $H^{1+\nu-}.$
Blow up for critical wave equations on curved backgrounds
Joules Nahas, Sohrab Shahshahani
Abstract: We extend the slow blow up solutions of Krieger, Schlag, and Tataru to semilinear wave equations on a curved background. In particular, for a class of manifolds $(M,g)$ we show the existence of a family of blow-up solutions with finite energy norm to the equation {equation} \partial_t^2 u - \Delta_g u = |u|^4 u, \notag {equation} with a continuous rate of blow up. In contrast to the case where $g$ is the Minkowski metric, the argument used to produce these solutions can only obtain blow up rates that are bounded above.
Stability of Stationary Wave Maps from a Curved Background to a Sphere
Sohrab M. Shahshahani
Abstract: We study time and space equivariant wave maps from $M\times\RR\rightarrow S^2,$ where $M$ is diffeomorphic to a two dimensional sphere and admits an action of SO(2) by isometries. We assume that the metric on $M$ can be written as $dr^2+f^2(r)d\theta^2$ away from the two fixed points of the action, where the curvature is positive, and prove that stationary (time equivariant) rotationally symmetric (of any rotation number) smooth wave maps exist and are stable in the energy topology. The main new ingredient in the construction, compared with the case where $M$ is isometric to the standard sphere (considered by Shatah and Tahvildar-Zadeh \cite{ST1}), is the use of triangle comparison theorems to obtain pointwise bounds on the fundamental solution on a curved background.
Asymptotic properties of solutions of the Maxwell Klein Gordon equation with small data
Lydia Bieri, Shuang Miao, Sohrab Shahshahani
Abstract: We prove peeling estimates for the small data solutions of the Maxwell Klein Gordon equations with non-zero charge and with a non-compactly supported scalar field, in $(3+1)$ dimensions. We obtain the same decay rates as in an earlier work by Lindblad and Sterbenz, but giving a simpler proof. In particular we dispense with the fractional Morawetz estimates for the electromagnetic field, as well as certain space-time estimates. In the case that the scalar field is compactly supported we can avoid fractional Morawetz estimates for the scalar field as well. All of our estimates are carried out using the double null foliation and in a gauge invariant manner.
On the Motion of a Self-Gravitating Incompressible Fluid with Free Boundary and Constant Vorticity: An Appendix
Lydia Bieri, Shuang Miao, Sohrab Shahshahani, Sijue Wu
Physics , 2015,
Abstract: In a recent work [1] the authors studied the dynamics of the interface separating a vacuum from an inviscid incompressible fluid, subject to the self-gravitational force and neglecting surface tension, in two space dimensions. The fluid is additionally assumed to be irrotational, and we proved that for data which are size $\epsilon$ perturbations of an equilibrium state, the lifespan $T$ of solutions satisfies $T \gtrsim \epsilon^{-2}$. The key to the proof is to find a nonlinear transformation of the unknown function and a coordinate change, such that the equation for the new unknown in the new coordinate system has no quadratic nonlinear terms. For the related irrotational gravity water wave equation with constant gravity the analogous transformation was carried out by the last author in [3]. While our approach is inspired by the last author's work [3], the self-gravity in the present problem is a new nonlinearity which needs separate investigation. Upon completing [1] we learned of the work of Ifrim and Tataru [2] where the gravity water wave equation with constant gravity and constant vorticity is studied and a similar estimate on the lifespan of the solution is obtained. In this short note we demonstrate that our transformations in [1] can be easily modified to allow for nonzero constant vorticity, and a similar energy method as in [1] gives an estimate $T\gtrsim\epsilon^{-2}$ for the lifespan $T$ of solutions with data which are size $\epsilon$ perturbations of the equilibrium. In particular, the effect of the constant vorticity is an extra linear term with constant coefficient in the transformed equation, which can be further transformed away by a bounded linear transformation. This note serves as an appendix to the aforementioned work of the authors.
On the Motion of a Self-Gravitating Incompressible Fluid with Free Boundary
Abstract: We consider the motion of the interface separating a vacuum from an inviscid, incompressible, and irrotational fluid, subject to the self-gravitational force and neglecting surface tension, in two space dimensions. The fluid motion is described by the Euler-Poisson system in moving bounded simply connected domains. A family of equilibrium solutions of the system are the perfect balls moving at constant velocity. We show that for smooth data which are small perturbations of size $\epsilon$ of these static states, measured in appropriate Sobolev spaces, the solution exists and remains of size $\epsilon$ on a time interval of length at least $c\epsilon^{-2},$ where $c$ is a constant independent of $\epsilon.$ This should be compared with the lifespan $O(\epsilon^{-1})$ provided by local well-posedness. The key ingredient of our proof is finding a nonlinear transformation which removes quadratic terms from the nonlinearity. An important difference with the related gravity water waves problem is that unlike the constant gravity for water waves, the self-gravity in the Euler-Poisson system is nonlinear. As a first step in our analysis we also show that the Taylor sign condition always holds and establish local well-posedness for this system.
The Cauchy problem for wave maps on hyperbolic space in dimensions $d \geq 4$
Andrew Lawrie, Sung-Jin Oh, Sohrab Shahshahani
Abstract: We establish global well-posedness and scattering for wave maps from $d$-dimensional hyperbolic space into Riemannian manifolds of bounded geometry for initial data that is small in the critical Sobolev space for $d \geq 4$. The main theorem is proved using the moving frame approach introduced by Shatah and Struwe. However, rather than imposing the Coulomb gauge we formulate the wave maps problem in Tao's caloric gauge, which is constructed using the harmonic map heat flow. In this setting the caloric gauge has the remarkable property that the main `gauged' dynamic equations reduce to a system of nonlinear scalar wave equations on $\mathbb{H}^{d}$ that are amenable to Strichartz estimates rather than tensorial wave equations (which arise in other gauges such as the Coulomb gauge) for which useful dispersive estimates are not known. This last point makes the heat flow approach crucial in the context of wave maps on curved domains.
Gap Eigenvalues and Asymptotic Dynamics of Geometric Wave Equations on Hyperbolic Space
Abstract: In this paper we study $k$-equivariant wave maps from the hyperbolic plane into the $2$-sphere as well as the energy critical equivariant $SU(2)$ Yang-Mills problem on $4$-dimensional hyperbolic space. The latter problem bears many similarities to a $2$-equivariant wave map into a surface of revolution. As in the case of $1$-equivariant wave maps considered in~\cite{LOS1}, both problems admit a family of stationary solutions indexed by a parameter that determines how far the image of the map wraps around the target manifold. Here we show that if the image of a stationary solution is contained in a geodesically convex subset of the target, then it is asymptotically stable in the energy space. However, for a stationary solution that covers a large enough portion of the target, we prove that the Schr\"odinger operator obtained by linearizing about such a harmonic map admits a simple positive eigenvalue in the spectral gap. As there is no a priori nonlinear obstruction to asymptotic stability, this gives evidence for the existence of metastable states (i.e., solutions with anomalously slow decay rates) in these simple geometric models.
Equivariant Wave Maps on the Hyperbolic Plane with Large Energy
Abstract: In this paper we continue the analysis of equivariant wave maps from 2-dimensional hyperbolic space into surfaces of revolution that was initiated in [13, 14]. When the target is the hyperbolic plane we proved in [13] the existence and asymptotic stability of a 1-parameter family of finite energy harmonic maps indexed by how far each map wraps around the target. Here we conjecture that each of these harmonic maps is globally asymptotically stable, meaning that the evolution of any arbitrarily large finite energy perturbation of a harmonic map asymptotically resolves into the harmonic map itself plus free radiation. Since such initial data exhaust the energy space, this is the soliton resolution conjecture for this equation. The main result is a verification of this conjecture for a nonperturbative subset of the harmonic maps.
Profile decompositions for wave equations on hyperbolic space with applications
Abstract: The goal for this paper is twofold. Our first main objective is to develop Bahouri-Gerard type profile decompositions for waves on hyperbolic space. Recently, such profile decompositions have proved to be a versatile tool in the study of the asymptotic dynamics of solutions to nonlinear wave equations with large energy. With an eye towards further applications, we develop this theory in a fairly general framework, which includes the case of waves on hyperbolic space perturbed by a time-independent potential. Our second objective is to use the profile decomposition to address a specific nonlinear problem, namely the question of global well-posedness and scattering for the defocusing, energy critical, semi-linear wave equation on three-dimensional hyperbolic space, possibly perturbed by a repulsive time-independent potential. Using the concentration compactness/rigidity method introduced by Kenig and Merle, we prove that all finite energy initial data lead to a global evolution that scatters to linear waves in infinite time. This proof will serve as a blueprint for the arguments in a forthcoming work, where we study the asymptotic behavior of large energy equivariant wave maps on the hyperbolic plane.
Double and multiple knockout simulations for genome-scale metabolic network reconstructions
Yaron AB Goldstein & Alexander Bockmayr
Algorithms for Molecular Biology volume 10, Article number: 1 (2015)
Constraint-based modeling of genome-scale metabolic network reconstructions has become a widely used approach in computational biology. Flux coupling analysis is a constraint-based method that analyses the impact of single reaction knockouts on other reactions in the network.
We present an extension of flux coupling analysis for double and multiple gene or reaction knockouts, and develop corresponding algorithms for an in silico simulation. To evaluate our method, we perform a full single and double knockout analysis on a selection of genome-scale metabolic network reconstructions and compare the results.
A prototype implementation of double knockout simulation is available at http://hoverboard.io/L4FC.
Constraint-based modeling has become a widely used approach for the analysis of genome-scale reconstructions of metabolic networks [1]. Given a set of metabolites \(\mathcal{M}\) and a set of reactions \(\mathcal{R}\), the metabolic network is represented by its stoichiometric matrix \(S \in \mathbb {R}^{\mathcal {M} \times \mathcal {R}}\), and a subset of irreversible reactions \(\text {Irr} \subseteq \mathcal {R}\). The flux cone \(C = \{v \in \mathbb {R}^{\mathcal {R}} \mid Sv = 0, v_{r} \geq 0, r \in \text {Irr}\}\) contains all steady-state flux vectors satisfying the stoichiometric and thermodynamic irreversibility constraints. Based on this flux cone, many analysis methods have been proposed over the years (see e.g. [2] for an overview). Flux Balance Analysis (FBA) [3,4] solves a linear program (LP) \(\max\{z(v) \mid Sv=0,\ l \leq v \leq u\}\) over the (truncated) flux cone in order to predict how efficiently an organism can realize a certain biological objective, represented by the linear objective function z(v). For example, one may compute the maximal biomass production rate under some specific growth conditions. Flux Coupling Analysis (FCA) [5,6] studies dependencies between reactions. Here the question is whether or not for all steady-state flux vectors v∈C, zero flux \(v_r = 0\) through some reaction r implies zero flux \(v_s = 0\), for some other reaction s.
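As an illustration of how such an FBA linear program can be set up, the following minimal sketch uses SciPy on a toy network; the stoichiometric matrix, flux bounds, and objective are invented for demonstration and are not taken from any of the cited reconstructions.

```python
# FBA as a linear program on a toy network (illustrative sketch only).
import numpy as np
from scipy.optimize import linprog

# Toy network: r1 produces A, r2 converts A to B, r3 consumes B; all irreversible.
S = np.array([[1, -1,  0],    # metabolite A
              [0,  1, -1]])   # metabolite B
lower = [0.0, 0.0, 0.0]
upper = [10.0, 10.0, 10.0]

# Maximize z(v) = v3; linprog minimizes, hence the negated objective coefficient.
c = np.array([0.0, 0.0, -1.0])
res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
              bounds=list(zip(lower, upper)), method="highs")
print(res.x)  # optimal flux vector, here [10, 10, 10]
```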
Knockout analysis has become an important technique for the study of metabolic networks and in metabolic engineering. Starting from flux balance analysis (FBA), various in silico screening methods for genetic modifications have been developed, see [7,8] for an overview. On the one hand, complete methods have been proposed, which systematically explore all possible knockout sets up to a given size, e.g. [9,10]. On the other hand, there exist heuristic algorithms such as [11-14], which may be considerably faster, but in general are not complete. Klamt et al. [15-17] developed the related concept of minimal cut sets, which are (inclusion-wise) minimal sets of reactions whose knockout will block certain undesired flux distributions while maintaining others.
Recent progress in the development of algorithms for flux coupling analysis (FCA) [6,18] may lead to a different approach. FCA [5] describes the impact of each possible single reaction knockout in a metabolic network. It analyzes which other reactions become blocked after removing one reaction ("directional coupling"), and which reactions are always active together ("partial coupling"). As we will see, using flux coupling information inside a double or multiple knockout simulation may significantly reduce the search space, without losing any information.
In this paper, we present an algorithmic framework for double and multiple knockouts in qualitative models of metabolic networks. We will use a lattice-theoretic approach [18], which includes classical constraint-based models at steady-state as a special case, but which is much more general. We illustrate and evaluate our method by computing full double knockout simulations on a selection of genome-scale metabolic network reconstructions. In particular, we compare the impact of single vs. double reaction knockouts on the other reactions in the network. We also show how our method can be extended to gene (in contrast to reaction) knockouts, and provide computational results for both cases.
Our algorithms are based on an efficient search for the maximal element in suitably defined lattices [18]. To simulate all double or multiple reaction knockouts, we describe a method to select a subset of the reactions as representatives for the whole system. More precisely, we partition the reaction set in equivalence classes of partially coupled reactions. This enables us to obtain the information about all possible double or multiple reaction knockouts much faster and to store the results in a compact format.
The approach developed in this paper is a qualitative method. We do not measure the quantitative impact of knockout sets on the cellular growth rate (or other metabolic fluxes) as this would be done in an FBA approach. Instead, we count how many reactions become blocked by a knockout, similar to the flux balance impact degree introduced in [19]. However, even though we do not apply FBA to evaluate the impact of a knockout, the idea of working with representatives for reaction classes via partial coupling could also be applied in an FBA context. Thus, studies like [20] and even MILP-based approaches like [21] might benefit from this method.
Reaction coupling in the context of knockout analysis
We start from a metabolic network \(\mathcal {N} = (\mathcal {M}, \mathcal {R}, S, \text {Irr})\) given by a set of metabolites \(\mathcal{M}\), a set of reactions \(\mathcal{R}\), a stoichiometric matrix \(S \in \mathbb {R}^{\mathcal {M} \times \mathcal {R}}\), and a set of irreversible reactions \(\text {Irr} \subseteq \mathcal {R}\), see Figure 1 for an example. The set \(C = \{ v \in \mathbb {R}^{\mathcal {R}} \mid Sv = 0, v_{r} \geq 0, r \in \text {Irr}\}\) of all flux vectors \(v \in \mathbb {R}^{\mathcal {R}}\) satisfying the steady-state (mass balance) constraints \(Sv=0\) and the thermodynamic irreversibility constraints \(v_r \geq 0\), for all r∈Irr, is called the steady-state flux cone. A reaction \(s\in \mathcal {R}\) is called blocked if \(v_s = 0\), for all v∈C, otherwise s is unblocked. Two unblocked reactions r,s are called directionally coupled [5], written \(r \stackrel {=0}{\rightarrow } s\), if for all v∈C, \(v_r = 0\) implies \(v_s = 0\). A possible biological interpretation is that the reactions directionally coupled to r are those reactions that will become blocked by knocking out the reaction r.
Example network with corresponding lattice and coupling relations. The network contains the set of metabolites \(\mathcal {M} = \{A,B,C,D\}\) and the set of reactions \(\mathcal {R} = \{1,2,3,4,5,6\}\). We assume that all coefficients \(s_{mr}\) of the stoichiometric matrix S belong to {0,+1,−1}. Thus, reaction 2 has the stoichiometry \(s_{A2}=-1\), \(s_{B2}=s_{C2}=1\) and \(s_{D2}=0\). The set of irreversible reactions is \(\text {Irr} = \mathcal {R} \setminus \{1\}\). A possible flux vector satisfying the steady-state condition \(Sv=0\) is v=(0,1,1,2,1,1) with supp v={2,3,4,5,6}. The corresponding lattice contains the trivial element ∅ representing the vector v=0 and the minimal (non-trivial) elements {1,2,3,4},{1,4,5,6} and {2,3,4,5,6}. The maximal element is {1,2,3,4,5,6}, i.e., there is no blocked reaction. a) There are two pairs of partially coupled reactions, namely 2⇔3 and 5⇔6. Therefore, no knockout sets containing reaction 3 or 5 need to be analysed. The impact of a double knockout of {3,r} will be the same as for {2,r}. b) Reaction 1 is coupled to reaction 4. Thus, a double knockout of {1,4} will have the same effect as the simple knockout of 4. In both cases, all reactions {1,2,3,4,5,6} get blocked.
To determine which reactions are coupled, a simple approach would be to solve for each pair of reactions (r,s) two linear programs (LPs) and to check whether \(\max \{v_s \mid v \in C, v_r = 0\} = \min \{v_s \mid v \in C, v_r = 0\} = 0\). During the last years, efficient flux coupling algorithms have been developed [6,18] that drastically reduce the number of LPs to be solved, so that genome-wide metabolic network reconstructions can now be analyzed in a few minutes on a desktop computer (compared to a couple of days of running time before).
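A rough sketch of this naive two-LP test is given below; it is not the implementation of [6,18], finite flux bounds stand in for the unbounded cone C, and the toy network is invented for illustration.

```python
# Naive directional-coupling test r =0-> s via two LPs (illustrative sketch only).
import numpy as np
from scipy.optimize import linprog

def directionally_coupled(S, bounds, r, s, tol=1e-9):
    """Check r =0-> s: fix v_r = 0 and test whether max v_s = min v_s = 0."""
    n = S.shape[1]
    kb = list(bounds)
    kb[r] = (0.0, 0.0)                    # impose the knockout v_r = 0
    extrema = []
    for sign in (1.0, -1.0):              # sign=+1 yields min v_s, sign=-1 yields max v_s
        c = np.zeros(n)
        c[s] = sign
        res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=kb,
                      method="highs")
        extrema.append(res.x[s])
    return all(abs(v) < tol for v in extrema)

# Toy network: r1 -> A, A -> B (r2), B -> (r3); knocking out r1 blocks r3.
S = np.array([[1, -1, 0], [0, 1, -1]])
bounds = [(0.0, 10.0)] * 3
print(directionally_coupled(S, bounds, r=0, s=2))   # True
```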
Whether reactions are blocked or coupled does not depend on the specific flux values. It only matters whether or not \(v_r = 0\) resp. \(v_s = 0\). In this sense, flux coupling is a qualitative property that can be analysed by studying the set \(L_C = \{\text{supp}\; v \mid v \in C\}\) of all supports of flux vectors v∈C, where \(\text {supp}\; v = \{r\in \mathcal {R}\,|\,v_{r} \neq 0\}\). Each element \(a \in L_C\) is the set of active reactions of some flux vector v∈C. Therefore, we can interpret \(L_C\) as the set of all possible reaction sets or pathways in the flux cone C. Since \(L_C\) does not contain any information about specific flux values, we also speak of a qualitative model of the metabolic network \(\mathcal{N}\).
In [18,22], we have shown that flux coupling analysis can be extended to much more general qualitative models, where the space of possible pathways \(L \subseteq 2^{\mathcal {R}}\) can be any non-empty subset of the power set \(2^{\mathcal {R}}\), e.g. L={supp v∣v∈C,v satisfies thermodynamic loop law constraints}. The definition of flux coupling needs only be slightly modified in order to be applicable to these qualitative models. A reaction \(t\in \mathcal {R}\) is called blocked in L if and only if for all a∈L, we have t∉a. For reactions \(r, s \in \mathcal {R}\) that are unblocked in L, we define \(r \stackrel {=0}{\rightarrow } s\) in L, if for all a∈L, r∉a implies s∉a. To distinguish between the original flux coupling and its qualitative extension, we will call the latter reaction coupling from now on.
The goal of this paper is to study more general dependencies between reactions, where the flux through some reaction has to be zero, if the flux through two or more other reactions is zero.
Definition 1 (Joint reaction coupling).
Given a qualitative model \(L \subseteq 2^{\mathcal {R}}\) of a metabolic network \(\mathcal{N}\), let \(r, s, t \in \mathcal {R}\) be unblocked reactions in L such that neither \({r \stackrel {=0}{\rightarrow } t}\) in L nor \({s \stackrel {=0}{\rightarrow } t}\) in L holds. We say t is jointly coupled to the pair {r,s} in L, written \( {\left \{ r, s \right \}}\stackrel {=0}{\rightarrow } t\) in L, if for all a∈L, r∉a and s∉a implies t∉a.
More generally, given a set \(\mathcal {K} \subseteq \mathcal {R}\) of unblocked reactions in L, we say that t is jointly coupled to \(\mathcal{K}\) in L, written \(\mathcal {K} \stackrel {=0}{\rightarrow } t\) in L, if for all a∈L, \(a \cap \mathcal {K} = \emptyset \) implies t∉a, and \( {\mathcal {K}'} \stackrel {=0}{\rightarrow } t\) in L does not hold for any \(\emptyset \neq \mathcal {K}' \subsetneq \mathcal {K}\).
Note that in the definition of the joint coupling \({\left \{ r, s \right \}}\stackrel {=0}{\rightarrow } t\) in L, we require that the simple couplings \({r \stackrel {=0}{\rightarrow }t}\) in L and \({s \stackrel {=0}{\rightarrow } t}\) in L both do not hold. Thus, joint coupling is about the synergistic effect of a pair of reactions r,s on some other reaction t, which cannot be obtained by either r or s alone. Similarly, \(\mathcal {K} \stackrel {=0}{\rightarrow }t\) in L can only hold if \({\mathcal {K}'} \stackrel {=0}{\rightarrow }t\) in L does not hold, for any smaller knockout set \(\emptyset \neq \mathcal {K}' \subsetneq \mathcal {K}\).
Lattices and maximal elements
In [18], we presented a generic algorithm for flux coupling analysis in qualitative models. This algorithm determines the pairs of coupled reactions by computing the maximal element in suitably defined lattices.
A family of reaction sets \(L \subseteq 2^{\mathcal {R}}\) is a (finite) lattice if ∅∈L and for all \(a_1, a_2 \in L\), we have \(a_1 \cup a_2 \in L\). The biological interpretation of this property is that the combination of two metabolic pathways is again a pathway. In [18] we showed that \(L_C\) is a lattice. Any finite lattice L has a unique maximal element \(1_L\) (w.r.t. set inclusion), which is simply the union of all lattice elements, i.e., \(\displaystyle 1_{L} = \bigcup _{a \in L} a\). For any subset of reactions \(\mathcal {K} \subseteq \mathcal {R}\), we may define the family
$$L_{\bot\mathcal{K}} = \left\{ a\in L \ |\ a \cap \mathcal{K} = \emptyset\right\} $$
called L without \(\mathcal{K}\), consisting of those reaction sets a∈L that do not contain any reaction in \(\mathcal{K}\). If L is a lattice, then \(L_{\bot \mathcal {K}}\) is a lattice again, and thus it has a maximal element
$$1_{L_{\bot \mathcal{K}}} = \bigcup_{a\in L,\; a \cap \mathcal{K} = \emptyset} a. $$
Given any lattice \(L \subseteq 2^{\mathcal {R}}\), we have shown in [18] that a reaction \(r \in \mathcal {R}\) is unblocked in L if and only if \(r \in 1_L\). For two unblocked reactions \(r, s \in 1_L\), the coupling relation \(r \stackrel {=0}{\rightarrow } s\) in L holds if and only if \(s \notin 1_{L_{\bot \{r\}}}\). In [18], we also presented an efficient algorithm to compute \(1_L\) and \(1_{L_{\bot \{r\}}}\). Once these maximal elements have been found, one can immediately determine the blocked and coupled reactions.
In this paper, we generalize these results to joint couplings. We present a method to compute the effects of double (resp. multiple) reaction knockouts based on the maximal element \(1_{L_{\bot \{r,s\}}}\) (resp. \(1_{L_{\bot \mathcal {K}}}\)).
Proposition 1.
If \(L \subseteq 2^{\mathcal {R}}\) is a lattice, then for any unblocked reactions \(r, s, t \in 1_L\) we have
$$\{ r, s \} \stackrel{=0}{\rightarrow} t \text{ in } L \text{ if and only if } t \in \left(1_{L_{\bot\{r\}}} \cap 1_{L_{\bot\{s\}}} \right) \setminus 1_{L_{\bot\{r,s\}}}. $$
More generally, for a set of unblocked reactions \(\mathcal {K} \subseteq 1_{L}\), we have
$$\mathcal{K} \stackrel{=0}{\rightarrow} t \text{ in } L \text{ if and only if } t \in \left(\,\bigcap_{k \in \mathcal{K}} 1_{L_{\bot(\mathcal{K} \setminus \{ k \})}} \right) \setminus 1_{L_{\bot \mathcal{K}}}. $$
We prove only the first part. The second part follows by induction.
Assume \(\{r, s\} \stackrel{=0}{\rightarrow} t\) in L. By definition, we know \(t \notin a\) for all \(a \in L_{\bot\{r, s\}}\), and therefore \(t \notin 1_{L_{\bot\{r, s\}}}\). If \(\{r, s\} \stackrel{=0}{\rightarrow} t\) in L, we also know that neither \(r \stackrel{=0}{\rightarrow} t\) in L nor \(s \stackrel{=0}{\rightarrow} t\) in L holds, and that all three reactions are unblocked, i.e., \(r, s, t \in 1_{L}\). As discussed in [18], we have \(r \stackrel{=0}{\rightarrow} t\) in L if and only if \(t \in 1_{L} \setminus 1_{L_{\bot\{r\}}}\). Since \(t \in 1_{L}\), we conclude \(t \in 1_{L_{\bot\{r\}}}\), and by the same argument \(t \in 1_{L_{\bot\{s\}}}\). Hence, \(t \in \left(1_{L_{\bot\{r\}}} \cap 1_{L_{\bot\{s\}}}\right) \setminus 1_{L_{\bot\{r,s\}}}\).
If \(t \in \left(1_{L_{\bot\{r\}}} \cap 1_{L_{\bot\{s\}}}\right) \setminus 1_{L_{\bot\{r,s\}}}\) holds, then \(t \notin 1_{L_{\bot\{r,s\}}}\), which implies \(t \notin a\) for all \(a \in L_{\bot\{r,s\}}\). Since \(t \in 1_{L_{\bot\{r\}}} \cap 1_{L_{\bot\{s\}}}\), we can again apply [18] to see that \(r \stackrel{=0}{\rightarrow} t\) in L and \(s \stackrel{=0}{\rightarrow} t\) in L do not hold. Finally, since \(r, s, t \in 1_{L}\) are unblocked, we get \(\{r,s\} \stackrel{=0}{\rightarrow} t\) in L.
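To make Proposition 1 concrete, the following sketch (our own illustration, not the authors' implementation) applies it to the toy lattice of Figure 1, under the assumption that L consists of all unions of the generating pathways listed there (plus the empty set); under this assumption \(1_{L_{\bot \mathcal {K}}}\) is simply the union of all generators disjoint from \(\mathcal{K}\).

```python
# Joint reaction coupling via Proposition 1 on the toy lattice of Figure 1.
# Assumption: L = all unions of the generators below (plus the empty set).
from itertools import combinations

generators = [frozenset({1, 2, 3, 4}),
              frozenset({1, 4, 5, 6}),
              frozenset({2, 3, 4, 5, 6})]

def max_element(knockout=frozenset()):
    """1_{L⊥K}: union of all generators avoiding the knockout set K."""
    kept = [g for g in generators if g.isdisjoint(knockout)]
    return frozenset().union(*kept) if kept else frozenset()

one_L = max_element()                        # the unblocked reactions

def jointly_coupled_to(r, s):
    """All t with {r, s} =0-> t in L (pair case of Proposition 1)."""
    return (max_element({r}) & max_element({s})) - max_element({r, s})

for r, s in combinations(sorted(one_L), 2):
    targets = jointly_coupled_to(r, s)
    if targets:
        print(f"{{{r},{s}}} =0-> {sorted(targets)}")
```

Under this assumption, the sketch reports, for instance, that the double knockout {2,5} additionally blocks reactions 1 and 4, although neither single knockout blocks them.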
In [22], we considered even more general qualitative models \(\emptyset \neq P \subseteq 2^{\mathcal {R}}\), where P need not be a lattice. We showed there that qualitative flux coupling analysis can be done in the lattice L P=〈P〉 that is generated by P. The results we will present in this paper would be applicable to those qualitative models P as well, but for simplicity we will continue to work with models L that are lattices.
Classes of partially coupled reactions
To determine joint coupling relations \(\mathcal {K} \stackrel {=0}{\rightarrow } t\) in L, we will use as much as possible the information that can be obtained from standard couplings \(r \stackrel {=0}{\rightarrow } s\) in L, i.e., with normal FCA. If \(r \stackrel {=0}{\rightarrow } s\) in L, any pathway a∈L that does not use reaction r will also not use reaction s. Thus, knocking out s in addition to r will not affect the system, i.e., {a∈L | r,s∉a}={a∈L | r∉a}.
Additional improvements can be obtained by looking at partially coupled reactions. Two unblocked reactions \(r, s \in 1_{L}\) are called partially coupled in the lattice L, written r↔s, if both \(r {\,\stackrel {=0}{\rightarrow }\,}s \;\text {in}\, L\) and \(s {\,\stackrel {=0}{\rightarrow }\,}r \text { in } L\). The relation ↔ is reflexive, symmetric and transitive, and thus an equivalence relation. Any equivalence relation defines a partition of its ground set into equivalence classes. In our case, \(1_{L} = \bigcup _{r \in 1_{L}} \left [ r \right ]_{{\,\leftrightarrow \,}}\), where \([r]_{\leftrightarrow } = \{s \in 1_{L} \ |\ r \leftrightarrow s\}\). An equivalence class can be represented by any of its elements, i.e., \([\!r]_{\leftrightarrow } = [\!\tilde r]_{\leftrightarrow }\) if \(r \leftrightarrow {\tilde r}\). By selecting one element from each equivalence class, we get a set of representatives Rep⊆1_L that covers all unblocked reactions, i.e., \(1_{L} = \bigcup _{r \in \texttt {Rep}} [\!r]_{\leftrightarrow }\). We will call [r]↔ the coupling class or reaction class of reaction r. Biologically, coupling classes can be interpreted as subsets of reactions that are always active together, similarly to the notion of enzyme subsets in [23].
For \(r, \tilde r \in \left [ r \right ]_{\,\leftrightarrow \,}\) and a∈L, we have r∈a if and only if \(\tilde r \in a\). Thus, a knockout of r has the same impact as a knockout of \(\tilde r\). Furthermore, r can only be blocked by another knockout \(k \notin [r]_{\leftrightarrow }\) if the same holds for \(\tilde r\), i.e., \(k\stackrel {=0}{\rightarrow } r\) in L if and only if \( k \stackrel {=0}{\rightarrow }{\tilde r}\) in L. It follows that to analyse the effect of a knockout pair \(\left \{ \tilde r,\tilde s \right \}\), one can instead knockout the corresponding representatives {r,s} with \(\tilde r \in [r]_{\leftrightarrow }\) in L and \(\tilde s \in [s]_{\leftrightarrow }\). To simulate all double knockouts, one does not have to check all pairs \(\left \{ \left \{ \tilde r, \tilde s \right \} \,|\, \tilde r, \tilde s \in 1_{L}\right \}\), but it is enough to iterate over a fixed set of representatives: {{r,s}|r,s∈Rep}, see Figure 1a) for illustration. As we will see, for many genome-scale network reconstructions, there are only about half as many different equivalence classes as there are unblocked reactions (Table 1). Thus, only about 1/4 of all original pairs need to be checked. As mentioned before, although we apply this compression to reaction coupling analysis, it could also be combined with FBA-based methods.
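The compression by representatives can be sketched as follows. This is an illustrative Python snippet in which the pairwise FCA results are assumed to be available in a hypothetical lookup table fca; all names are placeholders.

```python
from itertools import combinations

def coupling_classes(unblocked, fca):
    """Group partially coupled reactions into classes; fca[(r, s)] is True
    if r -->(=0) s holds in L (assumed precomputed by standard FCA)."""
    classes = []
    for r in unblocked:
        for cls in classes:
            rep = next(iter(cls))
            if fca.get((r, rep), False) and fca.get((rep, r), False):
                cls.add(r)            # r <-> rep, i.e. partially coupled
                break
        else:
            classes.append({r})       # r opens a new coupling class
    return classes

# Hypothetical FCA result: 1 <-> 2 and 3 <-> 4 are partially coupled.
fca = {(1, 2): True, (2, 1): True, (3, 4): True, (4, 3): True}
classes = coupling_classes([1, 2, 3, 4], fca)
reps = sorted(min(c) for c in classes)
print(classes, reps)                  # [{1, 2}, {3, 4}]  with representatives [1, 3]
print(list(combinations(reps, 2)))    # only (1, 3) has to be tested instead of all 6 pairs
```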
Table 1 Knockout impact on different networks
In [18], we introduced an algorithm that performs flux coupling analysis by computing maximal elements of suitably defined finite lattices \(\tilde {L}\) (see also the section above on lattices and maximal elements). The basic ingredient of this algorithm is a method that checks if a given reaction \(r \in \mathcal {R}\) is blocked in \(\tilde {L}\), and if not returns a pathway \(a\in \tilde {L}\) with r∈a. The maximal element \(1_{\tilde {L}}\) of \(\tilde {L}\) is computed by improving lower and upper bounds \(lb,ub \in \tilde {L}\) with \(lb \subseteq 1_{\tilde {L}} \subseteq ub\). In each step of the algorithm, either lb is increased or ub is decreased, until finally \(lb = ub = 1_{\tilde {L}}\). The following Algorithm 1 is an extension of this method. It allows finding all the reactions in \(\mathcal {R}\) that are unblocked after a multiple knockout \(\mathcal {K} \subseteq 1_{L}\).
As discussed in [18], the flexibility of the lattice-based approach comes from hiding the search for specific pathways in a separate function FindPath. For traditional steady-state based models, FindPath can be realized by solving the linear programs \(\max \{\pm v_{t}|Sv = 0, v_{\text {Irr}} \geq 0, v_{k} = 0, k \in \mathcal {K}\}\). But, one can also use other modeling hypotheses and corresponding algorithmic methods (see [22] for the example of thermodynamic loop law constraints). The skeleton of Algorithm 1 will remain the same, only the auxiliary function FindPath has to be changed.
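For illustration, a schematic and deliberately unoptimized realization of FindPath for the steady-state case could look as follows, using scipy's generic LP solver. The small example network, the flux bound and all names are hypothetical; this is a sketch, not our actual implementation.

```python
import numpy as np
from scipy.optimize import linprog

def find_path(S, irreversible, knockout, t, flux_bound=1000.0):
    """Return the support of a flux vector with v_t != 0 under the knockout
    (i.e. a pathway containing t), or None if t is blocked."""
    n_rxn = S.shape[1]
    for sign in (+1, -1):                            # solve max v_t and max -v_t
        c = np.zeros(n_rxn)
        c[t] = -sign                                  # linprog minimizes, hence the minus
        bounds = []
        for j in range(n_rxn):
            lo = 0.0 if j in irreversible else -flux_bound
            hi = flux_bound
            if j in knockout:
                lo = hi = 0.0                         # v_k = 0 for knocked-out reactions
            bounds.append((lo, hi))
        res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
        if res.success and abs(res.x[t]) > 1e-9:
            return {j for j in range(n_rxn) if abs(res.x[j]) > 1e-9}
    return None

# Toy network: uptake -> A, two parallel conversions A -> B, export B ->.
S = np.array([[1, -1, -1,  0],    # metabolite A
              [0,  1,  1, -1]])   # metabolite B
print(find_path(S, irreversible={0, 1, 2, 3}, knockout={1}, t=3))     # {0, 2, 3}
print(find_path(S, irreversible={0, 1, 2, 3}, knockout={1, 2}, t=3))  # None: blocked
```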
In Algorithm 1, we perform a multiple knockout analysis with a fixed knockout set \(\mathcal {K}\). For a full d-dimensional knockout analysis, we would have to iterate over all \(\mathcal {K} \subseteq 1_{L}\) with \(|\mathcal {K}| = d\), i.e., we would have to run the algorithm \(O\left (\binom {|\mathcal {R}|}{d}\right)\) times. In each iteration, we have to solve \(O(|\mathcal {R}|)\) linear programs. Since linear programming can be done in polynomial time, full d-dimensional knockout analysis is still polynomial (for fixed d), but computationally very expensive as soon as d>2. However, we can still use the partition of \(1_{L}\) into equivalence classes of partially coupled reactions. Thus, our next Algorithm 2 calculates representatives of all jointly coupled reactions in the case of double knockouts.
In Algorithm 2, we iterate over a subset of all possible double knockouts without losing any information. For this, we filter out redundant knockout pairs {r,s} with \(r \stackrel {=0}{\rightarrow } s\) in L (by checking whether \(s \in 1_{L_{\bot \{r\}}}\)). It is unnecessary to test such a pair, because a knockout of {r,s} is equivalent to the single knockout of r, see Figure 1b) for illustration. For higher-dimensional knockout sets one can proceed in a similar fashion:
Let \(\mathcal {K} = \left \{ k_{1}, \ldots, k_{d} \right \} \subseteq \texttt {Rep}\) be a d-dimensional knockout set. Then we do not need to test \(\mathcal {K}\) if any of the following conditions is fulfilled:
\(k_{i} {\,\stackrel {=0}{\rightarrow }\,}k_{j} \text { in } L\) for two reactions \(k_{i}, k_{j} \in \mathcal {K}\),
\(\left \{ k_{i_{1}}, k_{i_{2}} \right \} {\,\stackrel {=0}{\rightarrow }\,}k_{j} \text { in } L\) for three reactions \(k_{i_{1}}, k_{i_{2}}, k_{j} \in \mathcal {K}\),
\(\left \{ k_{i_{1}}, k_{i_{2}}, k_{i_{3}} \right \} {\,\stackrel {=0}{\rightarrow }\,}k_{j}\; \text {in}\; L\) for four reactions \(k_{i_{1}}, k_{i_{2}}, k_{i_{3}}, k_{j} \in \mathcal {K}\),
Standard FCA finds all pairs of reactions that are directionally coupled. This allows us to iterate in Algorithm 2 over all \(\{r, s\} \in \mathcal {K}_{2, 1}\) with
$$ \mathcal{K}_{2, 1} = \left\{ \left\{ k_{1}, k_{2} \right\} \subseteq {\texttt{Rep}} \, |\, \text{not } k_{1} {\,\stackrel{=0}{\rightarrow}\,}k_{2} \text{ in } L \text{ and not } k_{2} {\,\stackrel{=0}{\rightarrow}\,}k_{1} \text{ in } L\right\}. $$
\(\mathcal {K}_{2, 1}\) contains all 2-tuples of coupling class representatives that are not coupled with respect to knockouts up to cardinality 1.
If one is interested in performing a full triple knockout analysis and joint coupling information is available, one can adapt the filtering technique and iterate over all \(\{r_{1}, r_{2}, r_{3}\} \in \mathcal {K}_{3, 1}\) (or \(\mathcal {K}_{3, 2}\)) with
$$\begin{array}{@{}rcl@{}} \mathcal{K}_{3, 1} &=& \left\{ \left\{ k_{1}, k_{2}, k_{3} \right\} \subseteq {\texttt{Rep}} \, |\, \text{not } k_{i} {\,\stackrel{=0}{\rightarrow}\,}k_{j} \text{ in } L,\right.\\ &&\left.\text{ for all } i \neq j \in \{1,2,3\} \right\},\\ \mathcal{K}_{3, 2} &=& \left\{ \left\{ k_{1}, k_{2}, k_{3} \right\} \subseteq {\texttt{Rep}} \, |\, \text{not } k_{i_{1}} {\,\stackrel{=0}{\rightarrow}\,}k_{j} \text{ in } L \right. \\ & & \text{ and not } \left\{ k_{i_{1}}, k_{i_{2}} \right\} {\,\stackrel{=0}{\rightarrow}\,}k_{j} \text{ in } L, \text{ for all pairwise} \\&&\left.\text{different } i_{1}, i_{2}, j \in \{1,2,3\} \right\}. \end{array} $$
\(\mathcal {K}_{3, 1}\) contains all 3-tuples of coupling class representatives that are not directionally coupled, and \(\mathcal {K}_{3, 2}\) all triples that do not contain reactions that are coupled with respect to knockouts up to cardinality 2. Similarly one could define \(\mathcal {K}_{d, m}\).
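A hedged sketch of how these candidate sets could be assembled from precomputed coupling data; the lookup tables fca (single couplings) and joint (pairwise joint couplings) as well as all names and values are hypothetical placeholders.

```python
from itertools import combinations, permutations

def pairwise_uncoupled(subset, fca):
    """True if no directional coupling k_i -->(=0) k_j holds within 'subset'."""
    return all(not fca.get((a, b), False) for a, b in permutations(subset, 2))

def candidates_k_d1(reps, fca, d):
    """K_{d,1}: d-subsets of representatives free of single-reaction couplings."""
    return [set(s) for s in combinations(sorted(reps), d) if pairwise_uncoupled(s, fca)]

def candidates_k_32(reps, fca, joint):
    """K_{3,2}: triples additionally free of joint couplings {a,b} -->(=0) c,
    where 'joint' is keyed by (frozenset({a, b}), c)."""
    return [s for s in candidates_k_d1(reps, fca, 3)
            if all(not joint.get((frozenset({a, b}), c), False)
                   for a, b, c in permutations(s, 3))]

# Hypothetical coupling data over the representatives {1, 3, 5, 7}:
fca = {(1, 3): True}                              # 1 -->(=0) 3
joint = {(frozenset({5, 7}), 1): True}            # {5, 7} -->(=0) 1
print(candidates_k_d1([1, 3, 5, 7], fca, 2))      # K_{2,1}: all pairs except {1, 3}
print(candidates_k_32([1, 3, 5, 7], fca, joint))  # K_{3,2}: only {3, 5, 7} remains
```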
While these techniques are applied here only to reaction coupling analysis, they could also be combined with FBA-based methods. Thus, if one is interested in measuring the impact of all possible triple knockouts on FBA, it would be sufficient to solve \(\max \left \{v_{\textit {biomass}}| Sv = 0, v_{\text {Irr}} \geq 0, v_{\mathcal {K}} = 0\right \}\) for all \(\mathcal {K} \in \mathcal {K}_{3, 1}\) (if only FCA data is available) or all \(\mathcal {K} \in \mathcal {K}_{3, 2}\) (if FCA and joint coupling data is available).
The case of gene knockouts
Often metabolic networks contain regulatory rules for the gene products that catalyze the reactions, e.g. reaction \(r_{1}\) is catalyzed by the product of a gene \(g_{1}\) and reaction \(r_{2}\) is catalyzed by the gene product of \(g_{1}\) or \(g_{2}\). Here \(r_{1}\) is only possible if \(g_{1}\) is active, and \(r_{2}\) can only be blocked by a simultaneous knockout of the two genes \(g_{1}\) and \(g_{2}\). Typically, there is no 1-1 relationship between the set of genes \(\mathcal {G}\) and the set of reactions \(\mathcal {R}\). On the one hand, there are reactions that only get blocked by a combination of two or more gene knockouts, as indicated above in \(r_{2} \equiv g_{1} \vee g_{2}\). On the other hand, the knockout of a single gene \(g \in \mathcal {G}\) may block more than one reaction. For example, reactions \(r_{1}\) and \(r_{3}\) may both depend on the gene \(g_{1}\). Then one immediately gets that a knockout of \(g_{1}\) implies \(v_{1}=v_{3}=0\). Let us further assume that FCA and double reaction knockout analysis have been performed, leading to \(r_{3}\stackrel {=0}{\rightarrow } r_{4}\) in L and \(\left \{ r_{1}, r_{3} \right \}\stackrel {=0}{\rightarrow } r_{6}\) in L. Based on this information, we can extend the set of reactions that are blocked by the knockout of gene \(g_{1}\) to \(v_{1}=v_{3}=v_{4}=v_{6}=0\). Thus, in this example we have 2 reactions (\(r_{1},r_{3}\)) that are associated to the gene \(g_{1}\) based on information that is directly available in the network reconstruction, but in total 4 reactions (\(r_{1},r_{3},r_{4},r_{6}\)) that are coupled to the gene \(g_{1}\). We formalize these notions in the following definition.
Definition 2 (Gene coupling).
Consider a qualitative model \(L \subseteq 2^{\mathcal {R}}\) of a metabolic network with reaction set \(\mathcal {R}\) and gene set \(\mathcal {G}\). Let \(\alpha : 2^{\mathcal {G}} \rightarrow 2^{\mathcal {R}}, \Gamma \mapsto \mathcal {K}_{\Gamma }\) be a function defining a set of reactions \(\mathcal {K}_{\Gamma }\) associated to the knockout of all genes in the set Γ. For an unblocked reaction r∈1_L and \(\Gamma \subseteq \mathcal {G}\) we define:
$$\Gamma\stackrel{=0}{\rightarrow} r \text{ in } L \text{ if and only if } r \notin 1_{L_{\bot {\mathcal{K}_{\Gamma}}}}. $$
We say that the reaction r is coupled to the gene knockout Γ. If Γ={g} is a single gene, we simply write \(g\stackrel {=0}{\rightarrow } r\) in L.
Given the function \(\alpha : 2^{\mathcal {G}} \rightarrow 2^{\mathcal {R}}\), we can determine the reactions coupled to the gene set Γ by applying Algorithm 1 to the set of associated reactions \(\mathcal {K}_{\Gamma }\). Note that the definition of gene coupling slightly differs from the one of joint reaction coupling. Here, we do not exclude reactions that are already knocked out by single gene knockouts (or by knockouts of smaller gene sets). This is to account for the possibility that, for example, a reaction r may be associated to a single gene knockout \(g_{1}\), but not to the double knockout \(\{g_{1}, g_{2}\}\) (assume \(r \equiv g_{1} \vee \neg g_{2}\)).
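One simple way to realize the function α in practice is to evaluate the gene-protein-reaction rules of the reconstruction with the knocked-out genes set to false; in our actual implementation this is done with the JEval library (see the gene knockout runtime section below), while the following Python fragment is only an illustrative stand-in with hypothetical rule strings and gene names.

```python
# Hypothetical gene-protein-reaction (GPR) rules, one boolean expression per reaction.
gpr = {
    "r1": "g1",
    "r2": "g1 or g2",
    "r3": "g1 and g3",
}
genes = ("g1", "g2", "g3")

def associated_knockout(gamma, gpr, genes):
    """alpha: map a gene knockout set Gamma to the reaction set K_Gamma whose
    GPR rules evaluate to False once the genes in Gamma are switched off."""
    state = {g: (g not in gamma) for g in genes}
    return {r for r, rule in gpr.items() if not eval(rule, {}, state)}

print(sorted(associated_knockout({"g1"}, gpr, genes)))          # ['r1', 'r3']
print(sorted(associated_knockout({"g1", "g2"}, gpr, genes)))    # ['r1', 'r2', 'r3']
```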
To simulate the impact of all single gene knockouts, one can perform an iteration over all genes \(g \in \mathcal {G}\). Similarly, one can determine all double gene knockout effects by an iteration over all pairs of genes \(\{g_{1}, g_{2}\} \subseteq \mathcal {G}\). However, in contrast to Algorithm 2, we cannot use gene class representatives to decrease the number of pairs that have to be analyzed.
To evaluate our method, we simulated all single and double reaction knockouts for a number of genome-scale metabolic network reconstructions from the BiGG-database [24]. The computations were done on a MacBook Air (2012), with 1.8 GHz Intel Core i5, 4GB RAM, and running Java Oracle JDK 1.7.45 under Mac OS X 10.9. To solve linear programs (LPs), we used CPLEX Version 12.6.
Impact of double knockouts
Table 1 shows the impact of single and double reaction knockouts for the different networks. In most cases, the knockout of a single reaction class (due to the knockout of one or more of its reactions) blocks the reactions in 4 to 5 other reaction classes on average. The least robust system is S. aureus iSB619, where a single knockout has an average impact of almost 12 coupled reaction classes. In S. aureus iSB619, about 9.2% of all possible double knockouts {r,s} have joint coupling effects, i.e., there exist reactions \(t \in \mathcal {R}\) that are blocked by the double knockout {r,s}, but not by a single knockout of r or s alone. This is a comparatively large number. For the bigger E. coli models iAF1260 and iJO1366, only around 1% of all double knockouts of two uncoupled reaction classes {r,s} have an impact that exceeds the effects of the corresponding two single knockouts. In S. aureus iSB619, double knockouts also have very strong combined effects. In addition to the reaction classes that would be knocked out by r or s alone, on average more than 7 reaction classes are coupled to a double knockout corresponding to a joint coupling \(\{r, s\}\stackrel {=0}{\rightarrow } t\) in L. But even for the most robust system, M. tuberculosis iNJ661, a double knockout (if its impact is different from the two single knockouts) has on average a combined effect of 2 additional knocked-out classes resp. 5.8 reactions.
In our next experiment, we take the opposite perspective (Table 2). We analyse how robust an average reaction is to single or double knockouts. More precisely, we ask the following question: Given a reaction t, what are the possible choices for a single reaction r resp. a pair of reactions {r,s} such that \(r\stackrel {=0}{\rightarrow } t \) in L resp. \(\{r, s\} \stackrel {=0}{\rightarrow }t\) in L holds. This perspective corresponds to a lab experiment for finding knockout targets for the reaction t. Here, we consider single reactions instead of reaction classes. This means that for \(\{r, s\} \stackrel {=0}{\rightarrow } t\) in L with r,s,t∈Rep, we get |[r]|·|[s]| knockout options for all the |[t]| reactions that belong to the same reaction class as t.
Table 2 Average number of knockout options
For most of the studied networks, the average number of knockout options for a given target reaction is in the range of 25-85 single reactions and 100-150 reaction pairs. With all double knockout information at hand, one can reduce the set of all possible knockout candidates for a wet lab experiment to a small number, and additionally decide beforehand which of them have the smallest side effects.
Impact on biomass production
To finish our discussion, we study the impact of knockouts on biomass production. To measure this, we counted the number of single and double knockouts that block the biomass reaction. Table 3 presents the results for the largest available models of the respective organisms. For two of them, more than one biomass reaction was available. In the case of E. coli iJO1366, we present the results for the two biomass reactions, for S. aureus, we selected 2 out of the 14 available reactions.
Table 3 Number of knockouts for the biomass reaction in selected networks
We observe that for most of the organisms, the number of single knockouts that block biomass production is very similar to the number of different double knockouts (corresponding to joint couplings) having this property, although the number of double knockout candidates is much larger (quadratic in |1 L |).
Algorithmic considerations
To perform a double knockout analysis, we first run standard flux coupling analysis (FCA) using the L4FC routine from [18]. Then we calculate the unblocked reactions for each double knockout of a pair of reaction class representatives. Table 4 presents the running times for six genome-scale network reconstructions and the central metabolism of E. coli. Even for our largest network, E. coli iJO1366 with its 2583 reactions, the complete simulation of all double reaction knockouts took less than 1h 10 min.
Table 4 Runtime and number of solved LPs for double reaction knockouts (Algorithm 2)
Next we discuss the number of LPs we have to solve in order to obtain this additional information. For all our networks, double knockout analysis required solving 5 to 20 times as many LPs as single knockout analysis, i.e., classical FCA. While this seems to be a large number, it is relatively small compared to the complexity of the problem. A full double knockout simulation is comparable to iterating over all reactions r∈Rep, removing the reaction r and performing a single knockout simulation for each of the resulting subnetworks. Reusing known pathways as witnesses and including reaction coupling information as proposed in [18] allows performing |Rep| simulations with only 5 to 20 times the effort in LP solving. Table 1 shows that the median value for |Rep| is 370 for our networks.
In order to evaluate the runtime effect of our algorithmic improvements, we considered two variants of Algorithm 2. Variant A (no representatives): in the main loop of Algorithm 2, we do not iterate over all representatives r,s∈Rep, r<s, but over all pairs of uncoupled reactions \(r, s \in 1_{L}\), r<s, with not \(r \stackrel {=0}{\rightarrow } s\) in L and not \(s \stackrel {=0}{\rightarrow } r\) in L. Variant B (no witnesses): same as Variant A, but additionally we do not save witnesses, i.e. \(\mathcal {W} = \emptyset \).
These two experiments allow us to determine the time savings due to representatives (comparing Algorithm 2 and Variant A) and the time savings due to warm starts based on knowledge of existing reaction sets (comparing Variants A and B). We should emphasize here that Variant B is still more efficient than a naive brute-force algorithm. The runtime results are given in Table 5, where we stopped computations after a timeout of 6h. Table 5 shows that the efficiency of Algorithm 2 is mostly due to the re-use of (up to 10000) pathways as witnesses (factor 10 in the case of E. coli textbook and factor 100 for S. aureus). Nevertheless, iterating over the set of representatives adds another improvement of up to 50% (S. aureus). Since calculating a set of representatives takes very little effort, we highly recommend iterating over representatives whenever possible to profit from this additional speed-up.
Table 5 Runtime of variants of Algorithm 2 for computing double reaction knockouts
Gene knockouts
Table 6 gives the runtimes and the number of LPs for single and double gene knockouts. To determine the reactions associated to a (double) gene knockout, we used the library JEval, which allows fast evaluation of logical formulas given as Java strings. As expected, we are confronted with longer runtimes of up to almost 4 h for double gene knockouts, compared to less than 70 min for double reaction knockouts. This is due to the fact that we need to check every single pair of genes instead of a representative selection like the one we could apply in double reaction knockout analysis. In spite of this, with the methods proposed here, a full simulation of double reaction or double gene knockouts on a genome-scale metabolic network reconstruction can still be performed in a reasonable time.
Table 6 Runtime and number of solved LPs for single and double gene knockouts
On the algorithmic side, this study presented the following main results:
Algorithm 2 is an effective method for a complete double knockout analysis in genome-scale metabolic networks.
Using Algorithm 1, it is possible to compute the impact of specific multiple knockout sets containing 3 or more reactions.
By exploiting the information present in reaction coupling data (obtained by FCA), one can significantly decrease the number of candidates that need to be tested in double and multiple knockout simulations.
Regarding the biological data, we can make the following observations based on our computational experiments:
In the genome-scale metabolic network reconstructions that were considered in this study, 1-10% of the possible double knockout sets have joint coupling effects. Thus, given a randomly chosen reaction pair, the probability is high that the combined effect of the double knockout (in terms of other blocked reactions) will be the same as for the two corresponding single knockouts.
However, in all these networks, there exists a small number of double knockouts showing synergistic effects, blocking 5 to 20 additional reactions on average. These double knockouts cannot be predicted from the single knockout/reaction coupling data alone.
Due to the algorithmic improvements, we are now able to perform full double gene or reaction knockout simulations in a few hours of computation time. Thus, whenever one is interested in understanding the robustness of a network to knockouts, one should take the opportunity and run such an in silico simulation before starting other, more time-consuming and expensive experiments.
Bordbar A, Monk JM, King ZA, Palsson B. Constraint-based models predict metabolic and associated cellular functions. Nat Rev Genet. 2014; 15(2):107–120.
Lewis NE, Nagarajan H, Palsson B. Constraining the metabolic genotype-phenotype relationship using a phylogeny of in silico methods. Nat Rev Microbiol. 2012; 10(4):291–305.
Varma A, Palsson BO. Predictions for oxygen supply control to enhance population stability of engineered production strains. Biotechnol Bioeng. 1994; 43(4):275–285.
Orth JD, Thiele I, Palsson BO. What is flux balance analysis?. Nat Biotechnol. 2010; 28(3):245–248.
Burgard AP, Nikolaev EV, Schilling CH, Maranas CD. Flux coupling analysis of genome-scale metabolic network reconstructions. Genome Res. 2004; 14(2):301–312.
Larhlimi A, David L, Selbig J, Bockmayr A. F2C2: a fast tool for the computation of flux coupling in genome-scale metabolic networks. BMC Bioinformatics. 2012; 13(1):57.
Tomar N, De RK. Comparing methods for metabolic network analysis and an application to metabolic engineering. Gene. 2013; 521(1):1–14.
Zomorrodi AR, Suthers PF, Ranganathan S, Maranas CD. Mathematical optimization applications in metabolic networks. Metabolic Eng. 2012; 14(6):672–686.
Burgard AP, Pharkya P, Maranas CD. Optknock: a bilevel programming framework for identifying gene knockout strategies for microbial strain optimization. Biotechnol Bioeng. 2003; 84(6):647–657.
Tepper N, Shlomi T. Predicting metabolic engineering knockout strategies for chemical production: accounting for competing pathways. Bioinformatics. 2010; 26(4):536–543.
Patil KR, Rocha I, Förster J, Nielsen J. Evolutionary programming as a platform for in silico metabolic engineering. BMC Bioinformatics. 2005; 6(1):308.
Lun DS, Rockwell G, Guido NJ, Baym M, Kelner JA, Berger B, Galagan JE, Church GM. Large-scale identification of genetic design strategies using local search. Mol Syst Biol. 2009; 5(1): 296.
Rocha I, Maia P, Evangelista P, Vilaça P, Soares S, Pinto JP, Nielsen J, Patil KR, Ferreira EC, Rocha M. Optflux: an open-source software platform for in silico metabolic engineering. BMC Syst Biol. 2010; 4(1):45.
Ohno S, Shimizu H, Furusawa C. FastPros screening of reaction knockout strategies for metabolic engineering. Bioinformatics. 2014; 30(7):981–987.
Klamt S, Gilles ED. Minimal cut sets in biochemical reaction networks. Bioinformatics. 2004; 20(2):226–234.
Jungreuthmayer C, Nair G, Klamt S, Zanghellini J. Comparison and improvement of algorithms for computing minimal cut sets. BMC Bioinformatics. 2013; 14(1):318.
von Kamp A, Klamt S. Enumeration of smallest intervention strategies in genome-scale metabolic networks. PLOS Comput Biol. 2014; 10(1):1003378.
Goldstein YAB, Bockmayr A. A lattice-theoretic framework for metabolic pathway analysis In: Gupta A, Henzinger T, editors. Computational Methods in Systems Biology. Lecture Notes in Computer Science. Vol. 8130,Berlin: Springer: 2013. p. 178–191.
Zhao Y, Tamura T, Akutsu T, Vert J-P. Flux balance impact degree: a new definition of impact degree to properly treat reversible reactions in metabolic networks. Bioinformatics. 2013; 29(17):2178–2185.
Nogales J, Gudmundsson S, Thiele I. An in silico re-design of the metabolism in thermotoga maritima for increased biohydrogen production. Int J Hydrogen Energy. 2012; 37(17):12205–12218.
Suthers PF, Zomorrodi A, Maranas CD. Genome-scale gene/reaction essentiality and synthetic lethality analysis. Mol Syst Biol. 2009; 5(1):301.
Reimers AC, Goldstein YAB, Bockmayr A. Qualitative and thermodynamic flux coupling analysis. Technical Report #1054, Matheon (March 2014). http://nbn-resolving.de/urn:nbn:de:0296-matheon-12801
Pfeiffer T, Sánchez-Valdenebro I, Nuño JC, Montero F, Schuster S. METATOOL: for studying metabolic networks. Bioinformatics. 1999; 15:251–257.
Schellenberger J, Park JO, Conrad TM, Palsson BO. BiGG: a biochemical genetic and genomic knowledgebase of large scale metabolic reconstructions. BMC Bioinformatics. 2010; 11(213):213.
The PhD work of Yaron Goldstein was supported by the Berlin Mathematical School and the Gerhard C. Starck Stiftung.
FB Mathematik und Informatik, Freie Universität Berlin, Arnimallee 6, Berlin, 14195, Germany
Yaron AB Goldstein & Alexander Bockmayr
Correspondence to Yaron AB Goldstein.
The paper is based on the PhD thesis of YG, which was supervised by AB. YG implemented the algorithms and performed the computational experiments. YG and AB together wrote the manuscript and approved the final version.
Goldstein, Y.A., Bockmayr, A. Double and multiple knockout simulations for genome-scale metabolic network reconstructions. Algorithms Mol Biol 10, 1 (2015). https://doi.org/10.1186/s13015-014-0028-y
Constraint-based modeling
Metabolic network
Flux coupling analysis
Reaction knockout
Gene knockout
Constraints in Bioinformatics
Methodology article | Open | Published: 09 January 2019
A whitening approach to probabilistic canonical correlation analysis for omics data integration
Takoua Jendoubi (ORCID: orcid.org/0000-0001-7846-9763) &
Korbinian Strimmer
BMC Bioinformatics volume 20, Article number: 15 (2019)
Canonical correlation analysis (CCA) is a classic statistical tool for investigating complex multivariate data. Correspondingly, it has found many diverse applications, ranging from molecular biology and medicine to social science and finance. Intriguingly, despite the importance and pervasiveness of CCA, only recently has a probabilistic understanding of CCA begun to develop, moving from an algorithmic to a model-based perspective and enabling its application to large-scale settings.
Here, we revisit CCA from the perspective of statistical whitening of random variables and propose a simple yet flexible probabilistic model for CCA in the form of a two-layer latent variable generative model. The advantages of this variant of probabilistic CCA include non-ambiguity of the latent variables, provisions for negative canonical correlations, possibility of non-normal generative variables, as well as ease of interpretation on all levels of the model. In addition, we show that it lends itself to computationally efficient estimation in high-dimensional settings using regularized inference. We test our approach to CCA analysis in simulations and apply it to two omics data sets illustrating the integration of gene expression data, lipid concentrations and methylation levels.
Our whitening approach to CCA provides a unifying perspective on CCA, linking together sphering procedures, multivariate regression and corresponding probabilistic generative models. Furthermore, we offer an efficient computer implementation in the "whitening" R package available at https://CRAN.R-project.org/package=whitening.
Canonical correlation analysis (CCA) is a classic and highly versatile statistical approach to investigate the linear relationship between two sets of variables [1, 2]. CCA helps to decode complex dependency structures in multivariate data and to identify groups of interacting variables. Consequently, it has numerous practical applications in molecular biology, for example omics data integration [3] and network analysis [4], but also in many other areas such as econometrics or social science.
In its original formulation CCA is viewed as an algorithmic procedure optimizing a set of objective functions, rather than as a probabilistic model for the data. Only relatively recently has this perspective changed. Bach and Jordan [5] proposed a latent variable model for CCA building on earlier work on probabilistic principal component analysis (PCA) by [6]. The probabilistic approach to CCA not only allows to derive the classic CCA algorithm but also provides an avenue for Bayesian variants [7, 8].
In parallel to establishing probabilistic CCA the classic CCA approach has also been further developed in the last decade by introducing variants of the CCA algorithm that are more pertinent for high-dimensional data sets now routinely collected in the life and physical sciences. In particular, the problem of singularity in the original CCA algorithm is resolved by introducing sparsity and regularization [9–13] and, similarly, large-scale computation is addressed by new algorithms [14, 15].
In this note, we revisit both classic and probabilistic CCA from the perspective of whitening of random variables [16]. As a result, we propose a simple yet flexible probabilistic model for CCA linking together multivariate regression, latent variable models, and high-dimensional estimation. Crucially, this model for CCA not only facilitates comprehensive understanding of both classic and probabilistic CCA via the process of whitening but also extends CCA by allowing for negative canonical correlations and providing the flexibility to include non-normal latent variables.
The remainder of this paper is as follows. First, we present our main results. After reviewing classical CCA we demonstrate that the classic CCA algorithm is a special form of whitening. Next, we show that the link of CCA with multivariate regression leads to a probabilistic two-level latent variable model for CCA that directly reproduces classic CCA without any rotational ambiguity. Subsequently, we discuss our approach by applying it to both synthetic data as well as to multiple integrated omics data sets. Finally, we describe our implementation in R and highlight computational and algorithmic aspects.
Much of our discussion is framed in terms of random vectors and their properties rather than in terms of data matrices. This allows us to study the probabilistic model underlying CCA separate from associated statistical procedures for estimation.
Multivariate notation
We consider two random vectors X=(X1,…,Xp)T and Y=(Y1,…,Yq)T of dimension p and q. Their respective multivariate distributions FX and FY have expectation E(X)=μX and E(Y)=μY and covariance var(X)=ΣX and var(Y)=ΣY. The cross-covariance between X and Y is given by cov(X,Y)=ΣXY. The corresponding correlation matrices are denoted by PX, PY, and PXY. By VX=diag(ΣX) and VY=diag(ΣY) we refer to the diagonal matrices containing the variances only, allowing to decompose covariances as Σ=V1/2PV1/2. The composite vector (XT,YT)T has therefore mean $\left (\boldsymbol {\mu }_{\boldsymbol {X}}^{T}, \boldsymbol {\mu }_{\boldsymbol {Y}}^{T}\right)^{T}$ and covariance $\left (\begin {array}{cc} \boldsymbol {\Sigma }_{\boldsymbol {X}} & \boldsymbol {\Sigma }_{\boldsymbol {X} \boldsymbol {Y}}\\ \boldsymbol {\Sigma }_{\boldsymbol {X} \boldsymbol {Y}}^{T} & \boldsymbol {\Sigma }_{\boldsymbol {Y}} \end {array}\right)$.
Vector-valued samples of the random vectors X and Y are denoted by xi and yi so that (x1,…,xi,…,xn)T is the n×p data matrix for X containing n observed samples (one in each row). Correspondingly, the empirical mean for X is given by $\hat {\boldsymbol {\mu }}_{\boldsymbol {X}} = \bar {\boldsymbol {x}} =\frac {1}{n} {\sum \nolimits }_{i=1}^{n} \boldsymbol {x}_{i}$, the unbiased covariance estimate is $\widehat {\boldsymbol {\Sigma }}_{\boldsymbol {X}} = \boldsymbol {S}_{\boldsymbol {X}} = \frac {1}{n-1} {\sum \nolimits }^{n}_{i=1} (\boldsymbol {x}_{i} -\bar {\boldsymbol {x}})\left (\boldsymbol {x}_{i} -\bar {\boldsymbol {x}}\right)^{T}$, and the corresponding correlation estimate is denoted by $\widehat {\boldsymbol {P}}_{\boldsymbol {X}} = \boldsymbol {R}_{\boldsymbol {X}}$.
We first introduce CCA from a classical perspective, then we demonstrate that CCA is best understood as a special and uniquely defined type of whitening transformation. Next, we investigate the close link of CCA with multivariate regression. This not only allows us to interpret CCA as a regression model and to better understand canonical correlations, but also provides the basis for a probabilistic generative latent variable model of CCA based on whitening. This model is introduced in the last subsection.
Classical CCA
In canonical correlation analysis the aim is to find mutually orthogonal pairs of maximally correlated linear combinations of the components of X and of Y. Specifically, we seek canonical directions αi and βj (i.e. vectors of dimension p and q, respectively) for which
$$ \text{cor}\left(\boldsymbol{\alpha}_{i}^{T} \boldsymbol{X}, \boldsymbol{\beta}_{j}^{T} \boldsymbol{Y}\right)= \left\{ \begin{array}{ll} \lambda_{i} & \text{maximal for } i=j\\ 0 & \text{otherwise, } \end{array}\right. $$
where λi are the canonical correlations, and simultaneously
$$ \text{cor}\left(\boldsymbol{\alpha}_{i}^{T} \boldsymbol{X}, \boldsymbol{\alpha}_{j}^{T} \boldsymbol{X}\right) =\left\{ \begin{array}{ll} 1 & \text{for } i=j\\ 0 & \text{otherwise, } \end{array}\right. $$
$$ \text{cor}\left(\boldsymbol{\beta}_{i}^{T} \boldsymbol{Y}, \boldsymbol{\beta}_{j}^{T} \boldsymbol{Y}\right) =\left\{ \begin{array}{ll} 1 & \text{for } i=j\\ 0 & \text{otherwise.} \end{array}\right. $$
In matrix notation, with A=(α1,…,αp)T, B=(β1,…,βq)T, and Λ=diag(λi), the above can be written as cor(AX,BY)=Λ as well as cor(AX)=I and cor(BY)=I. The projected vectors AX and BY are also called the CCA scores or the canonical variables.
Hotelling (1936) [1] showed that there are, assuming full rank covariance matrices ΣX and ΣY, exactly m= min(p,q) canonical correlations and pairs of canonical directions αi and βi, and that these can be computed analytically from a generalized eigenvalue problem (e.g., [2]). Further below we will see how canonical directions and correlations follow almost effortlessly from a whitening perspective of CCA.
Since correlations are invariant against rescaling, optimizing Eq. 1 determines the canonical directions αi and βi only up to their respective lengths, and we can thus arbitrarily fix the magnitude of the vectors αi and βi. A common choice is to simply normalize them to unit length so that $\boldsymbol {\alpha }_{i}^{T} \boldsymbol {\alpha }_{i}= 1$ and $\boldsymbol {\beta }_{i}^{T} \boldsymbol {\beta }_{i}= 1$.
Similarly, the overall sign of the canonical directions αi and βj is also undetermined. As a result, different implementations of CCA may yield canonical directions with different signs, and depending on the adopted convention this can be used either to enforce positive or to allow negative canonical correlations, see below for further discussion in the light of CCA as a regression model.
Because it optimizes correlation, CCA is invariant against location translation of the original vectors X and Y, yielding identical canonical directions and correlations in this case. However, under scale transformation of X and Y only the canonical correlations λi remain invariant whereas the directions will differ as they depend on the variances VX and VY. Therefore, to facilitate comparative analysis and interpretation of the canonical directions, the random vectors X and Y (and associated data) are often standardized.
Classical CCA uses the empirical covariance matrix S to obtain canonical correlations and directions. However, S can only be safely employed if the number of observations is much larger than the dimensions of either of the two random vectors X and Y, since otherwise S constitutes only a poor estimate of the underlying covariance structure and in addition may also become singular. Therefore, to render CCA applicable to small sample high-dimensional data two main strategies are common: one is to directly employ regularization on the level of the covariance and correlation matrices to stabilize and improve their estimation; the other is to devise probabilistic models for CCA to facilitate application of Bayesian inference and other regularized statistical procedures.
Whitening transformations and CCA
Background on whitening
Whitening, or sphering, is a linear statistical transformation that converts a random vector X with covariance matrix ΣX into a random vector
$$ \widetilde{\boldsymbol{X}} = \boldsymbol{W}_{\boldsymbol{X}} \boldsymbol{X} $$
with unit diagonal covariance $\text {var}\left (\widetilde {\boldsymbol {X}}\right) =\boldsymbol {\Sigma }_{\widetilde {\boldsymbol {X}}} = \boldsymbol {I}_{p}$. The matrix WX is called the whitening matrix or sphering matrix for X, also known as the unmixing matrix. In order to achieve whitening the matrix WX has to satisfy the condition $\boldsymbol {W}_{\boldsymbol {X}} \boldsymbol {\Sigma }_{\boldsymbol {X}} \boldsymbol {W}_{\boldsymbol {X}}^{T} = \boldsymbol {I}_{p}$, but this by itself is not sufficient to completely identify WX. There are still infinitely many possible whitening transformations, and the family of whitening matrices for X can be written as
$$ \boldsymbol{W}_{\boldsymbol{X}} = \boldsymbol{Q}_{\boldsymbol{X}} \boldsymbol{P}_{\boldsymbol{X}}^{-1/2} \boldsymbol{V}_{\boldsymbol{X}}^{-1/2}\,. $$
Here, QX is an orthogonal matrix; therefore the whitening matrix WX itself is not orthogonal unless PX=VX=Ip. The choice of QX determines the type of whitening [16]. For example, using QX=Ip leads to ZCA-cor whitening, also known as Mahalanobis whitening based on the correlation matrix. PCA-cor whitening, another widely used sphering technique, is obtained by setting QX=GT, where G is the eigensystem resulting from the spectral decomposition of the correlation matrix PX=GΘGT. Since there is a sign ambiguity in the eigenvectors G we adopt the convention of [16] to adjust the column signs of G, or equivalently the row signs of QX, so that the rotation matrix QX has a positive diagonal.
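As an illustration of Eq. 5, the following numpy sketch constructs the ZCA-cor and PCA-cor whitening matrices from a given covariance matrix. This is a minimal stand-alone example under the conventions above, not the implementation of the "whitening" R package.

```python
import numpy as np

def whitening_matrix(Sigma, kind="ZCA-cor"):
    """W = Q P^{-1/2} V^{-1/2} for a covariance matrix Sigma (Eq. 5)."""
    v = np.sqrt(np.diag(Sigma))
    P = Sigma / np.outer(v, v)                     # correlation matrix
    theta, G = np.linalg.eigh(P)                   # P = G diag(theta) G^T
    P_inv_sqrt = G @ np.diag(theta ** -0.5) @ G.T  # symmetric inverse square root
    if kind == "ZCA-cor":
        Q = np.eye(len(v))
    elif kind == "PCA-cor":
        G = G * np.sign(np.diag(G))                # adjust column signs: Q = G^T gets a positive diagonal
        Q = G.T
    else:
        raise ValueError(kind)
    return Q @ P_inv_sqrt @ np.diag(1.0 / v)

# Sanity check: the whitened covariance W Sigma W^T is the identity.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
Sigma = A @ A.T + 4 * np.eye(4)
W = whitening_matrix(Sigma, "PCA-cor")
print(np.allclose(W @ Sigma @ W.T, np.eye(4)))     # True
```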
The corresponding inverse relation $\boldsymbol {X}=\boldsymbol {W}_{\boldsymbol {X}}^{-1} \widetilde {\boldsymbol {X}} =\boldsymbol {\Phi }_{\boldsymbol {X}}^{T} \widetilde {\boldsymbol {X}}$ is called a coloring transformation, where the matrix $\boldsymbol {W}_{\boldsymbol {X}}^{-1} =\boldsymbol {\Phi }_{\boldsymbol {X}}^{T} $ is the mixing matrix, or coloring matrix, which we can write in terms of the rotation matrix QX as
$$ \boldsymbol{\Phi}_{\boldsymbol{X}} = \boldsymbol{Q}_{\boldsymbol{X}} \boldsymbol{P}_{\boldsymbol{X}}^{1/2} \boldsymbol{V}_{\boldsymbol{X}}^{1/2} $$
Like WX the mixing matrix ΦX is not orthogonal. The entries of the matrix ΦX are called the loadings, i.e. the coefficients linking the whitened variable $\widetilde {\boldsymbol {X}}$ with the original X. Since $\widetilde {\boldsymbol {X}}$ is a white random vector with $\text {cov}\left (\widetilde {\boldsymbol {X}}\right) = \boldsymbol {I}_{p}$ the loadings are equivalent to the covariance $\text {cov}\left (\widetilde {\boldsymbol {X}}, \boldsymbol {X}\right) = \boldsymbol {\Phi }_{\boldsymbol {X}}$. The corresponding correlations, also known as correlation-loadings, are
$$ \text{cor}\left(\widetilde{\boldsymbol{X}}, \boldsymbol{X}\right) = \boldsymbol{\Psi}_{\boldsymbol{X}} = \boldsymbol{\Phi}_{\boldsymbol{X}} \boldsymbol{V}_{\boldsymbol{X}}^{-1/2}=\boldsymbol{Q}_{\boldsymbol{X}} \boldsymbol{P}_{\boldsymbol{X}}^{1/2}\,. $$
Note that the squared correlations in each column of ΨX sum up to 1, as $\text {diag}\left (\boldsymbol {\Psi }_{\boldsymbol {X}}^{T}\boldsymbol {\Psi }_{\boldsymbol {X}}\right) = \text {diag}(\boldsymbol {P}_{\boldsymbol {X}}) = \boldsymbol {I}_{p}$.
CCA whitening
We will show now that CCA has a very close relationship to whitening. In particular, the objective of CCA can be seen to be equivalent to simultaneous whitening of both X and Y, with a diagonality constraint on the cross-correlation matrix between the whitened $\widetilde {\boldsymbol {X}}$ and $\widetilde {\boldsymbol {Y}}$.
First, we make the choice to standardize the canonical directions αi and βi according to $\text {var}\left (\boldsymbol {\alpha }_{i}^{T} \boldsymbol {X}\right) = \boldsymbol {\alpha }_{i}^{T} \boldsymbol {\Sigma }_{\boldsymbol {X}} \boldsymbol {\alpha }_{i}= 1$ and $\text {var}\left (\boldsymbol {\beta }_{i}^{T} \boldsymbol {Y}\right) = \boldsymbol {\beta }_{i}^{T} \boldsymbol {\Sigma }_{\boldsymbol {Y}} \boldsymbol {\beta }_{i}= 1$. As a result αi and βi form the basis of two whitening matrices, WX=(α1,…,αp)T=A and WY=(β1,…,βq)T=B, with rows containing the canonical directions. The length constraint $\boldsymbol {\alpha }_{i}^{T} \boldsymbol {\Sigma }_{\boldsymbol {X}} \boldsymbol {\alpha }_{i}= 1$ thus becomes $\boldsymbol {W}_{\boldsymbol {X}} \boldsymbol {\Sigma }_{\boldsymbol {X}}\boldsymbol {W}_{\boldsymbol {X}}^{T} = \boldsymbol {I}_{p}$ meaning that WX (and WY) is indeed a valid whitening matrix.
Second, after whitening X and Y individually to $\widetilde {\boldsymbol {X}}$ and $\widetilde {\boldsymbol {Y}}$ using WX and WY, respectively, the joint covariance of $\left (\widetilde {\boldsymbol {X}}^{T}, \widetilde {\boldsymbol {Y}}^{T}\right)^{T}$ is $\left (\begin {array}{cc} \boldsymbol {I}_{p} & \boldsymbol {P}_{\widetilde {\boldsymbol {X}} \widetilde {\boldsymbol {Y}}}\\ \boldsymbol {P}_{\widetilde {\boldsymbol {X}} \widetilde {\boldsymbol {Y}}}^{T} & \boldsymbol {I}_{q} \end {array}\right)$. Note that whitening of (XT,YT)T simultaneously would in contrast lead to a fully diagonal covariance matrix. In the above $\boldsymbol {P}_{\widetilde {\boldsymbol {X}} \widetilde {\boldsymbol {Y}}} = \text {cor}\left (\widetilde {\boldsymbol {X}}, \widetilde {\boldsymbol {Y}}\right) = \text {cov}\left (\widetilde {\boldsymbol {X}}, \widetilde {\boldsymbol {Y}}\right)$ is the cross-correlation matrix between the two whitened vectors and can be expressed as
$$ \boldsymbol{P}_{\widetilde{\boldsymbol{X}} \widetilde{\boldsymbol{Y}}} = \boldsymbol{W}_{\boldsymbol{X}} \boldsymbol{\Sigma}_{\boldsymbol{X} \boldsymbol{Y}} \boldsymbol{W}_{\boldsymbol{Y}}^{T} = \boldsymbol{Q}_{\boldsymbol{X}} \boldsymbol{K} \boldsymbol{Q}_{\boldsymbol{Y}}^{T} = (\widetilde{\rho}_{ij}) $$
$$ \boldsymbol{K} = \boldsymbol{P}_{\boldsymbol{X}}^{-1/2} \boldsymbol{P}_{\boldsymbol{X} \boldsymbol{Y}} \boldsymbol{P}_{\boldsymbol{Y}}^{-1/2} = (k_{ij}). $$
Following the terminology in [17] we may call K the correlation-adjusted cross-correlation matrix between X and Y.
With this setup the CCA objective can be framed simply as the demand that $\text {cor}\left (\widetilde {\boldsymbol {X}}, \widetilde {\boldsymbol {Y}}\right)= \boldsymbol {P}_{\widetilde {\boldsymbol {X}} \widetilde {\boldsymbol {Y}}}$ must be diagonal. Since in whitening the orthogonal matrices QX and QY can be freely selected we can achieve diagonality of $\boldsymbol {P}_{\widetilde {\boldsymbol {X}} \widetilde {\boldsymbol {Y}}}$ and hence pinpoint the CCA whitening matrices by applying singular value decomposition to
$$ \boldsymbol{K}= \left(\boldsymbol{Q}_{\boldsymbol{X}}^{\text{CCA}}\right)^{T} \boldsymbol{\Lambda} \boldsymbol{Q}_{\boldsymbol{Y}}^{\text{CCA}} \,. $$
This provides the rotation matrices $\boldsymbol {Q}_{\boldsymbol {X}}^{\text {CCA}} $ and the $\boldsymbol {Q}_{\boldsymbol {Y}}^{\text {CCA}} $ of dimensions m×p and m×q, respectively, and the m×m matrix Λ=diag(λi) containing the singular values of K, which are also the singular values of $\boldsymbol {P}_{\widetilde {\boldsymbol {X}} \widetilde {\boldsymbol {Y}}}$. Since m= min(p,q) the larger of the two rotation matrices will not be a square matrix but it can nonetheless be used for whitening via Eqs. 4 and 5 since it still is semi-orthogonal with $\boldsymbol {Q}_{\boldsymbol {X}}^{\text {CCA}} \left (\boldsymbol {Q}_{\boldsymbol {X}}^{\text {CCA}}\right)^{T} = \boldsymbol {Q}_{\boldsymbol {Y}}^{\text {CCA}} \left (\boldsymbol {Q}_{\boldsymbol {Y}}^{\text {CCA}}\right)^{T} = \boldsymbol {I}_{m}$. As a result, we obtain $\text {cor}\left (\widetilde {X}^{\text {CCA}}_{i}, \widetilde {Y}^{\text {CCA}}_{i}\right) = \lambda _{i}$ for i=1…m, i.e. the canonical correlations are identical to the singular values of K.
Hence, CCA may be viewed as the outcome of a uniquely determined whitening transformation with underlying sphering matrices $\boldsymbol {W}_{\boldsymbol {X}}^{\text {CCA}}$ and $\boldsymbol {W}_{\boldsymbol {Y}}^{\text {CCA}}$ induced by the rotation matrices $\boldsymbol {Q}_{\boldsymbol {X}}^{\text {CCA}}$ and $\boldsymbol {Q}_{\boldsymbol {Y}}^{\text {CCA}}$. Thus, the distinctive feature of CCA whitening, in contrast to other common forms of whitening described in [16], is that by construction it is not only informed by PX and PY but also by PXY, which fixes all remaining rotational freedom.
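The complete CCA-whitening construction can be sketched in a few lines of numpy, including signed canonical correlations obtained via the positive-diagonal convention discussed in the regression subsection below. This is only an illustrative implementation of Eqs. 8-10 under the stated conventions, not the reference code of the "whitening" R package; all names and the simulated demo data are arbitrary.

```python
import numpy as np

def cca_whitening(Sx, Sy, Sxy):
    """CCA from covariance matrices: signed canonical correlations lam and the
    CCA whitening matrices W_X, W_Y whose rows are the canonical directions."""
    def cor_inv_sqrt(S):
        v = np.sqrt(np.diag(S))
        P = S / np.outer(v, v)
        theta, G = np.linalg.eigh(P)
        return G @ np.diag(theta ** -0.5) @ G.T, v
    Px_is, vx = cor_inv_sqrt(Sx)
    Py_is, vy = cor_inv_sqrt(Sy)
    K = Px_is @ (Sxy / np.outer(vx, vy)) @ Py_is   # correlation-adjusted cross-correlations
    U, lam, Vt = np.linalg.svd(K)                  # K = U diag(lam) V^T
    m = min(Sx.shape[0], Sy.shape[0])
    Qx, Qy = U[:, :m].T, Vt[:m, :]
    sx, sy = np.sign(np.diag(Qx)), np.sign(np.diag(Qy))
    Qx, Qy = sx[:, None] * Qx, sy[:, None] * Qy    # enforce positive diagonals ...
    lam = lam[:m] * sx * sy                        # ... and move the signs into lam
    Wx = Qx @ Px_is @ np.diag(1.0 / vx)
    Wy = Qy @ Py_is @ np.diag(1.0 / vy)
    return lam, Wx, Wy

# Demo: the cross-correlations of the CCA-whitened scores reproduce lam.
rng = np.random.default_rng(1)
Z = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 8))   # columns 0-4: X, columns 5-7: Y
S = np.cov(Z.T)
lam, Wx, Wy = cca_whitening(S[:5, :5], S[5:, 5:], S[:5, 5:])
print(np.round(lam, 2), np.allclose(np.diag(Wx @ S[:5, 5:] @ Wy.T), lam))   # ..., True
```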
CCA and multivariate regression
Optimal linear multivariate predictor
In multivariate regression the aim is to build a model that, given an input vector X, predicts a vector Y as well as possible according to a specific measure such as squared error. Assuming a linear relationship, the predictor random variable is Y⋆=a+bTX, with mean $\mathrm {E}(\boldsymbol {Y}^{\star }) = \boldsymbol {\mu }_{\boldsymbol {Y}^{\star }} = \boldsymbol {a}+\boldsymbol {b}^{T} \boldsymbol {\mu }_{\boldsymbol {X}}$. The expected squared difference between Y and Y⋆, i.e. the mean squared prediction error
$$ \begin{aligned} \text{MSE} &= \text{Tr} \left(\mathrm{E}\left(\left(\boldsymbol{Y} - \boldsymbol{Y}^{\star}\right) \left(\boldsymbol{Y} - \boldsymbol{Y}^{\star}\right)^{T}\right)\right) \\ &= \sum_{i=1}^{q} \mathrm{E}\left(\left(Y_{i}-Y_{i}^{\star} \right)^{2}\right), \end{aligned} $$
is a natural measure of how well Y⋆ predicts Y. As a function of the model parameters a and b the predictive MSE becomes
$$ \begin{aligned} \text{MSE}(\boldsymbol{a}, \boldsymbol{b}) = &\text{Tr}\left((\boldsymbol{\mu}_{\boldsymbol{Y}}-\boldsymbol{\mu}_{\boldsymbol{Y}^{\star}}) \left(\boldsymbol{\mu}_{\boldsymbol{Y}}-\boldsymbol{\mu}_{\boldsymbol{Y}^{\star}}\right)^{T} + \right.\\ & \left. \boldsymbol{\Sigma}_{\boldsymbol{Y}} + \boldsymbol{b}^{T} \boldsymbol{\Sigma}_{\boldsymbol{X}} \boldsymbol{b} -2 \boldsymbol{b}^{T} \boldsymbol{\Sigma}_{\boldsymbol{X} \boldsymbol{Y}} \right) \,. \end{aligned} $$
Optimal parameters for best linear predictor are found by minimizing this MSE function. For the offset a this yields
$$ \boldsymbol{a}^{.} = \boldsymbol{\mu}_{\boldsymbol{Y}} - (\boldsymbol{b}^{.})^{T} \boldsymbol{\mu}_{\boldsymbol{X}} $$
which, regardless of the value of $\boldsymbol{b}^{.}$, ensures $\boldsymbol {\mu }_{\boldsymbol {Y}^{\star }} - \boldsymbol {\mu }_{\boldsymbol {Y}} = 0$. Likewise, for the matrix of regression coefficients minimization results in
$$ \boldsymbol{b}^{\text{all}} = \boldsymbol{\Sigma}_{\boldsymbol{X}}^{-1}\boldsymbol{\Sigma}_{\boldsymbol{X} \boldsymbol{Y}} $$
with minimum achieved $\text {MSE}\left (\boldsymbol {a}^{\text {all}}, \boldsymbol {b}^{\text {all}}\right) = \text {Tr} \left (\boldsymbol {\Sigma }_{\boldsymbol {Y}} \right) - \text {Tr} \left (\boldsymbol {\Sigma }_{\boldsymbol {Y} \boldsymbol {X}} \boldsymbol {\Sigma }_{\boldsymbol {X}}^{-1} \boldsymbol {\Sigma }_{\boldsymbol {X} \boldsymbol {Y}}\right)$.
If we exclude predictors from the model by setting regression coefficients bzero=0 then the corresponding optimal intercept is azero=μY and the minimum achieved MSE(azero,bzero)=Tr(ΣY). Thus, by adding predictors X to the model the predictive MSE is reduced, and hence the fit of the model correspondingly improved, by the amount
$$ \begin{aligned} \Delta &= \text{MSE}\left(\boldsymbol{a}^{\text{zero}}, \boldsymbol{b}^{\text{zero}}\right)-\text{MSE}\left(\boldsymbol{a}^{\text{all}}, \boldsymbol{b}^{\text{all}}\right) \\ &= \text{Tr} \left(\boldsymbol{\Sigma}_{\boldsymbol{Y} \boldsymbol{X}} \boldsymbol{\Sigma}_{\boldsymbol{X}}^{-1} \boldsymbol{\Sigma}_{\boldsymbol{X} \boldsymbol{Y}} \right) \\ &= \text{Tr}\left(\text{cov}\left(\boldsymbol{Y}, \boldsymbol{Y}^{\text{all}\star}\right)\right) \,. \end{aligned} $$
If the response Y is univariate (q=1) then Δ reduces to the variance-scaled coefficient of determination $\sigma ^{2}_{Y} \boldsymbol {P}_{Y \boldsymbol {X}} \boldsymbol {P}_{\boldsymbol {X}}^{-1} \boldsymbol {P}_{\boldsymbol {X} Y}$. Note that in the above no distributional assumptions are made other than specification of means and covariances.
Regression view of CCA
The first step to understand CCA as a regression model is to consider multivariate regression between two whitened vectors $\widetilde {\boldsymbol {X}}$ and $\widetilde {\boldsymbol {Y}}$ (considering whitening of any type, including but not limited to CCA-whitening). Since $\boldsymbol {\Sigma }_{\widetilde {\boldsymbol {X}}} = \boldsymbol {I}_{p}$ and $\boldsymbol {\Sigma }_{\widetilde {\boldsymbol {X}} \widetilde {\boldsymbol {Y}}} = \boldsymbol {P}_{\widetilde {\boldsymbol {X}} \widetilde {\boldsymbol {Y}}}$ the optimal regression coefficients to predict $\widetilde {\boldsymbol {Y}}$ from $\widetilde {\boldsymbol {X}}$ are given by
$$ \boldsymbol{b}^{\text{all}} = \boldsymbol{P}_{\widetilde{\boldsymbol{X}} \widetilde{\boldsymbol{Y}}}\,, $$
i.e. the pairwise correlations between the elements of the two vectors $\widetilde {\boldsymbol {X}}$ and $\widetilde {\boldsymbol {Y}}$. Correspondingly, the decrease in predictive MSE due to including the predictors $\widetilde {\boldsymbol {X}}$ is
$$ \begin{aligned} \Delta & = \text{Tr}\left(\boldsymbol{P}_{\widetilde{\boldsymbol{X}} \widetilde{\boldsymbol{Y}}}^{T} \boldsymbol{P}_{\widetilde{\boldsymbol{X}}\widetilde{\boldsymbol{Y}}}\right) = \sum_{i,j} \widetilde{\rho}^{2}_{ij} \\ &= \text{Tr}\left(\boldsymbol{K}^{T} \boldsymbol{K}\right) = \sum_{i,j} k_{ij}^{2} \\ & = \text{Tr}\left(\boldsymbol{\Lambda}^{2}\right) =\sum_{i} \lambda_{i}^{2} \,. \end{aligned} $$
In the special case of CCA-whitening the regression coefficients further simplify to $b^{\text {all}}_{ii} = \lambda _{i}$, i.e. the canonical correlations λi act as the regression coefficients linking CCA-whitened $\widetilde {\boldsymbol {Y}}$ and $\widetilde {\boldsymbol {X}}$. Furthermore, as the decrease in predictive MSE Δ is the sum of the squared canonical correlations (cf. Eq. 17), each $\lambda _{i}^{2}$ can be interpreted as the variable importance of the corresponding variable in $\widetilde {\boldsymbol {X}}^{\text {CCA}}$ to predict the outcome $\widetilde {\boldsymbol {Y}}^{\text {CCA}}$. Thus, CCA directly results from multivariate regression between CCA-whitened random vectors, where the canonical correlations λi assume the role of regression coefficients and $\lambda _{i}^{2}$ provides a natural measure to rank the canonical components in order of their respective predictive capability.
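The chain of identities in Eq. 17 is easy to verify numerically. Below is a small self-contained sketch with arbitrary simulated correlation matrices, checking that the sum of squared entries of K equals the sum of squared canonical correlations.

```python
import numpy as np

rng = np.random.default_rng(3)
Z = rng.normal(size=(400, 7)) @ rng.normal(size=(7, 7))
R = np.corrcoef(Z.T)                               # joint correlation; X: first 4, Y: last 3
Px, Py, Pxy = R[:4, :4], R[4:, 4:], R[:4, 4:]

def inv_sqrt(P):
    w, G = np.linalg.eigh(P)
    return G @ np.diag(w ** -0.5) @ G.T

K = inv_sqrt(Px) @ Pxy @ inv_sqrt(Py)
lam = np.linalg.svd(K, compute_uv=False)           # canonical correlations (up to sign)
print(np.allclose(np.sum(K ** 2), np.sum(lam ** 2)))   # Δ computed two equivalent ways: True
```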
A key difference between classical CCA and regression is that in the latter both positive and negative coefficients are allowed to account for the directionality of the influence of the predictors. In contrast, in classical CCA only positive canonical correlations are permitted by convention. To reflect that CCA analysis is inherently a regression model we advocate here that canonical correlations should indeed be allowed to assume both positive and negative values, as fundamentally they are regression coefficients. This can be implemented by exploiting the sign ambiguity in the singular value decomposition of K (Eq. 10). In particular, the row signs of $\boldsymbol {Q}_{\boldsymbol {X}}^{\text {CCA}}$ and $\boldsymbol {Q}_{\boldsymbol {Y}}^{\text {CCA}}$ and the signs of λi can be revised simultaneously without affecting K. We propose to choose $\boldsymbol {Q}_{\boldsymbol {X}}^{\text {CCA}}$ and $\boldsymbol {Q}_{\boldsymbol {Y}}^{\text {CCA}}$ such that both rotation matrices have a positive diagonal, and then to adjust the signs of the λi accordingly. Note that orthogonal matrices with positive diagonals are closest to the identity matrix (e.g. in terms of the Frobenius norm) and thus constitute minimal rotations.
Generative latent variable model for CCA
With the link of CCA to whitening and multivariate regression established it is straightforward to arrive at simple and easily interpretable generative probabilistic latent variable model for CCA. This model has two levels of hidden variables: it uses uncorrelated latent variables ZX, ZY, Zshared (level 1) with zero mean and unit variance to generate the CCA-whitened variables $\widetilde {\boldsymbol {X}}^{\text {CCA}}$ and $\widetilde {\boldsymbol {Y}}^{\text {CCA}}$ (level 2) which in turn produce the observed vectors X and Y – see Fig. 1
Probabilistic CCA as a two layer latent variable generative model. The middle layer contains the CCA-whitened variables $\widetilde {\boldsymbol {X}}^{\text {CCA}}$ and $\widetilde {\boldsymbol {Y}}^{\text {CCA}}$, and the top layer the uncorrelated generative latent variables ZX, ZY, and Zshared
Specifically, on the first level we have latent variables
$$ \begin{aligned} \boldsymbol{Z}^{\boldsymbol{X}} &\sim F_{\boldsymbol{Z}_{\boldsymbol{X}}}, \\ \boldsymbol{Z}^{\boldsymbol{Y}} &\sim F_{\boldsymbol{Z}_{\boldsymbol{Y}}}, \,\text{and}\\ \boldsymbol{Z}^{\text{shared}} &\sim F_{\boldsymbol{Z}_{\text{shared}}}, \end{aligned} $$
with E(ZX)=E(ZY)=E(Zshared)=0 and var(ZX)=Ip, var(ZY)=Iq, and var(Zshared)=Im and no mutual correlation among the components of ZX, ZY, and Zshared. The second level latent variables are then generated by mixing shared and non-shared variables according to
$$ \begin{aligned} \widetilde{X}^{\text{CCA}}_{i} & = \sqrt{1 - |\lambda_{i}|}\, Z^{\boldsymbol{X}}_{i} + \sqrt{|\lambda_{i}|} Z^{\text{shared}}_{i} \\ \widetilde{Y}^{\text{CCA}}_{i} & = \sqrt{1 - |\lambda_{i}|}\ Z^{\boldsymbol{Y}}_{i} + \sqrt{|\lambda_{i}|} Z^{\text{shared}}_{i} \, \text{sign}(\lambda_{i}) \end{aligned} $$
where the parameters λ1,…,λm can be positive as well as negative and range from -1 to 1. The components i>m are always non-shared and taken from ZX or ZY as appropriate, i.e. as above but with λi>m=0. By construction, this results in $\text {var}\left (\widetilde {\boldsymbol {X}}^{\text {CCA}}\right) = \boldsymbol {I}_{p}$, $\text {var}\left (\widetilde {\boldsymbol {Y}}^{\text {CCA}}\right) = \boldsymbol {I}_{q}$ and $\text {cov}\left (\widetilde {X}^{\text {CCA}}_{i}, \widetilde {Y}^{\text {CCA}}_{i}\right) = \lambda _{i}$. Finally, the observed variables are produced by a coloring transformation and subsequent translation
$$ \begin{aligned} \boldsymbol{X} &= \boldsymbol{\Phi}_{\boldsymbol{X}}^{T} \widetilde{\boldsymbol{X}}^{\text{CCA}} + \boldsymbol{\mu}_{\boldsymbol{X}} \\ \boldsymbol{Y} &= \boldsymbol{\Phi}_{\boldsymbol{Y}}^{T} \widetilde{\boldsymbol{Y}}^{\text{CCA}} + \boldsymbol{\mu}_{\boldsymbol{Y}} \end{aligned} $$
To clarify the workings behind Eq. 19 assume there are three uncorrelated random variables Z1, Z2, and Z3 with mean 0 and variance 1. We construct X1 as a mixture of Z1 and Z3 according to $X_{1} = \sqrt {1-\alpha } Z_{1} + \sqrt {\alpha } Z_{3}$ where α∈[0,1], and, correspondingly, X2 as a mixture of Z2 and Z3 via $X_{2} = \sqrt {1-\alpha } Z_{2} + \sqrt {\alpha } Z_{3}$. If α=0 then X1=Z1 and X2=Z2, and if α=1 then X1=X2=Z3. By design, the new variables have mean zero (E(X1)=E(X2)=0) and unit variance (var(X1)=var(X2)=1). Crucially, the weight α of the latent variable Z3 common to both mixtures induces a correlation between X1 and X2. The covariance between X1 and X2 is $\text {cov}(X_{1}, X_{2}) = \text {cov}\left (\sqrt {\alpha } Z_{3},\sqrt {\alpha } Z_{3} \right) = \alpha $, and since X1 and X2 have variance 1 we have cor(X1,X2)=α. In Eq. 19 this is further extended to allow a signed α and hence negative correlations.
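The two-layer generative model above (Eqs. 18-20) can also be simulated directly. The sketch below uses normal latent variables and hypothetical parameter values; with identity mixing matrices the observed variables coincide with the whitened layer, so their pairwise correlations recover the signed λi.

```python
import numpy as np

def simulate_cca(n, lam, Phi_x, Phi_y, mu_x, mu_y, rng):
    """Draw n samples (X, Y) from the two-layer latent variable CCA model."""
    p, q, m = Phi_x.shape[0], Phi_y.shape[0], len(lam)
    Zx, Zy = rng.normal(size=(n, p)), rng.normal(size=(n, q))
    Zs = rng.normal(size=(n, m))                       # shared latent variables
    a = np.abs(lam)
    Xw, Yw = Zx.copy(), Zy.copy()
    Xw[:, :m] = np.sqrt(1 - a) * Zx[:, :m] + np.sqrt(a) * Zs
    Yw[:, :m] = np.sqrt(1 - a) * Zy[:, :m] + np.sqrt(a) * np.sign(lam) * Zs
    return Xw @ Phi_x + mu_x, Yw @ Phi_y + mu_y        # coloring (x = Phi^T x_tilde, rowwise) + shift

rng = np.random.default_rng(2)
lam = np.array([0.8, -0.5])
X, Y = simulate_cca(50_000, lam, np.eye(3), np.eye(2), np.zeros(3), np.zeros(2), rng)
print(np.round([np.corrcoef(X[:, i], Y[:, i])[0, 1] for i in range(2)], 2))   # approx. [0.8, -0.5]
```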
Note that the above probabilistic model for CCA is in fact not a single model but a family of models, since we do not completely specify the underlying distributions, only their means and (co)variances. While in practice we will typically assume normally distributed generative latent variables, and hence normally distributed observations, it is equally possible to employ other distributions for the first level latent variables. For example, a rescaled t-distribution with a wider tail than the normal distribution may be employed to obtain a robustified version of CCA [18].
To test whether our algorithm correctly identifies negative canonical correlations, we conducted a simulation study. Specifically, we generated data xi and yi from a p+q dimensional multivariate normal distribution with zero mean and covariance matrix $\left (\begin {array}{cc} \boldsymbol {\Sigma }_{\boldsymbol {X}} & \boldsymbol {\Sigma }_{\boldsymbol {X} \boldsymbol {Y}}\\ \boldsymbol {\Sigma }_{\boldsymbol {X} \boldsymbol {Y}}^{T} & \boldsymbol {\Sigma }_{\boldsymbol {Y}} \end {array}\right)$ where ΣX=Ip, ΣY=Iq and ΣXY=diag(λi). The canonical correlations were set to have alternating positive and negative signs λ1=λ3=λ5=λ7=λ9=λ and λ2=λ4=λ6=λ8=λ10=−λ with varying strength λ∈{0.3,0.4,0.5,0.6,0.7,0.8,0.9}. A similar setup was used in [14]. The dimensions were fixed at p=60 and q=10 and the sample size was n∈{20,30,50,100,200,500}, so that both the small- and large-sample regimes were covered. For each combination of n and λ the simulations were repeated 500 times, and our algorithm, using shrinkage estimation of the underlying covariance matrices, was applied to each of the 500 data sets to fit the CCA model. The resulting estimated canonical correlations were then compared with the corresponding true canonical correlations, and the proportion of correctly estimated signs was recorded.
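The data-generating step of this design can be written compactly by sampling directly from the latent-variable construction of Eq. 19 rather than from the (p+q)-dimensional normal distribution; since ΣX=Ip and ΣY=Iq, the coloring matrices are the identity here and the whitened variables coincide with the observations. The following base R sketch (with λ=0.7 chosen as one example strength; the sign-aware estimation step itself is the subject of the Methods and of the "whitening" package) illustrates this:

set.seed(42)
p <- 60; q <- 10; m <- min(p, q); n <- 200
lambda <- rep(c(0.7, -0.7), length.out = m)     # alternating signs, strength 0.7
Zx <- matrix(rnorm(n * p), n, p)                # level-1 latent variables
Zy <- matrix(rnorm(n * q), n, q)
Zs <- matrix(rnorm(n * m), n, m)                # shared latent variables
X <- Zx                                         # components i > m stay non-shared
Y <- Zy
for (i in 1:m) {
  X[, i] <- sqrt(1 - abs(lambda[i])) * Zx[, i] + sqrt(abs(lambda[i])) * Zs[, i]
  Y[, i] <- sqrt(1 - abs(lambda[i])) * Zy[, i] +
            sqrt(abs(lambda[i])) * Zs[, i] * sign(lambda[i])
}
round(diag(cor(X, Y)), 2)                       # sample cross-correlations track lambda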
The outcome of this simulation study is summarized graphically in Fig. 2. The key finding is that, depending on the strength of correlation λ and sample size n, our algorithm correctly determines the sign of both negative and positive canonical correlations. As expected, the proportion of correctly classified canonical correlations increases with sample size and with the strength of correlation. Remarkably, even for comparatively weak correlations such as λ=0.5 and small sample sizes, the majority of canonical correlations were still estimated with the true sign. In short, this simulation demonstrates that if there are negative canonical correlations between pairs of canonical variables, these will be detected by our approach.
Percentage of estimated canonical correlations with correctly identified signs in dependence of the sample size and the strength of the true canonical correlation
Nutrimouse data
We now analyze two experimental omics data sets to illustrate our approach. Specifically, we demonstrate the capability of our variant of CCA to identify negative canonical correlations among canonical variates, as well as its application to high-dimensional data where the number of samples n is smaller than the number of variables p and q.
The first data set is due to [19] and results from a nutrigenomic study of n=40 mice. The X variable collects the measurements of the gene expression of p=120 genes in liver cells. These were selected a priori considering their biological relevance for the study. The Y variable contains lipid concentrations of q=21 hepatic fatty acids, measured on the same animals. Before further analysis we standardized both X and Y.
Since the number of available samples n is smaller than the number of genes p we used shrinkage estimation to obtain the joint correlation matrix which resulted in a shrinkage intensity of λcor=0.16. Subsequently, we computed canonical directions and associated canonical correlations λ1,…,λ21. The canonical correlations are shown in Fig. 3, and range in value between -0.96 and 0.87. As can be seen, 16 of the 21 canonical correlations are negative, including the first three top ranking correlations. In Fig. 4 we depict the squared correlation loadings between the first 5 components of the canonical covariates $\widetilde {\boldsymbol {X}}^{\text {CCA}}$ and $\widetilde {\boldsymbol {Y}}^{\text {CCA}}$ and the corresponding observed variables X and Y. This visualization shows that most information about the correlation structure within and between the two data sets (gene expression and lipid concentrations) is concentrated in the first few latent components.
Plot of the estimated canonical correlations for the Nutrimouse data. The majority of the correlations indicate a negative association between the corresponding canonical variables
Squared correlation loadings between the first 5 components of the canonical covariates $\widetilde {\boldsymbol {X}}^{\text {CCA}}$ and $\widetilde {\boldsymbol {Y}}^{\text {CCA}}$ and the corresponding observed variables X and Y for the Nutrimouse data
This is confirmed by further investigation of the scatter plots both between corresponding pairs of $\widetilde {\boldsymbol {X}}^{\text {CCA}}$ and $\widetilde {\boldsymbol {Y}}^{\text {CCA}}$ canonical variates (Fig. 5) as well as within each variate (Fig. 6). Specifically, the first CCA component allows identification of the genotype of the mice (wt: wild type; ppar: PPAR- α deficient), whereas the subsequent few components reveal the imprint of the effect of the various diets (COC: coconut oil; FISH: fish oils; LIN: linseed oils; REF: reference diet; SUN: sunflower oil) on gene expression and lipid concentrations.
Scatter plots between corresponding pairs of canonical covariates for the Nutrimouse data
Scatter plots between first and second components within each canonical covariate for the Nutrimouse data
The Cancer Genome Atlas LUSC data
As a further illustrative example we studied genomic data from The Cancer Genome Atlas (TCGA), a public resource that catalogues clinical data and molecular characterizations of many cancer types [20]. We used the TCGA2STAT tool to access the TCGA database from within R [21].
Specifically, we retrieved gene expression (RNASeq2) and methylation data for lung squamous cell carcinoma (LUSC) which is one of the most common types of lung cancer. After download, calibration and filtering as well as matching the two data types to 130 common patients following the guidelines in [21] we obtained two data matrices, one (X) measuring gene expression of p=206 genes and one (Y) containing methylation levels corresponding to q=234 probes. As clinical covariates the sex of each of the 130 patients (97 males, 33 females) was downloaded as well as the vital status (46 events in males, and 11 in females) and cancer end points, i.e. the number of days to last follow-up or the days to death. In addition, since smoking cigarettes is a key risk factor for lung cancer, the number of packs per year smoked was also recorded. The number of packs ranged from 7 to 240, so all of the patients for which this information was available were smokers.
As above, we applied the shrinkage CCA approach to the LUSC data, which resulted in a correlation shrinkage intensity of λcor=0.19. Subsequently, we computed canonical directions and associated canonical correlations λ1,…,λ206. The canonical correlations are shown in Fig. 7, and range in value between -0.92 and 0.98. Among the top 10 most strongly correlated pairs of canonical covariates only one has a negative coefficient. The plot of the squared correlation loadings (Fig. 8) for these 10 components already indicates that the data can be sufficiently summarized by a few canonical covariates.
Plot of the estimated canonical correlations for the TCGA LUSC data
Squared correlation loadings between the first 10 components of the canonical covariates $\widetilde {\boldsymbol {X}}^{\text {CCA}}$ and $\widetilde {\boldsymbol {Y}}^{\text {CCA}}$ and the corresponding observed variables X and Y for the TCGA LUSC data
Scatter plots between the first pair of canonical components and between the first two components of $\widetilde {\boldsymbol {X}}^{\text {CCA}}$ are presented in Fig. 9. These plots show that the first canonical component corresponds to the sex of the patients, with males and females being clearly separated by underlying patterns in gene expression and methylation. The survival probabilities computed for both groups show a statistically significant difference in risk pattern between males and females (Fig. 10). However, inspection of the second-order canonical variates reveals that the difference in risk is likely due to an overrepresentation of heavy smokers among male patients rather than being directly attributable to the sex of the patient (Fig. 9 right).
Scatter plots between first component of $\widetilde {\boldsymbol {X}}^{\text {CCA}}$ and $\widetilde {\boldsymbol {Y}}^{\text {CCA}}$ (left) and within the first two components of $\widetilde {\boldsymbol {X}}^{\text {CCA}}$ (right) for the TCGA LUSC data
Plot of the survival probabilities for male and female patients for the TCGA LUSC data
CCA is a crucially important procedure for the integration of multivariate data. Here, we have revisited CCA from the perspective of whitening, which allows a better understanding of both classical CCA and its probabilistic variant. In particular, our main contributions in this paper are:
first, we show that CCA is procedurally equivalent to a special whitening transformation that, unlike other general whitening procedures, is uniquely defined and free of any rotational ambiguity;
second, we demonstrate the direct connection of CCA with multivariate regression, showing that CCA is effectively a linear model between whitened variables and that, correspondingly, canonical correlations are best understood as regression coefficients;
third, the regression perspective advocates for permitting both positive and negative canonical correlations, and we show that this also makes it possible to resolve the sign ambiguity present in the canonical directions;
fourth, we propose an easily interpretable probabilistic generative model for CCA as a two-layer latent variable framework that not only admits canonical correlations of both signs but also allows non-normal latent variables;
and fifth, we provide a computationally efficient implementation in the "whitening" R package, based on high-dimensional shrinkage estimation of the underlying covariance and correlation matrices, and show that this approach performs well both on simulated data and in the analysis of various types of omics data.
In short, this work provides a unifying perspective on CCA, linking together sphering procedures, multivariate regression and corresponding probabilistic generative models, and also offers a practical tool for high-dimensional CCA for practitioners in applied statistical data analysis.
Implementation in R
We have implemented our method for high-dimensional CCA allowing for potential negative canonical correlations in the R package "whitening" that is freely available from https://CRAN.R-project.org/package=whitening. The functions provided in this package incorporate the computational efficiencies described below. The R package also includes example scripts. The "whitening" package has been used to conduct the data analysis described in this paper. Further information and R code to reproduce the analyses in this paper is available at http://strimmerlab.org/software/whitening/.
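As a plain base R illustration of the whitening view of CCA discussed above (deliberately not the package's API, and covering only the classical large-n case without shrinkage or sign recovery), the canonical correlations can be obtained as the singular values of the cross-correlation matrix between the individually whitened X and Y, i.e. the empirical analogue of the matrix K of Eq. 9; they agree with the output of the standard cancor function up to its non-negativity convention:

set.seed(7)
n <- 500; p <- 5; q <- 4
X <- matrix(rnorm(n * p), n, p)
Y <- X[, 1:q] + matrix(rnorm(n * q), n, q)             # induce some dependence
invsqrtm <- function(S) {                              # inverse matrix square root
  e <- eigen(S, symmetric = TRUE)
  e$vectors %*% diag(1 / sqrt(e$values)) %*% t(e$vectors)
}
K <- invsqrtm(cor(X)) %*% cor(X, Y) %*% invsqrtm(cor(Y))   # adjusted cross-correlations
round(cbind(whitening = svd(K)$d, cancor = cancor(X, Y)$cor), 4)

The two columns of the final output coincide, which is a direct numerical expression of the equivalence between CCA and this particular whitening transformation.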
High-dimensional estimation
Practical application of CCA, in both the classical and probabilistic variants, requires estimation of the joint covariance of X and Y from data, as well as the computation of the corresponding underlying whitening matrices $\boldsymbol {W}_{\boldsymbol {X}}^{\text {CCA}}$ and $\boldsymbol {W}_{\boldsymbol {Y}}^{\text {CCA}}$ (i.e. canonical directions) and canonical correlations λi.
In moderate dimensions and with large sample size n, i.e. when both p and q are not excessively big and n is larger than both p and q, the classic CCA algorithm is applicable and empirical or maximum likelihood estimates may be used. Conversely, if the sample size n is small compared to p and q then there exist numerous effective Bayesian, penalized likelihood and other related regularized estimators to obtain statistically efficient estimates of the required covariance matrices (e.g., [22–25]). In our implementation in R and in the analysis below we use the shrinkage covariance estimation approach developed in [22] and also employed for CCA analysis in [14]. However, in principle any other preferred covariance estimator may be applied.
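As a hedged sketch of this estimation step using off-the-shelf tools: the shrinkage correlation estimator of [22] is available in the "corpcor" R package, so a joint shrinkage correlation matrix and the blocks needed to build K can be obtained as below. The data here are random placeholders with Nutrimouse-like dimensions; the "whitening" package described above provides its own CCA-specific routines and should be preferred in practice.

library(corpcor)                                  # provides cor.shrink(), the estimator of [22]
set.seed(3)
n <- 40; p <- 120; q <- 21                        # Nutrimouse-like dimensions, n < p
X <- matrix(rnorm(n * p), n, p)                   # placeholder data
Y <- matrix(rnorm(n * q), n, q)
R <- cor.shrink(cbind(X, Y))                      # (p+q) x (p+q) shrinkage correlation matrix
attr(R, "lambda")                                 # estimated shrinkage intensity
Rx  <- R[1:p, 1:p]
Ry  <- R[(p + 1):(p + q), (p + 1):(p + q)]
Rxy <- R[1:p, (p + 1):(p + q)]                    # blocks entering the matrix K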
Algorithmic efficiencies
In addition to statistical issues concerning accurate estimation, high dimensionality also poses substantial challenges in algorithmic terms, with regard both to memory requirements as well as to computing time. Specifically, for large values of p and q directly performing the matrix operations necessary for CCA, such as computing the matrix square root or even simple matrix multiplication, will be prohibitive since these procedures typically scale in cubic order of p and q.
In particular, in a CCA analysis this affects i) the computation and estimation of the matrix K (Eq. 9) containing the adjusted cross-correlations, and ii) the calculation of the whitening matrices $\boldsymbol {W}_{\boldsymbol {X}}^{\text {CCA}}$ and $\boldsymbol {W}_{\boldsymbol {Y}}^{\text {CCA}}$ with the canonical directions αi and βi from the rotation matrices $\boldsymbol {Q}_{\boldsymbol {X}}^{\text {CCA}}$ and $\boldsymbol {Q}_{\boldsymbol {Y}}^{\text {CCA}}$ (Eq. 5). These computational steps involve multiplication and square-root calculations involving possibly very large matrices of dimension p×p and q×q.
Fortunately, in the small sample domain with n≤p,q there exist computational tricks to perform these matrix operations in a time- and memory-saving manner that avoids directly computing and handling the large-scale covariance matrices and their derived quantities [e.g. [26]]. Note this requires the use of regularized estimators, e.g. shrinkage or ridge-type estimation. Specifically, in our implementation of CCA we capitalize on an algorithm described in [27] (see "Zuber et al. algorithm" section for details) that allows us to compute the matrix product of the inverse matrix square root of the shrinkage estimate of the correlation matrix R with a matrix M without the need to store or compute the full estimated correlation matrices. The computational savings due to these matrix operations for n<p and n<q can be substantial, going from $O(p^3)$ and $O(q^3)$ down to $O(n^3)$ in terms of algorithmic complexity. For example, for p/n = 3 this implies a time saving of a factor of 27 compared to "naive" direct computation.
Zuber et al. algorithm
Zuber et al. (2012) [27] describe an algorithm that makes it possible to compute the matrix product of the inverse matrix square root of the shrinkage estimate of the correlation matrix R with a matrix M without the need to store or compute the full estimated correlation matrices. Specifically, writing the correlation estimator in the form
$$ \underbrace{\boldsymbol{R}}_{p \times p} = \lambda \left(\boldsymbol{I}_{p} + \underbrace{\boldsymbol{U}}_{p \times n} \underbrace{\boldsymbol{N}}_{n \times n} \boldsymbol{U}^{T} \right) $$
allows for algorithmically effective matrix multiplication of
$$ {\begin{aligned} \underbrace{\boldsymbol{R}^{-1/2}}_{p \times p} \underbrace{\boldsymbol{M}}_{p \times d} = \lambda^{-1/2} \left(\boldsymbol{M} - \boldsymbol{U}\underbrace{\left(\boldsymbol{I}_{n}-(\boldsymbol{I}_{n} + \boldsymbol{N})^{-1/2}\right)}_{n \times n} \left(\underbrace{\boldsymbol{U}^{T} \boldsymbol{M}}_{n \times d} \right)\right) \,. \end{aligned}} $$
Note that on the right-hand side of these two equations no matrix of dimension p×p appears; instead all matrices are of much smaller size.
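The identity can be verified numerically on a small example; the U and N below are random placeholders (U with orthonormal columns, as in the shrinkage-estimator decomposition, and N symmetric positive definite) rather than quantities derived from real data:

set.seed(11)
p <- 50; n <- 8; d <- 3; lam <- 0.3
U <- qr.Q(qr(matrix(rnorm(p * n), p, n)))          # p x n, orthonormal columns
N <- crossprod(matrix(rnorm(n * n), n, n))         # n x n symmetric positive definite
M <- matrix(rnorm(p * d), p, d)
R <- lam * (diag(p) + U %*% N %*% t(U))            # matrix of the stated form
e  <- eigen(R, symmetric = TRUE)                   # direct route: eigen-decompose the p x p matrix R
direct <- e$vectors %*% diag(1 / sqrt(e$values)) %*% t(e$vectors) %*% M
en <- eigen(diag(n) + N, symmetric = TRUE)         # fast route: only n x n matrices inside
inner <- diag(n) - en$vectors %*% diag(1 / sqrt(en$values)) %*% t(en$vectors)
fast <- lam^(-1/2) * (M - U %*% (inner %*% (t(U) %*% M)))
max(abs(direct - fast))                            # agreement up to rounding error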
In the CCA context we apply this procedure to Eq. 5 in order to obtain the whitening matrix and the canonical directions and also to Eq. 9 to efficiently compute the matrix K.
Hotelling H. Relations between two sets of variates. Biometrika. 1936; 28:321–77.
Härdle WK, Simar L. Canonical correlation analysis. In: Applied Multivariate Statistical Analysis. Chap. 16. Berlin: Springer: 2015. p. 443–54.
Cao D-S, Liu S, Zeng W-B, Liang Y-Z. Sparse canonical correlation analysis applied to -omics studies for integrative analysis and biomarker discovery. J Chemometrics. 2015; 29:371–8.
Hong S, Chen X, Jin L, Xiong M. Canonical correlation analysis for RNA-seq co-expression networks. Nucleic Acids Res. 2013; 41:95.
Bach FR, Jordan MI. A probabilistic interpretation of canonical correlation analysis. Technical Report No. 688, Department of Statistics. Berkeley: University of California; 2005.
Tipping ME, Bishop CM. Probabilistic principal component analysis. J R Statist Soc B. 1999; 61(3):611–22. https://doi.org/10.1111/1467-9868.00196.
Wang C. Variational Bayesian approach to canonical correlation analysis. IEEE T Neural Net. 2007; 18:905–10.
Klami A, Kaski S. Local dependent components. Proceedings of the 24th International Conference on Machine Learning (ICML 2007). 2007; 24:425–32.
Waaijenborg S, de Witt Hamer PCV, Zwinderman AH. Quantifying the association between gene expressions and DNA-markers by penalized canonical correlation analysis. Stat Appl Genet Molec Biol. 2008;7(1). Article 3. https://doi.org/10.2202/1544-6115.1329.
Parkhomenko E, Tritchler D, Beyene J. Sparse canonical correlation analysis with application to genomic data integration. Stat Appl Genet Molec Biol. 2009; 8:1.
Witten D, Tibshirani R, Hastie T. A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics. 2009; 10(3):515–34. https://doi.org/10.1093/biostatistics/kxp008.
Hardoon DR, Shawe-Taylor J. Sparse canonical correlation analysis. Mach Learn. 2011; 83:331–53.
Wilms I, Croux C. Sparse canonical correlation analysis from a predictive point of view. Biomet J. 2015; 57:834–51.
Cruz-Cano R, Lee M-LT. Fast regularized canonical correlation analysis. Comp Stat Data Anal. 2014; 70:88–100.
Ma Z, Lu Y, Foster D. Finding linear structure in large datasets with scalable canonical correlation analysis. Proceedings of the 32th International Conference on Machine Learning (ICML 2015), PLMR. 2015; 37:169–78.
Kessy A, Lewin A, Strimmer K. Optimal whitening and decorrelation. Am Stat. 2018; 72:309–14. https://doi.org/10.1080/00031305.2016.1277159.
Zuber V, Strimmer K. High-dimensional regression and variable selection using CAR scores. Stat Appl Genet Molec Biol. 2011; 10:34.
Adrover JG, Donato SM. A robust predictive approach for canonical correlation analysis. J Multiv Anal. 2015; 133:356–76.
Martin PGP, Guillou H, Lasserre F, Déjean S, Lan A, Pascussi J-M, Cristobal MS, Legrand P, Besse P, Pineau T. Novel aspects of PPAR α-mediated regulation of lipid and xenobiotic metabolism revealed through a multigenomic study. Hepatology. 2007; 54:767–77.
Kandoth C, McLellan MD, Vandin F, Ye K, Niu B, Lu C, Xie M, Zhang Q, McMichael JF, Wyczalkowski MA, Leiserson MDM, Miller CA, Welch JS, Walter MJ, Wendl MC, Ley TJ, Wilson RK, Raphael BJ, Ding L. Mutational landscape and significance across 12 major cancer types. Nature. 2013; 502:333–9.
Wan Y-W, Allen GI, Liu Z. TCGA2STAT: simple TCGA data access for integrated statistical analysis in R. Bioinformatics. 2016; 32:952–4.
Schäfer J, Strimmer K. A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Stat Appl Genet Molec Biol. 2005; 4:32.
Bickel PJ, Levina E. Regularized estimation of large covariance matrices. Ann Stat. 2008; 36:199–227.
Hannart A, Naveau P. Estimating high dimensional covariance matrices: A new look at the Gaussian conjugate framework. J Multiv Anal. 2014; 131:149–62.
Touloumis A. Nonparametric Stein-type shrinkage covariance matrix estimators in high-dimensional settings. Comp Stat Data Anal. 2015; 83:251–61.
Hastie T, Tibshirani R. Efficient quadratic regularization for expression arrays. Biostatistics. 2004; 5:329–40.
Zuber V, Duarte Silva AP, Strimmer K. A novel algorithm for simultaneous SNP selection in high-dimensional genome-wide association studies. BMC Bioinformatics. 2012; 13:284.
KS thanks Martin Lotz for discussion. We also thank the anonymous referees for their many useful suggestions.
TJ was funded by a Wellcome Trust ISSF Ph.D. studentship. The funding body did not play any role in the design of the study, the collection, analysis, and interpretation of data, or in writing the manuscript.
The proposed method has been implemented in the R package "whitening" that is distributed from https://CRAN.R-project.org/package=whitening. R code to reproduce the analysis is available from http://strimmerlab.org/software/whitening/.
Epidemiology and Biostatistics, School of Public Health, Imperial College London, Norfolk Place, London, W2 1PG, UK
Takoua Jendoubi
Statistics Section, Department of Mathematics, Imperial College London, South Kensington Campus, London, SW7 2AZ, UK
School of Mathematics, University of Manchester, Alan Turing Building, Oxford Road, Manchester, M13 9PL, UK
Korbinian Strimmer
TJ and KS jointly conducted the research and wrote the paper. Both authors read and approved the manuscript.
Correspondence to Takoua Jendoubi.
Probabilistic canonical correlation analysis
Machine Learning and Artificial Intelligence in Bioinformatics
|
CommonCrawl
|
What is the intuition behind taking the sum of square roots, squared
In a recent publication, the authors report the following transformation when aggregating across three different scales:
Cognitive style level was used as a control variable and captured as (sqrt(OV) + sqrt(SV) + sqrt(V))^2 which is similar to, but a more robust measure than, the sum of the means of the three cognitive styles, particularly when team size varies. It captures both the level and the range of the member cognitive styles in the team (Matousek 2002, Rudin 1987).
The referenced publications are both lengthy real analysis textbooks, which don't provide a clear intuition for why this is valuable. What is the intuition behind this? Are there more robust transformations?
mathematical-statistics scales psychometrics
Parseltongue
$\begingroup$ $\left( |x_1|^p + |x_2|^p + \ldots + |x_n|^p \right)^{1/p}$ for $p \geq 1$ is known as the p-norm. For $p=2$, it is the classic Euclidean norm. Your case though is $p = \frac{1}{2}$ which makes the measure concave (rather than convex) and hence isn't a norm (because it doesn't satisfy the triangle inequality: the norm of a sum is less than the sum of the norms). This isn't my field and don't know why a concave measure would be desired. $\endgroup$ – Matthew Gunn Dec 4 '18 at 1:10
$\begingroup$ The threads at stats.stackexchange.com/questions/3682 and stats.stackexchange.com/questions/244202 will illuminate this. @Matthew being a norm is not important. This is a multiple of a "root mean;" it lies on a continuum between the arithmetic mean ($p=1$), geometric mean ($p=0$), and harmonic mean ($p=-1$). You can see how closely it is tied to the Box-Cox family of transformations. $\endgroup$ – whuber♦ Dec 4 '18 at 1:14
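To see concretely what the quoted measure does, here is a toy illustration in R with made-up scores: two three-member teams with the same total (hence the same mean) but different spread. Expanding the square gives $\sum_i x_i + 2\sum_{i<j}\sqrt{x_i x_j}$, i.e. the plain sum plus a term that is largest when all members are equal, which is one way to read the claim that the measure captures both the level and the range of the scores.

agg <- function(x) sum(sqrt(x))^2               # (sqrt(OV) + sqrt(SV) + sqrt(V))^2
even   <- c(4, 4, 4)                            # same level, no spread
spread <- c(1, 4, 7)                            # same sum (12), larger spread
c(sum(even), sum(spread))                       # 12 12  -> plain sums cannot tell them apart
c(agg(even), agg(spread))                       # 36.00 31.87 -> penalizes within-team spread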
|
CommonCrawl
|
May 2016, 15(3): 1029-1039. doi: 10.3934/cpaa.2016.15.1029
Inversion of the spherical Radon transform on spheres through the origin using the regular Radon transform
Sunghwan Moon 1,
Department of Mathematical Sciences, Ulsan National Institute of Science and Technology, Ulsan 689-798, South Korea
Received July 2014 Revised April 2015 Published February 2016
A spherical Radon transform whose integral domain is a sphere has many applications in partial differential equations as well as tomography. This paper is devoted to the spherical Radon transform which assigns to a given function its integrals over the set of spheres passing through the origin. We present a relation between this spherical Radon transform and the regular Radon transform, and we provide a new inversion formula for the spherical Radon transform using this relation. Numerical simulations were performed to demonstrate the suggested algorithm in dimension 2.
Keywords: tomography, circular-arc, Radon, spherical, transform, Compton.
Mathematics Subject Classification: Primary: 44A12, 65R10; Secondary: 92C55.
Citation: Sunghwan Moon. Inversion of the spherical Radon transform on spheres through the origin using the regular Radon transform. Communications on Pure & Applied Analysis, 2016, 15 (3) : 1029-1039. doi: 10.3934/cpaa.2016.15.1029
|
CommonCrawl
|
Measurement of antiproton annihilation on Cu, Ag and Au with emulsion films (1701.06306)
S. Aghion, C. Amsler, A. Ariga, T. Ariga, G. Bonomi, P. Braunig, R. S. Brusa, L. Cabaret, M. Caccia, R. Caravita, F. Castelli, G. Cerchiari, D. Comparat, G. Consolati, A. Demetrio, L. Di Noto, M. Doser, A. Ereditato, C. Evans, R. Ferragut, J. Fesel, A. Fontana, S. Gerber, M. Giammarchi, A. Gligorova, F. Guatieri, S. Haider, A. Hinterberger, H. Holmestad, T. Huse, J. Kawada, A. Kellerbauer, M. Kimura, D. Krasnicky, V. Lagomarsino, P. Lansonneur, P. Lebrun, C. Malbrunot, S. Mariazzi, V. Matveev, Z. Mazzotta, S. R. Muller, G. Nebbia, P. Nedelec, M. Oberthaler, N. Pacifico, D. Pagano, L. Penasa, V. Petracek, C. Pistillo, F. Prelz, M. Prevedelli, L. Ravelli, B. Rienaecker, O. M. Rohne, A. Rotondi, M. Sacerdoti, H. Sandaker, R. Santoro, P. Scampoli, M. Simon, L. Smestad, F. Sorrentino, G. Testera, I. C. Tietje, S. Vamosi, M. Vladymyrov, E. Widmann, P. Yzombard, C. Zimmer, J. Zmeskal, N. Zurlo
April 23, 2017 hep-ex, physics.ins-det
The characteristics of low energy antiproton annihilations on nuclei (e.g. hadronization and product multiplicities) are not well known, and Monte Carlo simulation packages that use different models provide different descriptions of the annihilation events. In this study, we measured the particle multiplicities resulting from antiproton annihilations on nuclei. The results were compared with predictions obtained using different models in the simulation tools GEANT4 and FLUKA. For this study, we exposed thin targets (Cu, Ag and Au) to a very low energy antiproton beam from CERN's Antiproton Decelerator, exploiting the secondary beamline available in the AEgIS experimental zone. The antiproton annihilation products were detected using emulsion films developed at the Laboratory of High Energy Physics in Bern, where they were analysed at the automatic microscope facility. The fragment multiplicity measured in this study is in good agreement with results obtained with FLUKA simulations for both minimally and heavily ionizing particles.
Annihilation of low energy antiprotons in silicon (1311.4982)
S. Aghion, O. Ahlén, A. S. Belov, G. Bonomi, P. Bräunig, J. Bremer, R. S. Brusa, G. Burghart, L. Cabaret, M. Caccia, C. Canali, R. Caravita, F. Castelli, G. Cerchiari, S. Cialdi, D. Comparat, G. Consolati, J.H. Derking, S. Di Domizio, L. Di Noto, M. Doser, A. Dudarev, R. Ferragut, A. Fontana, P. Genova, M. Giammarchi, A. Gligorova, S. N. Gninenko, S. Haider, J. Harasimowicz, T. Huse, E. Jordan, L. V. Jørgensen, T. Kaltenbacher, A. Kellerbauer, A. Knecht, D. Krasnický, V. Lagomarsino, A. Magnani, S. Mariazzi, V. A. Matveev, F. Moia, G. Nebbia, P. Nédélec, N. Pacifico, V. Petrácek, F. Prelz, M. Prevedelli, C. Regenfus, C. Riccardi, O. Røhne, A. Rotondi, H. Sandaker, A. Sosa, M. A. Subieta Vasquez, M. Špacek, G. Testera, C. P. Welsch, S.Zavatarelli
March 11, 2014 physics.ins-det
The goal of the AE$\mathrm{\bar{g}}$IS experiment at the Antiproton Decelerator (AD) at CERN is to measure directly the Earth's gravitational acceleration on antimatter. To achieve this goal, the AE$\mathrm{\bar{g}}$IS collaboration will produce a pulsed, cold (100 mK) antihydrogen beam with a velocity of a few 100 m/s and measure the magnitude of the vertical deflection of the beam from a straight path. The final position of the falling antihydrogen will be detected by a position sensitive detector. This detector will consist of an active silicon part, where the annihilations take place, followed by an emulsion part. Together, they allow a 1% precision on the measurement of $\bar{g}$ with about 600 reconstructed and time tagged annihilations. We present here, to the best of our knowledge, the first direct measurement of antiproton annihilation in a segmented silicon sensor, the first step towards designing a position sensitive silicon detector for the AE$\mathrm{\bar{g}}$IS experiment. We also present a first comparison with Monte Carlo simulations (GEANT4) for antiproton energies below 5 MeV.
Formation Of A Cold Antihydrogen Beam in AEGIS For Gravity Measurements (0805.4727)
G. Testera, A.S. Belov, G. Bonomi, I. Boscolo, N. Brambilla, R. S. Brusa, V.M. Byakov, L. Cabaret, C. Canali, C. Carraro, F. Castelli, S. Cialdi, M. de Combarieu, D. Comparat, G. Consolati, N. Djourelov, M. Doser, G. Drobychev, A. Dupasquier, D. Fabris, R. Ferragut, G. Ferrari, A. Fischer, A. Fontana, P. Forget, L. Formaro, M. Lunardon, A. Gervasini, M.G. Giammarchi, S.N. Gninenko, G. Gribakin, R. Heyne, S.D. Hogan, A. Kellerbauer, D. Krasnicky, V. Lagomarsino, G. Manuzio, S. Mariazzi, V.A. Matveev, F. Merkt, S. Moretto, C. Morhard, G. Nebbia, P. Nedelec, M.K. Oberthaler, P. Pari, V. Petracek, M. Prevedelli, I. Y. Al-Qaradawi, F. Quasso, O. Rohne, S. Pesente, A. Rotondi, S. Stapnes, D. Sillou, S.V. Stepanov, H. H. Stroke, G. Tino, A. Vairo, G. Viesti, H. Walters, U. Warring, S. Zavatarelli, A. Zenoni, D.S. Zvezhinskij
May 30, 2008 gr-qc, physics.atom-ph
The formation of the antihydrogen beam in the AEGIS experiment through the use of inhomogeneous electric fields is discussed and simulation results including the geometry of the apparatus and realistic hypothesis about the antihydrogen initial conditions are shown. The resulting velocity distribution matches the requirements of the gravity experiment. In particular it is shown that the inhomogeneous electric fields provide radial cooling of the beam during the acceleration.
|
CommonCrawl
|
The role of behavior and habitat availability on species geographic expansion
Esther Sebastián González based on reviews by Pizza Ka Yee Chow, Caroline Marie Jeanne Yvonne Nieberding, Tim Parker and 1 anonymous reviewer
A recommendation of:
Logan CJ, McCune KB, Breen A, Chen N, Lukas D. Implementing a rapid geographic range expansion - the role of behavior and habitat changes (2020), In Principle Recommendation. PCI Ecology. http://corinalogan.com/Preregistrations/gxpopbehaviorhabitat.html version 06 Oct 2020
Submitted: 14 May 2020, Recommended: 05 October 2020
Cite this recommendation as:
Esther Sebastián González (2020) The role of behavior and habitat availability on species geographic expansion. Peer Community in Ecology, 100062. 10.24072/pci.ecology.100062
Understanding the relative importance of species-specific traits and environmental factors in modulating species distributions is an intriguing question in ecology [1]. Both behavioral flexibility (i.e., the ability to change the behavior in changing circumstances) and habitat availability are known to influence the ability of a species to expand its geographic range [2,3]. However, the role of each factor is context and species dependent and more information is needed to understand how these two factors interact.
In this pre-registration, Logan et al. [4] explain how they will use Great-tailed grackles (Quiscalus mexicanus), a species with flexible behavior and a rapid geographic range expansion, to evaluate the relative roles of habitat and behavior as drivers of the species' expansion. The authors present very clear hypotheses, predicted results and also include alternative predictions. The rationales for all the hypotheses are clearly stated, and the methodology (data and analysis plans) is described in detail. The large amount of information already collected by the authors for the studied species during previous projects warrants the success of this study. It is also remarkable that the authors will make all their data available in a public repository, and that the pre-registration is already stored on GitHub, supporting open access and reproducible science.
I agree with the three reviewers of this pre-registration about its value and I think its quality has largely improved during the review process. Thus, I am happy to recommend it and I am looking forward to seeing the results.
[1] Gaston KJ. 2003. The structure and dynamics of geographic ranges. Oxford series in Ecology and Evolution. Oxford University Press, New York.
[2] Sol D, Lefebvre L. 2000. Behavioural flexibility predicts invasion success in birds introduced to new zealand. Oikos. 90(3): 599–605. doi: https://doi.org/10.1034/j.1600-0706.2000.900317.x
[3] Hanski I, Gilpin M. 1991. Metapopulation dynamics: Brief history and conceptual domain. Biological journal of the Linnean Society. 42(1-2): 3–16. doi: https://doi.org/10.1111/j.1095-8312.1991.tb00548.x
[4] Logan CJ, McCune KB, Breen A, Chen N, Lukas D. 2020. Implementing a rapid geographic range expansion - the role of behavior and habitat changes (http://corinalogan.com/Preregistrations/gxpopbehaviorhabitat.html) In principle acceptance by PCI Ecology of the version on 6 Oct 2020 https://github.com/corinalogan/grackles/blob/master/Files/Preregistrations/gxpopbehaviorhabitat.Rmd.
Reviewed by Caroline Marie Jeanne Yvonne Nieberding, 2020-09-18 07:44
Review 2 of revised version of manuscript by Logan et al "Implementing a rapid geographic range expansion - the role of behavior and habitat changes" submitted to PCI Ecology
I would like to congratulate the authors for revising their manuscript so thoroughly, and I have no further comments or concerns after reading the responses to referees and the revised manuscript. Except for one useful "detail" regarding comment and response 39: in fact, the original paper showing that learning is costly was by Mery and Kawecki in Science in 2005. Using experimental evolution on learning for oviposition in flies, they showed that learning induced a reduction in offspring production and in survival, as far as I remember.
Best wishes for collecting sufficient data, CN.
Reviewed by anonymous reviewer, 2020-10-02 15:56
Thank you for inviting me to review this revised pre-print manuscript. I think the authors did a great job in the revision – the investigation goal presented in the Introduction is clearer than the previous version, the information presented in the current revision is also consistent with the goal of the investigation. I understand that there are many areas to be explored in the study, but the authors have addressed my previous comments and concerns appropriately. Hence, there is no major issue to be raised.
Reviewed by Tim Parker, 2020-09-18 18:34
This pre-registration draft is a plan for studying range expansion in great-tailed grackles. The authors present clear questions and predictions, and detailed analysis plans. This is an appropriate pre-registration.
The authors of this pre-registration have addressed nearly all the concerns I laid out in my review of the prior draft. I have only one recommendation for a change prior to archiving (see below).
However, as I stated in the original review, I wish to acknowledge that I lack expertise regarding some of the methods in this pre-registration, and therefore cannot attest to their sufficiency. In particular, I am unfamiliar with the modeling techniques the authors used as a form of power analysis, and I am unfamiliar with Bayesian statistics. Also, I am unfamiliar with molecular genetics analyses. Finally, I have never conducted the sorts of behavioral assays that form the core of this research.
This is my single substantial concern:
Q3, P3 - "Most MaxEnt papers use cross-validation and the area under the curve (AUC) to evaluate model performance."
For the pre-registration to constrain researcher degrees of freedom, you need to state either (1) that you will use this method or (2) the decision rule you will use to determine whether you will use this method (and what you would do instead).
Author's reply:
Dear Dr.'s González, Nieberding, Parker, and an anonymous reviewer,
We are so glad that you thought we did a good job with the revision! Indeed, all of your comments so greatly improved our manuscript! We are happy to make the final revision and prepare it for in principle recommendation.
We revised our preregistration at http://corinalogan.com/Preregistrations/gxpopbehaviorhabitat.html, and we responded to your two comments below.
Note that the version-tracked version of this preregistration is in rmarkdown at GitHub: https://github.com/corinalogan/grackles/blob/master/Files/Preregistrations/gxpopbehaviorhabitat.Rmd. In case you want to see the history of track changes for this document at GitHub, click the link, then click the "History" button (right near top). From there, you can scroll through our comments on what was changed for each save event, and, if you want to see exactly what was changed, click on the text that describes the change and it will show you the text that was replaced (in red) next to the new text (in green).
Thank you very much for your time!
Corina, Kelsey, Alexis, Nancy, and Dieter
Comment 1: Review 2 of revised version of manuscript by Logan et al "Implementing a rapid geographic range expansion - the role of behavior and habitat changes" submitted to PCI Ecology
Response 1: Thank you so much for your help and for your congratulations on our revision! Thank you also for highlighting the impact of the Mery and Kawecki (2005) article. We added new sentences to the end of the paragraph to incorporate this key example: Prediction 1 > Alternative 1: "Both of these alternatives assume that learning is costly [e.g., @mery2005cost], therefore individuals avoid it if they can. In the first case, individuals might not need to rely much on learning because they are attending to familiar cues across their range, therefore they only need to learn where in this new space these cues are located. In the second case, individual learning that the founding individuals needed to rely on to move into this new space could have been lost due to potential pressure to reduce this investment as soon as possible after moving to a new location."
Comment 2: Thank you for inviting me to review this revised pre-print manuscript. I think the authors did a great job in the revision – the investigation goal presented in the Introduction is clearer than the previous version, the information presented in the current revision is also consistent with the goal of the investigation. I understand that there are many areas to be explored in the study, but the authors have addressed my previous comments and concerns appropriately. Hence, there is no major issue to be raised.
Response 2: We are so glad you think this version is clearer and more consistent. Thank you very much for your help!
Comment 3: This pre-registration draft is a plan for studying range expansion in great-tailed grackles. The authors present clear questions and predictions, and detailed analysis plans. This is an appropriate pre-registration.
Response 3: We are so glad to hear that we were able to address almost all of your concerns! Thank you for bringing up this further point, which allows us to better clarify our analysis plan. We made the following addition in bold:
Analysis Plan > Q3 Habitat > "Most MaxEnt papers use cross-validation and the area under the curve (AUC) to evaluate model performance, and we will do the same."
Revision round #1
I have now received the comments from 3 experienced reviewers on your preprint. The three of them think that your preprint is of interest and that you have made a great effort in putting it together, but they all include many comments that can help to improve it. Therefore, I am going to ask you to have a deep look at all the issues raised by the reviewers and submit a revised version.
Preprint DOI: http://corinalogan.com/Preregistrations/gxpopbehaviorhabitat.html
Reviewed by Pizza Ka Yee Chow, 2020-07-14 07:27
I have reviewed Logan and colleagues' preregistered manuscript titled 'Implementing a rapid geographic range expansion - the role of behavior and habitat changes'. The authors would like to examine the role of behaviours and habitat suitability in relation to an invasive species' expansion, using Great-tailed grackles (Quiscalus mexicanus) as the study species. To do so, they will assess multiple behaviours that have been shown, or are thought, to be related to range expansion using several tasks (e.g. behavioural flexibility, innovation, reversal learning, exploration) alongside dispersal behaviours within several populations at different stages of expansion. The authors will also include habitat-related variables (e.g. availability, suitability) in their investigation.
I think this work is important; not many studies to date have covered both internal factors such as characteristics (behaviour) of a species and external factors (habitat suitability/availability) in relation to species expansion. This study will help to shed light on factors related to invasion success or successful settlement in new environments. While I find the study concept important and worth investigating, I also find there are some major issues and queries in relation to smaller aspects within the concept (see below). Perhaps this is because the authors have provided a very brief version of their study (i.e. pre-registration). In this review, I have provided some suggestions, which I hope will help the authors refine their study design and write-up for the final submission.
1) The abstract provides a very brief study background and the study objectives. However, it does not convey clearly the idea of the alternative explanation for range expansion. One issue here is that the idea of suitable habitat as a facilitator of a species' expansion is not new, particularly in ecology and more specifically in invasion ecology, whether in plants or animals. Yet there is no reference to support this idea.
2) Another major issue comes down to what the authors would like to do: are the authors asking whether behaviour OR habitat suitability is the cause of range expansion? Or are they examining the relative roles of behaviour AND habitat suitability? (as the authors have stated in C) Hypothesis: 'the relative roles of changes in behaviour and changes in habitats in the range expansion of great-tailed grackles.'). The former question appears to argue for either nature or nurture, whereas the latter points to a combination of both. In the actual write-up, the authors should clarify this concept succinctly.
3) Hypothesis: It is good that the authors are looking at several behaviours to understand the research question. However, the authors appear to weigh all behaviours equally in addressing the research question. Indeed, any two behaviours may vary in their importance within the same expansion stage. For example, looking at each trait at the within-population level, exploration may be more important than flexibility (despite both traits possibly being correlated in some ways) within the 'edge' populations (and not only between 'edge' and 'recently established' populations) because grackles may have to secure resources (e.g. places to stay, food to eat etc). That is to say, each behaviour of interest may relate to the stage of range expansion differently, and the authors should have different predictions for each behaviour. The many 'perhaps' statements in the Hypothesis section may provide explanations for populations at different expansion stages, but the importance of behavioural traits may vary and should also be understood in populations that are at the same stage of expansion.
4) This comment is related to comment 2 - assuming the authors are not testing 'either-or' but 'relative importance'. When we talk about relative roles, I think each hypothesis should be stated in a way that reflects the relative contribution of each factor in the process. For example, H1 could read 'if behaviour plays a more important role than habitat-related factors in expansion'(?).
Abstract 1) Clarity – I suggest the authors write it clearly or provide more informative labels for each study population (e.g. the population in the centre is 'recently established' and the edge of the population is the 'invasive front population'); the labels will allow the readers to know right away that the authors are comparing populations at the front of expansion with those that are established or at the middle of expansion.
2) Habitat availability? Or 'suitability'? The two words mean very different things and imply different measurements; please clarify.
Hypotheses 1) This hypothesis needs to be more precise, in that 'new location' and 'range expansion' could be seen in two ways depending on how the authors are measuring these things. Are the authors stating that the new locations the grackles invade form a geographically continuous landscape? Or completely different locations? 'Expansion' implies the former. If this is true, the continuous landscape may not pose a challenge great enough to lead to higher behavioural flexibility than in recently established populations.
Protocols and open materials 1) Thank you for providing a detailed protocol – I have read through it point-by-point. The design of each task is either adopted from other studies or an established set-up for grackles. However, some key information is missing here. For example, the authors could state clearly that all grackles will go through a habituation period with the novel apparatus (details of the habituation period can be found online in what the authors have provided for this pre-registration); this will allow readers to know that task performance and participation rate presumably would not be affected by neophobia but rather by motivation or other reasons (e.g. weather).
2) Provided information for each task is not entirely consistent throughout the section. For example, total duration of assessing exploration is provided but not in flexibility or innovativeness tasks.
3) The rationales of some measurements are not entirely clear to me. For example, why would the authors need to analyse DNA of grackles, how does the relatedness related to the research question?
4) Suitable habitat: please name a few ecological variables that are important for grackles. Are the authors using some kinds of index to indicate the degree of suitability of a habitat?
5) Flexibility task: how long does it session last for?
6) Innovativeness: What is the flexibility measure for this task? When a bird has successfully used a solution to solve the task, why did the authors block the previously successful solution and not allow the bird to explore an alternative solution? My two cents is there are pros and cons here, if the authors allow a bird to explore new solution, this would be a way to measure natural exploration tendency (which is another variable that the authors are interested in).
7) Exploration – the authors would like to go for simplicity by testing novel object and not novel environment, but what if exploration of novel object and environments correlate with boldness in opposite direction? Also, regarding relevance to what the authors are interested in, one shall assume exploring new environment test (which may allow invasive species to explore novel resources/ 'object') would be related to invasive species expansion. How would the authors measure exploration here? By the frequent of manipulating the object or the duration? I do not get this until I read the analysis plan.
8) Persistence. It is good that the authors have given habituation period for grackles as well as having a relatively strict passing criteria to ensure neophobia would not be a confound for task performance and participation. A note is that the authors may want to clarify why the proportion of trials participated in the flexibility and innovativeness reflect 'persistence' – I cannot get my head around this…as the measure could equally reflect 'high motivation' or 'eagerness to participate in task'.
E. Analysis Plan Model and simulation I agree with the authors that using hypothesis-appropriate mathematical model is a good way to analyse the data. A note on the analyses plan is that although the authors may set prior distribution from available, or the authors', publications, the authors may want to incorporate a larger and smaller mean and SD to increase the robustness of the results (i.e. to reflect whether the results in the current study is covered within the probability distribution).
This pre-registration draft is a plan for studying range expansion in great-tailed grackles. The authors present relatively clear hypotheses and predictions, and detailed analysis plans.
It is my opinion that, as a pre-registration, this draft is almost ready to be archived, although I have some specific suggestions for improvement. For the most part, the methods are presented clearly and with a high degree of detail (except for H3). Also, to the extent that my expertise allows me to evaluate the methods, those methods appear reasonable.
However, I wish to acknowledge that I lack expertise regarding some of the methods in this pre-registration, and therefore cannot attest to their sufficiency. In particular, I am unfamiliar with the modeling techniques the authors used as a form of power analysis, and I am unfamiliar with Bayesian statistics. Also, I am unfamiliar with molecular genetics analyses. Finally, I have never conducted the sorts of behavioral assays that form the core of this research.
Hypothesis – Predictions framework:
Because the authors have chosen to present a framework of hypotheses and predictions, I feel compelled point out that they have not used this framework in the traditional manner, and so I found their use of the framework confusing. This is a bit of a pet issue with me, so I apologize in advance for what follows, but I do very much believe that the tendency for the community of evolutionary biologists and ecologists to not rigorously follow the hypothesis-prediction framework when it is invoked hinders understanding and clarity of thinking.
Traditionally, a hypothesis is a tentative statement regarding how the world works, and a prediction of that hypothesis is something that the scientist should be able to observe if the hypothesis is true. Therefore, if the researcher examines the prediction and finds a lack of evidence for it, this should undermine confidence in the hypothesis. Thus a prediction is just a statement of what the researcher should observe/measure given a hypothesis is correct, and a hypothesis cannot have conflicting predictions. If you have constructed conflicting predictions, that is a sign that you have multiple (alternative) hypotheses.
For instance, the way Prediction 1 and Prediction 1 alternative 1 for H1 are presented confused me. I thought you were presenting two divergent (partly conflicting) predictions for the same hypothesis. However, after looking at Fig 1, I decided that 'Prediction 1 alternative' was maybe supposed to be a prediction of H3 (though this prediction as currently worded is not an ideal prediction of H3 as currently worded). Anyway, below is what I wrote in response to that paragraph before I looked at Fig 1. I'm including it here because I hope it will help you recognize my confusion and will help you clarify how you present hypothesis and predictions. I encourage you to re-work your descriptions of all your hypotheses and predictions so that they adhere to the standard framework.
Prediction 1 and Prediction 1 alternative 1 for H1 are in essence two different hypotheses (in part). One hypothesis is something like: the range expansion in great tailed grackles is facilitated by behavioral traits (flexibility, innovation, exploration, and persistence [actually, each of these should probably be considered a separate hypothesis]) that are found disproportionately at the leading edge of the range expansion. The other hypothesis is something like: the range expansion in great tailed grackles is facilitated by behavioral traits (flexibility, innovation, exploration, and persistence) that are characteristic of this species. You could divide up these hypotheses in other ways, but the point is that the predictions for the 1st half of both of these hypotheses are identical (presence of behavioral flexibility/ innovation/ exploration/ persistence at the leading edge), but the predictions for the 2nd parts of both of these hypotheses are different (behavioral flexibility/ innovation/ exploration/ persistence greater at leading edge vs. spread evenly through the entire population).
In a pre-registration, clarity about hypothesis and predictions is useful, because this allows the researchers to clearly state what they will conclude about each separate (component of their) hypothesis based on the outcome of each separate prediction.
protocols: Why not include the detailed protocols for H1 (now in a separate Google Doc) as part of the pre-registration?
Flexibility: Under what condition would you decide to "modify this protocol by moving the passing criterion sliding window in 1-trial increments, rather than 10-trial increments"?
Blinding during analyses: Would you like to present any justification for you lack of blinding?
Analysis Plan, H1
As I understand it, you present a clear criterion for statistical decisions ("From the pairwise contrasts, if the difference between the distributions crosses zero (yes), then we are not able to detect differences between the two sites. If they do not cross zero (no), then we are able to detect differences between the two sites.")
However, a bit more explanation here for those not familiar with your analytical methods would be welcome.
Is there only one value for 'relatedness' produced by this method? in other words, is their undisclosed analytical flexibility here?
This appears to be the weakest part of the pre-registration (the vaguest portion, and thus the portion for which this pre-registration does not appear to be doing the work of constraining analytical options and thus constraining 'researcher degrees of freedom')
Can you provide more information about some of your explanatory variables? What exactly will the climate variables be? How will predator density be measured? Can you explain 'Distance to the next suitable habitat patch weighted by nearest mountain range/forest'? How will you define 'conspecific population' (for explanatory variable #6)? Will it be the detection of any individuals, or the detection of some minimum number of individuals? Can you provide any more info about your decision making process while fitting models using maxent?
Trivial comments:
Typo in abstract: "We first aim to compare behavior in wild-caught grackled"
Dear Authors, please find in attachment my comments to your proposed research project. Overall it is very interesting and well thought; some of my comments end up being due to finding the place where you produce the information I was looking for. Hopefully some comments will be useful to further improve the link between your experimental work and their relevance to the ecology of the species. Good luck with the covid crisis, Best regards, C. Nieberding.
Download the review (PDF file)
Dear Drs. González, Chow, Parker, and Nieberding,
We greatly appreciate the time you have taken to give us such useful feedback! We are very thankful for your willingness to participate in the peer review of preregistrations, and we are happy to have the opportunity to revise and resubmit.
We revised our preregistration at http://corinalogan.com/Preregistrations/gxpopbehaviorhabitat.html, and we responded to your comments below.
We think the revised version is much improved due to your generous feedback!
Two additional things: Due to COVID-19 issues, we have had to delay our data collection start date by a month. We now plan to begin collecting data in mid-October. We added a new co-author, Alexis Breen, who just joined the grackle team.
by Esther Sebastián González, 2020-08-11 13:40
Manuscript: http://corinalogan.com/Preregistrations/gxpopbehaviorhabitat.html
Please revise your pre-print
COMMENT 1: I have now received the comments from 3 experienced reviewers on your preprint. All three think that your preprint is of interest and that you have made a great effort in putting it together, but they all include many comments that can help to improve it. Therefore, I am going to ask you to take a close look at all the issues raised by the reviewers and submit a revised version.
RESPONSE 1: Thank you very much for facilitating this process! We responded to all comments below.
Reviews
Reviewed by Pizza Ka Yee Chow, 2020-07-14 07:27
COMMENT 2: I think this work is important; not many studies to date have covered both internal factors, such as characteristics (behaviour) of a species, and external factors (habitat suitability/availability) in relation to species expansion. This study will help to shed light on factors related to invasion success or successful settlement in new environments. While I find the study concept important and worth investigating, I also find there are some major issues and queries in relation to smaller aspects within the concept (see below). Perhaps this is because the authors have provided a very brief version of their study (i.e. a pre-registration). In this review, I have provided some suggestions, which I hope will help the authors refine their study design and write-up for the final submission.
RESPONSE 2: Thank you very much for your positive feedback and for providing comments on how we can improve this work! We look forward to addressing your detailed comments below.
COMMENT 3: 1) The abstract provides a very brief study background and the study objectives. However, it does not clearly convey the idea of the alternative explanation for range expansion. One issue here is that having suitable habitats as a facilitator of a species' expansion is not new, particularly in ecology and more specifically in invasion ecology, whether in plants or animals. Yet there is no reference to support such an idea.
RESPONSE 3: Great point! We added citations to the Abstract and to the new Introduction for the alternative.
COMMENT 4: 2) Another major issue here comes down to what the authors would like to do: are the authors asking whether behaviour OR habitat suitability is the cause of range expansion? Or are they examining the relative roles of behaviour AND habitat suitability? (as the authors have stated in the C) Hypothesis: 'the relative roles of changes in behaviour and changes in habitats in the range expansion of great-tailed grackles.'). The former question argues for either nature or nurture, whereas the latter allows for a combination of both. In the actual write-up, the authors should clarify this concept succinctly.
RESPONSE 4: This is a really good point and one we need to clarify. Thanks to your comment, we realized that we were giving mixed messages, but in reality, we are not able to compare habitat and behavior with each other because the data for these variables are being collected at completely different scales and not on the same individuals. We are testing habitat and behavior individually to assess whether either or both play a role in the range expansion. We updated Figure 1 to clarify this point, and we modified Figure 3 and added Figure 4 to help show this. We also clarified this in the text in the following sections:
Abstract: "However, it is an alternative non-exclusive possibility that an increase in the amount of available habitat can be a facilitator for a range expansion."
The last sentence of the Abstract and Introduction: "Results will elucidate whether the rapid geographic range expansion of great-tailed grackles is associated with individuals differentially expressing particular behaviors and/or whether the expansion is facilitated by the alignment of their natural behaviors with an increase in suitable habitat (i.e., human-modified environments)."
Hypotheses (the note at the top, which is now in the Introduction): "There could be multiple mechanisms underpinning the results we find, however our aim here is to narrow down the role of changes in behavior and changes in habitats in the range expansion of great-tailed grackles"
COMMENT 5: 3) Hypothesis: It is good that the authors are looking at several behaviours to understand the research question. However, the authors appear to weigh all behaviours equally in addressing the research question. Indeed, any two behaviours may differ in importance within the same expansion stage. For example, looking at each trait at the within-population level, exploration may be more important than flexibility (although the two traits may be correlated in some ways) within the 'edge' populations (and not only between 'edge' and 'recently established' populations) because grackles may have to secure resources (e.g. places to stay, food to eat, etc.). That is to say, each behaviour of interest may relate to the stage of range expansion differently, and the authors should have different predictions for each behaviour. The many 'perhaps' statements in the Hypothesis section may provide explanations for populations at different expansion stages, but the importance of behavioural traits may vary and should also be examined among populations that are at the same stage of expansion.
RESPONSE 5: Thank you for pointing out that it was unclear that we are only investigating whether these behaviors are important at the very edge. We have now removed the note from the Hypothesis section, and expanded on it in the new Introduction, which we hope will make the "perhaps" in the predictions make more sense.
Introduction: "It is generally thought that behavioral flexibility, the ability to change behavior when circumstances change (see @mikhalevichis2017 for theoretical background on our flexibility definition), plays an important role in the ability of a species to rapidly expand their geographic range (e.g., @lefebvre1997feeding, @griffin2014innovation, @chow2016practice, @sol2000behavioural, @sol2002behavioural, @sol2005big, @sol2007big). These ideas predict that flexibility, exploration, and innovation facilitate the expansion of individuals into completely new areas and that their role diminishes after a certain number of generations [@wright2010behavioral]. In support of this, experimental studies have shown that latent abilities are primarily expressed in a time of need [e.g., @taylor2007spontaneous; @bird2009insightful; @manrique2011spontaneous; @auersperg2012spontaneous; @laumer2018spontaneous]. Therefore, we do not expect the founding individuals who initially dispersed out of their original range to have unique behavioral characteristics that are passed on to their offspring. Instead, we expect that the actual act of continuing a range expansion relies on flexibility, exploration, innovation, and persistence, and that these behaviors are therefore expressed more on the edge of the expansion range where there have not been many generations to accumulate relevant knowledge about the environment."
COMMENT 6: 4) This comment is related to comment 2 - assuming the authors are not testing 'either-or' but 'relative importance'. When we talk about the relative role, I think the hypothesis should be stated in a way that reflects the relative proportion of each role in the process. For example, H1 could be 'behaviour plays a more important role than habitat-related factors in the expansion'(?).
RESPONSE 6: We hope that our revision in Response 4 clarified that we are not examining the relative roles of habitat and behavior with each other, but each one separately.
COMMENT 7: Abstract 1) Clarity – I suggest the authors write it clearly or provide more informative labels for each study population (e.g. the population in the centre is 'recently established' and the edge of the population is the 'invasion front population'); the labels will allow readers to know right away that the authors are comparing populations at the front of the expansion with those that are established or in the middle of the expansion.
RESPONSE 7: Thank you for pointing this out. We avoid using the word "invasion" with this species because invasion ecologists think that a species must be introduced by humans for it to be invasive. Great-tailed grackles have primarily introduced themselves across their range, therefore we tend to avoid the invasion term. We added study location points to the map in Figure 3 and we revised the text as follows to improve clarity: Abstract: "core of the original range, a more recent population in the middle of the northern expansion front, a very recent population on the northern edge of the expansion front"
COMMENT 8: 2) Habitat availability? Or 'suitability'? The two words mean very different things and imply different measurements; please clarify.
RESPONSE 8: Good point that we were unclear about these terms. We now defined them in the Abstract and in the new Introduction.
Abstract: "3) these species use different habitats, habitat suitability and connectivity (which combined determines whether habitat is available) has increased across their range, and what proportion of suitable habitat both species occupy."
COMMENT 9: Hypotheses 1) This hypothesis needs to be more precise, in that 'new location' and 'range expansion' could be seen in two ways, depending on how the authors are measuring these things. Are the authors stating that the new locations the grackles invade are part of a geographically continuous landscape, or completely different locations? 'Expansion' implies the former. If this is true, a continuous landscape may not necessarily pose a higher challenge to grackles, and so may not lead to higher behavioural flexibility than in recently established populations.
RESPONSE 9: The expansion is occurring in a geographically continuous landscape, therefore we are looking at a relative difference between populations. We consider the Woodland population to be close enough to the range edge to be classified as an edge population because there have been so few generations there that individuals are likely to still encounter new elements in their environment that they could not have learned about socially. Woodland, California is not the northernmost part of the great-tailed grackle's range, however it is as far north as we could go while operating under the constraint that we need a large enough population that exists there year round to be able to conduct such a study as this. We clarified as follows:
Introduction: "Instead, we are investigating whether the actual act of continuing a range expansion relies on flexibility, exploration, innovation, and persistence, and that these behaviors are therefore expressed more on the edge of the expansion range where there have not been many generations to accumulate relevant knowledge about the environment."
COMMENT 10: Protocols and open materials 1) Thank you for providing a detailed protocol – I have read through them point-by-point. The design of each task is either adopted from other studies or is an established set-up for grackles. However, some key information is missing here. For example, the authors may state clearly that all grackles will go through a habituation period with the novel apparatus (details of the habituation period can be found online in the materials the authors have provided for this pre-registration); this will allow readers to know that task performance and participation rates presumably would not be affected by neophobia but rather by motivation or other factors (e.g. weather).
RESPONSE 10: Good point, thank you! We added to this section that we conduct habituation first for both the flexibility and innovativeness experiments and we added your point to the persistence description. We revised as follows:
Protocols and open materials > Flexibility: "Grackles are first habituated to a yellow tube and trained to search for hidden food."
Protocols and open materials > Flexibility: "Grackles are first habituated to the log apparatus with all of the doors locked open and food inside each locus. After habituation, the log, which has four ways of accessing food..."
Protocols and open materials > Persistence: "Persistence is measured as the proportion of trials participated in during the flexibility and innovativeness experiments (after habituation, thus it is not confounded with neophobia)"
COMMENT 11: 2) The information provided for each task is not entirely consistent throughout the section. For example, the total duration of the exploration assessment is provided, but not for the flexibility or innovativeness tasks.
RESPONSE 11: This is because only the exploration assay has a set session duration: the aim there is to determine the latency to approach a novel object, so to make sure results are comparable, the session has to be standardized so that all birds get the same amount of time with the apparatus. For the reversal learning and multiaccess log experiments, we only pay attention to when they pass criterion; therefore, sessions last for varying amounts of time depending on the grackle's motivation. In the protocol, we provide only the information that the experimenter needs to attend to when testing the birds, so what is relevant for one task might not be relevant or noteworthy for another task.
COMMENT 12: 3) The rationales for some measurements are not entirely clear to me. For example, why would the authors need to analyse the DNA of grackles, and how does relatedness relate to the research question?
RESPONSE 12: This is a great point, sorry for the confusion! We now clarify this in the new Introduction.
COMMENT 13: 4) Suitable habitat: please name a few ecological variables that are important for grackles. Are the authors using some kind of index to indicate the degree of suitability of a habitat?
RESPONSE 13: The model we will run on this data, MaxEnt, produces a continuous prediction of habitat suitability for each grid cell (0 is least suitable and 1 is most suitable). We will also use jackknifing procedures to evaluate the relative contribution/importance of different environmental variables to the probability of species occurrence. We added this clarification to Analysis Plan > Q3 > P3 > Explanatory variables.
We added detailed descriptions of each variable in the Analysis Plan > Q3 (please see Response 29 for the changes), and we provided examples of these variables in the revised text in the new Introduction and also in:
Protocols and open materials > Suitable habitat: "We identified suitable habitat variables from Selander and Giller (1961), Johnson and Peer (2001), and Post et al. (1996) (e.g., types of suitable land cover including wetlands, marine coastal, arable land, grassland, mangrove, urban), and we added additional variables relevant to our hypotheses (e.g., distance to nearest uninhabited suitable habitat patch to the north, presence/absence of water in the area)."
COMMENT 14: 5) Flexibility task: how long does each session last?
RESPONSE 14: Please see Response 11.
COMMENT 15: 6) Innovativeness: What is the flexibility measure for this task? When a bird has successfully used a solution to solve the task, why did the authors block the previously successful solution and not allow the bird to explore an alternative solution? My two cents is that there are pros and cons here: if the authors allow a bird to explore a new solution, this would be a way to measure natural exploration tendency (which is another variable that the authors are interested in).
RESPONSE 15: For the multiaccess box, we follow the methods developed by Auersperg et al. (2011). We and Auersperg et al. (2011) interpreted the switching between options as a measure of flexibility rather than exploration. This fits with our definition of flexibility, which is to decide among a variety of options which choice is a functional option to attempt. We can see how exploration tendency might come into play the first time a bird touches an option; however, the box is not new to the bird by the time the test starts because they undergo habituation with it with the doors locked open. Your idea to measure exploration on the log would work well if one were to video record the habituation period and measure the latency to first touch for each locus. In this case, it would be a great measure of exploration if the presence of food in all loci were acceptable.
Although in a previous study, we extracted a flexibility measure from the multiaccess box task (the latency to attempt to solve a new locus after having previously successfully solved a different locus; Logan et al. 2019 http://corinalogan.com/Preregistrations/g_flexmanip.html), we are not going to examine flexibility in the context of the multiaccess box task in the current study. For flexibility, we will use one measure: the number of trials to reverse a color preference. Sorry if this was confusing. Please let us know if there is a place in the text that we need to clarify.
The reason for blocking off an option that a bird previously demonstrated proficiency on is to force them to try to solve the other options because we are interested in how many options a bird can solve. If we didn't block off a previously successful option, then the bird might only use that one option to repeatedly obtain the food and we would not have an accurate measure of their innovative potential. There was only one way to solve each option, so once they were proficient at a particular locus, there wouldn't have been any alternative ways of solving that locus. We now mention in Protocols and open materials > Innovativeness that our experimental design follows Auersperg et al. (2011).
COMMENT 16: 7) Exploration – the authors would like to go for simplicity by testing a novel object and not a novel environment, but what if exploration of novel objects and novel environments correlate with boldness in opposite directions? Also, regarding relevance to what the authors are interested in, one might assume that a novel environment test (which may capture how invasive species explore novel resources/'objects') would be related to the expansion of invasive species. How would the authors measure exploration here? By the frequency of manipulating the object, or the duration? I did not get this until I read the analysis plan.
RESPONSE 16: This is a great question and one that we will have answers to soon! We are currently analyzing data from a study we conducted in Arizona grackles that looks at the relationship between novel object and novel environment performance as well as whether performance on both relates with boldness all in the same individuals (McCune et al. 2019 http://corinalogan.com/Preregistrations/g_exploration.html). Once we know these relationships, we will be able to decide which exploration test (or tests) to include in the current study and explain if and how they relate to boldness.
In terms of whether exploration of a novel environment/object relates to the exploration of novel resources/objects in the wild, we investigate these links in a separate preregistration: space use (http://corinalogan.com/Preregistrations/gspaceuse.html). In space use, our analysis for H1 (across all three populations) will tell us whether exploration of novel object/environment is correlated with space use in the wild. If these variables are correlated we will be able to infer that results from the analyses of movement behavior across populations (space use H2) likely also apply to the exploration of the novel object/environment in the aviary assays.
Sorry that the methods for exploration weren't clear until the end! We placed a detailed explanation of the methods in Methods > Protocols and open materials, and we summarized the method in Prediction 1, as well as describing the method in the new Introduction. We hope that this helps readers understand how the test works sooner in the text.
COMMENT 17: 8) Persistence. It is good that the authors have given a habituation period for the grackles as well as having relatively strict passing criteria to ensure that neophobia would not be a confound for task performance and participation. A note is that the authors may want to clarify why the proportion of trials participated in during the flexibility and innovativeness experiments reflects 'persistence' – I cannot get my head around this, as the measure could equally reflect 'high motivation' or 'eagerness to participate in the task'.
RESPONSE 17: We're glad you like the habituation passing criteria! We find that they work really well in practice to make sure we aren't testing neophobic birds. For the persistence measure, if a bird participated in 10/10 trials, then it would score a 1, indicating a high participation level. Alternatively, if a bird participated in only 1/10 trials, then it would score a 0.1, indicating that it did not persist in attempting to participate in trials. The lack of participation could be due in part to motivation; however, we think motivation is impossible to measure in this species because limited food restriction often does not get them to participate in a trial if they really don't want to participate. We think that the birds who choose to participate more often could be more eager to participate in the task, and in this sense we mean the same thing by eagerness and persistence. What we have noticed when testing grackles in Santa Barbara and Arizona is that, when all birds have equal opportunities to participate in trials every day, those birds who do not participate as often could be considered less persistent in engaging with the task. We added a clarification to:
Methods > Protocols and open materials > Persistence: "This measure indicates that those birds who do not participate as often are less persistent in engaging with the task."
COMMENT 18: E. Analysis Plan, Model and simulation: I agree with the authors that using a hypothesis-appropriate mathematical model is a good way to analyse the data. A note on the analysis plan is that although the authors may set the prior distributions from available publications (including the authors' own), the authors may want to incorporate larger and smaller means and SDs to increase the robustness of the results (i.e. to reflect whether the results of the current study are covered within the probability distribution).
RESPONSE 18: Yes, the information from a subset of one population might not reflect the variation found in the data we are going to collect. Therefore, we previously assessed whether the prior distribution we chose for the Bayesian analyses would cover a range of expected results in the study through prior simulations. We realize that we had included the code for the prior simulations, but not mentioned this in the text - sorry for the confusion! We now added the following:
Analysis Plan > Hypothesis-specific mathematical model: "We formulated these models in a Bayesian framework. We determined the priors for each model by performing prior predictive simulations based on ranges of values from the literature to check that the models are covering the likely range of results."
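For readers unfamiliar with prior predictive simulations, the following minimal R sketch illustrates the general idea; the prior values and the Poisson outcome are placeholders chosen for illustration, not the authors' actual model or priors.

```r
set.seed(7)
n_sim <- 1000

# Hypothetical prior for a site-level mean (on the log scale) of a count outcome
# such as trials-to-reverse; these numbers are placeholders, not the study's priors
alpha <- rnorm(n_sim, mean = log(70), sd = 0.4)

# Simulate outcomes implied by the prior alone (no data are involved)
prior_predicted_trials <- rpois(n_sim, lambda = exp(alpha))

# Check that the implied outcomes span the range of results considered plausible
quantile(prior_predicted_trials, probs = c(0.025, 0.5, 0.975))
```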
Reviewed by Tim Parker, 2020-07-30 19:27
This pre-registration draft is a plan for studying range expansion in great-tailed grackles. The authors present relatively clear hypotheses and predictions, and detailed analysis plans.
COMMENT 19: It is my opinion that, as a pre-registration, this draft is almost ready to be archived, although I have some specific suggestions for improvement. For the most part, the methods are presented clearly and with a high degree of detail (except for H3). Also, to the extent that my expertise allows me to evaluate the methods, those methods appear reasonable. However, I wish to acknowledge that I lack expertise regarding some of the methods in this pre-registration, and therefore cannot attest to their sufficiency. In particular, I am unfamiliar with the modeling techniques the authors used as a form of power analysis, and I am unfamiliar with Bayesian statistics. Also, I am unfamiliar with molecular genetics analyses. Finally, I have never conducted the sorts of behavioral assays that form the core of this research.
RESPONSE 19: Thank you very much for your feedback! We look forward to addressing your comments below.
COMMENT 20: Hypothesis – Predictions framework: Because the authors have chosen to present a framework of hypotheses and predictions, I feel compelled to point out that they have not used this framework in the traditional manner, and so I found their use of the framework confusing. This is a bit of a pet issue with me, so I apologize in advance for what follows, but I do very much believe that the tendency for the community of evolutionary biologists and ecologists to not rigorously follow the hypothesis-prediction framework when it is invoked hinders understanding and clarity of thinking.

Traditionally, a hypothesis is a tentative statement regarding how the world works, and a prediction of that hypothesis is something that the scientist should be able to observe if the hypothesis is true. Therefore, if the researcher examines the prediction and finds a lack of evidence for it, this should undermine confidence in the hypothesis. Thus a prediction is just a statement of what the researcher should observe/measure given a hypothesis is correct, and a hypothesis cannot have conflicting predictions. If you have constructed conflicting predictions, that is a sign that you have multiple (alternative) hypotheses.

For instance, the way Prediction 1 and Prediction 1 alternative 1 for H1 are presented confused me. I thought you were presenting two divergent (partly conflicting) predictions for the same hypothesis. However, after looking at Fig 1, I decided that 'Prediction 1 alternative' was maybe supposed to be a prediction of H3 (though this prediction as currently worded is not an ideal prediction of H3 as currently worded). Anyway, below is what I wrote in response to that paragraph before I looked at Fig 1. I'm including it here because I hope it will help you recognize my confusion and will help you clarify how you present hypotheses and predictions. I encourage you to re-work your descriptions of all your hypotheses and predictions so that they adhere to the standard framework.

Prediction 1 and Prediction 1 alternative 1 for H1 are in essence two different hypotheses (in part). One hypothesis is something like: the range expansion in great-tailed grackles is facilitated by behavioral traits (flexibility, innovation, exploration, and persistence [actually, each of these should probably be considered a separate hypothesis]) that are found disproportionately at the leading edge of the range expansion. The other hypothesis is something like: the range expansion in great-tailed grackles is facilitated by behavioral traits (flexibility, innovation, exploration, and persistence) that are characteristic of this species. You could divide up these hypotheses in other ways, but the point is that the predictions for the 1st half of both of these hypotheses are identical (presence of behavioral flexibility/innovation/exploration/persistence at the leading edge), but the predictions for the 2nd parts of both of these hypotheses are different (behavioral flexibility/innovation/exploration/persistence greater at the leading edge vs. spread evenly through the entire population).

In a pre-registration, clarity about hypotheses and predictions is useful, because this allows the researchers to clearly state what they will conclude about each separate (component of their) hypothesis based on the outcome of each separate prediction.
RESPONSE 20: Thank you for bringing this up. We can see now how there was confusion in how we presented the hypotheses and predictions. To address this, we changed the Hypotheses to Research Questions, which allowed us to keep the outcomes neutral (e.g., the hypothesis is not that they facilitate the range expansion, which implies a positive outcome, but rather that the question is about whether there are behavior changes across populations). We then added for each Prediction what hypothesis would be supported. We also clearly marked the actual predictions (in bold) and what hypothesis would be supported (in italics) so it is more organized and helpful for readers to follow.
This kind of question comes up a lot when we submit preregistrations for pre-study peer review and we like to clarify why we think it is important to list our various predictions in advance. For this, we quote our response to a reviewer in the peer review process for a different preregistration at PCI Ecology (Mendez 2019 https://ecology.peercommunityin.org/public/rec?id=65&reviews=True): "For each hypothesis, there are a number of results that could occur (e.g., positive, negative, or no correlations) and we wanted to make a priori predictions about how we would interpret every potential result from a given hypothesis. This prevents us from HARKing (Hypothesizing After Results are Known; see Kerr 1998), which could occur if we get a result that we weren't expecting. In this case, we could then make up a post hoc story about why that result might have occurred. By a priori accounting for as many variations of the results that we can think of, it places our focus on being predictive in advance, which allows us to test these predictions in this study (see Nosek et al. 2019). If we didn't list the alternatives at the pre-data collection stage, and we ended up encountering a result that was not in our predictions, we would be providing an interpretation post hoc, which would require us to conduct a new study to determine whether that prediction was supported. Another advantage to listing multiple alternatives in advance and having automated version tracking at GitHub with time and date stamps and track changes for all edits to the document is that readers can verify for themselves whether we were HARKing or not. Listing all potential predictions in advance allows us to explore the whole logical space that we are working in, rather than just describing one outcome possibility."
Nosek, B. A., Beck, E. D., Campbell, L., Flake, J. K., Hardwicke, T. E., Mellor, D. T., ... & Vazire, S. (2019). Preregistration Is Hard, And Worthwhile. Trends in cognitive sciences, 23(10), 815-818.
Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196-217.
COMMENT 21: protocols: Why not include the detailed protocols for H1 (now in a separate Google Doc) as part of the pre-registration?
RESPONSE 21: We like to include the protocols as a link to the google doc because this is the document that the experimenters use when testing. The experimenters update this document as exceptions and notes occur. If we keep the link to the version-tracked google doc, then everyone can see where we are at in the process and what has happened so far, rather than making it a static part of the preregistration. We consider the document at the link as part of the preregistration, however, if you prefer, we could copy and paste the H1 protocols as they are currently into the Methods section of the preregistration.
COMMENT 22: Flexibility: Under what condition would you decide to "modify this protocol by moving the passing criterion sliding window in 1-trial increments, rather than 10-trial increments"?
RESPONSE 22: This is a really good point and we can decide right now. We will go with the 1-trial increments because this makes more logical sense than analyzing in a sliding 10-trial-block window, which is a socially inherited tradition in the field of comparative cognition. We updated as follows:
Methods > Protocols and open materials > Flexibility: "An individual is considered to have a preference if it chose the rewarded option at least 85% of the time (17/20 correct) in the most recent 20 trials (with a minimum of 8 or 9 correct choices out of 10 on the two most recent sets of 10 trials). We use a sliding window in 1-trial increments to calculate whether they passed after their first 20 trials."
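To make the revised criterion concrete, here is a minimal base-R sketch of a 1-trial-increment sliding-window check under one reading of the criterion (at least 17/20 correct over the most recent 20 trials, with at least 8/10 correct in each of the two most recent sets of 10); the function name and the simulated choice data are hypothetical.

```r
# choices: logical vector with one entry per trial, TRUE = correct choice
first_pass_trial <- function(choices) {
  if (length(choices) < 20) return(NA_integer_)
  for (t in 20:length(choices)) {                 # slide the window in 1-trial steps
    last20 <- choices[(t - 19):t]
    ok_total  <- sum(last20) >= 17                # >= 85% correct overall
    ok_blocks <- sum(last20[1:10]) >= 8 && sum(last20[11:20]) >= 8
    if (ok_total && ok_blocks) return(t)          # earliest trial at which the bird passes
  }
  NA_integer_                                     # never passed
}

# Hypothetical example: a bird whose accuracy improves over 60 trials
set.seed(1)
choices <- runif(60) < seq(0.4, 0.95, length.out = 60)
first_pass_trial(choices)
```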
COMMENT 23: Blinding during analyses: Would you like to present any justification for your lack of blinding?
RESPONSE 23: Thanks for your comment, which also made us remember that we actually do conduct some analyses with blind coders. We updated this section to:
Methods > Blinding during analyses: "Blinding is usually not involved in the final analyses because the experimenters collect the data (and therefore have seen some form of it) and run the analyses. Hypothesis- and data-blind video coders are recruited to conduct interobserver reliability of 20% of the videos for each experiment."
We also included a new section that describes our interobserver reliability analyses in Analysis Plan > Interobserver reliability of dependent variables
COMMENT 24: Analysis Plan, H1: As I understand it, you present a clear criterion for statistical decisions ("From the pairwise contrasts, if the difference between the distributions crosses zero (yes), then we are not able to detect differences between the two sites. If they do not cross zero (no), then we are able to detect differences between the two sites.") However, a bit more explanation here for those not familiar with your analytical methods would be welcome.
RESPONSE 24: We can see where more information, particularly in a step by step way, would be useful - thank you for pointing this out. We added more information to explain the approach as follows:
Analysis Plan > Q1 > Hypothesis specific mathematical model: "We will then perform pairwise contrasts to determine at what point we will be able to detect differences between sites by manipulating sample size, and $\alpha$ means and standard deviations. Before running the simulations, we decided that a model would detect an effect if 89% of the difference between two sites is on the same side of zero (following @statrethinkingbook). We are using a Bayesian approach, therefore comparisons are based on samples from the posterior distribution. We will draw 10,000 samples from the posterior distribution, where each sample will have an estimated mean for each population. For the first contrast, within each sample, we subtract the estimated mean of the edge population from the estimated mean of the core population. For the second contrast, we subtract the estimated mean of the edge population from the estimated mean of the middle population. For the third contrast, we subtract the estimated mean of the middle population from the estimated mean of the core population. We will now have samples of differences between all of the pairs of sites, which we can use to assess whether any site is systematically larger or smaller than the others. We will determine whether this is the case by estimating what percentage of each sample of differences is either larger or smaller than zero. For the first contrast, if 89% of the differences are larger than zero, then the core population has a larger mean. If 89% of the differences are smaller than zero, then the edge population has a larger mean."
Analysis Plan > Q1 > Table 2 > legend: "Simulation outputs from varying sample size (n), and $\alpha$ means and standard deviations. We calculate pairwise contrasts between the estimated means from the posterior distribution: if for a large sample the difference is both positive and negative and crosses zero (yes), then we are not able to detect differences between the two sites. If the differences between the means are all on one side of zero for 89% of the posterior samples (no), then we are able to detect differences between the two sites. We chose the 89% interval based on [@statrethinkingbook]."
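As a rough illustration of the contrast logic described in this response, here is a minimal R sketch; the "posterior" is faked with independent normal draws and all means are hypothetical, so it only demonstrates the 89%-on-one-side-of-zero decision rule, not the authors' actual model.

```r
set.seed(42)
n_samp <- 10000

# Hypothetical posterior samples of the mean behavior score at each site
post <- data.frame(
  core   = rnorm(n_samp, mean = 10.0, sd = 0.8),
  middle = rnorm(n_samp, mean = 10.4, sd = 0.8),
  edge   = rnorm(n_samp, mean = 11.5, sd = 0.8)
)

# Pairwise contrasts: one difference per posterior sample
contrasts <- data.frame(
  core_minus_edge   = post$core   - post$edge,
  middle_minus_edge = post$middle - post$edge,
  core_minus_middle = post$core   - post$middle
)

# A difference is "detected" if at least 89% of the samples fall on one side of zero
sapply(contrasts, function(d) max(mean(d > 0), mean(d < 0)) >= 0.89)
```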
COMMENT 25: Analysis Plan, H2: Is there only one value for 'relatedness' produced by this method? In other words, is there undisclosed analytical flexibility here?
RESPONSE 25: There are multiple ways to calculate relatedness among pairs of individuals from genotypic data. We were originally thinking that we would compare the validity and robustness of different ways of calculating relatedness based on our data, but we had not mentioned this in the preregistration. Since submitting this preregistration, we have now checked various estimators as part of a separate preregistration (Sevchik et al. 2019; http://corinalogan.com/Preregistrations/gdispersal_manuscript.html) on a subset of the Arizona data, which suggested that the estimator by Queller & Goodnight appears most appropriate for our data. As such, we will now use only the Queller & Goodnight method in the current preregistration. We clarified this as follows:
Analysis Plan > Q2 Dispersal: "Genetic relatedness between all pairs of individuals is calculated using the package "related" (@pew2015related) in R (as in @thrasher2018double) using the estimator by Queller & Goodnight, which was more robust for our inferences in a subset of the Arizona data [@sevchik2019dispersal]."
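For readers unfamiliar with the related package, a minimal sketch of how the Queller & Goodnight estimate might be obtained is below; the file name is hypothetical, and the exact arguments and output column names should be checked against the package documentation.

```r
# Sketch only: assumes a genotype file in the format expected by the related package
# (first column = individual ID, followed by two columns per locus).
library(related)

geno <- readgenotypedata("grackle_genotypes.txt")   # hypothetical file name

# Pairwise relatedness with the Queller & Goodnight (1989) estimator only
rel <- coancestry(geno$gdata, quellergt = 1)

# One relatedness value per dyad
head(rel$relatedness[, c("ind1.id", "ind2.id", "quellergt")])
```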
COMMENT 26: Analysis Plan, H3: This appears to be the weakest part of the pre-registration (the vaguest portion, and thus the portion for which this pre-registration does not appear to be doing the work of constraining analytical options and thus constraining 'researcher degrees of freedom'). Can you provide more information about some of your explanatory variables? What exactly will the climate variables be? How will predator density be measured? Can you explain 'Distance to the next suitable habitat patch weighted by nearest mountain range/forest'? How will you define 'conspecific population' (for explanatory variable #6)? Will it be the detection of any individuals, or the detection of some minimum number of individuals? Can you provide any more information about your decision-making process while fitting models using MaxEnt?
RESPONSE 26: We now describe the reason for including each explanatory variable and what it might mean for a grackle range expansion, including explaining Distance to the next suitable habitat patch weighted by nearest mountain range/forest, and what the climate variables are (please see Response 29 for details). Our aim is not to precisely identify which variables are the primary constraints on where grackles can be found. Instead, we only want to identify suitable habitat across the Americas. Therefore, there is no decision-making process in the model about which variables to include or not. We will optimize the model by trying different regularization coefficient values, which control how much additional terms are penalized (MaxEnt's way of protecting against overfitting), and choosing the value that maximizes model fit. Most MaxEnt papers use cross-validation and the area under the curve (AUC) to evaluate model performance. We added this description to Analysis Plan > Q3 > Explanatory variables.
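To sketch what tuning the regularization multiplier against cross-validated AUC could look like, here is a rough R example using the maxnet package (an open-source implementation of the MaxEnt model) as a stand-in for the MaxEnt software; the data objects, fold count, and candidate multiplier values are all hypothetical.

```r
# Sketch only: p is a 0/1 vector (1 = occurrence, 0 = background point) and
# env is a data frame of environmental predictors for the same points.
library(maxnet)

cv_auc <- function(p, env, regmult, folds = 5) {
  fold_id <- sample(rep(seq_len(folds), length.out = length(p)))
  aucs <- sapply(seq_len(folds), function(k) {
    train <- fold_id != k
    mod  <- maxnet(p[train], env[train, ],
                   f = maxnet.formula(p[train], env[train, ]),
                   regmult = regmult)
    pred <- as.numeric(predict(mod, env[!train, ], type = "cloglog"))
    # AUC on the held-out fold via the rank-sum (Mann-Whitney) identity
    r  <- rank(pred)
    n1 <- sum(p[!train] == 1)
    n0 <- sum(p[!train] == 0)
    (sum(r[p[!train] == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
  })
  mean(aucs)
}

# Pick the regularization multiplier with the highest cross-validated AUC
# candidates <- c(0.5, 1, 1.5, 2, 3)
# best <- candidates[which.max(sapply(candidates, function(rm) cv_auc(p, env, rm)))]
```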
COMMENT 27: Trivial comments: Typo in abstract: "We first aim to compare behavior in wild-caught grackled"
RESPONSE 27: Thank you for catching that! We fixed it!
COMMENT 28: Dear Authors, please find attached my comments on your proposed research project. Overall it is very interesting and well thought out; some of my comments turned out to be about finding the place where you provide the information I was looking for. Hopefully some comments will be useful to further improve the link between your experimental work and its relevance to the ecology of the species. Good luck with the covid crisis, Best regards, C. Nieberding.
RESPONSE 28: We are so glad that you like the project! Thanks so much for your feedback (and also for the luck wishes during a time of COVID), which we include and respond to below.

COMMENT 29 (CN1): Abstract: "3) these species use different habitats, habitat availability and connectivity" What habitat variables? This is tricky because: (1) the needs of the species need to be known (food sources, habitat type(s) for shelter and nest, ...); (2) where do the data come from? This type of habitat information is not available through GIS/satellite/remote sensing? At the very end of the file I have found the list of specific habitat variables that you intend to map and compare to the occurrence data of the birds, but it would be a useful improvement to specify why/how these variables are important to the ecology of the species. So far it seems that you collect these habitat variables because they are available, and they are not necessarily relevant to explain the species distribution. This is my major concern.
RESPONSE 29: Good points, thank you for pointing in the right direction in terms of what we needed to clarify. We now include an Introduction where we clarify for both species: 1) habitat types for foraging and nesting, 2) food sources, and 3) list examples of suitable habitat variables as well as describe variables that we added because they are hypothesis-relevant. Note that we removed predator density as an independent variable from the model because adult grackles have very few predators (i.e., two raptor species, one owl species, one snake species, and domestic cats; @johnson2001great) and predation is not noted in the literature as a major cause of mortality.
We also moved "Distance to the next suitable habitat patch weighted by nearest mountain range/forest" into the new "Distance between points on the northern edge of the range to the nearest uninhabited suitable habitat patch to the north in 1970 compared with the same patches in ~2018", which replaced "Distance to the nearest conspecific population 10 years previous to the point in time being investigated". These changes were made because we got clearer about what exactly the model is doing and what exactly we need to answer our questions. Thank you very much for your great questions which helped us narrow this down!
We also realized that we needed to pull out of the Land Cover variable the "distance from road/water body/wetland/water treatment plant" and move it into its own independent variable because it involves a separate treatment to obtain this data. It is now its own variable under "Presence/absence of water in the cell for each point".
In the Analysis Plan we now describe the background for all variables (why we included them and what they could mean to a grackle range expansion) as follows:
Analysis Plan > Q3 > P3 > Explanatory Variables: "1) Land cover (e.g., forest, urban, arable land, pastureland, wetlands, marine coastal, grassland, mangrove) - we chose these land cover types because they represent the habitat types in which both species exist, as well as habitat types (e.g., forest) they are not expected to exist in [@selander1961analysis] to confirm that this is the case. If it is the case, it is possible that large forested areas are barriers for the range expansion of one or both species. We will download global land cover type data from MODIS (16 terrestrial habitat types) and/or the IUCN habitat classification (47 terrestrial habitat types). The IUCN has assigned habitat classifications to great-tailed (https://www.iucnredlist.org/species/22724308/132174807#habitat-ecology) and boat-tailed (https://www.iucnredlist.org/species/22724311/94859792#habitat-ecology) grackle, however these appear to be out of date and we will update them for the purposes of this project.
2) Elevation - @selander1961analysis notes the elevation range for GTGR (0-2134m), but not BTGR, therefore establishing the current elevation ranges for both species will allow us to determine whether and which mountain ranges present range expansion challenges. We will obtain elevation data from USGS.
3) Climate (e.g., daily/annual temperature range) - because this species was originally tropical [@wehtje2003range], which generally has a narrow daily and annual climate range, and now they exist in temperate regions, which have much larger climate ranges, this variable will allow us to determine potential climatic limits for both species. If there are limits, this could inform the difference between the range expansion rates of the two species. We will consider the 19 bioclimatic variables from WorldClim.
4) Presence/absence of water in the cell for each point - both species are considered to be highly associated with water [e.g., @selander1961analysis], therefore we will identify how far from water each species can exist to determine whether it is a limiting factor in the range expansion of one or both species. The data will come from USGS National Hydrography.
5) Connectivity: Distance between points on the northern edge of the range to the nearest uninhabited suitable habitat patch to the north in 1970 compared with the same patches in ~2018. We identified the northern edge of the distribution based on reports on eBird.org from 1968-1970, which resulted in recordings of GTGR in 48 patches and recordings of BTGR in 30 patches. For these patches, we calculated the connectivity (the least cost path) to the nearest uninhabited suitable habitat patch in 1970 and again in ~2018. Given that GTGR are not found in forests, given the elevation limits for GTGR [@selander1961analysis], and based on the sightings of both species on eBird.org, large forests, tall mountain ranges, and high-elevation geographic features could block or slow the expansion of one or both species into these areas and their surroundings. For each point, we will calculate the least cost path between it and the nearest location with grackle presence using the leastcostpath R package (@leastcostpath). This will allow us to determine the costs involved in a grackle deciding whether to fly around or over a mountain range/forest. We will define the forest and mountain ranges from the land cover and/or elevation maps.
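Picking up the connectivity calculation in variable (5), a rough idea of a least-cost-path computation is sketched below using the gdistance package as a stand-in (the authors plan to use the leastcostpath package); the conductance values, raster, and coordinates are entirely hypothetical.

```r
# Sketch only: a toy conductance surface where a low-conductance band stands in
# for a forest or mountain range that grackles would have to cross or detour around.
library(raster)
library(gdistance)

m <- matrix(1, nrow = 100, ncol = 100)
m[, 40:45] <- 0.01                                  # hypothetical forest/mountain band
cond <- raster(m, xmn = 0, xmx = 100, ymn = 0, ymx = 100)

tr <- transition(cond, transitionFunction = mean, directions = 8)
tr <- geoCorrection(tr, type = "c")

occupied_edge_point <- cbind(10, 50)                # hypothetical coordinates
nearest_empty_patch <- cbind(90, 50)

# Accumulated least cost between the two points (higher when the barrier must be crossed)
costDistance(tr, occupied_edge_point, nearest_empty_patch)

# The least-cost route itself, e.g., for checking whether it detours around the barrier
path <- shortestPath(tr, occupied_edge_point, nearest_empty_patch, output = "SpatialLines")
```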
COMMENT 30 (CN2): State of the data: "This preregistration was written (Mar 2020) prior to collecting any data from the edge and core populations." Which means? Please specify.
RESPONSE 30: Good point. We clarified as follows: "This preregistration was written (Mar 2020) prior to collecting any data from the edge and core populations, therefore we were blind to these data"
COMMENT 31 (CN3): State of the data: "Some of the relatedness data from the middle population (Arizona) has already been analyzed for other purposes (n=57 individuals, see Sevchik et al. (2019)), therefore it will be considered secondary data: data that are in the process of being collected for other investigations. However, we have now collected blood samples from many more grackles in Arizona, therefore we will redo the analyses from the Arizona population in the analyses involved in the current preregistration" Relevant for this study? Are you going to use relatedness/genetic data? The relevance is unclear based on the abstract above. It is clear later in the description of the protocol, but it may be useful to be more explicit earlier about the type of data (SNPs) and that these data have been proven useful to quantify a range of relatedness in different populations of this species.
RESPONSE 31: Sorry for our lack of clarity! We now describe this in the new Introduction and we clarified the State of the Data as follows:
"However, we were not blind to some of the data from the Arizona population: some of the relatedness data (SNPs used for Hypothesis 2 to quantify relatedness to infer whether individuals disperse away from relatives) from the middle population (Arizona) has already been analyzed for other purposes (n=57 individuals, see @sevchik2019dispersal). Therefore, it will be considered secondary data: data that are in the process of being collected for other investigations. We have now collected blood samples from many more grackles in Arizona, therefore we will redo the analyses from the Arizona population in the analyses involved in the current preregistration."
COMMENT 32 (CN4): State of the data: "This preregistration was submitted in May 2020 to PCI Ecology for pre-study peer review" I have not seen this data
RESPONSE 32: Sorry for the confusion. This just documents the time we submitted this preregistration to PCI Ecology, which is the submission you commented on. This section is more of a place for people to see which parts of the study happened before data collection, after data collection and before data analysis, and after data analysis so readers can judge for themselves our level of bias throughout the process.
COMMENT 33 (CN5): State of the data: "Level of data blindness: Logan and McCune collect the behavioral data (H1) and therefore have seen this data for the Arizona population. Lukas has access to the Arizona data and has seen some of the summaries in presentations. Chen has not seen any data." I think that this is not what is expected as an answer: behavioural studies may be biased mostly by knowing which outcome is expected for the animal at the time the data is collected. So we rather expect that the scientist who collected the behavioural observations is not aware of the population origin of the animals, and that animals from different populations (as far as possible) are randomized during successive observations. I understand that this may not be feasible for a study of such large geographical scale (as you write later in the protocols).
RESPONSE 33: Yes, in our case, the behavioral data are collected at the location the particular population is at, so experimenters always know which population they are working in. In registered reports, the level of data blindness is important to document because seeing the data can influence their future predictions about a particular question (see Chambers & Tzavella 2020 https://osf.io/preprints/metaarxiv/43298/ for more details). We wanted to be clear up front what our potential for bias is.
COMMENT 34 (CN6): H1 > P1 "speed at reversing a previously learned color preference" For food items ?
RESPONSE 34: Yes, exactly. We clarified, thank you! We revised it to:
"speed at reversing a previously learned color preference based on it being associated with a food reward"
COMMENT 35 (CN7): H1 > P1 "innovativeness: number of options solved on a puzzle box" Link to natural selection in the wild is unclear ?
RESPONSE 35: We have now clarified this in the new Introduction.
COMMENT 36 (CN8): H1 > P1 "Perhaps in newly established populations, individuals need to learn about and innovate new foraging techniques or find new food sources" Relevant but then better link your experimental tests to ecologically relevant hypotheses. For example why test for learning of colour change, if not for food? Or puzzle tests while localization /exploration of new / scattered food item is perhaps more relevant? In general, this becomes more clear after one has read the protocols below, but this is my second and last real concern about this project: can you link better the expected ecological needs (for finding food) and the type of tests that you conduct here? To give you an example, in butterflies we test the specific host plant that females use to oviposit, and the test is about the time they need to find, and remember, the location of the host plant. The link to the demography and on selection in the field is more immediate.
COMMENT 37 (CN9): H1 > P1 "Higher variances in behavioral traits indicate that there is a larger diversity of individuals in the population, which means that there is a higher chance that at least some individuals in the population could innovate foraging techniques and be more flexible, exploratory, and persistent, which could be learned by conspecifics and/or future generations." The expectations about variance in addition to means are highly relevant.
RESPONSE 37: Great point, thank you for bringing this up! For the flexibility analysis, we now repeated the same simulation while holding the sample size constant, and setting all three site means to be the same and holding them constant, while we varied the standard deviation for each response variable in Q1. The results are in the new Table 3, and we added explanations about these results as follows:
Analysis Plan > Q1 > Flexibility Analysis: "To investigate the degree to which we can detect differences in the variances between sites, we ran another version of the mathematical model using a sample size of 15 per site and we held the mean number of trials to reverse a preference constant between all populations. We then changed the $\alpha$ standard deviations and performed pairwise site contrasts. We determined that it will be difficult to detect meaningful differences in variances in the number of trials to reverse a preference between sites (Table 3)."
The results show that we will not be able to robustly detect differences in variance between populations because the boundary for where all of the values are on one side of zero moves around quite a lot. One thing that we have been discussing is the fact that measurement error can obscure differences in variances. Therefore, the simulation suggests that we will be able to detect differences in the mean with these sample sizes, but likely not differences in the variances.
For the other three analyses (innovation, exploration, and persistence), the distributions we used (binomial and gamma-Poisson) were such that the mean is tied to the variance, therefore, instead of attempting to pull variance out in these models, we will plot the variance for these variables to compare site differences visually. We believe this will be sufficient because the flexibility analysis showed us empirically that we will not be able to robustly detect differences between site variances, which is likely due to us choosing to model small sample sizes per site and that there is likely to be measurement error that, if large enough, can obscure differences in variance. We clarified this in the text as follows:
Analysis Plan > Q1 > Innovation Analysis: "Because the mean and the variance are linked in the binomial distribution, and because the variance simulations in the flexibility analysis showed that we will not be able to robustly detect differences in variance between sites, we will plot the variance in the number of loci solved between sites to determine whether the edge population has a wider or narrower spread than the other two populations."
Analysis Plan > Q1 > Exploration Analysis: "Because the mean and the variance are linked in the gamma-Poisson distribution, and because the variance simulations in the flexibility analysis showed that we will not be able to robustly detect differences in variance between sites, we will plot the variance in the latency to approach the object between sites to determine whether the edge population has a wider or narrower spread than the other two populations."
Analysis Plan > Q1 > Persistence Analysis: "Because the mean and the variance are linked in the binomial distribution, and because the variance simulations in the flexibility analysis showed that we will not be able to robustly detect differences in variance between sites, we will plot the variance in the proportion of trials participated in between sites to determine whether the edge population has a wider or narrower spread than the other two populations."
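To make the power limitation described in this response concrete, here is a minimal, purely illustrative simulation sketch in the same spirit as the variance analysis above. It is not the Bayesian model the authors ran; the sample size, mean, and standard deviations below are hypothetical placeholders.

```python
# With n = 15 birds per site, equal means but different spreads, how often does a
# bootstrap interval for the difference in standard deviations exclude zero?
# All values are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n, n_sims, n_boot = 15, 500, 1000
mean_trials = 70           # hypothetical mean number of trials to reverse a preference
sd_core, sd_edge = 10, 20  # hypothetical site standard deviations (means held equal)

detected = 0
for _ in range(n_sims):
    core = rng.normal(mean_trials, sd_core, n)
    edge = rng.normal(mean_trials, sd_edge, n)
    diffs = [rng.choice(edge, n).std(ddof=1) - rng.choice(core, n).std(ddof=1)
             for _ in range(n_boot)]
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    detected += (lo > 0) or (hi < 0)

print(f"SD difference 'detected' in {detected / n_sims:.0%} of simulated datasets")
```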
COMMENT 38 (CN10): H1 > P1 alt 1 "If the original behaviors exhibited by this species happen to be suited to the uniformity of human-modified landscapes (e.g., urban, agricultural, etc. environments are modified in similar ways across Central and North America), then the averages and/or variances of these traits will be similar in the grackles sampled from populations across their range" This result may also occur if irrelevant behaviours have been tested, hence my comments above about the ecological relevance of behavioural tests. Hence my concern about the ecological relevance of your experimental behavioural tests.
RESPONSE 38: We have now clarified the ecological relevance in the new Introduction.
COMMENT 39 (CN11): H1 > P1 alt 1 "Alternatively, it is possible that 2.9 generations at the edge site is too long after their original establishment date to detect differences in the averages and/or variances" It would be relevant to back this up with evidence from experimental evolution on learning skills in vertebrates (like mice, ...). I doubt that populations would get back to ancestral averages in cognition within 3 generations.
RESPONSE 39: We agree that we should back this statement up with evidence from experimental evolution - thank you for pointing this out! Evidence is accumulating that learning can be costly (reviews in Mery and Burns 2010 and Dunlap and Stephens 2016), and we found examples that we now include in the preregistration:
Prediction 1 > Alternative 1: "Alternatively, it is possible that 2.9 generations at the edge site is too long after their original establishment date to detect differences in the averages and/or variances (though evidence from experimental evolution suggests that, even after 30 generations there is no change in certain behaviors when comparing domestic guinea pigs with 30 generations of wild-caught captive guinea pigs @kunzl2003wild, whereas artificial selection can induce changes in spatial ability in as little as two generations @kotrschal2013artificial)."
Mery and Burns, 2010. Behavioral plasticity: an interaction between evolution and experience Evolutionary Ecology, 24 (2010), pp. 571-583
Dunlap and Stephens, 2016. Reliability, uncertainty, and costs in the evolution of animal learning Curr. Opin. Behav. Sci., 12 (2016), pp. 73-79, 10.1016/j.cobeha.2016.09.010
COMMENT 40 (CN12): H1 > P1 alt 1 "If the sampled individuals had already been living at this location for long enough (or for their whole lives) to have learned what they need about this particular environment (e.g., there may no longer be evidence of increased flexibility/innovativeness/exploration/persistence), there may be no reason to maintain population diversity in these traits to continue to learn about this environment" Relevant: then focus on juvenile individuals in sampling, if possible
RESPONSE 40: We apologise for our lack of clarity. We focus on adult grackles for two key reasons: (i) they are more likely to have fully developed fine motor skills (e.g., holding/grasping objects with their bill – see Collias & Collias 1964 and Rutz et al. 2016 for ontogenetic differences in birds' capacity to mandibulate nesting material and sticks, for example) and (ii) we cannot distinguish between, for example, a juvenile bird of 8 months versus an adult of 12 months of age. Thus, we do not focus on juvenile individuals so as not to confound potential age-related variation in cognitive abilities and in fine motor-skill development with variation in our target variables of interest. We now include this rationale:
Methods > Planned Sample: "Great-tailed grackles are caught in the wild in Woodland, California and at a site to be determined in Central America. We aim to bring adult grackles, rather than juveniles, temporarily into the aviaries for behavioral choice tests to avoid the potential confound of variation in cognitive development due to age, as well as potential variation in fine motor-skill development (e.g., holding/grasping objects—early-life experience plays a role in the development of both of these behaviors; e.g., Collias & Collias 1964, Rutz et al. 2016) with variation in our target variables of interest. Adults will be identified from their eye color, which changes from brown to yellow upon reaching adulthood (Johnson and Peer 2001)."
COMMENT 41 (CN13): Figure 2: For non-bird experts these pictures should be associated to explanations to justify the choice of behavioural tests.
RESPONSE 41: We added an ecological relevance statement for the choice of our behavioral tests in the new Introduction. However, we agree that our figure caption was too sparse, and so we added the below text. We changed two labels in our figure ("reversal learning" changed to "flexibility" and "multiaccess box" changed to "innovativeness") to match the description:
Figure 2 > Legend: "Experimental protocol. Great-tailed grackles from core, middle, and edge populations will be tested for their: (top left) flexibility (number of trials to reverse a previously learned color tube-food association); (middle) innovativeness (number of options [lift, swing, pull, push] solved to obtain food from within a multi-access log); (bottom left) persistence (proportion of trials participated in during flexibility and innovativeness tests); and (far right) exploration (latency to approach/touch a novel object)."
COMMENT 42 (CN14): H2: "Changes in dispersal behavior, particularly for females, which is the sex that appears to be philopatric in the middle of the range expansion, facilitate the great-tailed grackle's geographic range expansion" Not clear to me why focus on females. Males are the usual dispersing sex. Females as the limiting factor (without females no nests)? Please clarify.
RESPONSE 42: We discovered that females are the philopatric sex in this species in a previous study (Sevchik et al. 2019 http://corinalogan.com/Preregistrations/gdispersal_manuscript.html), but, thanks to your comment, we realized this wasn't clear, therefore we added the citation. We also changed our predictions to make the expected effect clearer, in particular that we expect more dispersal at the edge. However, given that we know that males disperse in the middle of the range expansion, we might only see an increase in dispersal at the edge for females:
Q2 > Prediction 2: "a higher proportion of individuals, particularly females, which is the sex that appears to be philopatric in the middle of the range expansion [@sevchik2019dispersal], disperse in a more recently established population"
COMMENT 43 (CN15): H2 > P2 "If a change in dispersal behavior is facilitating the expansion, then we predict more dispersal at the edge: a higher proportion of individuals disperse in a more recently established population and, accordingly, fewer individuals are closely related to each other" This appears to be true in many species, but it may be necessary but not sufficient to colonize new areas. Innovation may be needed in addition to increased dispersal at distribution edges.
RESPONSE 43: We now made it clearer throughout the text that we are not talking about competing alternative hypotheses, but that the range expansion could be associated with all or none of the variables we are measuring (see Response 4). For example, we might find that the individuals in the edge population show higher levels of innovation as well as being more likely to have dispersed and we cannot and do not intend to tease these apart. We made sure to clarify this as follows:
Q2 > P2: "We predict more dispersal at the edge: a higher proportion of individuals, particularly females, which is the sex that appears to be philopatric in the middle of the range expansion [@sevchik2019dispersal], disperse in a more recently established population and, accordingly, fewer individuals are closely related to each other. This would support the hypothesis that changes in dispersal behavior are involved in the great-tailed grackle's geographic range expansion."
COMMENT 44 (CN16): H2 > P2 alt 1 "If the original dispersal behavior was already well adapted to facilitate a range expansion, we predict that the proportion of individuals dispersing is not related to when the population established at a particular site and, accordingly, the average relatedness is similar across populations." This explains that relatedness measures are collected. However do you have evidence that your markers for quantifying relatedness (microsat? I got that it is SNP later on) will be variable enough to detect such limited changes in relatedness? Would another behavioural test perhaps be a better estimate of dispersal (perhaps propensity to leave a cage for another in large field enclosures?)?
RESPONSE 44: Thanks to your comment, we realized this wasn't clear. We have evidence that these markers work for quantifying relatedness because we have already conducted a dispersal study on part of the Arizona great-tailed grackle population (Sevchik et al. 2019 http://corinalogan.com/Preregistrations/gdispersal_manuscript.html). We now explain this in better detail in the new Introduction:
Introduction: "To determine whether females and/or males move away from the location they hatched, we will assess whether their average relatedness (calculated using single nucleotide polymorphisms, SNPs) is lower than what we would expect if individuals move randomly [@sevchik2019dispersal]."
COMMENT 45 (CN17): Table 1: "The number of generations at a site is based on a generation length of 5.6 years for this species (@GTGRbirdlife2018) and on the first year in which this species was reported to breed at the location" At what age do they start to breed ?
RESPONSE 45: They start breeding at age 1. We added this to:
Table 1 legend: "(note: this species starts breeding at age 1)"
COMMENT 46 (CN18): Table 1: Nice contrasted population sites
RESPONSE 46: Thank you!
COMMENT 47 (CN19): H3 > P4 "Over the past few decades, GTGR has increased the habitat breadth that they can occupy, whereas BTGR continues to use the same limited habitat types." Re: Habitat breadth: Which are ?
RESPONSE 47: We now include known habitat differences between these two species in the new Introduction, which appear to be related to suitable nesting habitat - thank you for pointing out that we did not include this information! This is what we added:
Introduction: "Detailed reports (@selander1961analysis, @wehtje2003range) on the breeding ecology of these two species indicate that range expansion in boat- but not great-tailed grackles may be constrained by the availability of suitable nesting sites. Boat-tailed grackles nest primarily in coastal marshes, whereas great-tailed grackles nest in a variety of locations (e.g., palm trees, bamboo stalks, riparian vegetation, pines, oaks). However, this apparent difference in habitat breadth has yet to be rigorously quantified."
COMMENT 48 (CN20): H3 > P5 "Some inherent trait allows GTGR to expand even though both species have unused habitat available to them." It would be relevant to quantify behavioural traits in the sister species as well.
RESPONSE 48: We agree and we have future plans to do a behavioral comparison with BTGR, however it is beyond the scope of our current funding period.
COMMENT 49 (CN21): Figure 3 "Comparing the availability of suitable habitat between great-tailed grackles (GTGR), which are rapidly expanding their geographic range, and boat-tailed grackles (BTGR), which are not" They certainly have different habitat requirements given that their distribution ranges do not overlap. It will be hard to make a useful comparison between the two species without quantifying behavioural traits in the sister, not expanding, species.
RESPONSE 49: Please see our Response 48. The ranges of the two species do overlap in Texas, Louisiana, Mississippi, and Alabama (Selander & Giller 1961, eBird.org). Additionally, because our hypotheses about behavior and habitat are not mutually exclusive, we can still determine whether habitat changes play a role in the boat-tailed grackle's lack of a rapid range expansion.
COMMENT 50 (CN22): Methods > Planned sample "Great-tailed grackles are caught in the wild in Woodland, California and at a site to be determined in Central America" Focus on juveniles (as suggested above)?
RESPONSE 50: Please see our Response 40.
COMMENT 51 (CN23): Methods > Planned sample: "We catch grackles with a variety of methods (e.g., walk-in traps, mist nets, bow nets), some of which decrease the likelihood of a selection bias for exploratory and bold individuals because grackles cannot see the traps (i.e., mist nets)" good
COMMENT 52 (CN24): Methods > Data collection stopping rule: "We will stop collecting data on wild-caught grackles in H1 and H2 (data for H3 are collected from the literature)" This is very surprising: what type of data? Please specify.
RESPONSE 52: We now clarified this in the new Introduction:
"Secondly, we aim to investigate whether habitat availability, not necessarily inherent species differences, explains why great-tailed grackles are able to much more rapidly expand their range than their closest relative, boat-tailed grackles (Q. major) [@post1996boat; @wehtje2003range]. Detailed reports on the breeding ecology of these two species indicate that range expansion in boat- but not great-tailed grackles may be constrained by the availability of suitable nesting sites [@selander1961analysis; @wehtje2003range]. Boat-tailed grackles nest primarily in coastal marshes, whereas great-tailed grackles nest in a variety of locations (e.g., palm trees, bamboo stalks, riparian vegetation, pines, oaks). However, this apparent difference in habitat breadth has yet to be rigorously quantified. Great-tailed grackles inhabit a wide variety of habitats (but not forests) at a variety of elevations (0-2134m), while remaining near water bodies, while boat-tailed grackles exist mainly in coastal areas [@selander1961analysis]. Both species have similar foraging habits: they are generalists and forage in a variety of substrates on a variety of different food items [@selander1961analysis]. We will use ecological niche modeling to examine temporal habitat changes over the past few decades using observation data for both grackle species from existing citizen science databases. We will compare this data with existing data on a variety of habitat variables. We identified suitable habitat variables from @selander1961analysis, @johnson2001great, and @post1996boat (e.g., types of suitable land cover including marine coastal, wetlands, arable land, grassland, mangrove, urban), and we added additional variables relevant to our hypotheses (e.g., distance to nearest uninhabited suitable habitat patch to the north, presence/absence of water in the area). A suitable habitat map will be generated across the Americas using ecological niche models. This will allow us to determine whether the range of great-tailed grackles, but not boat-tailed grackles, might have increased because their habitat suitability and connectivity (which combined determines whether habitat is available) has increased, or whether great-tailed grackles now occupy a larger proportion of habitat that was previously available."
COMMENT 53 (CN25): Methods > Protocols and open materials > Suitable habitat: "We identified suitable habitat variables from Selander and Giller (1961), Johnson and Peer (2001), and Post et al. (1996), and we added additional variables relevant to our hypotheses. A suitable habitat map will be generated across the Americas using GIS. " This is central to explain because it is not straightforward to see the relevance and feasability
RESPONSE 53: Please see the new Introduction where we clarified this.
COMMENT 54 (CN26): Analysis plan: This seems very well done but I have not read with total attention
RESPONSE 54: Thank you very much! It was our first time fully implementing what we have been learning from Richard McElreath's Statistical Rethinking book and course. We're really proud that we were able to develop these models because they are much better suited to our questions than standard analyses.
COMMENT 55 (CN27): Analysis Plan > P3 > Explanatory variables 1-6. Here are the variables for habitat comparison. It would be useful to specify what range of values for these variables are relevant for each of the two species (is it known?)
RESPONSE 55: The range of values for these two species is not known, and it is one of the aims of our investigation to determine this. Please see our Response 29 for many more details on each explanatory variable.
To help clarify, we added to the Analysis Plan > Q3 which analyses we will run to answer our questions:
Analysis 1 (P3: different habitats): does the range of variables that characterize suitable habitat for GTGR differ from that of BTGR? We will fit species distribution models for both species in 2018 to identify the variables that characterize suitable habitat. We will examine the raw distributions of these variables from known grackle occurrence points or extract information on how the predicted probability of grackle presence changes across the ranges for each habitat variable. The habitat variables for each species will be visualized in a figure that shows the ranges of each variable and how much the ranges of the variables overlap between the two species or not.
Analysis 2 (P3: habitat suitability): has the available habitat for both species increased over time? We will fit species distribution models for both species in 1970 and in 2018 and determine for each variable, the range in which grackles are present (we define this as the habitat suitability for each species). Then we will take these variables and identify which locations in the Americas fall within the grackle-suitable ranges in 1970 and in 2018. We will then be able to compare the maps (1970 and 2018) to determine whether the amount of suitable habitat has increased or decreased.
If we are able to find data for these variables before 1970 across the Americas, we will additionally run models using the oldest available data to estimate the range of suitable habitat earlier in their range expansion.
Analysis 3 (P3: habitat connectivity): has the habitat connectivity for both species increased over time? If the connectivity distances are smaller in 2018, this will indicate that habitat connectivity has increased over time. We will calculate the least cost path from the northern edge to the nearest suitable habitat patch. To compare the distances between 1970 and 2018, and between the two species, we will run two models where both have the distance as the response variable and a random effect of location to match the location points over time. The explanatory variable for model 1 will be the year (1970, 2018), and for model 2 it will be the species (GTGR, BTGR).
If we are able to find data for these variables before 1970 across the Americas, we will additionally run models using the oldest available data to estimate the range of connected habitat earlier in their range expansion.
Analysis 4 (P4: habitat breadth): has the habitat breadth of both species changed over time? We will count the number of different land cover categories each species is or was present in for 1970 and 2018. To determine whether this influences their distributions, we will calculate how much area in the Americas is in each land cover category, which would then indicate how much habitat is suitable (based solely on land cover) for each species.
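For readers unfamiliar with the workflow sketched in Analyses 1-2, the following toy example shows the general fit-predict-compare shape of a species distribution model. It is not the authors' planned ecological niche modeling pipeline; the environmental layers, presence points, and suitability threshold below are all fabricated purely for illustration.

```python
# Toy species-distribution-model sketch for the habitat analyses above.
# Real analyses would use curated occurrence data, WorldClim/USGS layers, and a
# dedicated ENM tool; everything below is fabricated to show the workflow shape.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fabricated environmental layers for 1000 grid cells:
# column 0 = mean annual temperature (C), column 1 = distance to water (km)
env = np.column_stack([rng.uniform(0, 35, 1000), rng.uniform(0, 30, 1000)])

# Fabricated presence (1) / background (0) points: presence is more likely
# where it is warm and close to water, just to give the model a signal
logit = 0.3 * (env[:, 0] - 15) - 0.4 * (env[:, 1] - 10)
presence = (rng.random(1000) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(env, presence)   # Analysis 1: fit a (toy) SDM
suitability = model.predict_proba(env)[:, 1]      # predicted habitat suitability per cell

suitable_cells = int((suitability > 0.5).sum())   # Analysis 2: area above a threshold
print(f"{suitable_cells} of {len(env)} cells classed as suitable")
# Refitting with 1970 vs ~2018 layers and differencing the suitable-cell counts
# (or maps) gives the temporal comparison described in Analysis 2.
```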
Functional identities of degree 2 in CSL algebras
D. Han
School of Mathematics and Information Science, Henan Polytechnic University, Jiaozuo, 454000, P.R. China.
Let $\mathscr{L}$ be a commutative subspace lattice generated by finitely many commuting independent nests on a complex separable Hilbert space $\mathbf{H}$ with ${\rm dim}\hspace{2pt}\mathbf{H}\geq 3$, let ${\rm Alg}\mathscr{L}$ be the CSL algebra associated with $\mathscr{L}$, and let $\mathscr{M}$ be an algebra containing ${\rm Alg}\mathscr{L}$. This article is aimed at describing the form of additive mappings $F_1, F_2, G_1, G_2\colon {\rm Alg}\mathscr{L}\longrightarrow \mathscr{M}$ satisfying the functional identity $F_1(X)Y+F_2(Y)X+XG_2(Y)+YG_1(X)=0$ for all $X, Y\in {\rm Alg}\mathscr{L}$. As an application, generalized inner biderivations and commuting additive mappings are determined.
Functional identity
CSL algebra
generalized inner biderivation
commuting mapping
47-XX Operator theory
Revise Date: 05 July 2016
Accept Date: 12 July 2016
Han, D. (2017). Functional identities of degree 2 in CSL algebras. Bulletin of the Iranian Mathematical Society, 43(6), 1601-1619.
The spin Drude weight of the spin-1/2 $XXZ$ chain: An analytic finite size study
by Andreas Klümper, Kazumitsu Sakai
As Contributors: Kazumitsu Sakai
Submitted by: Sakai, Kazumitsu
Domain(s): Theoretical
Subject area: Mathematical Physics
The Drude weight for the spin transport of the spin-1/2 $XXZ$ Heisenberg chain in the critical regime is evaluated exactly for finite temperatures. We combine the thermodynamic Bethe ansatz with the functional relations of type $Y$-system satisfied by the row-to-row transfer matrices. This makes it possible to evaluate the asymptotic behavior of the finite temperature spin Drude weight with respect to the system size. As a result, the Drude weight converges to the results obtained by Zotos (Phys. Rev. Lett. 82, 1764 (1999)), however with very slow convergence upon increase of the system size. This strong size dependence may explain that extrapolations from various numerical approaches yield conflicting results.
Editor-in-charge assigned
Submission 1904.11253v1 on 26 April 2019
Strengths
1- New non-trivial results on the finite-size scaling of a thermodynamic quantity.
Weaknesses
1- The presentation is rather technical and unlikely to be suitable for non-experts.
2- Some important details appear to be missing.
Report
- In spite of some significant progress on the subject over the past few years, the introductory paragraphs, in particular the 2nd and 3rd paragraphs, which largely mention 20-year-old results, do not seem to be particularly appreciative of this. For example, "Nevertheless, by utilizing the integrability, several transport coefficients of the model have been calculated within linear response theory." is an obvious misrepresentation of the present understanding of quantum transport properties in integrable quantum systems, including the Heisenberg model considered here.
I would like to draw the authors' attention to refs. [22,28] and additionally to [Ilievski and De Nardis, PRB 96(8), p.081118 (2017)]. There, a powerful framework of hydrodynamics, developed in [25,26], has been used to obtain general and universal (that is, state- and quantity-independent) closed-form expressions for the Drude weights valid for a broad range of integrable systems. Furthermore, there are even several exact results regarding spin transport on sub-ballistic scales. I believe that, mostly for the reader's benefit, these developments deserve to be acknowledged in the introductory paragraphs in some transparent manner.
- I do not think referring to the "optimal lower bound" is an optimal choice of words. In fact, the bound of ref. [19] is now known to be the exact value of the spin Drude weight, see ref. [22]. In the infinite-temperature limit, the expression of [19] has been re-obtained explicitly in [Collura et al., PRB 97(8) p.081111 (2018)].
In the same paragraph: while [22,27] seem to be relevant references, [25,26,29] do not concern the Drude weights.
- The meaning of the so-called "extended" Thermodynamic Bethe Ansatz is a bit obscure, I must say; the Drude weights are an equilibrium property after all, cf. [22,28]. Let me also mention [Ilievski and De Nardis, PRB 96(8), p.081118 (2017)], which shows equivalences between various approaches (i.e. the hydrodynamic formulae, the Mazur equality, and Kohn's twisting formula employed by Zotos).
- Given that formula (11) is a well-known result, it may be worthwhile citing also ref. [5] and e.g. [Castella et al., PRL 74, 972 (1995)].
- "Instead of solving the BAE (3.8) directly, we introduce for convenience a more general family of transfer matrices (T-functions)..."
A small remark just to prevent confusion: the T-functions do not refer to transfer matrices but instead to their eigenvalues.
- I think the definition of $s^{\prime}$ (cf. Eq.(4.6)) is missing.
- The introduction of quantity $\psi$ in Eq. (4.6) seems rather ad-hoc at this point.
Frankly, even afterwards in Sec. 6.1. where $\psi$ appears to have an important role, its purpose is not particularly transparent. Hopefully the authors can expand on this.
- Missing hyperlink to the reference to Appendix A.
- Parameters $\zeta^k_j$, as introduced in Eq.(4.1), take in general complex values. But apparently here the so-called hole rapidities $\zeta^k_j$ are assumed to lie on the real axis.
Is this an assumption, or does it instead follow from some formal property of the T-system?
- "This equation exactly agrees with the one derived by the string hypothesis [33, 34]. However, we emphasize that our formula does not rely on the string hypothesis, but only on the simple analytical assumption explained previously."
Well, doesn't this suggest that the analytical assumption is simply equivalent to the string "hypothesis"?
- How does one deal with the modified integration contour, defined in Eq.(4.8), in the thermodynamic limit when the $\zeta^k_j$ condense in some way on the real axis? Here, in particular, I mean the second equality in Eq. (5.10), which looks ambiguous to me. Since in my view this is the key step in the context of finite-volume analysis, I hope the authors can give a precise explanation here.
- On a related note, in Eq.(5.9) the $L\to \infty$ limit has already been taken, while afterwards Eq. (5.12) explicitly depends on system size L. How should one therefore interpret the contour prescription?
- I noticed that Eq.(5.9), which eventually leads to Eq. (5.12), is only applicable in the case of the thermal (Gibbs) distribution of Bethe holes. A natural question here is whether there exists a general prescription which would permit evaluating the thermodynamic Kohn formula for generic equilibrium ensembles?
- Missing hyperlink in the reference to Appendix B.
- In the spirit of the above remarks, Eq.(6.2) involves both the thermodynamic Y-functions and the length L explicitly. The analysis there appears to crucially rely on Cauchy's theorem, which requires $Y^{th}_j$ to have isolated poles in the physical domain. But since according to Eq.(5.1) the $\zeta^k_j$ approach each other as $1/L$, I presume that the poles of $Y^{th}_j$ are no longer isolated after taking the limit. How exactly does the logic work?
- "As shown in the previous subsection, the Drude weight D for arbitrary system size L is given by (6.6), and converges in the thermodynamic limit $L->\infty$ to the result derived by Zotos (6.18)."
How come the formula (6.6) is exact for any L, given that it needs the thermal distribution of holes as an input?
- "For finite size L we use discrete distributions that approximate the continuous densities as closely as possible."
Could the authors clarify how this step is implemented in practice? In my (possibly too naive) understanding, a simple sampling of thermal hole densities will typically violate the "on-shell" condition (4.3). I can imagine however that any explicit reference to quantization equations will render the whole analysis immediately infeasible. I hope that the authors can expand on this since this step appears to be quite vital to the main result of the paper.
- "the Drude weight exhibits a fractal dependence on $\Delta$"
I can recommend to include here also ref. [22], which first establishes saturation of the "fractal bound" of ref. [19] by invoking the complete quasi-particle spectrum.
- "For more quantitative and rigorous analysis of this intriguing behavior, a formula describing the Drude weight for any irrational numbers is highly desired."
I wonder why the authors chose to limit themselves to the restricted discrete set of simple roots of unity, given that the form of the Y-system for generic points is well known.
Requested changes
1- Please address the remarks and provide additional clarifications.
validity: good
significance: ok
originality: ok
clarity: ok
formatting: excellent
grammar: good
The supremum
A non-empty set $S \subseteq \mathbb{R}$ is bounded from above if there exists $M \in \mathbb{R}$ such that
$$x \le M, \quad \forall x \in S.$$
The number $M$ is called an upper bound of $S$.
If a set is bounded from above, then it has infinitely many upper bounds, because every number greater than an upper bound is also an upper bound. Among all the upper bounds, we are interested in the smallest.
Let $S \subseteq \mathbb{R}$ be bounded from above. A real number $L$ is called the supremum of the set $S$ if the following is valid:
(i) $L$ is an upper bound of $S$:
$$x \le L, \quad \forall x \in S,$$
(ii) $L$ is the least upper bound:
$$(\forall \epsilon > 0) (\exists x \in S)(L - \epsilon < x).$$
The supremum of $S$ we denote as
$$L = \sup S$$
$$L = \sup_{x \in S} \{x\}.$$
If $L \in S$, then we say that $L$ is a maximum of $S$ and we write
$$L= \max S$$
$$ L= \max_{x \in S}\{x\}.$$
If the set $S$ is not bounded from above, then we write $\sup S = + \infty$.
Proposition 1. If the number $A \in \mathbb{R}$ is an upper bound for a set $S$, then $\sup S \le A$.
The question is: does every non-empty set bounded from above have a supremum? Consider the following example.
Example 1. Determine a supremum of the following set
$$ S = \{x \in \mathbb{Q}| x^2 < 2 \} \subseteq \mathbb{Q}.$$
The set $S$ is a subset of the set of rational numbers. Viewed as a subset of $\mathbb{R}$, its least upper bound would be $\sqrt{2}$. However, the set $S$ does not have a supremum in $\mathbb{Q}$, because $\sqrt{2}$ is not a rational number. The example shows that in the set $\mathbb{Q}$ there are sets bounded from above that do not have a supremum, which is not the case in the set $\mathbb{R}$.
In a set of real numbers the completeness axiom is valid:
Every non-empty set of real numbers which is bounded from above has a supremum.
It is an axiom that distinguishes a set of real numbers from a set of rational numbers.
The infimum
In a similar way we define terms related to sets which are bounded from below.
A non-empty set $S \subseteq \mathbb{R}$ is bounded from below if there exists $m \in \mathbb{R}$ such that
$$m \le x, \quad \forall x \in S.$$
The number $m$ is called a lower bound of $S$.
Let $S \subseteq \mathbb{R}$ be bounded from below. A real number $L$ is called the infimum of the set $S$ if the following is valid:
(i) $L$ is a lower bound:
$$L \le x, \quad \forall x \in S,$$
(ii) $L$ is the greatest lower bound:
$$(\forall \epsilon > 0) ( \exists x \in S) ( x < L + \epsilon).$$
The infimum of $S$ we denote as
$$L = \inf S$$
$$L = \inf_{x \in S} \{x\}.$$
If $ L \in S$, then we say that $L$ is the minimum and we write
$$L= \min S$$
$$ L= \min_{x \in S}\{x\}.$$
If the set $S$ is not bounded from below, then we write $\inf S = -\infty$.
The existence of an infimum is given by the following theorem.
Theorem. Every non-empty set of real numbers which is bounded from below has an infimum.
Proposition 2. Let $a , b \in \mathbb{R}$ such that $a<b$. Then
(i) $\sup \langle a, b \rangle = \sup \langle a, b] = \sup [a, b \rangle = \sup [a, b] = b$,
(ii) $ \sup \langle a, + \infty \rangle = \sup [a, + \infty \rangle = + \infty$,
(iii) $\inf \langle a, b \rangle = \inf \langle a, b] = \inf [a, b \rangle = \inf [a, b] = a$,
(iv) $\inf \langle -\infty, a \rangle = \inf \langle -\infty, a ] = -\infty$,
(v) $\sup \langle -\infty, a \rangle = \sup \langle -\infty, a] = \inf \langle a, + \infty \rangle = \inf [a, + \infty \rangle = a$.
Example 2. Determine $\sup S$, $\inf S$, $\max S$ and $\min S$ if
$$ S = \{ x \in \mathbb{R} | \frac{1}{x-1} > 2 \}.$$
Solution. First, we have to determine which $x$ satisfy the given inequality:
$$ \frac{1}{x-1} > 2$$
$$\frac{1}{x-1} - 2 > 0$$
$$\frac{3 - 2x}{x-1} >0$$
The fraction on the left-hand side is greater than zero if the numerator and denominator are both positive or both negative. We distinguish two cases:
1.) $3-2x >0$ and $x-1 > 0$, that is, $ x < \frac{3}{2}$ and $ x > 1$. It follows $ x \in \langle 1, \frac{3}{2} \rangle$.
2.) $3 - 2x < 0$ and $ x-1 < 0$, that is, $x > \frac{3}{2}$ and $ x < 1$. It follows $ x \in \emptyset$.
$\Longrightarrow S = \langle 1, \frac{3}{2} \rangle$
From the proposition 2. follows that $\sup S = \frac{3}{2}$ and $\inf S = 1$.
The minimum and maximum do not exist (the interval is open, so its endpoints do not belong to $S$).
Example 3. Determine $\sup S$, $\inf S$, $\max S$ and $\min S$ if
$$ S = \{ \frac{x}{x+1}| x \in \mathbb{N} \}.$$
Solution. First, we will write out the first few terms of $S$:
$$S= \{ \frac{1}{2}, \frac{2}{3}, \frac{3}{4}, \frac{4}{5}, \cdots \}.$$
We can assume that the smallest term is $\frac{1}{2}$ and there is no largest term; however, we can see that all terms do not exceed $1$. That is, we assume $\inf S = \min S = \frac{1}{2}$, $\sup S = 1$, and that $\max S$ does not exist. Let's prove it!
To prove that $1$ is the supremum of $S$, we must first show that $1$ is an upper bound:
$$\frac{x}{x+1} < 1$$
$$\Longleftrightarrow x < x +1$$
$$ \Longleftrightarrow 0 < 1,$$
which is always valid. Therefore, $1$ is an upper bound. Now we must show that $1$ is the least upper bound. Let's take some $\epsilon < 1$ and show that there exists $x_0 \in \mathbb{N}$ such that
$$\frac{x_0}{x_0 +1} > \epsilon$$
$$\Longleftrightarrow x_0 > \epsilon (x_0 + 1)$$
$$\Longleftrightarrow x_0 ( 1- \epsilon) > \epsilon$$
$$\Longleftrightarrow x_0 > \frac{\epsilon}{1-\epsilon},$$
and such $x_0$ surely exists. Therefore, $\sup S = 1$.
However, $1$ is not the maximum. Namely, if $1 \in S$, then $\exists x_1 \in \mathbb{N}$ such that
$$\frac{x_1}{x_1 + 1} = 1$$
$$\Longleftrightarrow x_1 = x_1 +1$$
$$ \Longleftrightarrow 0=1,$$
which is a contradiction. It follows that the maximum of $S$ does not exist.
Now we will prove that $\min S = \frac{1}{2}$.
Since $\frac{1}{2} \in S$, it is enough to show that $\frac{1}{2}$ is a lower bound of $S$. According to this, we have
$$ \frac{x}{x+1} \ge \frac{1}{2}$$
$$ \Longleftrightarrow 2x \ge x+1$$
$$\Longleftrightarrow x \ge 1,$$
which is valid for all $x \in \mathbb{N}$. Therefore, $\inf S = \min S = \frac{1}{2}$.
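As a quick numerical illustration (not a proof) of this last example, one can tabulate the first terms of $S$ and observe the claimed behavior:

```python
# Numerical illustration of Example 3: S = { x/(x+1) : x in N }
terms = [x / (x + 1) for x in range(1, 10001)]
print(min(terms))                   # 0.5   -> min S = inf S = 1/2
print(max(terms))                   # 0.9999...; close to, but below, sup S = 1
print(all(t < 1 for t in terms))    # True: 1 is an upper bound that is never attained
```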
nLab > Latest Changes: Baire space
Comment time: Jun 24th 2011
On the page Baire space, I see a comment that the Baire space of irrationals is "not a very important example". I think it actually is a pretty important example; for people working with Polish spaces and descriptive set theory, it is often used as an archetypal example. At least I don't understand what the purpose of this parenthetical remark would be.
(edited Jun 27th 2011)
It's a very important space to some people, which is why we (I) write about it (at its own page). But is it important as a Baire space? (For example, if every quotient space of a Baire space were Baire, then this would be important, and it would follow that every Polish space is Baire.)
In other words, the Baire category theorem for complete metric spaces is important, and the Baire category theorem for locally compact Hausdorff spaces is important, but is the Baire category theorem for $J$ important? I don't think so, but maybe I'm wrong about this.
Another issue is that this space is an example of a complete metric space, so why bring it up specifically in the list of examples? Only because of the coincidence of the names. (And it is a coincidence, as far as I can tell, although I would be delighted to learn otherwise.)
I still don't understand the need for the parenthetical comment, because it's no less important than any other example (and it's plenty important in other contexts). It just seems like the comment might lead to some confusion in people's minds. I think it's enough just to disambiguate the terminology.
Maybe I'm making too big a deal about this, but I went ahead and reworded it to reflect what I think what was meant. (But I'm not as fussed about it as I was earlier.)
OK. I promoted it again to an actual example, which I assume you're happy with.
In order to un-gray links, I gave Baire category theorem an entry, but at the moment it does nothing but point to Wikipedia.
The lead-in sentence of the Idea-section at Baire space is really not helpful, it should be changed. Instead I added pointer to "Baire category theorem" in the Examples-section, where the theorem was stated without naming it.
Comment time: Oct 6th 2019
Author: Joshua Meyers
"A dense $G_\delta$ set (i.e. a countable intersection of dense opens) in a Baire space is a Baire space under the subspace topology. See Dan Ma's blog, specifically Theorem 3 here."
Dan Ma's blog only proves that dense $G^\delta$ sets in Baire spaces are Baire, not that all $G^\delta$ sets in Baire spaces are Baire. Thus either "dense" should be added to the page or it should cite something else. Do we have a reason to believe that all $G^\delta$ sets in Baire space are Baire?
I quite fail to see what is thought to be wrong here, since the sentence as quoted does say "dense $G_\delta$ set". Assuming the quotation is correct, I don't see that "dense" has to be added again. (Also, please note that the convention has $\delta$ as a subscript, not as a superscript.)
I'll check the entry to see if anything is wrong. I don't immediately have a reference to hand that says anything about general $G_\delta$ sets in a general Baire space, although I do have one about general $G_\delta$ sets in a locally compact Hausdorff space. I had a vague memory that it held true more generally, but that would need corroboration.
You're right, I don't know how I missed that. Maybe it's because I saw that above (in the second bullet point of "Examples") it talks about a $G_\delta$ set in a locally compact Hausdorff space being Baire. Incidentally, what is your reference for that?
It's in Munkres's textbook, in the part about Baire spaces. Perhaps in exercises.
I'm still searching for an example those shows that the density assumption is important in the general case.
I do think the phrasing "dense $G_\delta$ set (i.e. a countable intersection of dense opens)" was a bit confusing, since "i.e." means "in other words", but here it applies only to the immediately preceding words "$G_\delta$ set" rather than the entire phrase "dense $G_\delta$ set". So I changed it to "dense $G_\delta$ set (i.e. a countable intersection of dense opens that is itself dense)".
diff, v13, current
(edited Oct 7th 2019)
I don't see what's wrong with it. In a Baire space $X$, a subset $A$ is a dense $G_\delta$ set iff it is a countable intersection of dense opens. The only possible quibble (I think a very minor one) would be the placement in the sentence of the assumption that the ambient space is Baire. In fact, I don't like how the sentence currently reads, and so I'll just make that adjustment.
It also seemed strange to me to say the Baire category theorem, followed by two separate bulleted assertions. The classical Baire category theorem was about completely metrizable spaces, so I made the entry say that.
Ah, my bad, I missed that the opens were also dense. Sorry for the noise!
With regard to a $G_\delta$ in a Baire space which is not itself a Baire space: apparently one "famous" example of a closed subspace $A$ of a Baire space $X$ that is not itself Baire is where $X$ is the space $\mathbb{R}^2 \setminus ((\mathbb{R} \setminus \mathbb{Q}) \times \{0\})$, and $A$ is the closed subspace $\mathbb{Q} \times \{0\}$. Don't ask me why $X$ is Baire, because I haven't figured that out yet. But, it's not hard to see that the same $A$ is a $G_\delta$ (and if I haven't made a mistake, in any separable metric space, every closed set is a $G_\delta$). This answers the question at the end of #7 ("no, we have no reason to believe that, because it's false").
I would add this to the entry, except that as I said, I don't yet know how to show $X$ is Baire.
What is the third cosmic velocity?
I have been studying the gravitation chapter and there I found one term:
Third cosmic velocity, which is also known as interstellar speed. So what is it? What does it really tell us? I tried to gather some information and I guess it is something about velocity at infinity, but I am still not sure what it is.
newtonian-gravity definition speed rocket-science escape-velocity
Shashank
Cosmic Velocity has nothing to do with infinity.
A cosmic velocity is the minimum speed directed in the necessary direction to escape the gravitational attraction of a cosmic body such as a planet, a star, or a galaxy.
Here is a paper which a student wrote about the four cosmic velocities. I don't know if his exact classifications are in common usage, but here it is for what it's worth:
The cosmic velocities (Krzysztof Mastyna) (NB: PDF)
Here is a much more reliable analysis of the Third Cosmic Velocity:
Controversies about the value of the third cosmic velocity (NB: PDF)
You might also google Escape Velocity.
Ernie
It's probably too late to change custom & convention now, but technically it is a speed rather than a velocity, since it refers to the magnitude of a velocity vector. The third cosmic speed $v_3\approx 16.7~\mathrm{km/s}$ is the initial escape speed needed for an un-powered projectile launched from a (non-rotating but orbiting) Earth to leave the gravity well of the solar system.
The simplest derivation is probably to consider the specific kinetic energy of the projectile relative to an instantaneous inertial frame for Earth. If Earth had been massless, then the specific kinetic escape energy would have been $$ \frac{1}{2}(v_{2,S} -v_{1,S})^2 ,\tag{1}$$ where $$ v_{1,S} ~:=~\sqrt{\frac{GM_S}{R_{SE}}}\tag{2}$$ is the orbital speed of Earth, and where $$ v_{2,S}~:=~\sqrt{2} v_{1,S} \tag{3}$$ is the escape speed from the Sun's gravity, starting from Earth's position. (Both speeds $v_{1,S}$ and $v_{2,S}$ are measured relative to the Heliocentric reference frame.)
Taking the Earth's gravitation into account, with escape speed (=the second cosmic speed) $$ v_2~:=~\sqrt{\frac{2GM_E}{R_{E}}}, \tag{4}$$ the specific kinetic escape energy becomes in total $$ \frac{1}{2}v_3^2~=~\frac{1}{2}v_2^2+\frac{1}{2}(v_{2,S} -v_{1,S})^2,\tag{5} $$ from which we can deduce the third cosmic speed $v_3$. Above we have used various simplifying assumptions. See Refs. 1 & 2 for more details.
S. Halas & A. Pacek, Controversies about the value of the third cosmic velocity, Ann. Phys. Sec. AAA 68 (2013) 63. (Hat tip: Ernie.)
http://hirophysics.com/Study/first-second-third-cosmic-velocities-blackholes.pdf
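For readers who want a quick numerical check of Eq. (5), the sketch below plugs rounded textbook constants into the formulas above; it is an illustration added here, not part of the original answer.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
M_earth = 5.972e24     # kg
R_SE = 1.496e11        # Sun-Earth distance, m
R_E = 6.371e6          # Earth radius, m

v1_S = math.sqrt(G * M_sun / R_SE)        # Earth's orbital speed, Eq. (2), ~29.8 km/s
v2_S = math.sqrt(2) * v1_S                # escape speed from the Sun at Earth's orbit, Eq. (3)
v2 = math.sqrt(2 * G * M_earth / R_E)     # escape speed from Earth (second cosmic speed), Eq. (4)

v3 = math.sqrt(v2**2 + (v2_S - v1_S)**2)  # third cosmic speed, Eq. (5)
print(f"v3 = {v3/1000:.1f} km/s")         # prints roughly 16.7 km/s
```

With these rounded constants the result lands at about 16.7 km/s, matching the figure quoted above.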
Qmechanic♦
In physics, escape velocity is the speed at which the sum of an object's kinetic energy and its gravitational potential energy is equal to zero.[nb 1] It is the speed needed to "break free" from the gravitational attraction of a massive body, without further propulsion, i.e., without spending more fuel.
For a spherically symmetric massive body such as a (non-rotating) star or planet, the escape velocity at a given distance is calculated by the formula[1]
$$v_e = \sqrt{\frac{2GM}{r}}$$
The third cosmic velocity is the velocity required to throw an object out of the solar system from the surface of the Earth to infinity, considering the gravitational effects of only the Sun and the Earth and ignoring the gravitational effects of the other planets.
Is Geometric Algebra isomorphic to Tensor Algebra?
Is geometric algebra (Clifford algebra) isomorphic to tensor algebra? If so, how then would one relate a unique 2-vector (this is what I'm going to call a multivector that is the sum of a scalar, vector, and bivector) to every 2nd rank tensor?
Edit by the OP, converted from "answer"
Okay. Well I'm still curious if there's a way to represent any 2nd-rank tensor by a bivector, vector, and scalar. Or in particular, can any $3 \times 3$ matrix be represented by a 2-vector in 3D.
It seems to me that they can't because I would guess the matrix representation of a bivector (grade 2 element of a 2-vector) would be exactly the same as the $3 \times 3$ matrix representation of the cross product (i.e. $[a \times b]_{ij} = a_j b_i - a_i b_j$) which only uniquely identifies 3 components.
I would also assume that the scalar part of the 2-vector would be represented by a scalar times the $3 \times 3$ identity matrix. This would fill in 3 numbers, but really only uniquely gives 1 component.
I don't know how to represent the vector component of the 2-vector as a $3 \times 3$ matrix but I don't see how it could identity the remaining 5 components by itself.
Am I right then in assuming that there is a canonical matrix representation of a general 2-vector, but that there are matrices that cannot be represented by any 2-vector?
abstract-algebra clifford-algebras geometric-algebras
Every element of a geometric algebra can be identified with a tensor, but not every tensor can be identified with an element of a geometric algebra.
It's helpful to consider the vector derivative of a linear operator, a map from vectors to vectors. Call such a map $\underline A$. The vector derivative is then
$$\partial_a \underline A(a) = \partial_a \cdot \underline A(a) + \partial_a \wedge \underline A(a) = T + B$$
where $T$ is a scalar, the trace, and $B$ is a bivector. The linear map $\underline A$ can then be written as
$$\underline A(a) = \frac{T}{n} a + \frac{1}{2} a \cdot B + \underline S(a)$$
where $\underline S$ is some traceless, symmetric map. While the scalar can be turned into a multiple of the identity, in $T \underline I/n$, and the bivector can be directly turned into an antisymmetric map in $a \cdot B$, the map $\underline S$ is very much part of $\underline A$, yet not representable in general through a single algebraic element of the geometric algebra. This is just one example of such an object.
Muphrid
$\begingroup$ I'm not super familiar with the idea of representing a tensor by a multilinear map, but could we say that a tensor is a mapping $T$, such that $T(I \wedge J)=\alpha$ (or possibly $T(I,J)=\alpha$ if $I \wedge J$ doesn't make sense), where $I$ is a pseudoscalar (k-blade) in our tangent space, $J$ is a pseudoscalar (j-blade) in our cotangent space, and $\alpha$ is a scalar? $\endgroup$ – user137731 Jul 12 '14 at 14:52
$\begingroup$ @Bye_World Yeah, I think that wedge wouldn't make sense, but as separate arguments, sure. That said, not every tensor can be extended through outermorphism. The Riemann tensor is a good example, as it fundamentally takes bivector inputs. $\endgroup$ – Muphrid Jul 12 '14 at 15:28
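As a small numerical illustration of the decomposition described in this answer (the matrix and variable names below are arbitrary choices made here, not taken from the thread), one can split a 3×3 matrix into its trace part, antisymmetric part, and traceless symmetric part and count independent components:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])

T = np.trace(A)
trace_part = (T / 3) * np.eye(3)              # scalar part: (T/n) * identity, 1 degree of freedom
antisym_part = (A - A.T) / 2                  # encodes the bivector B, 3 degrees of freedom
sym_traceless = (A + A.T) / 2 - trace_part    # traceless symmetric remainder S, 5 degrees of freedom

# The three pieces sum back to A, but only the first two correspond to
# grade-0 and grade-2 elements of the geometric algebra.
assert np.allclose(trace_part + antisym_part + sym_traceless, A)
print(sym_traceless)
```

The 1 + 3 + 5 = 9 count makes the point: the scalar and bivector account for only 4 of the 9 degrees of freedom of a general 3×3 matrix.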
A tensor (not an abstract tensor in the sense of Penrose) is a multilinear function from the repeated Cartesian product of a vector space to the real numbers, thus the multi-linear function $T(a_1,...,a_r)$ where $a_1$ through $a_r$ are vectors in the same vector space. The rank of this tensor would be $r$. All the standard tensor operations can be defined without regard to component indices. Go to GA and look at bookGA.pdf in "GA Notes". Note that any tensor has a definite rank (number of multi-linear vector arguments), but a multivector can be the sum of different pure grade multivectors. An even multivector (contains only even grade components) can represent a spinor, but a tensor cannot. In that sense a multivector is more general than a tensor. However, pure grade multivectors can only represent completely antisymmetric tensors. In that sense tensors are more general than pure grade multivectors. The rank of a tensor is not restricted by the dimension of the base vector space like the grade of a multivector.
$\begingroup$ In first line, isn't the multilinear function T called a tensorproduct rather than a tensor, while an element $T(a_1,\dots , a_r)=a_1\otimes\cdots\otimes a_r$ is called a tensor? $\endgroup$ – Lehs Jul 2 '16 at 5:22
Tensor algebras are (in general) infinite dimensional, but the most common Clifford algebras (over finite dimensional vector spaces) are finite dimensional, so they aren't isomorphic.
Clifford algebras can be constructed as quotients of tensor algebras, so there is a relationship between them.
rschwieb
Similar question: what's the relationship of tensor and multivector
The short answer is that all multivectors are tensors, but not all tensors are multivectors, so geometric algebra is not and cannot be isomorphic to tensor algebra.
However, we can try to embed geometric algebra inside of tensor algebra. These resources give some guidelines for doing so (in the special case where the vector space is $\mathbb{R}^3$ and the inner product is the "regular" inner product, as opposed to say the Minkowski metric on $\mathbb{R}^4$):
http://www2.ic.uff.br/~laffernandes/teaching/2013.1/topicos_ag/lecture_18%20-%20Tensor%20Representation.pdf
https://www.docdroid.net/uwfvUxE/tensor-representation-of-geometric-algebra.pdf.html
Chill2Macht
Limiting distribution of a Markov Chain
The problem can be found in Markov Chains - dice problem.
You start with five dice. Roll all the dice and put aside those dice that come up 6. Then roll the remaining dice, putting aside those dice that come up 6. And so on. Let $X_n$ be the number of dice that are sixes after n rolls.
Question: Prove that $X_n$ has a limiting distribution
Using the notation and results of Markov Chains - dice problem, we have that the transition matrix, denoted $P$, is diagonalizable. So $P=A D A^{-1}$ with
$$ D = \begin{pmatrix} \left(\frac{5}{6}\right)^{5} & 0 & 0 & 0 & 0 & 0 \\ 0 & \left(\frac{5}{6}\right)^{4} & 0 & 0 & 0 & 0 \\ 0 & 0 & \left(\frac{5}{6}\right)^{3} & 0 & 0 & 0 \\ 0 & 0 & 0 & \left(\frac{5}{6}\right)^{2} & 0 & 0 \\ 0 & 0 & 0 & 0 & \left(\frac{5}{6}\right) & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ \end{pmatrix} $$
It implies that $P^{n}$ converge to $A D^{*} A^{-1}$ with
$$ D^{*} = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ \end{pmatrix} $$
However this is not enough to prove that the Markov Chain has a limiting distribution. We also need to prove that $A D^{*} A^{-1}$ has identical rows to conclude. And I am not sure how to show that.
Another way to prove that the Markov Chain has a limiting distribution would be to use the fact that it is a Doeblin Chain. But I am looking for more elegant solutions.
markov-process
Sara Mun
$\begingroup$ What exactly does the product $AD^{*}A^{-1}$ look like? You can determine this without doing any calculation, because you can use either the definition of matrix multiplication or reasoning about its eigenvalues to establish that it contains a great many zeros. $\endgroup$ – whuber♦ Nov 23 '19 at 22:43
$\begingroup$ For a finite state Markov chain, a limiting distribution exists iff it is irreducible and aperiodic. This is the most "elegant" solution I can think of. $\endgroup$ – Math1000 Dec 19 '19 at 2:21
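As a purely numerical sanity check of the claims above (an illustration, not a proof), one can build the transition matrix for the dice problem and inspect a high power of it; every row converges to the same limiting distribution, concentrated on the absorbing state where all five dice show sixes:

```python
import numpy as np
from math import comb

# Transition matrix: state i = number of dice already set aside as sixes (0..5)
P = np.zeros((6, 6))
for i in range(6):
    for j in range(i, 6):
        k = j - i            # new sixes obtained on this roll
        n = 5 - i            # dice still being rolled
        P[i, j] = comb(n, k) * (1/6)**k * (5/6)**(n - k)

# Raise P to a large power; all rows approach (0, 0, 0, 0, 0, 1)
Pn = np.linalg.matrix_power(P, 200)
print(np.round(Pn, 6))
```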
January 2020 Approximation by trigonometric polynomials in weighted Morrey spaces
Z. Cakir, C. Aykol, D. Soylemez, A. Serbetci
Tbilisi Math. J. 13(1): 123-138 (January 2020). DOI: 10.32513/tbilisi/1585015225
In this paper we investigate the best approximation by trigonometric polynomials in weighted Morrey spaces $\mathcal{M}_{p,\lambda}(I_{0},w)$, where the weight function $w$ is in the Muckenhoupt class $A_{p}(I_{0})$ with $1 < p < \infty$ and $I_{0}=[0, 2\pi]$. We prove the direct and inverse theorems of approximation by trigonometric polynomials in the spaces $\mathcal{\widetilde{M}}_{p,\lambda}(I_{0},w)$, the closure of $C^{\infty}(I_{0})$ in $\mathcal{M}_{p,\lambda}(I_{0},w)$. We give the characterization of $K$-functionals in terms of the modulus of smoothness and obtain the Bernstein type inequality for trigonometric polynomials in the spaces $\mathcal{M}_{p,\lambda}(I_{0},w)$.
The research of Z. Cakir and C. Aykol was partially supported by the grant of Ankara University Scientific Research Project (BAP.17B0430003).
Z. Cakir. C. Aykol. D. Soylemez. A. Serbetci. "Approximation by trigonometric polynomials in weighted Morrey spaces." Tbilisi Math. J. 13 (1) 123 - 138, January 2020. https://doi.org/10.32513/tbilisi/1585015225
Received: 7 September 2019; Accepted: 7 December 2019; Published: January 2020
First available in Project Euclid: 24 March 2020
Digital Object Identifier: 10.32513/tbilisi/1585015225
Primary: 41A10
Secondary: 41A25, 42A10, 46E30, 46E35
Keywords: Bernstein inequality, Best approximation, Muckenhoupt class, trigonometric polynomials, weighted Morrey space
Rights: Copyright © 2020 Tbilisi Centre for Mathematical Sciences
Z. Cakir, C. Aykol, D. Soylemez, A. Serbetci "Approximation by trigonometric polynomials in weighted Morrey spaces," Tbilisi Mathematical Journal, Tbilisi Math. J. 13(1), 123-138, (January 2020)
First of all we would calculate the concentration of the salt, CH3COONa. JEE Mains aspirants may download it for free, and make a self-assessment by solving the JEE Main Equilibrium Important Questions in Chemistry. This PDF below consists of the important chemistry questions for JEE Mains. Chemical Equilibrium is the most important and interesting chapter of Chemistry. Often, however, the initial concentrations of the reactants are not the same, and/or one or more of the products may be present when the reaction starts. (The quadratic formula is presented in Essential Skills 7 in Section 15.7.) Kp = 1.2 × 10², and K = 3.5 × 10⁻². K = 0.106 at 700 K. If a mixture of gases that initially contains 0.0150 M H2 and 0.0150 M CO2 is allowed to equilibrate at 700 K, what are the final concentrations of all substances present? Graph [Br2] versus moles of Br2(l) present; then write the equilibrium constant expression and determine K. Data accumulated for the reaction n-butane(g) ⇌ isobutane(g) at equilibrium are shown in the following table. In heterogeneous equilibrium, the reactants and products are present in two or more physical states or phases.
a) the rates of the forward and reverse reactions are equal; b) the rate constants of the forward and reverse reactions are equal; c) all chemical reactions have ceased; d) the value of the equilibrium constant is 1. For example, 1 mol of CO is produced for every 1 mol of H2O, so the change in the CO concentration can be expressed as Δ[CO] = +x. A: Write the equilibrium constant expression for the reaction.
Comparison of different calibration techniques of laser induced breakdown spectroscopy in bakery products: on NaCl measurement
Gonca Bilge1,
Kemal Efe Eseller ORCID: orcid.org/0000-0002-9758-48522,
Halil Berberoglu3,
Banu Sezer4,
Ugur Tamer5 &
Ismail Hakki Boyaci4
Laser induced breakdown spectroscopy (LIBS) is a rapid optical spectroscopy technique for elemental determination, which has been used for quantitative analysis in many fields. However, the calibration relating atomic emission intensity to sample concentration is still a challenge due to the physical-chemical matrix effect of samples and fluctuations of experimental parameters. To overcome these problems, various chemometric data analysis techniques have been combined with the LIBS technique. In this study, LIBS was used to show its potential as a routine analysis method for Na measurements in bakery products. A series of standard bread samples containing various concentrations of NaCl (0.025%–3.5%) was prepared to compare different calibration techniques. Standard calibration curve (SCC), artificial neural network (ANN) and partial least square (PLS) techniques were used as calibration strategies. Among them, PLS was found to be more efficient for predicting the Na concentrations in bakery products, with an increase in the coefficient of determination value from 0.961 to 0.999 for standard bread samples and from 0.788 to 0.943 for commercial products.
Laser induced breakdown spectroscopy (LIBS) is an atomic emission spectroscopy technique in which a laser beam excites and intensively heats the surface of the sample. The excited sample is brought into a gaseous plasma state and dissociated into molecules and fine particles, which produces characteristic plasma light. The intensity of this plasma light is associated with the concentration of the elements in the sample. LIBS has many advantages, as it allows for rapid, real-time and in situ field analysis without the need for sample preparation [1,2,3,4,5,6,7,8,9,10]. Moreover, its applications have expanded to fields such as metallurgy, mining, environmental analysis and pharmacology [11,12,13,14].
The intensity of the LIBS signal is influenced by various factors, including laser energy, detection time window, lens-to-sample distance, and the chemical and physical matrix [15]. The chemical matrix effect is the most important one, since the molecular and chemical composition of the sample is directly related to the chemical matrix and perturbs the LIBS plasma [16]. Minor elements in the sample structure can cause matrix effects and interferences on the major-element spectral lines. Furthermore, LIBS signal intensity is influenced by the atmospheric composition, and the plasma products interact with the sample surface. Many approaches have been developed to overcome the matrix effect. Traditionally, the spectral peak intensity or peak area from the LIBS data is analyzed against sample concentration for quantitative analysis, which is the standard calibration curve method (SCC) [17]. Chemometric techniques are being used more widely in order to enhance the analytical performance of LIBS [18]. Recent works have shown that multivariate analyses such as partial least square (PLS) and artificial neural network (ANN) give promising results for quantitative analysis [19,20,21,22]. These advanced techniques reduce the complexity of the spectra and extract valuable information. In LIBS analysis, many fluctuating experimental parameters weaken the relation between elemental composition and LIBS intensity [23]. An ANN builds a mathematical model from input data and provides information about unknown samples, processing them like a human neural network; it simulates human intelligence for objective learning. ANN has been used for identification of polymers by LIBS [24], analysis of LIBS data for three chromium-doped soils [25], and quantification of elements in soils and rocks [26]. The other most commonly used chemometric method is PLS. It is a pattern recognition technique which can analyze a set of spectral lines instead of a single specific line intensity as in the standard calibration curve method. As a consequence, the combination of chemometric methods with the LIBS technique has given promising results for quantitative studies.
Na is an essential element in the human diet. However, if consumed excessively, it may cause health problems such as high blood pressure [27], strokes and coronary heart diseases [28]. Thus, sodium levels in food should be controlled. In a human diet, 70–75% of the total sodium chloride (NaCl) intake is obtained from processed foods, out of which cereal and cereal products constitute approximately 30% [29]. Therefore, the NaCl content in bread, the most consumed food all over the world, should be lowered in line with the Codex Alimentarius. Na content can be determined using standard methods such as flame atomic absorption spectrometry (AAS), titration and potentiometry [30, 31]. These methods are time consuming and complex due to their sample preparation process and their unsuitability for in situ and point-detection analyses. Therefore, new, rapid and practical techniques are required.
LIBS has been used in several applications such as milk, bakery products, tea, vegetable oils, water, cereals, flour, potatoes, palm date and different types of meat [32]. Food supplements have been investigated to identify spectral signatures of minerals (Ca, Mg, C, P, Zn, Fe, Cu, and Cr) [33]. NaCl in bakery products has been quantified using the standard calibration curve [34, 35]. For the present study, we measured Na concentrations in bakery products by LIBS and conducted a direct comparison between the standard calibration curve, ANN and PLS in terms of prediction accuracy and prediction precision. The combination of the LIBS technique and the PLS model is a promising approach for routine Na measurements in bakery products. In this paper, three calibration methods (SCC, ANN, PLS) have been compared in bakery food applications for the first time.
LIBS experimental setup
LIBS spectra were recorded using a Quantel-Big Sky Nd:YAG-laser (Bozeman, MT, USA), HR2000 Ocean optics Spectrograph (Dunedin, FL, USA) and Stanford Research System Delay Generator SRS DG535 (Cleveland, OH, USA). Figure 1 shows the experimental setup. The excitation source was Q-switched Nd:YAG laser (Quantel, Centurion), operating at 532 nm with maximum energy of 18 mJ per pulse and approximately 9 ns (FWHM) pulse duration. Laser repetition rate is adjustable in the range of 1–100 Hz, but the experiment is performed at 1 Hz. The beam diameter at the exit was 3 mm with 5 mrad divergence. A 50 mm focal length lens was used to focus the beam size on to the pellet surface. Emitted plasma was collected with a pickup lens in 50 mm diameter and aligned at approximately 90 degree with respect to laser beam and then coupled to the fiber tip of the spectrometer. The distance between the pickup lens and the focal point of the laser beam is approximately 15 cm. In this work, HR2000 (Ocean Optics) spectrometer was used as the detection system with a resolution of approximately 0.5 nm in the 200–1100 nm range. 588.6 nm Na line was detected by gating the spectrometer 0.5 μs after the laser pulse and with a 20 μs gate width. All measurements were performed under ambient conditions and exposed to atmosphere. Samples were measured by the LIBS technique in triplicate, scanning five different locations and four excitations per location.
Schematic presentation of LIBS experimental setup
Bread flour and bread additive yeast were purchased from a local market. Nitric acid (HNO3) and NaCl were purchased from Sigma Aldrich (Steinheim, Germany). Standard bread samples were prepared in accordance with American Association of Cereal Chemists (AACC) Optimized Straight-Dough Bread-Baking Method No. 10–10.03 [33]. Twelve standard bread samples were produced using this method at various salt concentrations ranging between 0.025 and 3.5%. The bread dough, comprising 100 g flour, 0.2 g bread additive, 25 ml of 8% yeast solution, 25 ml salt solution at various concentrations and 10 ml water, was kneaded by hand for 15 min. Dough pieces were rounded and incubated for 30 min during the first fermentation. 30 min later, the dough was punched and incubated for another 30 min during the second fermentation. After that, the dough was formed, placed into tins for the final fermentation and incubated for 55 min at 30 °C. Subsequently, the bread was baked for 30 min at 210 °C, taken out of the oven and cooled. Following this process, bread samples were dried at 105 °C for 2 h and cooled in a desiccator to be used for the LIBS measurements. Then, 400 mg of dried powdered bread samples were shaped into a pellet under 10 t of pressure using a pellet press machine.
Na detection in bakery products by atomic absorption spectroscopy
Na content of standard bread samples and commercial samples were analyzed by atomic absorption spectroscopy (reference method for Na measurements). Samples were prepared based on the EPA Method 3051A through microwave-assisted digestion for atomic absorption spectroscopy measurements [36]. At the beginning, 0.3 g of the dried sample and 10 ml concentrated HNO3 were placed in a fluorocarbon polymer vessel. The samples were extracted by heating with a laboratory microwave unit. Next, the vessel was sealed and heated in the microwave unit. After cooling, the vessel contents were filtered with Whatman No. 1 filter paper and diluted in 100 ml of deionized water. The atomic absorption spectra for Na were recorded with the ATI-UNICAM 939 AA Spectrometer (Cambridge, UK) at 588.599 nm.
Data analyses were performed by SCC, PLS and ANN. Calibration and validation results were obtained and compared with each other. Performances of the models were evaluated according to coefficient of determination (R2), relative error of prediction (REP), and relative standard deviation (RSD). After that, LIBS spectra of commercial products were analyzed to examine the matrix effect. To compare the 3 methods, REP values were used to evaluate the prediction accuracy.
$$ REP\left(\%\right)=\frac{100}{N_v}{\sum}_{i=1}^{N_v}\left|\frac{{\hat{\mathrm{c}}}_i-{c}_i}{c_i}\right| $$
Nv = number of validation spectra.
ci = true concentration.
ĉi = predicted concentration.
In addition, we used RSD as a prediction precision indicator.
$$ RSD\left(\%\right)=\frac{100}{N_{conc}}\sum_{k=1}^{N_{conc}}\frac{\sigma_{c_k}}{c_k} \quad \mathrm{with} \quad \sigma_{c_k}^2=\sum_{i=1}^{\rho}\frac{\left(\hat{c}_{ik}-c_k\right)^2}{\rho-1} $$
Nconc = number of different concentrations in the validation set.
ρ = number of spectra per concentration.
σ = Standard deviation.
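As a minimal sketch (not taken from the paper; the array shapes and variable names are assumptions made here), the two figures of merit can be computed directly from the true and predicted concentrations:

```python
import numpy as np

def rep_percent(c_true, c_pred):
    # Relative error of prediction over the validation spectra (one entry per spectrum)
    c_true, c_pred = np.asarray(c_true, float), np.asarray(c_pred, float)
    return 100.0 / len(c_true) * np.sum(np.abs((c_pred - c_true) / c_true))

def rsd_percent(c_true, c_pred):
    # Relative standard deviation averaged over distinct concentrations;
    # c_true has one value per concentration, each row of c_pred holds the
    # rho replicate predictions obtained for that concentration
    c_true, c_pred = np.asarray(c_true, float), np.asarray(c_pred, float)
    sigma = np.sqrt(np.sum((c_pred - c_true[:, None])**2, axis=1) / (c_pred.shape[1] - 1))
    return 100.0 / len(c_true) * np.sum(sigma / c_true)

# tiny illustrative call with invented numbers
print(rep_percent([1.0, 2.0], [1.1, 1.9]))
```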
We first present the quantitative results of the LIBS data according to the standard calibration method, which is based on the measurement of Na atomic emission at 588.599 nm in the standard bread samples. In this method, instrumental noise was subtracted from the spectra. Then, background normalization was applied at 575.522 nm, where there is no atomic emission spectral line. Scanning five different locations and four excitations per location, we analyzed the samples by the LIBS technique in triplicate; for each sample (pellet), 20 shots were accumulated. The calibration curves for the Na line at 588.599 nm were obtained by plotting its intensity (peak height) versus the Na concentrations in each sample. Twenty-six data sets (each including 20 spectra) were used for calibration and 13 data sets for prediction with the SCC method. Following that, LIBS spectra of commercial products were analyzed via the SCC method, and the results were compared with the Na concentrations measured by AAS.
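A hedged illustration of this SCC step is sketched below with synthetic numbers (the peak heights are invented for demonstration; only the procedure, fitting the 588.599 nm peak height against known concentration and then inverting the fit for unknowns, follows the text):

```python
import numpy as np

# Illustrative (synthetic) data: known Na concentrations of the standards (%)
# and their mean background-corrected peak heights at 588.599 nm.
conc = np.array([0.025, 0.1, 0.5, 1.0, 2.0, 3.5])
peak = np.array([120.0, 410.0, 1900.0, 3750.0, 7600.0, 13200.0])

slope, intercept = np.polyfit(conc, peak, 1)       # linear calibration curve
r2 = np.corrcoef(conc, peak)[0, 1] ** 2            # coefficient of determination

unknown_peak = 5000.0                              # peak height of an unknown pellet
predicted_conc = (unknown_peak - intercept) / slope
print(f"R^2 = {r2:.3f}, predicted Na = {predicted_conc:.2f} %")
```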
In this study we used two different multivariate analysis methods. One of them is PLS, for which we used the same data set as in previous work [34]. LIBS spectral data ranging between 538.424 nm and 800.881 nm were used instead of the whole spectrum, because the most quantitative information could be obtained from this region. The LIBS data matrix for PLS analysis was obtained by analyzing the spectra of 39 standard bread samples (26 samples for calibration, 13 samples for validation). Data analysis was performed using PLS with stand-alone chemometrics software (Version Solo 6.5 for Windows 7, Eigenvector Research Inc., Wenatchee, WA, USA). The data matrix of selected LIBS data and concentrations was loaded into the software as calibration data, and the PLS algorithm was run with between 1 and 15 components. Mean centering was applied as pre-processing to the calibration input data. The prediction ability of the obtained model was determined with the validation data set. Selecting the number of latent variables is very important, as it trades off cumulative variance against prediction ability: while cumulative variance increases with the number of latent variables - which is 11 for this study - prediction ability does not keep increasing, so an optimum between cumulative variance and prediction ability must be found. In the PLS model, predictability was determined by calculating the root mean square error of calibration (RMSEC) and the root mean square error of prediction (RMSEP) for the validation [37]. Minimum RMSEC and RMSEP values were selected for the PLS model. After that, Na concentrations in commercial products were analyzed by PLS, and the results were compared with those of AAS.
$$ RMSEC=\sqrt{\frac{\sum_{i=1}^{M}\left( actual - calculated\right)^2}{M-1}} $$
$$ RMSEP=\sqrt{\frac{\sum_{i=1}^{N}\left( actual - calculated\right)^2}{N}} $$
M: number of samples used in calibration data set.
N: number of samples used in prediction data set.
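The paper performed PLS in the Solo chemometrics package; as a loosely analogous sketch (an assumption on my part, not the authors' workflow), scikit-learn's PLSRegression can reproduce the same steps, with RMSEC and RMSEP computed as defined above. The spectra below are random stand-ins, so the printed numbers carry no meaning; only the procedure is illustrated.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Illustrative shapes: 26 calibration and 13 validation spectra restricted to
# the 538.424-800.881 nm window (here replaced by synthetic random data).
rng = np.random.default_rng(0)
X_cal, y_cal = rng.random((26, 500)), rng.random(26)
X_val, y_val = rng.random((13, 500)), rng.random(13)

pls = PLSRegression(n_components=11, scale=False)   # 11 latent variables, mean-centering only
pls.fit(X_cal, y_cal)

rmsec = np.sqrt(np.sum((y_cal - pls.predict(X_cal).ravel())**2) / (len(y_cal) - 1))
rmsep = np.sqrt(np.mean((y_val - pls.predict(X_val).ravel())**2))
print(f"RMSEC = {rmsec:.4f}, RMSEP = {rmsep:.4f}")
```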
The other multivariate method applied is ANN. The same experimental data were used for quantitative analysis with the Neural Network Toolbox, MATLAB® Release 14 (The MathWorks, Natick, MA). The independent variables are the LIBS spectra between 538.424 nm and 800.881 nm, and the dependent variable is the Na concentration. As in the PLS method, 26 data sets were used for calibration and 13 data sets for validation of the trained network. We used the toolbox training functions for calibration and the logsig and purelin transfer functions for the network. Then, the number of nodes in the hidden layer was optimized between 1 and 10, and it was found that seven hidden nodes showed the best performance.
The coefficient of determination (R2) value was considered for evaluating the prediction capability of the method and choosing the network. The estimation performance of the ANN was determined by comparing the actual and predicted values. After that, Na concentrations in commercial products were analyzed by ANN, and the results were compared with those of AAS.
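The original network was built in MATLAB's Neural Network Toolbox; the sketch below uses scikit-learn's MLPRegressor as a rough stand-in (logistic hidden activation with a linear output) to illustrate the hidden-node scan from 1 to 10. The data are again synthetic placeholders, so the selected node count and R2 value mean nothing here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X_cal, y_cal = rng.random((26, 500)), rng.random(26)   # synthetic stand-ins for the spectra
X_val, y_val = rng.random((13, 500)), rng.random(13)

best = None
for n_hidden in range(1, 11):                          # scan hidden-node counts 1 to 10
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation='logistic',
                       solver='lbfgs', max_iter=2000, random_state=0)
    net.fit(X_cal, y_cal)
    r2 = r2_score(y_val, net.predict(X_val))           # keep the network with the best validation R2
    if best is None or r2 > best[1]:
        best = (n_hidden, r2)

print(f"best hidden nodes = {best[0]}, validation R^2 = {best[1]:.3f}")
```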
For the calibration study, a total of 780 spectra (20 spectra for each pellet, 3 pellets for each sample, 13 standard bread samples), and for the commercial products a total of 360 spectra (20 spectra for each pellet, 3 pellets for each sample), were recorded by LIBS. Figure 2 illustrates the LIBS spectra of standard bread samples containing different amounts of NaCl. The peak at 588.599 nm belongs to Na and the peak at 769.900 nm belongs to K, according to the NIST atomic database [38]. Figure 2 shows that the intensity of the Na band at 588.599 nm increases as the NaCl level in the bread rises.
LIBS spectra of standard bread samples at various salt concentrations
Calibration models were developed according to three different methods (SCC, ANN and PLS) for quantitative treatment of the calibration data set (standard bread samples). Then, the prediction ability of each model was evaluated on the validation data set (standard bread samples excluded from the calibration data set and treated as unknowns) to test the accuracy and precision of the calibration model. After that, Na concentrations of commercial products were predicted and compared with the results of the standard method, AAS. This treatment was found to be useful for evaluating the matrix effect and the potential of LIBS for commercial samples.
The traditional way to obtain a calibration curve is to use reference samples which contain constant concentrations of the major elements and varying concentrations of the target element. For this purpose, standard bread samples were prepared at different salt concentrations and analyzed via LIBS. The standard calibration curve of Na was obtained by plotting its intensity at 588.599 nm versus the measured Na concentrations (Fig. 3a). Each point in the calibration curve represents the average value of 3 pellet samples, and each pellet spectrum is an accumulation of 20 laser shots. The RSD and REP values and the other results for this calibration strategy are summarized in Table 1.
Calibration and validation plots developed with SCC (a), ANN (b), PLS (c) data analysis techniques
Table 1 Prediction of Na concentrations in standard bread samples with SCC, ANN and PLS
PLS calibration was performed using the same standard bread samples with known NaCl concentrations. The spectral interval of 538.424 nm to 800.881 nm was chosen because most of the atomic emission lines are in this region. To enhance the performance of the PLS method, mean centering was applied as pre-processing to the calibration input data. The resulting PLS calibration model and validation data set are presented in Fig. 3c. The model with a low RMSEC value (0.01835) and a high coefficient of determination was chosen as the calibration model, and low RMSEC (0.01835) and RMSEP (0.10925) values were obtained for validation. High coefficient of determination values, R2 = 0.999 for calibration and R2 = 0.991 for validation, were observed (Fig. 3c). The RSD and REP values of PLS are presented in Table 1.
In addition to the PLS method, ANN was also used for Na quantification in standard bread samples. The same calibration and validation data sets were used for the ANN model. The network that had the maximum R2 value between predicted and actual data was selected as the best-trained network. Then, the best-trained network was used for prediction of the Na content in standard breads with ANN. The predicted calibration and validation data sets were compared with the experimental data sets, and high correlations were obtained for the Na concentrations (Fig. 3b). High coefficient of determination values, R2 = 0.987 and R2 = 0.964, were observed for the calibration and validation data sets, respectively. The REP and RSD values of the ANN model are presented in Table 1.
When the PLS method was compared with standard calibration curve and ANN methods according to Table 1, the PLS method gave the best results with R2 values of 0.999 for calibration and 0.991 for validation. Furthermore, PLS has shown an excellent potential with high prediction precision and prediction accuracy compared to other methods.
For comparison, Na concentrations in commercial samples such as biscuits, crackers and some kinds of bread were also analyzed with AAS. Comparative results between AAS and LIBS for commercial products analyzed with the SCC, ANN and PLS models are presented in Fig. 4a, b, c, and the RSD and REP values are summarized in Table 2. SCC is the most commonly applied method for quantitative analyses because of its simplicity. However, this method is only useful if the standard sample's matrix resembles the real sample's matrix (http://physics.nist.gov/PhysRefData/ASD/index.html).
Correlation between AAS and LIBS method for commercial products with SCC (a), ANN (b), PLS (c) data analysis techniques
Table 2 Prediction of Na concentration in commercial products with SCC, ANN and PLS
In addition, experimental uncertainties may affect the calibration curve. Low detection capabilities and matrix effects are negative factors for LIBS with SCC. Specific reference materials are preferred to obtain better calibration curves; however, due to difficulties in obtaining proper reference materials, the efficiency of the SCC method is limited, which makes it harder to obtain a good calibration set. For commercial products as well, PLS gave the best results, with high prediction ability compared to the other methods. Low RMSEC (0.29861) and RMSEP (0.13893) values were obtained for validation of commercial products. The PLS model increased the R2 value from 0.788 to 0.943 for commercial products. When the standard calibration curve for standard bread samples is considered, it is clear that the relation between Na concentration and the LIBS signal is linear. PLS generates new principal components based on the response of the measured concentration data, and linear regression on the new principal components helps to capture the variation of the response variable [39]. Hence PLS yields precise calibration. This tendency explains why PLS gave better results than ANN, which is more suitable for nonlinear models. The principle of the ANN model is based on receiving a series of input data evaluated by each neuron, so the data are weighted dynamically; each neuron compares the weighted sum of its inputs to a given threshold value and applies a nonlinear function to calculate the output [18]. On the other hand, the overall performance of PLS is quite satisfactory, with a high prediction accuracy and precision compared to the other methods. High RSD values can be explained by fluctuations of LIBS experimental parameters such as changing plasma conditions and spectral interference. These problems can be overcome by accumulating a large number of shots for each sample. The prediction ability of the PLS model is more satisfactory for the validation data of standard bread samples than for the validation data of commercial products. This is due to matrix differences between the standard bread samples and the commercial products. The application field of PLS has expanded to biomedical, pharmaceutical, social science and other fields [39,40,41,42], and it has recently shown great potential in LIBS applications [43].
Combining LIBS and PLS methods to measure Na concentrations gave acceptable results for also commercial products because the PLS as a multivariate analysis is more accurate, robust and reliable compared to SCC method. Limit of detection (LOD) was calculated as 0.0279%, and limit of quantification (LOQ) was calculated as 0.0930% for PLS. In some studies, authors obtained lower detection limits, as low as 5 ppm [44] and even 0.1 ppb by using dual-pulse and crossed-beam Nd:YAG lasers for Na on a water film [45]. However, our LOD and LOQ values are quite low for food products, which makes this method convenient for measurement of Na even in dietary food.
Na is an important ingredient in food products both for its potential to cause health problems such as heart diseases and stroke, and for its usage as a quality control parameter influencing taste, yeast activity, strength of the gluten network, and gas retention [46]. Thus, Na levels in food should be controlled in accordance with the recommendations. Measurement of Na in breads can be performed by titration, AAS and potentiometric methods. These regulatory methods are time consuming and require sample preparation. In contrast, LIBS can be a rapid and valuable tool for Na measurement in bakery products.
A comparative study between the standard calibration curve, ANN and PLS methods was conducted for measurement of NaCl in bakery products. Calibration data set was obtained by preparing the standard bread samples at various salt concentrations. Optimization was performed for each calibration method. According to the calibration results, PLS method gave the best results for validation curves and prediction of commercial samples. Experimental results showed that PLS method enhanced the performance of LIBS for quantitative analysis. Thanks to the PLS method LIBS can be a valuable tool for routine NaCl measurements in bakery products.
There is no data that needs to be shared.
Pandhija, S., Rai, N.K., Rai, A.K., Thakur, S.N.: Contaminant concentration in environmental samples using LIBS and CF-LIBS. Appl. Phys. B. Lasers. Opt. 98(1), 231–241 (2010). https://doi.org/10.1007/s00340-009-3763-x
St-Onge, L., Kwong, E., Sabsabi, M., Vadas, E.B., St-Onge, L., et al.: Rapid analysis of liquid formulations containing sodium chloride using laser-induced breakdown spectroscopy. J. Pharm. Biomed. Anal. 36(2), 277–284 (2004). https://doi.org/10.1016/j.jpba.2004.06.004
Bustamante, M.F., Rinaldi, C.A., Ferrero, J.C.: Laser induced breakdown spectroscopy characterization of ca in a soil depth profile. Spectrochim. Acta. Part. B. 57(2), 303–309 (2002). https://doi.org/10.1016/S0584-8547(01)00394-9
Maravelaki-Kalaitzaki, P., Anglos, D., Kilikoglou, V., Zafiropulos, V.: Compositional characterization of encrustation on marble with laser induced breakdown spectroscopy. Spectrochim. Acta. Part. B. 56(6), 887–903 (2001). https://doi.org/10.1016/S0584-8547(01)00226-9
Pandhija, S., Rai, A.K.: Laser-induced breakdown spectroscopy: a versatile tool for monitoring traces in materials. Pramana. 70(3), 553–563 (2008). https://doi.org/10.1007/s12043-008-0070-8
Gomba, J.M., D'Angelo, C., Bertuccelli, D., Bertuccelli, G.: Spectroscopic characterization of laser-induced breakdown in aluminum-lithium alloy samples for quantitative determination of traces. Spectrochim. Acta. Part. B. 56(6), 695–705 (2001). https://doi.org/10.1016/S0584-8547(01)00208-7
Lee, W.B., Wu, J.Y., Lee, Y.I., Sneddon, J.: Recent applications of laser-induced breakdown spectrometry: a review of material approaches. Appl. Spectrosc. Rev. 39(1), 27–97 (2004). https://doi.org/10.1081/ASR-120028868
Li, J., Lu, J., Lin, Z., Gong, S., Xie, C., Chang, L., Yang, L., Li, P.: Effects of experimental parameters on elemental analysis of coal by laserinduced breakdown spectroscopy. Opt. Laser Technol. 41(8), 907–913 (2009). https://doi.org/10.1016/j.optlastec.2009.03.003
Beldjilali, S., Borivent, D., Mercadier, L., Mothe, E., Clair, G., Hermann, J.: Evaluation of minor element concentrations in potatoes using laser-induced breakdown spectroscopy. Spectrochim. Acta. Part. B. 65(8), 727–733 (2010). https://doi.org/10.1016/j.sab.2010.04.015
Feng, J., Wang, Z., Li, Z., Ni, W.: Study to reduce laser-induced breakdown spectroscopy measurement uncertainty using plasma characteristic parameters. Spectrochim. Acta. Part. B. 65(7), 549–556 (2010). https://doi.org/10.1016/j.sab.2010.05.004
D. A. Rusak, B. C. Castle, B. W. Smith, J. D. Winefordner, Fundamentals and applications of laser-induced breakdown spectroscopy, Crit. Rev. Anal. Chem. 27(4) (1997) 257–290. DOI: https://doi.org/10.1080/10408349708050587
Sneddon, J., Lee, Y.-I.: Novel and recent applications of elemental determination by laser-induced breakdown spectrometry. Anal. Lett. 32(11), 2143–2162 (1999). https://doi.org/10.1080/00032719908542960
St-Onge, L., Kwong, E., Sabsabi, M., Vadas, E.B.: Quantitative analysis of pharmaceutical products by laser-induced breakdown spectroscopy. Spectrochim. Acta. Part. B. 57(7), 1131–1140 (2002). https://doi.org/10.1016/S0584-8547(02)00062-9
Tognoni, E., Palleschi, V., Corsi, M., Cristoforetti, G.: Quantitative micro-analysis by laser-induced breakdown spectroscopy: a review of the experimental approaches. Spectrochim. Acta. Part. B. 57(7), 1115–1130 (2002). https://doi.org/10.1016/S0584-8547(02)00053-8
Inakollua, P., Philipb, T., Raic, A.K., Yueha, F.-Y., Singh, J.P.: A comparative study of laser induced breakdown spectroscopy analysis for element concentrations in aluminum alloy using artificial neural networks and calibration methods. Spectrochim. Acta. Part. B. 64, 99–104 (2009)
Clegg, S.M., Sklute, E., Dyar, M.D., Barefield, J.E., Wiens, R.C.: Multivariate analysis of remote laser-induced breakdown spectroscopy spectra using partial least squares, principal component analysis, and related techniques. Spectrochim. Acta. Part. B. 64(1), 79–88 (2009). https://doi.org/10.1016/j.sab.2008.10.045
Salle, B., Lacour, J.-L., Mauchien, P., Fichet, P., Maurice, S., Manhès, G.: Comparative study of different methodologies for quantitative rock analysis by laser-induced breakdown spectroscopy in a simulated Martian atmosphere. Spectrochim. Acta. Part. B. 61(3), 301–313 (2006). https://doi.org/10.1016/j.sab.2006.02.003
Sirven, J.-B., Bousquet, B., Canioni, L., Sarger, L.: Laser-induced breakdown spectroscopy of composite samples: comparison of advanced Chemometrics methods. Anal. Chem. 78(5), 1462–1469 (2006). https://doi.org/10.1021/ac051721p
Sokullu, E., Palabıyık, I.M., Onur, F., Boyacı, I.H.: Chemometric methods for simultaneous quantification of lactic, malic and fumaric acids. Eng. Life Sci. 10(4), 297–303 (2010). https://doi.org/10.1002/elsc.200900080
Amador-Hernández, J., García-Ayuso, L.E., Fernandez-Romero, J.M., Luque de Castro, M.D.: Partial least squares regression for problem solving in precious metal analysis by laser induced breakdown spectrometry. J. Anal. At. Spectrom. 15(6), 587–593 (2000). https://doi.org/10.1039/B000813N
Kılıç, K., Bas, D., Boyacı, I.H.: An easy approach for the selection of optimal neural network structure. J. Food. 34(2), 73–81 (2009)
Yuab, K., Ren, J., Zhao, Y.: Principles, developments and applications of laser-induced breakdown spectroscopy in agriculture: a review. Artif. Intel. Agric. 4, 127–139 (2020)
Wang, Z., Feng, J., Li, L., Ni, W., Li, Z.: A multivariate model based on dominant factor for laser-induced breakdown spectroscopy measurements. J. Anal. At. Spectrom. 26(11), 2289–2299 (2011). https://doi.org/10.1039/c1ja10041f
Sattmann, R., Moench, I., Krause, H., Noll, R., Couris, S., Hatziapostolou, A., Mavromanolakis, A., Fotakis, C., Larrauri, E., Miguel, R.: Laser-induced breakdown spectroscopy for polymer identification. Appl. Spectrosc. 52(3), 456–461 (1998). https://doi.org/10.1366/0003702981943680
Sirven, J.-B., Bousquet, B., Canioni, L., Sarger, L., Tellier, S., Potin-Gantier, M., Le Hecho, I.: Qualitative and quantitative investigation of chromium-polluted soils by laser-induced breakdown spectroscopy combined with neural networks analysis. Anal. Bioanal. Chem. 385, 256 (2006)
Motto-Ros, V., Koujelev, A.S., Osinski, G.R., Dudelzak, A.E.: Quantitative multi-elemental laser-induced breakdown spectroscopy using artificial neural networks. J Eur Optical Soc. 3, 08011 (2008). https://doi.org/10.2971/jeos.2008.08011
Elliott, P., Stamler, J., Nichols, R., Dyer, A.R., Stamler, R., Kesteloot, H., Marmot, M.: Intersalt revisited: further analyses of 24 hour sodium excretion and blood pressure within and across populations. Br. Med. J. 312(7041), 1249–1253 (1996). https://doi.org/10.1136/bmj.312.7041.1249
Tuomilehto, J., Jousilahti, P., Rastenyte, D., Moltchanov, V., Tanskanen, A., Pietinen, P., Nissinen, A.: Urinary sodium excretion and cardiovascular mortality in Finland: a prospective study. Lancet. 357(9259), 848–851 (2001). https://doi.org/10.1016/S0140-6736(00)04199-4
FSAI (Food Safety Authority of Ireland). Salt and health: review of the scientific evidence and recommendations for public policy in Ireland, 2005. URL http://www.fsai.ie/uploadedFiles/Science_and_Health/salt_report-1.pdf. Accessed 28.09.2014
Capuano, E., Van der Veer, G., Verheijen, P.J.J., Heenan, S.P., Van de Laak, L.F.J., Koopmans, H.B.M., Van Ruth, S.M.: Comparison of a sodium-based and a chloride-based approach for the determination of sodium chloride content of processed foods in the Netherlands. J. Food Compos. Anal. 31(1), 129–136 (2013). https://doi.org/10.1016/j.jfca.2013.04.004
Smith, T., Haider, C.: Novel method for determination of sodium in foods by thermometric endpoint titrimetry (TET). J. Agric. Chem. Environ. 3(1B), 20–25 (2014)
Maria Markiewicz, K., Xavier Cama, M., Maria, G., Casado, P., Yash, D., Raquel Cama, M., Patrick, C., Carl, S.: Laser-induced breakdown spectroscopy (LIBS) for food analysis: a review. Trends Food Sci. Technol. 65, 80–93 (2017). https://doi.org/10.1016/j.tifs.2017.05.005
Agrawal, R., Kumar, R., Rai, S., Pathak, A.K., Rai, A.K., Rai, G.K.: LIBS: a quality control tool for food supplements. Food Biophysics. 6(4), 527–533 (2011). https://doi.org/10.1007/s11483-011-9235-y
Bilge, G., Boyacı, I.H., Eseller, K.E., Tamer, U.: Serhat Çakır, analysis of bakery products by laser-induced breakdown spectroscopy. Food Chem. 181, 186–190 (2015). https://doi.org/10.1016/j.foodchem.2015.02.090
Sezer, B., Bilge, G., Boyaci, I.H.: Capabilities and limitations of LIBS in food analysis. TrAC Trends Anal. Chem. 97, 345–353 (2017). https://doi.org/10.1016/j.trac.2017.10.003
AACCI (American Association of Cereal Chemists International). (2010). Approved Methods of Analysis. 11th Ed. AACCI: St. Paul. Methods 10–10.03
EPA Method 3051. (1994). Microwave assisted acid digestion of sediments, sludges, soils and oils
Uysal, R.S., Boyaci, I.H., Genis, H.E., Tamer, U.: Determination of butter adulteration with margarine using Raman spectroscopy. Food. Chem. 141(4), 4397–4403 (2013). https://doi.org/10.1016/j.foodchem.2013.06.061
Tripathi, M.M., Eseller, K.E., Yueh, F.-Y., Singh, J.P.: Multivariate calibration of spectra obtained by laser induced breakdown spectroscopy of plutonium oxide surrogate residues. Spectrochim. Acta Part B. 64(11-12), 1212–1218 (2009). https://doi.org/10.1016/j.sab.2009.09.003
Lengard, V., Kermit, M.: 3-way and 3-block PLS regressions in consumer preference analysis. Food Qual. Prefer. 17(3–4), 234–242 (2006). https://doi.org/10.1016/j.foodqual.2005.05.005
Krishnan, A., Williams, L.J., McIntosh, A.R., Abdi, H.: Partial least squares (PLS) methods for neuroimaging: a tutorial and review. NeuroImage. 56(2), 455–475 (2011). https://doi.org/10.1016/j.neuroimage.2010.07.034
Chiang, Y.-H.: Using a combined AHP and PLS path modelling on blog site evaluation in Taiwan. Comput. Hum. Behav. 29(4), 1325–1333 (2013). https://doi.org/10.1016/j.chb.2013.01.025
Ortiz, M.C., Sarabia, L., Jurado-Lopez, A., Luque de Castro, M.D.: Minimum value assured by a method to determine gold in alloys by using laser-induced breakdown spectroscopy and partial least-squares calibration model. Anal. Chim. Acta. 515(1), 151–157 (2004). https://doi.org/10.1016/j.aca.2004.01.003
Hussain, T., Gondal, M.A.: Laser induced breakdown spectroscopy (LIBS) as a rapid tool for material analysis. J. Phys. 439, 1–12 (2013)
Kuwako, A., Uchida, Y., Maeda, K.: Supersensitive detection of sodium in water with use of dual-pulse laser-induced breakdown spectroscopy. Appl. Opt. 42(50), 6052–6056 (2003). https://doi.org/10.1364/AO.42.006052
Lynch, E.J., Dal Bello, F., Sheehan, E.M., Cashman, K.D., Arendt, E.K., Lynch, E.J., et al.: Fundamental studies on the reduction of salt on dough and bread characteristics. Food Res. Int. 42(7), 885–891 (2009). https://doi.org/10.1016/j.foodres.2009.03.014
We gratefully thank Assist. Prof. Dr. Aysel Berkkan for performing the atomic absorption spectroscopy analysis.
This research received no external funding.
Department of Food Engineering, Konya Food and Agriculture University, Meram, 42080, Konya, Turkey
Gonca Bilge
Department of Electrical and Electronics Engineering, Atilim University, 06836, Ankara, Turkey
Kemal Efe Eseller
Department of Physics, Ankara Hacı Bayram Veli University, 06900, Polatlı-Ankara, Turkey
Halil Berberoglu
Food Research Center, Hacettepe University, Beytepe, 06800, Ankara, Turkey
Banu Sezer & Ismail Hakki Boyaci
Department of Analytical Chemistry, Faculty of Pharmacy, Gazi University, 06330, Ankara, Turkey
Ugur Tamer
Banu Sezer
Ismail Hakki Boyaci
Data Analysis and sample preparation have been performed by GB and BS, System optical design and data processing has been done by KEE and HB. IHB and UT performed the validation of theoretical framework which this research based on, polished the text and brought the manuscript into its final form. The authors read and approved the final manuscript.
Correspondence to Kemal Efe Eseller.
Bilge, G., Eseller, K.E., Berberoglu, H. et al. Comparison of different calibration techniques of laser induced breakdown spectroscopy in bakery products: on NaCl measurement. J. Eur. Opt. Soc.-Rapid Publ. 17, 18 (2021). https://doi.org/10.1186/s41476-021-00164-9
Laser induced breakdown spectroscopy
Artificial neural network
Partial least square
|
CommonCrawl
|
A simple question on #P
Wiki (http://en.wikipedia.org/wiki/Sharp-P) says 'In computational complexity theory, the complexity class #P (pronounced "number P" or, sometimes "sharp P" or "hash P") is the set of the counting problems associated with the decision problems in the set NP'.
Is there a counting version for CoNP problems?
complexity-theory
T....T....
$\begingroup$ #P can count the number of accepting paths in a TM. Hence, not only you're able to know if there is at least one accepting path (for NP), but you can also check that all paths are accepting (for coNP). Hence #P is associated with both NP and coNP. $\endgroup$ – Tpecatte Sep 27 '13 at 22:21
$\begingroup$ It would perhaps be better to say that #P is the set of counting problems associated with polynomial-time nondeterministic Turing machines, rather than with NP specifically. $\endgroup$ – David Richerby Sep 28 '13 at 0:01
$\begingroup$ It's usually pronounced "sharp P". I believe the coiner of this notation intended it to be pronounced "number P", but he neglected to specify the pronunciation in his paper. But at least it's not pronounced "octothorpe P". $\endgroup$ – Peter Shor Sep 28 '13 at 1:07
$\begingroup$ @PeterShor "Sharp P" seems to be more common in North America; "Number P" in Europe. My own feeling is that it makes much more sense to use the pronunciation that describes what the class does, rather than describing the notation. (After all, we say "Parity-P", not "Plus-sign-in-a-circle-P".) $\endgroup$ – David Richerby Sep 28 '13 at 2:19
$\begingroup$ one can count the # of solutions of decision problems of any complexity class (its like an operator applied to a complexity class), but some are more obscure than others. $\endgroup$ – vzn Sep 28 '13 at 2:25
Expanding on Timot's comment, $\# P$ consists of all functions which can be expressed as the number of accepting computations of some non-deterministic Turing machine running in polynomial time. It is also the class of all functions which can be expressed as the number of non-accepting computations of some non-deterministic Turing machine running in polynomial time (exercise).
If $f \in \# P$ then $\{ x : f(x) \geq 1 \} \in N\! P$ and $\{x : f(x) = 0 \} \in \mathrm{co}N\! P$; and vice versa, if $L \in N\!P$ ($L \in \mathrm{co}N\!P$) then it can be expressed in the form $\{ x : f(x) \geq 1 \}$ ($\{ x : f(x) = 0 \}$) for some $f \in \# P$.
$\begingroup$ (But note that the difference between the number of accepting and the number of non-accepting computations is not necessarily in #P, of course) $\endgroup$ – Steven Stadnicki Sep 28 '13 at 0:00
|
CommonCrawl
|
The UDWT image denoising method based on the PDE model of a convexity-preserving diffusion function
Xianghai Wang1,2,
Wenya Zhang3,
Rui Li3 &
Ruoxi Song1
EURASIP Journal on Image and Video Processing, volume 2019, Article number: 81 (2019)
It is a great challenge to maintain details while suppressing and eliminating noise of the image. Considering the nonconvexity property of the diffusion function and the hypersensitivity of the Laplace operator to noise in the Y-K model, a fourth-order PDE image denoising model (Con_G&L model) is proposed in this paper. This model is constructed by a new convexity-preserving diffusion function which guarantees the corresponding energy functional has a globally unique minimum solution. At the same time, the Gaussian filter is combined with the Laplace operator in this model, and as a result, the noisy image is smoothed before the diffusion process, which improves the ability of capturing the details and edges of the noisy image greatly. Furthermore, by analyzing the statistical properties of the undecimated discrete wavelet transform (UDWT) coefficients of noisy image, we observe that the noise information is mainly distributed in the high-frequency sub-bands, and based on this, the proposed Con_G&L model is applied in the high-frequency sub-bands of the UDWT to get the denoising method. The proposed method removes the image noise effectively with the image texture and other details of the image being maintained. Meanwhile, the generation of false edges and the staircase effect can be suppressed. A large number of simulation experiments verify the effectiveness of the proposed method.
In the process of image formation and transmission, noise is inevitably introduced, which has a great impact on subsequent applications. Therefore, effectively suppressing and eliminating noise in images has always been a popular research topic in the image processing area. Several image modeling methods have been proposed to study the relationship between the image background and the noise component [1,2,3]. Generally, a good denoising method should remove the noise from the image while maintaining edge, contour, and detail information. In other words, it should remove noise and retain the spatial resolution of the image at the same time.
In recent years, the partial differential equation (PDE) method has become one of the most important mathematical tools for image modeling and representation due to its good property of flexibility and local adaptability. Based on the continuous mathematical image model, this kind of method makes the image follow a specified PDE, the processing result of which is considered the expected result [4,5,6]. In the field of image denoising, the second-order PDE nonlinear diffusion equation (P-M model) proposed by Perona and Malik is a pioneer method in PDE image denoising [7]. This model combines image denoising with edge detection organically and takes into account the preservation of details while denoising. However, in the process of iteration, this model is unbounded for the boundary detector oscillation when large noise is introduced, and the condition given by the model is smooth, which will affect the results. Also, the model is ill-posed. To tackle these, Alvarez et al. proposed regularized P-M model [8]. However, the regularized model is not stable when the difference is zero in the diffusion process. Additionally, the diffusion degree is weakened by the first-order partial differential at the edge and texture regions because of the second-order characteristics of the model, and after several iterations, the image gray level will appear as a piecewise constant, and the "block" effect will make it difficult to retain the texture details of the original image. To solve these problems, researchers ought to find a PDE model with higher order. The fourth-order PDE model is noted for its stability and computational efficiency. The Y-K model proposed by You and Kaveh in [9] and the LLT model proposed by Lysaker in [10] are some typical models. They efficiently suppress the "block" effect introduced by the second-order PDE model. However, the Y-K model causes an obvious gray difference between some points and their surroundings. In addition, black and white outliers usually emerge in the denoised image. The reason is that the Laplace operator in the model is sensitive to speckle noise, which restrains the diffusion of the model. The LLT model is based on the minimum L1 norm about the second derivative of the image, and there is a fast computational method of numerical solution [11]. Nevertheless, it is essentially a high-order filter, which is more sensitive to the high-frequency information of the image and will inevitably blur the image details and edge information with the diffusion deepening.
The wavelet transform is an effective tool for time-frequency analysis of signals due to its excellent time-frequency localization ability. Therefore, image denoising methods based on wavelets have frequently been considered. The most typical methods operate in the wavelet domain, including the general threshold method [12], the extreme threshold method [13], the Stein unbiased risk estimate (SURE) threshold method [14], and Bayesian threshold methods [15,16,17]. These methods obtain the corresponding denoising thresholds simply and rapidly by exploiting the frequency characteristics of the wavelet sub-band coefficients of the image. However, these methods tend either to "over-stifle" [12, 13] or to "over-reserve" [14] the wavelet coefficients, and the accuracy of the Bayesian threshold method in estimating the variance of sub-band noise remains to be improved.
Based on the above discussions, in this paper, a novel image denoising method is proposed by the improved Y-K model and UDWT. First, a novel convexity-preserving diffusion function is proposed by introducing the Gaussian convolution process to the traditional Y-K model, which ensures the unique minimum solution of the model and decreases the sensitivity of the Laplace operators with respect to speckle noise. Then, the statistical property of noisy image UDWT coefficients is studied; we observe that the noise information is mainly distributed in the high-frequency sub-bands, and based on this, the proposed Con_G&L model is applied in the high-frequency sub-bands of the UDWT to get the denoising method. A large number of experiments are carried out to verify the effectiveness of the proposed method. The results show that the proposed method can effectively remove the noise in the image while retaining the image edge, texture, and other information details.
The rest of the paper is organized as follows: Section 2 provides the statistical analysis of the UDWT coefficients and presents the proposed denoising model, Section 3 describes the datasets and the details of the experiments, and Section 4 presents the conclusions.
Statistical analysis of UDWT coefficients of noisy images
The traditional discrete wavelet transform can decompose an image into multiple scales and directions. However, due to the downsampling and upsampling process, the wavelet transform is not shift-invariant. To solve this, the undecimated discrete wavelet transform (UDWT) was proposed by Shensa [18], which not only retains the properties of the wavelet transform but also achieves shift invariance. UDWT does not use the downsampling operation in the process of signal decomposition; instead, it expands the high-pass and low-pass filters by inserting zeros between their coefficients. The low-frequency and high-frequency signals obtained by UDWT decomposition have the same length as the original signal, which preserves the time-frequency localization and multiresolution-analysis characteristics of the wavelet transform while also ensuring translation invariance. These characteristics lay a foundation for overcoming the drawbacks of traditional denoising methods and effectively suppress the pseudo-Gibbs phenomenon.
Assuming that the scaling function and wavelet \( \left(\phi, \psi, \tilde{\phi},\tilde{\psi}\right) \) are generated by the filter bank \( \left(h,g,\tilde{h},\tilde{g}\right) \), the UDWT can efficiently decompose the input signal c0 into {ω1, ⋯, ωJ, cJ} through the following à trous ("with holes") algorithm [19], where ωj (j ∈ {1, 2, ⋯, J}) denotes the wavelet coefficients at scale j and cJ denotes the approximation coefficients at the coarsest resolution:
$$ \left\{\begin{array}{l}{c}_{j+1}\left[l\right]=\left({\overline{h}}^{(j)}\ast {c}_j\right)\left[l\right]=\sum \limits_kh\left[k\right]{c}_j\left[l+{2}^jk\right]\\ {}{\omega}_{j+1}\left[l\right]=\left({\overline{g}}^{(j)}\ast {c}_j\right)\left[l\right]=\sum \limits_kg\left[k\right]{c}_j\left[l+{2}^jk\right]\end{array}\right.. $$
when l/2j is an integer, h(j)[l] = h[l], or h(j)[l] = 0; for example, h(1) = (⋯, h[−2], 0, h[−1], 0, h[1], 0, h[2], ⋯), cj can be reconstructed by
$$ {c}_j\left[l\right]=\left({\tilde{h}}^{(j)}\ast {c}_{j+1}\right)\left[l\right]+\left({\tilde{g}}^{(j)}\ast {\omega}_{j+1}\right)\left[l\right] $$
Since there is no downsampling process, the filter banks \( \left(h,g,\tilde{h},\tilde{g}\right) \) only need to satisfy the complete reconstruction conditions in (3) and do not need to satisfy the de-aliasing conditions.
$$ {\hat{h}}^{\ast }(v)\hat{\tilde{h}}(v)+{\hat{g}}^{\ast }(v)\hat{\tilde{g}}(v)=1 $$
Furthermore, the above porous algorithm can be extended to two-dimensional image decomposition in the following two-dimensional tensor product form:
$$ \left\{\begin{array}{c}{c}_{j+1}\left[k,l\right]=\left({\overline{h}}^{(j)}{\overline{h}}^{(j)}\ast {c}_j\right)\left[k,l\right],\kern1em {w}_{j+1}^1\left[k,l\right]=\left({\overline{g}}^{(j)}{\overline{h}}^{(j)}\ast {c}_j\right)\left[k,l\right]\\ {}{w}_{j+1}^2\left[k,l\right]=\left({\overline{h}}^{(j)}{\overline{g}}^{(j)}\ast {c}_j\right)\left[k,l\right],\kern1em {w}_{j+1}^3\left[k,l\right]=\left({\overline{g}}^{(j)}{\overline{g}}^{(j)}\ast {c}_j\right)\left[k,l\right]\end{array}\right. $$
where hg ∗ c is the convolution performed with separable filters hg, which means being convolved with h by column and then with g by row. Three detailed scale images w1, w2, and w3 are obtained at each scale, which have the same size as the original image.
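To make the à trous recursion concrete, the following Python/NumPy sketch (not from the original paper) performs one undecimated decomposition level for a 1-D signal by convolving with filters upsampled through zero insertion; the Haar-like filter pair and the periodic boundary handling are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve1d

def upsample(filt, level):
    """Insert 2**level - 1 zeros between filter taps (the 'holes' of the a trous scheme)."""
    step = 2 ** level
    up = np.zeros((len(filt) - 1) * step + 1)
    up[::step] = filt
    return up

def atrous_level(c_j, h, g, level):
    """One undecimated level: approximation c_{j+1} and detail w_{j+1}, same length as c_j."""
    c_next = convolve1d(c_j, upsample(h, level), mode='wrap')
    w_next = convolve1d(c_j, upsample(g, level), mode='wrap')
    return c_next, w_next

# toy example with a Haar-like pair as placeholder filters (the paper uses the db9/7 wavelet)
h = np.array([0.5, 0.5])
g = np.array([0.5, -0.5])
signal = np.sin(np.linspace(0, 4 * np.pi, 64))
c1, w1 = atrous_level(signal, h, g, level=0)
c2, w2 = atrous_level(c1, h, g, level=1)
print(c2.shape, w2.shape)  # both (64,): no downsampling is performed
```

The two-dimensional case follows by applying the same separable filtering along columns and then rows, as in the tensor-product form above.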
To study the distribution of noise coefficients of the noisy images after UDWT transform, we use the maximum likelihood estimation (ML) method shown in (5) to obtain the variance estimation of each noisy observation sub-band [20]:
$$ {\hat{\sigma}}_{\boldsymbol{y}}^2=\frac{1}{n\times n}\sum \limits_i{\boldsymbol{y}}^2(i) $$
where n × n is the size of sub-band and y is the observed image.
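A minimal sketch of this estimator, assuming each sub-band is stored as a 2-D NumPy array:

```python
import numpy as np

def ml_subband_std(subband):
    """Maximum-likelihood standard deviation estimate of a wavelet sub-band."""
    return np.sqrt(np.mean(subband.astype(float) ** 2))
```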
In this paper, Tank, Elaine, and Sea (512 × 512, 8 bpp) are used as test images (see Fig. 1); Gaussian white noise with zero mean and standard deviations of 30 and 50 is added using MATLAB. Three-layer UDWT decomposition is then performed on both the original and the noisy images, and three high-frequency sub-bands and one low-frequency sub-band are obtained. The standard deviation statistics of the low-frequency and high-frequency sub-bands of the two kinds of images are given in Table 1.
Three test images. a Tank. b Elaine. c Sea
Table 1 The standard deviation estimate of each sub-band coefficients
As seen in Table 1, after decomposition by UDWT, the noise is mainly distributed in the coefficients of the high-frequency sub-bands, while the standard deviations of the low-frequency sub-band coefficients of the noisy images and of the noise-free images are very close. That is, the influence of noise on the low-frequency UDWT sub-band is very small. Hence, in the UDWT-based denoising with the improved PDE model, we only need to process the high-frequency sub-bands and keep the low-frequency sub-band unchanged. In this way, we improve the computational efficiency of the diffusion and avoid the tendency of "over-stifling" the approximation sub-band coefficients.
Proposed PDE model of the convexity-preserving diffusion function
Analysis of Y-K model
For a continuous functional in the region Ω,
$$ E(u)={\int}_{\Omega}f\left(|{\nabla}^2u|\right) dxdy $$
where ∇2 is the Laplace operator; when f(⋅) ≥ 0 and f′(⋅) > 0 are satisfied, E(u) attains a minimum value. Finding this minimum is a variational problem, and the Euler-Lagrange equation of (6) can be solved by the gradient descent method.
$$ {\nabla}^2\left(g\left(|{\nabla}^2u|\right){\nabla}^2u\right)=0 $$
where g(x) = f′(x)/x. According to this, You and Kaveh proposed the Y-K model [9] as (8)
$$ \left\{\begin{array}{l}\frac{\partial u\left(x,y,t\right)}{\partial t}=-{\nabla}^2\left(g\left(|{\nabla}^2u|\right){\nabla}^2u\right)\\ {}u\left(x,y,0\right)=u{}_0\left(x,y\right)\end{array}\right. $$
where \( {\nabla}^2u=\frac{\partial^2u}{\partial {x}^2}+\frac{\partial^2u}{\partial {y}^2} \), u0(x, y) is the original image, u(x, y, t) is the smoothed image at time scale t, and \( g(s)=\frac{1}{1+{\left(s/k\right)}^2} \) is selected as the diffusion coefficient, where k is the edge threshold (a constant).
The Y-K model reduces the block effect of low-order PDE models in the image denoising process and achieves a good balance between removing noise and preserving edges. However, in the process of denoising, the Y-K model will produce a "speckle" effect, which leads to secondary contamination of the image. The reasons are as follows:
The energy functional E(u) of (6) has the same convexity as f(⋅); if f(⋅) is convex, then f′(s) ≥ 0 and f″(s) ≥ 0 for all s > 0, and E(u) has a globally unique minimum value. However, in the Y-K model, f(⋅) is determined by the selected diffusion function g(s) through f′(s) = sg(s). We thus obtain
$$ \left\{\begin{array}{c}{f}^{\prime }(s)=\frac{s}{1+{\left(s/k\right)}^2}\\ {}{f}^{{\prime\prime} }(s)=\frac{k^2\left({k}^2-{s}^2\right)}{{\left({k}^2+{s}^2\right)}^2}\end{array}\right. $$
Clearly, f″(s) < 0 when s > k, which means f(⋅) is no longer guaranteed to be convex. Therefore, it is not guaranteed that E(u) has a globally unique minimum solution.
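This loss of convexity can be checked symbolically. The short SymPy sketch below (an illustration, not part of the original derivation) differentiates f′(s) = s g(s) for the Y-K diffusion coefficient and evaluates f″ beyond the threshold k:

```python
import sympy as sp

s, k = sp.symbols('s k', positive=True)
g = 1 / (1 + (s / k) ** 2)        # Y-K diffusion coefficient g(s)
f1 = s * g                        # f'(s) = s * g(s)
f2 = sp.simplify(sp.diff(f1, s))  # should reduce to k**2*(k**2 - s**2)/(k**2 + s**2)**2
print(f2)
print(f2.subs({k: 1, s: 2}))      # -3/25 < 0, so f is not convex for s > k
```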
Construction of the model
To address the nonconvexity of the diffusion function in the Y-K model, a new convexity-preserving diffusion function is constructed. The diffusion function in the Y-K model serves to detect edge information in images, but because the Laplace operator ∇2 is very sensitive to noise, it is difficult to detect edge information effectively in noisy images. For this reason, we combine the Gaussian filter with the Laplace operator, so that Gaussian smoothing is performed before edge detection with the constructed convexity-preserving diffusion function. Based on this, an image diffusion denoising model (named Con_G&L) is proposed, combining the constructed convexity-preserving diffusion function with the Gaussian filter and the Laplace operator. Its specific form is
$$ \left\{\begin{array}{l}\frac{\partial u\left(x,y,t\right)}{\partial t}=-{\nabla}^2\left({g}_{\mathrm{Convex}}\Big(|{\nabla}^2\left({G}_{\sigma}\otimes u\right)|\Big){\nabla}^2u\right)\\ {}u\left(x,y,0\right)=u{}_0\left(x,y\right)\end{array}\right. $$
where \( {G}_{\sigma}\left(\cdot \right)=\frac{1}{\sqrt{2\pi \sigma}}\exp \left(-\frac{{\left|\cdot \right|}^2}{2{\sigma}^2}\right) \) is a Gaussian convolution kernel with parameter σ (σ > 0), ⊗ is the two-dimensional convolution operation, ∇2 is the Laplace operator, u0(x, y) is the original image, u(x, y, t) is the smoothed image at time scale t, and gConvex(⋅) is the proposed diffusion function, whose specific form is
$$ {g}_{\mathrm{Convex}}(s)=\frac{1}{\sqrt{1+{\left(s/k\right)}^{2p}}},p\in \left(0,\kern0.5em 1\right],s>0. $$
Here, k is the threshold parameter that distinguishes edges from smooth areas of the image. If k is too small, noise points are not distinguished well and a staircase effect tends to appear; if k is too large, the image is blurred by excessive denoising. To make the threshold locally adaptive, following [21], k is set to the robust estimate k = 1.4826 × median(| |∇2u| − median(|∇2u|) |).
Furthermore, gConvex(s) is a nonnegative, monotonically decreasing function, which satisfies
$$ {g}_{\mathrm{Convex}}(0)=1,\kern1em \underset{s\to \infty }{\lim }{g}_{\mathrm{Convex}}(s)=0. $$
In addition, p ∈ (0, 1] is a regulatory factor. The larger p is, the faster gConvex(s) decreases: edge information is better preserved, but less of the noise near the edges is removed. Conversely, the smaller p is, the slower gConvex(s) decreases: edge information is preserved less well, but the noise near the edges is removed more strongly. Figure 2 plots gConvex(s) against s for k = 1 and p = 1, 0.9, 0.7, and 0.5.
The diagram of gConvex(s) when k = 1
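A minimal NumPy sketch of the proposed diffusion function and the locally adaptive threshold is given below; the 5-point Laplacian from SciPy stands in for ∇2, and the MAD-style constant 1.4826 follows the rule quoted above.

```python
import numpy as np
from scipy.ndimage import laplace

def g_convex(s, k, p=1.0):
    """Convexity-preserving diffusion function g_Convex(s) = 1 / sqrt(1 + (s/k)**(2p))."""
    return 1.0 / np.sqrt(1.0 + (np.abs(s) / k) ** (2.0 * p))

def adaptive_k(u):
    """Robust edge threshold: 1.4826 * median(| |lap(u)| - median(|lap(u)|) |)."""
    a = np.abs(laplace(u.astype(float)))
    return 1.4826 * np.median(np.abs(a - np.median(a)))
```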
Effectiveness analysis of the Con_G&L model is as follows:
Theorem. Based on the diffusion function gConvex(s), the energy functional E(u) = ∫Ωf(| ∇2u| )dxdy has a globally unique minimum value.
Proof: The energy functional E(u) = ∫Ωf(| ∇2u| )dxdy has the same convexity as f(⋅). From the constructed diffusion function gConvex(s) and the relation f′(s) = s gConvex(s), f(⋅) is given by
$$ f(s)={\int}_0^s t\,{g}_{\mathrm{Convex}}(t)\, dt={\int}_0^s\frac{t}{\sqrt{1+{\left(t/k\right)}^{2p}}} dt,\kern1em p\in \left(0,1\right] $$
Furthermore, we can obtain
$$ f"(s)=\frac{1-p}{{\left(1+{\left(s/k\right)}^{2p}\right)}^{3/2}} $$
Since p ∈ (0, 1], we have f″(s) ≥ 0. Thus, f(s) is a convex function, and E(u) has a globally unique minimum value.
Discretization of the Con_G&L model
In image processing, an image is usually defined as a rectangular field Ω, and it satisfies the following rectangular network in Ω:
$$ \left\{\begin{array}{c}x=i\Delta x,\kern1.62em i=0,1,\cdots, I-1;\\ {}y=j\Delta y,\kern1em j=0,1,\cdots, J-1;\end{array}\right. $$
where I × J is the size of an image, and we usually select Δx = 1, Δy = 1 for an image.
Let u(⋅, ⋅) be the processed image, u0(⋅, ⋅) be the original image, Δt be the time step, h be the space step, and the superscript k denote the iteration index, and write u(i, j) = ui, j, \( {u}_0\left(i,j\right)={u}_{i,j}^0 \). For accuracy, the central difference scheme is adopted in the numerical calculation. The specific scheme is
$$ \left\{\begin{array}{c}{\left({u}_{xx}\right)}_{i,j}^k=\frac{u_{i+1,j}^k-2{u}_{i,j}^k+{u}_{i-1,j}^k}{h^2}\kern8.62em \\ {}{\left({u}_{yy}\right)}_{i,j}^k=\frac{u_{i,j+1}^k-2{u}_{i,j}^k+{u}_{i,j-1}^k}{h^2}\kern8em \\ {}{\nabla}^2{u}_{i,j}^k={\left({u}_{xx}\right)}_{i,j}^k+{\left({u}_{yy}\right)}_{i,j}^k\kern9em \\ {}{\nabla}^2{\left({G}_{\sigma}\otimes u\right)}_{i,j}^k={\left({\left({G}_{\sigma}\otimes u\right)}_{xx}\right)}_{i,j}^k+{\left({\left({G}_{\sigma}\otimes u\right)}_{yy}\right)}_{i,j}^k\kern0.5em \end{array}\right. $$
$$ \left\{\begin{array}{c}c\left({\nabla}^2u\right)={g}_{\mathrm{Convex}}\left(|{\nabla}^2\left({G}_{\sigma}\otimes u\right)|\right){\nabla}^2u\\ {}{c}_{i,j}^k=c\left({\nabla}^2{u}_{i,j}^k\right)\kern8.62em \end{array}\right., $$
$$ {\nabla}^2{c}_{i,j}^k=\frac{c_{i+1,j}^k+{c}_{i-1,j}^k+{c}_{i,j+1}^k+{c}_{i,j-1}^k-4{c}_{i,j}^k}{h^2} $$
Then, the explicit difference scheme of the proposed Con_G&L model is
$$ {u}_{i,j}^{k+1}={u}_{i,j}^k-\Delta t{\nabla}^2{c}_{i,j}^k. $$
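Under this explicit scheme, one diffusion iteration can be sketched as follows (Δt = 0.2 and h = 1, matching the experimental settings later in the paper; SciPy's gaussian_filter and laplace stand in for Gσ ⊗ u and the central-difference ∇2):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def con_gl_step(u, k, p=1.0, sigma=1.0, dt=0.2):
    """One explicit Con_G&L iteration: u <- u - dt * lap( g_Convex(|lap(G_sigma*u)|) * lap(u) )."""
    u = u.astype(float)
    lap_u = laplace(u)                                # 5-point Laplacian, h = 1
    lap_smooth = laplace(gaussian_filter(u, sigma))   # Laplacian of the Gaussian-smoothed image
    g = 1.0 / np.sqrt(1.0 + (np.abs(lap_smooth) / k) ** (2.0 * p))  # g_Convex of Eq. (11)
    c = g * lap_u                                     # flux c(lap u)
    return u - dt * laplace(c)                        # explicit update
```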
UDWT image denoising algorithm based on the Con_G&L model
From the analysis and discussion in Section 2.1, it can be seen that noise information is mainly contained in high-frequency sub-bands after UDWT decomposition of the noisy images. Based on that, this paper decomposes the noisy image into three layers via UDWT and then denoises the three high-frequency sub-bands using the proposed Con_G&L model, while the low-frequency sub-band information remains unchanged. Finally, the final denoised image is obtained by reconstructing the low-frequency sub-band and the denoised high-frequency sub-band. The flow chart of the algorithm is shown in Fig. 3.
Algorithm flow chart
The specific implementation process of the algorithm is as follows:
Step 1. Deal with the noisy images via UDWT.
Step 2. Denoise the high-frequency sub-band components by using the Con_G&L model after UDWT decomposition.
Step 2.1 calculates ∇2(Gσ ⊗ u) of the high-frequency sub-band and then calculates gConvex(| ∇2(Gσ ⊗ u)| ) according to formula (11).
Step 2.2. Diffuse the Con_G&L model according to the discrete form (16)–(19).
Step 2.3. If \( \mid {u}_{i,j}^{k+1}-{u}_{i,j}^k\mid <0.01 \), proceed to Step 3; otherwise, return to Step 2.1.
Step 3. Apply the inverse UDWT transform for the high-frequency sub-band components after diffusion, which combines the low-frequency sub-band components; then, we obtain the denoised images.
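As a rough end-to-end sketch of these steps (not the authors' MATLAB code), the stationary wavelet transform from PyWavelets can stand in for the three-level UDWT, with the diffusion applied only to the detail sub-bands until the stopping criterion |u^{k+1} − u^k| < 0.01 is met; the 'bior4.4' wavelet is used here as an approximation of the db9/7 filter pair.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter, laplace

def _con_gl(band, p=1.0, sigma=1.0, dt=0.2, tol=1e-2, max_iter=200):
    """Diffuse a single high-frequency sub-band with the Con_G&L model (adaptive k each pass)."""
    u = band.astype(float)
    for _ in range(max_iter):
        lap_u = laplace(u)
        a = np.abs(lap_u)
        k = 1.4826 * np.median(np.abs(a - np.median(a))) + 1e-12    # adaptive threshold
        lap_s = np.abs(laplace(gaussian_filter(u, sigma)))
        c = lap_u / np.sqrt(1.0 + (lap_s / k) ** (2.0 * p))         # g_Convex * lap(u)
        u_next = u - dt * laplace(c)
        if np.max(np.abs(u_next - u)) < tol:
            return u_next
        u = u_next
    return u

def denoise_udwt_con_gl(noisy, level=3, wavelet='bior4.4'):
    """Steps 1-3: UDWT, diffuse detail sub-bands, inverse UDWT (approximation left unchanged)."""
    coeffs = pywt.swt2(noisy.astype(float), wavelet, level=level)   # list of (cA, (cH, cV, cD))
    out = [(cA, tuple(_con_gl(b) for b in details)) for cA, details in coeffs]
    return pywt.iswt2(out, wavelet)
```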
Results and discussions
To verify the effectiveness of the proposed algorithm, a large number of simulation experiments have been carried out in this paper. The experiment was done with MATLAB (R2018a). The test images are "Elaine," "Tank," "Sea," "Plane," and "Panzer" with size 512 × 512. In our experiments, the test images are corrupted by simulated additive noise with a standard deviation equal to 20, 30, 40, 50, and 60, respectively. Besides, we compare our model with the UDWT threshold, LLT, Y-K, and the denoising method proposed in [22]. The images are decomposed by three-layer UDWT (filter is db9/7 wavelet), the space step h is 1, the time step Δt is 0.2, and P is 1. Also, PSNR is used as the objective evaluation index of denoising effect:
$$ \mathrm{PSNR}=10\times \lg \frac{255^2\times m\times n}{\sum \limits_{i=1}^m\sum \limits_{j=1}^n{\left({u}^{\ast}\left(i,j\right)-{u}^0\left(i,j\right)\right)}^2} $$
where u∗(⋅, ⋅) is the denoised image, u0(⋅, ⋅) is the noiseless original image, m is the length of the image, and n is the width.
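A direct NumPy transcription of the PSNR definition above, assuming 8-bit images:

```python
import numpy as np

def psnr(denoised, original):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((denoised.astype(float) - original.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```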
Figure 4 shows the denoising results of five test images after adding Gaussian white noise with a variance of 30. Figure 5 is the denoising results of a local region of "Sea" when enlarging two times.
The comparison of the denoising results for the test images
The comparison of the denoising results of a local region when enlarging two times
Table 2 shows the PNSR statistical results for the five denoised images of the algorithm in this paper and UDWT threshold method, LLT model, Y-K model, and the denoising method proposed in [22]. All the images are corrupted with Gaussian white noise with variances of 20, 30, 40, 50, and 60 before denoising.
Table 2 The PSNR of denoising for images of different methods
It can be observed from Figs. 4 and 5 that, compared with the UDWT threshold method, the LLT model, the Y-K model, and the denoising method proposed in [22], the proposed method better maintains image texture, edges, and details while removing noise. Table 2 shows that our method achieves superior evaluation indexes in most situations: the proposed algorithm has higher PSNR than the other four denoising methods, especially for high-variance noise.
As a typical representative of the fourth-order PDE model, the Y-K model can effectively suppress the "blocky" effect produced by the second-order PDE model in the process of denoising and achieves a good balance between image denoising and edge preservation. However, the nonconvexity of the diffusion function in the model makes it impossible for the energy function to have a globally unique minimum solution, which results in the "speckle" effect in the process of denoising. In addition, the Laplace operator used in the model is hypersensitive to noise, making it difficult to detect edges in noisy images. Thus, the model loses the transformation and detail information of the image in the process of denoising. In this paper, we construct an image denoising model, Con_G&L, based on a convexity-preserving diffusion function and Gaussian convolution. Then, the globally unique minimum solution of the energy functional is guaranteed by the convexity-preserving diffusion function, and the secondary pollution in the denoising process is avoided. At the same time, the ability to recognize details, such as edges in images, is improved by smoothing the Gaussian convolution. Then, an image denoising method based on UDWT and the Con_G&L model is proposed. In this method, the Con_G&L model is applied to deal with the high-frequency sub-band of the UDWT in the noise image, which effectively suppresses the false edges and staircase effect. In short, this method removes the image noise effectively and maintains the image texture and other details at the same time.
The authors declare that the data and materials are available.
PDE:
Partial differential equation
UDWT:
Undecimated discrete wavelet transform
C.G. Yan, H.T. Xie, J.J. Chen, A fast uyghur text detector for complex background images. IEEE Trans. Multimed. 20(12), 3389–3398 (2018)
C. Yan, L. Li, C. Zhang, B. Liu, Cross-modality bridging and knowledge transferring for image understanding. IEEE Trans. Multimed.. https://doi.org/10.1109/TMM.2019.2903448
C. Yan, Y.B. Tu, X.Z. Wang, STAT: Spatial-temporal attention mechanism for video captioning. IEEE Trans. Multimed.. https://doi.org/10.1109/TMM.2019.2924576
A. Pandey, K.K. Singh, An overview of image denoising and image denoising techniques. Adv. Res. Electr. Electron. Eng. 2, 6–8 (2015)
G.A. Kumar, K. Kusagur, Evaluation of Image Denoising Techniques a Performance Perspective, International Conference on Signal Processing, Communication, Power and Embedded System (IEEE, Paralakhemundi, 2017), pp. 1836–1839
J.F. Sun, H.Y. Liu, Q. Cai, A survey of image denoising based on wavelet transform. Boletín Técnico 55, 256–262 (2017)
P. Perona, J. Malik, Scale space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 12(7), 629–639 (1990)
L. Alvarez, P.L. Lions, J.M. Morel, Image selective smoothing and edge detection by no-linear diffusion. SIAM J. Numer. Anal. 29(1), 182–193 (1992)
Y.L. You, M. Kaveh, Fourth-order partial differential equation for noise removal. IEEE Trans. Image Process. 9(10), 1723–1730 (2000)
M. Lysaker, A. Lundervold, C. Taix, Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time. IEEE Trans. Image Process. 12(12), 1579–1590 (2003)
T.W. Wang, J.Y. Wen, S.Y. Zhang, B.H. He, H.W. Luo, LLT denoising model based on fixed-point proximity algorithm. J. Jilin Univ.(Science Edition) 52(4), 794–796 (2014)
D.L. Donoho, I.M. Johnstone, G. Kerkyacharian, D. Picark, Wavelet shrinkage: asymptopia. J. R. Stat. Soc. Ser. 57, 301–369 (1995)
B. Vidakovic, Statistical modeling by wavelets. Wiley Series in Probability and Statistics: Applied Probability and Statistics. A Wiley-Interscience Publication. (Wiley, New York, 1999), ISBN: 0-471-29365-2
T.S. Qu, Y.S. Dai, S.X. Wang, Adaptive wavelet thresholding denoising method based on SURE estimation. Acta Electron. Sin. 30(2), 266–268 (2002)
F. Abramovich, T. Sapatinas, B.W. Silverman, Wavelet thresholding via a Bayesian approach [tech. rep., Bristol BS8 1TW] (1996)
H.X. Wang, A complex wavelet based spatially adaptive method for noised image enhancement. J. Comput. Aided Des. Comput. Graph 17(9), 1911–1916 (2005)
B. Fu, X.H. Wang, Image denoise algorithm based on inter correlation of wavelet coefficients at finer scale. Comput. Sci. 35(10), 246–249 (2008)
M.J. Shensa, Discrete wavelet transform: Wedding the a Trous and Mallat algorithms. IEEE Trans. Signal Process. 40(10), 2464–2482 (1992)
M.J. Shensa, The discrete wavelet transform: Wedding the à trous and Mallat algorithms. IEEE Trans. Signal Process 40(10), 2464–2482 (1992)
J.X. Li, P.L. Shui, Wavelet domain LMMSE-like denoising algorithm based on GGD ML estimation. J. Electron. Inf. Technol. 29(12), 2854–2857 (2007)
P.J. Rousseeuw, A.M. Leroy, Regression and Outlier Detection (Wiley, NewYork, 1987), pp. 39–46
S. Mukherjee, J. Farrand, W. Yao, A study of total-variation based noise reduction algorithms for low-dose cone-beam computed tomography. Int. J. Image Process. 10(4), 188–204 (2016)
The authors would like to thank the editor and anonymous reviewers for their helpful comments and valuable suggestions.
This research has been funded by the National Natural Science Foundation of China (Grant Nos. 41671439, 41971388) and Innovation Team Support Program of Liaoning Higher Education Department (Grant No. LT2017013).
School of Geography, Liaoning Normal University, Dalian, 116029, China
Xianghai Wang & Ruoxi Song
School of Computer and Information Technology, Liaoning Normal University, Dalian, 116029, China
Xianghai Wang
School of Mathematics, Liaoning Normal University, Dalian, 116029, China
Wenya Zhang & Rui Li
Wenya Zhang
Rui Li
Ruoxi Song
XW conceived of the study and drafted the manuscript. WZ performed the statistical analysis and drafted the manuscript. RL carried out the design of the algorithm. RS carried out the comparative analysis on the research progress and the existing method. All authors read and approved the final manuscript.
Correspondence to Xianghai Wang or Ruoxi Song.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Wang, X., Zhang, W., Li, R. et al. The UDWT image denoising method based on the PDE model of a convexity-preserving diffusion function. J Image Video Proc. 2019, 81 (2019). https://doi.org/10.1186/s13640-019-0480-1
Undecimated discrete wavelet transform (UDWT)
Fourth-order partial differential equations
Diffuse function
Convexity-preserving
Image denoising
|
CommonCrawl
|
Fire Ecology
Original research | Open | Published: 10 April 2019
Examining post-fire vegetation recovery with Landsat time series analysis in three western North American forest types
Benjamin C. Bright ORCID: orcid.org/0000-0002-8363-08031,
Andrew T. Hudak1,
Robert E. Kennedy2,
Justin D. Braaten2 &
Azad Henareh Khalyani3
Fire Ecology, volume 15, Article number: 8 (2019)
Few studies have examined post-fire vegetation recovery in temperate forest ecosystems with Landsat time series analysis. We analyzed time series of Normalized Burn Ratio (NBR) derived from LandTrendr spectral-temporal segmentation fitting to examine post-fire NBR recovery for several wildfires that occurred in three different coniferous forest types in western North America during the years 2000 to 2007. We summarized NBR recovery trends, and investigated the influence of burn severity, post-fire climate, and topography on post-fire vegetation recovery via random forest (RF) analysis.
NBR recovery across forest types averaged 30 to 44% five years post fire, 47 to 72% ten years post fire, and 54 to 77% 13 years post fire, and varied by time since fire, severity, and forest type. Recovery rates were generally greatest for several years following fire. Recovery in terms of percent NBR was often greater for higher-severity patches. Recovery rates varied between forest types, with conifer−oak−chaparral showing the greatest NBR recovery rates, mixed conifer showing intermediate rates, and ponderosa pine showing slowest rates. Between 1 and 28% of patches had recovered to pre-fire NBR levels 9 to 16 years after fire, with greater percentages of low-severity patches showing full NBR recovery.
Precipitation decreased and temperatures generally remained the same or increased post fire. Pre-fire NBR and burn severity were important predictors of NBR recovery for all forest types, and explained 2 to 6% of the variation in post-fire NBR recovery. Post-fire climate anomalies were also important predictors of NBR recovery and explained an additional 30 to 41% of the variation in post-fire NBR recovery.
Landsat time series analysis was a useful means of describing and analyzing post-fire vegetation recovery across mixed-severity wildfire extents. We demonstrated that a relationship exists between post-fire vegetation recovery and climate in temperate ecosystems of western North America. Our methods could be applied to other burned landscapes for which spatially explicit measurements of post-fire vegetation recovery are needed.
%RMSE::
Percent root mean square error
%VE::
percent variance explained
CBI::
Composite Burn Index
CURV::
McNab's curvature
DIST::
Distance to unburn
dNBR::
differenced Normalized Burn Ratio
ELEV::
Elevation
ETM+::
Landsat Enhanced Thematic Mapper Plus
EVI::
Enhanced Vegetation Index
GSP::
Post-fire anomaly of growing season precipitation
LandTrendr:
Landsat-based detection of Trends in Disturbance and Recovery
MAT::
Post-fire anomaly of mean annual temperature
MIR::
Model Improvement Ratio
MMAX::
Post-fire anomaly of mean maximum temperature in warmest month
MMIN::
Post-fire anomaly of mean minimum temperature in coldest month
MTBS::
Monitoring Trends in Burn Severity
NBR::
Normalized Burn Ratio
NDSWIR::
Normalized Difference Shortwave Infrared Index
NDVI::
Normalized Difference Vegetation Index
OLI::
Landsat Operational Land Imager
RF::
Random Forest
SLOPE::
Slope
SR::
Simple Ratio
SRTM::
Shuttle Radar Topographic Mission
TM::
Landsat Thematic Mapper
TRASP::
Transformed aspect
VRI::
Vegetation Recovery Index
WINP::
Post-fire anomaly of winter precipitation
Wildfires have burned millions of hectares in western North America in recent decades (Littell et al. 2009, Yang et al. 2015, White et al. 2017). Increased wildfire activity is expected to continue under warmer and drier conditions (Westerling et al. 2006; Abatzoglou 2016), making ecosystem resilience and post-fire vegetation recovery of concern to researchers and land managers (Allen and Breshears 2015). Burn severity, the degree to which fire has affected vegetation and soil (Keeley 2009), can have a large influence on post-fire vegetation recovery (Chappell 1996; Turner and Romme 1999; Crotteau and Varner III 2013; Meng et al. 2015; Liu 2016; Yang et al. 2017; Meng et al. 2018). Other factors that can be important to post-fire vegetation recovery include vegetation type (Yang et al. 2017; Díaz-Delgado et al. 2002; Epting 2005), climate (Chappell 1996; Meng et al. 2015; Liu 2016), topography (Díaz-Delgado et al. 2002; Wittenberg et al. 2007; Sever and Leach 2012; Meng et al. 2015; Liu 2016), and distance to unburned patches and seed sources (Donato et al. 2009; Harvey and Donato 2016; Kemp and Higuera 2016). Long-term measurements of post-fire vegetation recovery for differing forest types and burn severities can provide useful information to researchers and land managers who seek to identify areas that could benefit from post-fire management.
Burn severity has traditionally been estimated on the ground based on post-fire tree crown status. Recently, the Composite Burn Index (CBI) has been widely used to estimate ground burn severity for relation to satellite measurements (Key 2006). Good correlation has been found between ground estimates of burn severity and the differenced Normalized Burn Ratio (dNBR; van Wagtendonk et al. 2004, Key 2006, Hudak et al. 2007, Keeley 2009). NBR is defined as:
$$ NBR=\frac{NIR- SWIR}{NIR+ SWIR} $$
where NIR and SWIR are near and shortwave infrared Landsat bands, respectively, which are sensitive to healthy vegetation and burned surfaces (White et al. 1996). NIR and SWIR bands correspond to Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM+) bands 4 and 7, respectively, and Landsat Operational Land Imager (OLI) bands 5 and 7, respectively. The dNBR is defined as the difference of pre-fire NBR and post-fire NBR:
$$ dNBR=\left(\left({NBR}_{prefire}-{NBR}_{postfire}\right)\times 1000\right) $$
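A minimal sketch of both indices, assuming the NIR and SWIR surface-reflectance bands are already loaded as NumPy arrays (band selection and any reflectance scaling are left to the reader):

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from near-infrared and shortwave-infrared reflectance."""
    nir = nir.astype(float)
    swir = swir.astype(float)
    return (nir - swir) / (nir + swir)

def dnbr(nbr_prefire, nbr_postfire):
    """Differenced NBR, scaled by 1000 as in the definition above."""
    return (nbr_prefire - nbr_postfire) * 1000.0
```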
The Monitoring Trends in Burn Severity (MTBS) program aims to map burn severity using dNBR for fires >404 ha (>1000 acres) in size (western United States) from 1984 to present (Eidenshink et al. 2007). Maps of dNBR classified into low-, moderate-, and high-severity classes are commonly used to assess and describe burn severity and associated ecological impacts. Although classification into low-, moderate-, and high-severity classes under the MTBS program can be somewhat subjective (Eidenshink et al. 2007), generally, low-severity burn corresponds to damage to and consumption of ground herbaceous vegetation and some understory shrubs; moderate-severity burn indicates total damage to and consumption of understory vegetation with some canopy tree mortality; and high-severity burn indicates high or complete canopy tree mortality (Keeley 2009).
Collectively, the Landsat TM, ETM+, and OLI satellites have acquired 30 m resolution imagery of Earth at least every 16 days since 1984. Methodologies such as the Landsat-based detection of Trends in Disturbance and Recovery (LandTrendr) algorithms that perform time series analysis of this rich image archive have become popular for vegetation trend analyses (Huang et al. 2010, Kennedy and Yang 2010; Verbesselt et al. 2010; Banskota et al. 2014). Numerous studies have described post-fire vegetation recovery with multispectral time series analysis in Mediterranean ecosystems (Viedma et al. 1997; Díaz-Delgado 2001; Díaz-Delgado et al. 2002; Riaño et al. 2002; Díaz-Delgado and Lloret 2003; Malak 2006; Hope and Tague 2007; Wittenberg et al. 2007; Röder et al. 2008; Minchella et al. 2009; Gouveia and DaCamara 2010; Solans Vila 2010; Vicente-Serrano and Pérez-Cabello 2011; Veraverbeke et al. 2012; Fernandez-Manso and Quintano 2016; Lanorte et al. 2014; Meng et al. 2014; Petropoulos et al. 2014; Yang et al. 2017) and boreal ecosystems (Hicke et al. 2003; Epting 2005; Goetz and Fiske 2006; Cuevas-González et al. 2009; Jin et al. 2012; Frazier and Coops 2015; Bartels et al. 2016; Liu 2016; Pickell et al. 2016; White et al. 2017; Yang et al. 2017; Frazier et al. 2018); other forest types have been less studied (Idris and Kuraji 2005; Lhermitte et al. 2011; Sever and Leach 2012; Chen et al. 2014; Chompuchan 2017; Yang et al. 2017; Hislop et al. 2018), with only a few studies conducted in ponderosa pine and mixed conifer forests of western North America (White et al. 1996; van Leeuwen 2008; van Leeuwen et al. 2010; Chen et al. 2011; Meng et al. 2015). Among these studies, the Normalized Difference Vegetation Index (NDVI) has most frequently been applied to indicate vegetation greenness. However, recent studies have found NBR to be less prone to saturation than NDVI when characterizing post-fire vegetation recovery (Chen et al. 2011; Pickell et al. 2016; White et al. 2017; Hislop et al. 2018), possibly because NBR is more sensitive than NDVI to vegetation structure and soil background reflectance, which is inversely proportional to green vegetation cover (or non-photosynthetic vegetation [NPV] cover; Key 2006, Pickell et al. 2016). Therefore, we chose to calculate NBR to capture not just the initial impact of the fire, but also to monitor vegetation or greenness recovery. To our knowledge, only a few studies have investigated the relationship between post-fire satellite-derived vegetation recovery and climate (Meng et al. 2014; Meng et al. 2015; Liu 2016) and few or no previous studies have demonstrated the use of cloud-based computation with satellite data for investigating post-fire vegetation recovery.
Here we analyzed post-fire vegetation recovery of 12 wildfires that occurred across the western United States during the years 2000 to 2007. Vegetation greenness, our metric of vegetation recovery, was inferred from LandTrendr-derived trajectories of NBR, which were generated in Google Earth Engine (Gorelick et al. 2017; Kennedy et al. 2018). Nonparametric random forest (RF) modeling was used to describe relationships between vegetation greenness and burn severity, climate, and topography. We sought to answer several fundamental questions: 1) How do rates of NBR recovery vary over time? 2) How quickly do fire patches appear to return to pre-fire spectral condition? 3) How do pre-fire condition, burn severity, and climate affect recovery?
We focused on 12 named wildfire events in western North America that burned during the years 2000 to 2007 (Fig. 1, Table 1). Wildfires occurred in three different forest types: ponderosa pine, mixed conifer, and conifer−oak−chaparral. Mixed conifer forests consisted of grand fir (Abies grandis [Douglas ex D. Don] Lindl.), subalpine fir (Abies lasiocarpa [Hook.] Nutt.), western larch (Larix occidentalis Nutt.), Engelmann spruce (Picea engelmannii Parry ex Engelm.), lodgepole pine (Pinus contorta Douglas ex Loudon), ponderosa pine (Pinus ponderosa Lawson & C. Lawson), Douglas-fir (Pseudotsuga menziesii [Mirb.] Franco), and quaking aspen (Populus tremuloides Michx.). The Old and Grand Prix fires (Table 1) occurred in forests dominated by California black oak (Quercus kelloggii Newberry), canyon live oak (Quercus chrysolepis Liebm.), and Coulter pine (Pinus coulteri D. Don). Mean annual temperature and precipitation varied between 2.8 and 14.9 °C and 328 and 662 mm, respectively, across fire extents (1981 to 2010 climate normals; Table 1).
Locations and names of the 12 wildfires in the western United States that were analyzed. Wildfires burned during the years 2000 to 2007
Table 1 Study area wildfires and characteristics
Study area stratification
Our study areas were part of a larger project (JFSP-14-1-02-27) that selected these wildfires for investigation and included field sampling of trees, understory vegetation, and fuels. To ensure representative sampling, pixels within each wildfire were stratified by burn severity, elevation, and transformed aspect (TRASP, Roberts 1989; Fig. 2). TRASP is defined as:
$$ TRASP=\frac{1-\left(\cos \left( aspect-30\right)\right)}{2} $$
Landscape stratification approach exemplified by the Hayman Fire, Colorado, USA, that burned in 2002. Elevation and transformed aspect grids were classified as low or high using medians. The MTBS burn severity class grid was then intersected with elevation and TRASP strata grids to create a stratification grid. For this analysis, we only considered areas burned at low, moderate, and high severity
where aspect is in degrees. TRASP ranges from 0 to 1, with values of 0 corresponding to cooler, wetter north-northeastern aspects, and values of 1 corresponding to hotter, dryer south-southwestern aspects. To create the strata grid for each fire, a digital elevation model (DEM) grid and a TRASP grid were classified as low or high using the median, and the classified DEM, TRASP, and MTBS burn severity class grids were overlaid (Fig. 2). We grouped adjoining pixels of identical strata into patches, and used these patches as analysis units; no spatial patterning methods were used in patch creation. We chose to use patches rather than pixels as analysis units to reduce the millions of pixels covering our study areas to a more manageable size for analysis.
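The transformed aspect and the median-split stratification can be sketched as follows (a simplified illustration; the severity grid is assumed to be an integer raster of burn-severity classes, and no-data handling is omitted):

```python
import numpy as np

def trasp(aspect_deg):
    """Transformed aspect: 0 on cool north-northeast slopes, 1 on hot south-southwest slopes."""
    return (1.0 - np.cos(np.radians(aspect_deg - 30.0))) / 2.0

def stratify(dem, aspect_deg, severity):
    """Intersect median-split elevation and TRASP with the burn severity class grid."""
    elev_hi = (dem > np.median(dem)).astype(int)
    t = trasp(aspect_deg)
    trasp_hi = (t > np.median(t)).astype(int)
    return severity * 4 + elev_hi * 2 + trasp_hi  # unique code per severity/elevation/TRASP combination
```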
Landsat time series data
The NBR data used in this study were derived from the LandTrendr spectral-temporal segmentation algorithm. LandTrendr identifies breakpoints (referred to as vertices) in an image pixel time series between periods of relatively consistent spectral trajectory. From these breakpoints, a new time series is constructed, for which each annual observation is interpolated to fit on a line segment between vertices (Fig. 3). The result is an idealized, trajectory-based time series free from noise, for which each observation is placed in the context of a spectral-temporal trend. We chose this fitted data format over unaltered surface reflectance to reduce the influence of low-level time series variability resulting from variation in climate, atmosphere, phenology, sun angle, and other ephemeral effects on the calculation of post-fire percent NBR recovery, and to place the recovery in terms of a point along a trajectory.
Conceptual example of LandTrendr fitting normalized burn ratio (NBR) values to spectral-temporal segments for a pixel burned at high severity in the Cooney Ridge Fire in 2003. Original NBR values for a pixel time series are displayed as solid gray circles, and the corresponding fitted result from LandTrendr are open black circles. LandTrendr adjusts the original time series to fit to line segments between breakpoints in spectral trends. It eliminates noise and places each value in the context of spectral trajectories that combine to reveal the dominant, underlying spectral history of a pixel. Percent NBR recovery in 2016 was defined as distance 2 divided by distance 1 multiplied by 100
Fitted NBR data were produced using the Google Earth Engine (Gorelick et al. 2017) implementation of LandTrendr (Kennedy et al. 2018). For each region, we assembled a collection of US Geological Survey surface reflectance images (Masek et al. 2006; Vermote et al. 2016) from 1984 to 2016, for dates 1 June through 30 September. The collection included images from TM, ETM+, and OLI sensors. Each image in the collection was masked to exclude clouds and cloud shadows using the CFMASK algorithm (Zhu and Wang 2015), which is provided with the surface reflectance product. Additionally, OLI image bands 2, 3, 4, 5, 6, and 7 were transformed to the spectral properties of ETM+ bands 1, 2, 3, 4, 5, and 7, respectively, using slopes and intercepts from reduced major axis regressions reported in Table 2 of Roy et al. (Roy et al. 2016).
Table 2 Climate and topographic variable names and descriptions
Transforming OLI data to match ETM+ data permitted inter-sensor compositing to reduce multiple observations per year to a single annual spectral value, which is a requirement of the LandTrendr algorithm. To calculate composites, we used a medoid approach: for a given image pixel, the medoid is the value for a given band that is numerically closest to the median of all corresponding pixels among images considered. We selected the medoid compositing method to reduce variability that can be frequently introduced when using either maximum or minimum, and also to retain actual pixel values, as opposed to a summary statistic when using mean or median.
Medoid compositing was performed for each year in the collection and included images from any sensor contributing to the annual set of summer-season observations for the year being processed. The result was a single multi-band image, per year, free of clouds and cloud shadows, and represented median summer-season surface reflectance. From these annual medoid composites, NBR was calculated and provided as the time series input to the LandTrendr algorithm, whose parameters were set according to Table 2 of Kennedy et al. (Kennedy et al. 2012). The result from LandTrendr was an annual time series of NBR fitted to vertices of spectral-temporal segmentation.
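The medoid rule can be illustrated with a small NumPy analogue of the Earth Engine compositing step (a sketch only; the real workflow operates on masked Landsat image collections):

```python
import numpy as np

def medoid_composite(stack):
    """Medoid composite of co-registered, cloud-masked images.

    stack: array of shape (n_images, n_bands, rows, cols). For each pixel, the observation
    whose band vector is closest (squared Euclidean distance) to the per-band median is kept.
    """
    med = np.median(stack, axis=0)                  # per-band median, (n_bands, rows, cols)
    dist = np.sum((stack - med) ** 2, axis=1)       # distance to the median, (n_images, rows, cols)
    best = np.argmin(dist, axis=0)                  # index of the medoid observation per pixel
    r, c = np.indices(best.shape)
    return stack[best, :, r, c].transpose(2, 0, 1)  # (n_bands, rows, cols)
```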
We calculated zonal means of fitted NBR for the years 1984 to 2016 for each patch to be used for analysis. Percent NBR recovery, defined as the magnitude of NBR recovery divided by the magnitude of fire-induced decrease in NBR multiplied by 100 (Fig. 3), was calculated for each patch and year post fire. We tested whether percent NBR recovery varied significantly by burn severity at four, eight, and twelve years post fire for all three forest types with nonparametric Mann-Whitney tests. Percent NBR recovery nine years post fire was used as a response variable in RF models.
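A sketch of the recovery metric applied to a single fitted trajectory is given below; it assumes the pre-fire reference is the fitted value the year before the fire and the post-fire baseline is the fitted value the year after the fire (the bottom of the disturbance segment), which is one reasonable reading of the definition illustrated in Fig. 3.

```python
import numpy as np

def percent_nbr_recovery(years, fitted_nbr, fire_year):
    """Percent NBR recovery for each year after the fire from a LandTrendr-fitted NBR series."""
    years = np.asarray(years)
    fitted_nbr = np.asarray(fitted_nbr, dtype=float)
    pre = fitted_nbr[years == fire_year - 1][0]   # pre-fire NBR
    post = fitted_nbr[years == fire_year + 1][0]  # post-fire NBR (bottom of the fire segment)
    drop = pre - post                             # distance 1 in Fig. 3
    after = years > fire_year
    return years[after], (fitted_nbr[after] - post) / drop * 100.0  # distance 2 / distance 1 * 100
```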
Climate and topographic data
We applied a spline model of climate developed for the western United States (e.g., Rehfeldt 2006; Rehfeldt et al. 2015) to produce the annual climatic predictor variables. The original model was applied to 30-year averages at ~1 km resolution. These spatial and temporal scales could not capture finer scale variations in microclimate. To produce fine resolution annual climatic surfaces at 30 m resolution, we applied the model to annual climatic data for the years 1981 to 2010 using the digital elevation model from the Shuttle Radar Topographic Mission (SRTM). We used the ANUSPLIN program for thin plate spline interpolation of climatic variables (Hutchinson 2000). We then derived additional climatic indices and interactive variables from the original surface variables created by the model.
Climate variable indices were converted to post-fire climate anomaly grids because we were interested in how post-fire climate affected vegetation recovery, and so that across-fire climate comparisons could be made (Arnold et al. 2014; Meng et al. 2015; Liu 2016). Anomalies were calculated using the Z-statistic:
$$ Z=\frac{\upmu_{\mathrm{post}}-{\upmu}_{\mathrm{norm}}}{\upsigma} $$
where μpost is the post-fire mean (one year post fire to 2010), μnorm is the climate normal (1984 to 2010) mean, and σ is the climate normal standard deviation; the result was one mean anomaly grid for each climate variable. Post-fire means ended at 2010 because that was the last year of available climate data.
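Computed per pixel, the anomaly is a one-line operation; the sketch below assumes the three input grids are aligned NumPy arrays:

```python
import numpy as np

def climate_anomaly(post_fire_mean, normal_mean, normal_sd):
    """Post-fire climate anomaly Z = (mu_post - mu_norm) / sigma, per pixel."""
    return (np.asarray(post_fire_mean) - np.asarray(normal_mean)) / np.asarray(normal_sd)
```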
Topographic variable grids were derived from DEMs of each fire extent (Table 2). McNab's curvature was calculated using the spatialEco package in R (McNab 1989; R Core Team 2017; Evans 2017), and is a measure of slope shape, whether convex or concave. Curvature can affect soil moisture, erosion, and deposition, and thus vegetation growth. Distance to unburned was calculated as the shortest distance to the fire perimeter or an unburned patch. We calculated zonal means of climate anomalies and topographic variables for each patch to be used for RF analysis. Raster processing was performed in R using the raster package (Hijmans 2016; R Core Team 2017).
Random forest analysis
We explored the relationship between climate, topography, and post-fire NBR recovery by relating percent NBR recovery nine years post fire to post-fire climate anomaly and topographic variables (Table 2) via random forest (RF) modeling, implemented in R (Breiman 2001; Liaw 2002; R Core Team 2017). We chose to use a nonparametric modeling method because most variable distributions were non-normal and caused visibly non-random trends in residuals of initial linear models. RF modeling does not require variables to be normally distributed, can handle tens of thousands of cases, and provides variable importance scores.
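A minimal sketch of one such model with the randomForest package is shown below; the patch table and its variable names are hypothetical stand-ins for the patch-level zonal means described above, and the random values serve only to make the example runnable.

```r
library(randomForest)

# Hypothetical patch table: variable names are stand-ins for the patch-level
# zonal means described above, and the random values only make the sketch run.
set.seed(42)
patches <- data.frame(
  pct_recovery_yr9 = runif(300, 0, 120),        # response: % NBR recovery
  prefire_nbr      = runif(300, 0.1, 0.7),      # pre-fire vegetation cover
  dnbr             = runif(300, 100, 900),      # burn severity
  gsp_anom         = rnorm(300), mat_anom = rnorm(300), mmin_anom = rnorm(300),
  elevation        = runif(300, 500, 2500),
  mcnab_curv       = rnorm(300),
  dist_unburned    = runif(300, 0, 2000)
)

# Full model: NBR variables + post-fire climate anomalies + topography
rf_full <- randomForest(pct_recovery_yr9 ~ ., data = patches,
                        ntree = 500, importance = TRUE)
rf_full   # prints MSE and % variance explained (cf. Table 5)
```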
We created RF models for each forest type using three different sets of explanatory variables to investigate how climate and topographic variables contributed to explaining variance in NBR recovery. Initial RF models included only pre-fire NBR (indicator of pre-fire vegetation cover) and dNBR (burn severity) as explanatory variables. Post-fire climate explanatory variables were then added and additional RF models were created, and finally topography variables were added as explanatory variables. We were computationally unable to create an RF model with all 263 449 mixed conifer patches; therefore, we created 10 models using 10% samples of the mixed conifer patches, and averaged model results. RF models were evaluated with percent root mean square error (%RMSE) and percent variance explained (%VE) in percent NBR recovery, calculated as:
$$ \% RMSE=\frac{\sqrt{MSE}}{\overline{NBR_{\mathrm{rec}}}}\times 100 $$
$$ \% VE=\left(1-\frac{MSE}{\operatorname{var}\left({NBR}_{\mathrm{rec}}\right)}\right)\times 100 $$
where MSE is the mean square error (the sum of squared residuals divided by n) and NBRrec is percent NBR recovery. For RF models that included all explanatory variables, we calculated the model improvement ratio (MIR), a standardized measure of variable importance ranging between 0 and 1, for each variable (Murphy and Evans 2010). MIR scores were preferable to raw importance scores because they are comparable between RF models. A MIR score of 1 indicated the most important variable and 0 the least important.
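Continuing from the model sketched above, MIR can be computed by standardizing each variable's permutation importance by the maximum importance in the model, assuming MIR is the raw importance divided by the maximum raw importance (Murphy and Evans 2010).

```r
library(randomForest)

# Model improvement ratio (MIR): each variable's permutation importance
# standardized by the maximum importance in the model, so scores range 0-1.
# 'rf_full' is the model fitted in the sketch above.
raw_imp <- importance(rf_full, type = 1)   # permutation importance (%IncMSE)
mir     <- raw_imp / max(raw_imp)
sort(mir[, 1], decreasing = TRUE)          # cf. Table 6
```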
Patch statistics
We examined a total of 435 367 patches of variable size (Table 3). Patch size averaged 0.67 ha and ranged from 0.09 to 683.73 ha. Minimum and median patch sizes were one and two pixels, respectively. The Cascade and East Zone fires were the largest fires analyzed; the Black Mountain 2, Cooney Ridge, and School fires were the smallest fires analyzed.
Table 3 Patch number and size statistics for each fire event
Vegetation recovery
NBR recovery rates varied non-linearly by time since fire (Fig. 4, Additional file 1). The rate of NBR recovery was greatest several years following fire, after which it decreased. Patches that burned at high severity generally showed the greatest recovery, especially for the mixed conifer forest type. For ponderosa pine, patches that burned at moderate and high severity began to show less recovery than patches that burned at low severity, beginning 13 years post fire. Percent NBR recovery differed significantly by burn severity at four, eight, and twelve years post fire (considered representative of the recovery trends) for all three forest types (Mann-Whitney tests, P < 0.001); large sample sizes (Table 3) gave statistical tests considerable power so that even small differences were significant (Fig. 4).
Time series of percent recovery of mean percent Normalized Burn Ratio (NBR) for 12 wildfires in western North America that burned during the years 2000 to 2007. Bars show ±1 standard deviation. NBR recovery rates varied by severity, time since fire, and forest type
The rate of NBR recovery was smallest in the ponderosa pine forest type, intermediate in the mixed conifer forest type, and greatest in the conifer−oak−chaparral forest type (Fig. 4, Table 4). Recovery patterns for each fire event reflected this pattern, although individual fires showed some unique patterns as well (Additional file 1).
Table 4 Average percent NBR recovery of patches by forest type and time since fire
Most patches had not completely recovered to pre-fire NBR levels 9 to 16 years after fire (Fig. 5). Among fires, 6 to 28, 4 to 26, and 1 to 22% of low-, moderate-, and high-severity patches, respectively, had recovered to pre-fire NBR levels. Greater percentages of patches that burned 13 to 16 years previously had recovered than patches that burned 9 to 11 years previously, with the exception of the Jasper and Hayman fires, which showed percent recovery values similar to the more recent fires.
Percent of patches recovered to pre-fire Normalized Burn Ratio (NBR) in 2016, by burn severity. The number of years since each fire is shown above column groups. Greater percentages of low- and moderate-severity patches had recovered to pre-fire levels 9 to 16 years after fire. BMCR = Black Mountain 2 and Cooney Ridge fires, CEZ = Cascade and East Zone fires, WCR = Wedge Canyon and Robert fires, OGP = Old and Grand Prix fires
Post-fire climate anomalies
Precipitation decreased post fire and temperatures generally remained the same or increased post fire, relative to climate normals, although exceptions existed (Fig. 6). Growing season precipitation decreased more than winter precipitation in most fire extents; post-fire precipitation decreases were greatest for the Cascade and East Zone fires. Mean annual temperatures decreased in the Cascade, East Zone, Wedge Canyon, and Robert fire extents; increased slightly in the School, Egley, Black Mountain 2, Cooney Ridge, and Jasper fire extents; and increased greatly in the Old, Grand Prix, and Hayman fire extents. Mean maximum temperatures in the warmest month were higher than long-term means in all fire extents except Cascade and East Zone, for which they stayed the same. Mean minimum temperatures in the coldest month decreased in the Egley, Cascade, East Zone, and Jasper fire extents; and increased in the School, Old, Grand Prix, Wedge Canyon, Robert, Black Mountain 2, Cooney Ridge, and Hayman fire extents.
Distributions of post-fire climate anomalies relative to climate normals (1984 to 2010). GSP = growing season precipitation, WINP = winter precipitation, MAT = mean annual temperature, MMAX = mean maximum temperature in warmest month, MMIN = mean minimum temperature in coldest month. Precipitation, especially growing season precipitation, decreased post fire for most areas. Temperature remained the same or increased for most areas. BMCR = Black Mountain 2 and Cooney Ridge fires, CEZ = Cascade and East Zone fires, WCR = Wedge Canyon and Robert fires, OGP = Old and Grand Prix fires. WA = Washington, CA = California, SD = South Dakota, OR = Oregon, MT = Montana, CO = Colorado, ID = Idaho
RF models that included only pre-fire NBR and dNBR as explanatory variables explained 2 to 6% of the variation in NBR recovery nine years post fire (Table 5). Pre-fire NBR was important in predicting recovery across forest types (Table 6). Burn severity was less important in predicting recovery nine years post fire, especially for the ponderosa pine forest type. Post-fire climate explained an additional 30 to 41% of the variation in post-fire NBR recovery, and climate variables were important or very important predictors of recovery (Tables 5 and 6). Topographic variables were the least important predictors, and only explained an additional 2 to 6% of the variation in post-fire NBR recovery.
Table 5 Percent variance explained and percent root mean square error (%RMSE) of random forest (RF) models predicting percent recovery of the Normalized Burn Ratio (NBR) nine years post fire from NBR variables alone; NBR and post-fire climate variables; and NBR, post-fire climate, and topographic variables
Table 6 Model improvement ratio (MIR) scores of explanatory variables in random forest (RF) models predicting percent recovery of the Normalized Burn Ratio (NBR) nine years post fire. Variable importance scores range between 0 and 1, with 1 indicating most important and 0 indicating least important. dNBR = differenced Normalized Burn Ratio. See Table 2 for definitions of variables
We described post-fire vegetation recovery using NBR time series, and related post-fire climate and topographic variables to NBR recovery for 12 fires that occurred in temperate ecosystems of western North America. We documented average NBR recovery levels of 54 to 77% 13 years post fire and found that, in addition to pre-fire NBR and burn severity, post-fire climate was an important determinant of the degree to which vegetation greenness had recovered post fire.
We found that most patches were still recovering to pre-fire NBR levels 9 to 16 years post fire. Longer time series of areas that burned earlier in the Landsat TM satellite record (beginning in 1984) would document more complete recovery to pre-fire NBR levels. Field observations of percent vegetation cover taken across our study areas 9 to 15 years post fire confirmed that growth of herbaceous and shrub species was responsible for the observed increase in NBR. Others who have also described vegetation recovery with satellite indices have generally reported faster or similar recovery rates for various vegetation types (Table 7). Like Epting and Verbyla (2005), Jin et al. (2012), and Meng et al. (2018), we found that areas burned at higher severities recovered at faster rates than areas burned at lower severities, even after normalization by burn severity (Fig. 3), possibly because these are fire-adapted forest types in which fire creates favorable conditions for vegetation germination and regeneration (Zasada et al. 1983; Agee 1993; Meng et al. 2018).
Table 7 Average recovery times reported in previous studies that used satellite indices to describe post-fire vegetation recovery. NDVI = Normalized Difference Vegetation Index, NDSWIR = Normalized Difference Shortwave Infrared Index, EVI = Enhanced Vegetation Index, VRI = Vegetation Recovery Index, BRR = Burn Recovery Ratio, NPP = Net Primary Productivity (derived from NDVI and Simple Ratio [SR])
Post-fire climate explained substantial variation in post-fire vegetation recovery. Although this finding is not surprising, as temperate forests of western North America are limited by summer precipitation and cold temperatures (Churkina 1998; Nemani et al. 2003), to our knowledge, few studies have documented the importance of climate to post-fire vegetation recovery. Meng et al. (2015) found that post-fire wet season precipitation and January minimum temperature helped explain variation in NDVI five years post fire in mixed conifer and red fir forests in the Sierra Nevada Mountains of California, USA. They suggested that January minimum temperature might be a proxy for drought effects or indicate solar radiation and temperature limitations on vegetation growth in their forest types. Liu (2016) reported that post-fire summer precipitation was important to vegetation recovery five years post fire in boreal larch (Larix gmelinii [Rupr.] Rupr.) forests in China. Additional studies that use modeling approaches different from ours, like those of Meng et al. (2015) and Liu (2016), could describe relationships between burn severity, post-fire climate, topography, and post-fire vegetation recovery more specifically for the temperate coniferous forests that we studied. Although we found topographic variables to be unimportant predictors of NBR recovery, variations in fine-scale climate data were caused by local variability in elevation, so climate and elevation were convolved, and topography was possibly more influential than our modeling results suggested.
Post-fire NBR recovery was fastest and greatest for the conifer−oak−chaparral forest type (Fig. 4); this is possibly because vegetation was less limited by cold temperatures relative to other forest types (Table 1). Mixed conifer forests showed a greater post-fire recovery rate than ponderosa pine forests (Fig. 4), possibly because of richer species diversity and because mixed conifer forests tend to receive more precipitation (Table 1); the relatively rapid recovery of the Wedge Canyon and Robert fires, which averaged the most precipitation of all our study areas, also supports this idea (Additional file 1: Figure S1). Likewise, the slower recovery of the Cascade and East Zone fires relative to the other mixed conifer areas might have been due to greater relative decreases in post-fire precipitation (Additional file 1, Fig. 6).
We used recovery of NBR, a satellite index, as an indicator of vegetation recovery. Although satellite observations contain valuable information about vegetation conditions, they are simply measurements of reflected light and are therefore limited in their interpretability. NBR is an indication of the ratio of vegetation to soil cover, but tells us little about vegetation type and structure. Relating ground and satellite observations can increase interpretability of satellite observations (Hudak et al. 2007), as well as provide a means for applying spatially limited ground observations across landscapes. We chose to limit this analysis to NBR observations because we wished to describe landscape-wide vegetation recovery in general. Future studies that predict ground-measured vegetation characteristics from time series of multispectral imagery could describe post-fire recovery trajectories of more specific vegetation characteristics.
Landsat time series analysis can provide landscape-wide information on post-fire vegetation recovery. Our analysis revealed that complete post-fire recovery of NBR in the temperate forest ecosystems of western North America takes longer than 9 to 16 years for most areas. We found burn severity, pre-fire NBR, and post-fire climate to be important to vegetation recovery for the fires that we studied. Methods similar to ours could be applied to other burned areas for which landscape-wide information on post-disturbance vegetation recovery is needed, and could be used to inform management decisions; for instance, individual patches showing little or no recovery could be identified for post-fire management. Our finding that post-fire climate influences vegetation recovery suggests that climate change will affect post-fire vegetation recovery in western North America.
Abatzoglou, J.T., and A.P. Williams. 2016. Impact of anthropogenic climate change on wildfire across western US forests. PNAS 113: 11770–11775. https://doi.org/10.1073/pnas.1607171113.
Agee, J.K. 1993. Fire ecology of Pacific Northwest forests. Washington, D.C: Island Press.
Allen, C.D., D.D. Breshears, and N.G. McDowell. 2015. On underestimation of global vulnerability to tree mortality and forest die-off from hotter drought in the Anthropocene. Ecosphere 6: 1–55. https://doi.org/10.1890/ES15-00203.1.
Arnold, J.D., S.C. Brewer, and P.E. Dennison. 2014. Modeling climate-fire connections within the Great Basin and Upper Colorado River Basin, western United States. Fire Ecology 10: 64–75. https://doi.org/10.4996/fireecology.1002064.
Banskota, A., N. Kayastha, M.J. Falkowski, M.A. Wulder, R.E. Froese, and J.C. White. 2014. Forest monitoring using Landsat time series data: a review. Canadian Journal of Remote Sensing 40: 3623–3684. https://doi.org/10.1080/07038992.2014.987376.
Bartels, S.F., H.Y.H. Chen, M.A. Wulder, and J.C. White. 2016. Trends in post-disturbance recovery rates of Canada's forests following wildfire and harvest. Forest Ecology and Management 361: 194–207. https://doi.org/10.1016/j.foreco.2015.11.015.
Breiman, L. 2001. Random forests. Machine Learning 45: 5–32. https://doi.org/10.1023/A:1010933404324.
Chappell, C.B., and J.K. Agee. 1996. Fire severity and tree seedling establishment in Abies magnifica forests, southern Cascades, Oregon. Ecological Applications 6: 628–640. https://doi.org/10.2307/2269397.
Chen, W., K. Moriya, T. Sakai, L. Koyama, and C. Cao. 2014. Monitoring of post-fire forest recovery under different restoration modes based on time series Landsat data. European Journal of Remote Sensing 47: 153–168. https://doi.org/10.5721/EuJRS20144710.
Chen, X., J.E. Vogelmann, M. Rollins, D. Ohlen, C.H. Key, L. Yang, C. Huang, and H. Shi. 2011. Detecting post-fire burn severity and vegetation recovery using multitemporal remote sensing spectral indices and field-collected composite burn index data in a ponderosa pine forest. International Journal of Remote Sensing 32: 7905–7927. https://doi.org/10.1080/01431161.2010.524678.
Chompuchan, C., and C.Y. Lin. 2017. Assessment of forest recovery at Wu-Ling fire scars in Taiwan using multi-temporal Landsat imagery. Ecological Indicators 79: 196–206. https://doi.org/10.1016/j.ecolind.2017.04.038.
Churkina, G., and S.W. Running. 1998. Contrasting climatic controls on the estimated productivity of global terrestrial biomes. Ecosystems 1: 206–215. https://doi.org/10.1007/s100219900016.
Crotteau, J.S., J.M. Varner III, and M.W. Ritchie. 2013. Post-fire regeneration across a fire severity gradient in the southern Cascades. Forest Ecology and Management 287: 103–112. https://doi.org/10.1016/j.foreco.2012.09.022.
Cuevas-González, M., F. Gerard, H. Balzter, and D. Riaño. 2009. Analysing forest recovery after wildfire disturbance in boreal Siberia using remotely sensed vegetation indices. Global Change Biology 15: 561–577. https://doi.org/10.1111/j.1365-2486.2008.01784.x.
Díaz-Delgado, R., and X. Pons. 2001. Spatial patterns of forest fires in Catalonia (NE of Spain) along the period 1975-1995: analysis of vegetation recovery after fire. Forest Ecology and Management 147: 67–74. https://doi.org/10.1016/S0378-1127(00)00434-5.
Díaz-Delgado, R., F. Lloret, and X. Pons. 2003. Influence of fire severity on plant regeneration by means of remote sensing imagery. International Journal of Remote Sensing 24: 1751–1763. https://doi.org/10.1080/01431160210144732.
Díaz-Delgado, R., F. Lloret, X. Pons, and J. Terradas. 2002. Satellite evidence of decreasing resilience in Mediterranean plant communities after recurrent wildfires. Ecology 83: 2293–2303.
Donato, D.C., J.B. Fontaine, J.L. Campbell, W.D. Robinson, J.B. Kauffman, and B.E. Law. 2009. Conifer regeneration in stand-replacement portions of a large mixed-severity wildfire in the Klamath-Siskiyou Mountains. Canadian Journal of Forest Research 39: 823–838. https://doi.org/10.1139/X09-016.
Eidenshink, J., B. Schwind, K. Brewer, Z. Zhu, B. Quayle, and S. Howard. 2007. A project for monitoring trends in burn severity. Fire Ecology 3: 3–21. https://doi.org/10.4996/fireecology.0301003.
Epting, J., and J. Verbyla. 2005. Landscape-level interactions of prefire vegetation, burn severity, and postfire vegetation over a 16-year period in interior Alaska. Canadian Journal of Forest Research 35: 1367–1377. https://doi.org/10.1139/X05-060.
Evans, J.S. 2017. spatialEco. R package version 0.0.1-7. https://CRAN.R-project.org/package=spatialEco. Accessed January 2018
Fernandez-Manso, A., C. Quintano, and D.A. Roberts. 2016. Burn severity influence on post-fire vegetation cover resilience from Landsat MESMA fraction images time series in Mediterranean forest ecosystems. Remote Sensing of Environment 184: 112–123. https://doi.org/10.1016/j.rse.2016.06.015.
Frazier, R.J., N.C. Coops, M.A. Wulder, T. Hermosilla, and J.C. White. 2018. Analyzing spatial and temporal variability in short-term rates of post-fire vegetation return from Landsat time series. Remote Sensing of Environment 205: 32–45. https://doi.org/10.1016/j.rse.2017.11.007.
Frazier, R.J., N.C. Coops, and M.A. Wulder. 2015. Boreal Shield forest disturbance and recovery trends using Landsat time series. Remote Sensing of Environment 170: 317–327. https://doi.org/10.1016/j.rse.2015.09.015.
Goetz, S.J., G.J. Fiske, and A.G. Bunn. 2006. Using satellite time-series data sets to analyze fire disturbance and forest recovery across Canada. Remote Sensing of Environment 101: 352–365. https://doi.org/10.1016/j.rse.2006.01.011.
Gorelick, N., M. Hancher, M. Dixon, S. Ilyushchenko, D. Thau, and R. Moore. 2017. Google Earth Engine: planetary-scale geospatial analysis for everyone. Remote Sensing of Environment 202: 18–27. https://doi.org/10.1016/j.rse.2017.06.031.
Gouveia, C., C.C. DaCamara, and R.M. Trigo. 2010. Post-fire vegetation recovery in Portugal based on spot/vegetation data. Natural Hazards and Earth System Sciences 10: 673–684. https://doi.org/10.5194/nhess-10-673-2010.
Harvey, B.J., D.C. Donato, and M.G. Turner. 2016. High and dry: post-fire tree seedling establishment in subalpine forests decreases with post-fire drought and large stand-replacing burn patches. Global Ecology and Biogeography 25: 655–669. https://doi.org/10.1111/geb.12443.
Hicke, J.A., G.P. Asner, E.S. Kasischke, N.H.F. French, J.T. Randerson, G.J. Collatz, B.J. Stocks, C.J. Tucker, S.O. Los, and C.B. Field. 2003. Postfire response of North American boreal forest net primary productivity analyzed with satellite observations. Global Change Biology 9: 1145–1157. https://doi.org/10.1046/j.1365-2486.2003.00658.x.
Hijmans, R.J. 2016. raster: geographic data analysis and modeling. R package version 2.5-8. https://CRAN.R-project.org/package=raster. Accessed January 2018.
Hislop, S., S. Jones, M. Soto-Berelov, A. Skidmore, A. Haywood, and T.H. Nguyen. 2018. Using Landsat spectral indices in time-series to assess wildfire disturbance and recovery. Remote Sensing 10: 460. https://doi.org/10.3390/rs10030460.
Hope, A., C. Tague, and R. Clark. 2007. Characterizing post-fire vegetation recovery of California chaparral using TM/ETM+ time-series data. International Journal of Remote Sensing 28: 1339–1354. https://doi.org/10.1080/01431160600908924.
Huang, C., S.N. Goward, J.G. Masek, N. Thomas, Z. Zhu, and J.E. Vogelmann. 2010. An automated approach for reconstructing recent forest disturbance history using dense Landsat time series stacks. Remote Sensing of Environment 114: 183–198. https://doi.org/10.1016/j.rse.2009.08.017.
Hudak, A.T., P. Morgan, M.J. Bobbitt, A.M.S. Smith, S.A. Lewis, L.B. Lentile, P.R. Robichaud, J.T. Clark, and R.A. McKinley. 2007. The relationship of multispectral satellite imagery to immediate fire effects. Fire Ecology 3: 64–90. https://doi.org/10.4996/fireecology.0301064.
Hutchinson, M.F. 2000. ANUSPLIN user guide version 4.1. Centre for Resource and Environmental Studies. Canberra: Australian National University.
Idris, M.H., K. Kuraji, and M. Suzuki. 2005. Evaluating vegetation recovery following large-scale forest fires in Borneo and northeastern China using multi-temporal NOAA/AVHRR images. Journal of Forest Research 10: 101–111. https://doi.org/10.1007/s10310-004-0106-y.
Jin, Y., J.T. Randerson, S.J. Goetz, P.S.A. Beck, M.M. Loranty, and M.L. Goulden. 2012. The influence of burn severity on postfire vegetation recovery and albedo change during early succession in North American boreal forests. Journal of Geophysical Research 117: G01036. https://doi.org/10.1029/2011JG001886.
Keeley, J.E. 2009. Fire intensity, fire severity and burn severity: a brief review and suggested usage. International Journal of Wildland Fire 18: 116–126. https://doi.org/10.1071/WF07049.
Kemp, K.B., P.E. Higuera, and P. Morgan. 2016. Fire legacies impact conifer regeneration across environmental gradients in the US northern Rockies. Landscape Ecology 31: 619. https://doi.org/10.1007/s10980-015-0268-3.
Kennedy, R.E., Z. Yang, W.B. Cohen, E. Pfaff, J. Braaten, and P. Nelson. 2012. Spatial and temporal patterns of forest disturbance and regrowth within the area of the Northwest Forest Plan. Remote Sensing of Environment 122: 117–133. https://doi.org/10.1016/j.rse.2011.09.024.
Kennedy, R.E., Z. Yang, N. Gorelick, J. Braaten, L. Cavalcante, W.B. Cohen, and S. Healey. 2018. Implementation of the LandTrendr algorithm on Google Earth Engine. Remote Sensing 10: 691. https://doi.org/10.3390/rs10050691.
Kennedy, R.E., Z.G. Yang, and W.B. Cohen. 2010. Detecting trends in forest disturbance and recovery using yearly Landsat time series: 1. LandTrendr- temporal segmentation algorithms. Remote Sensing of Environment 114: 2897–2910. https://doi.org/10.1016/j.rse.2010.07.008.
Key, C.H., and N.C. Benson. 2006. Landscape assessment: ground measure of severity, the Composite Burn Index; and remote sensing of severity, the Normalized Burn Ratio. Pages LA1-LA51. In FIREMON: fire effects monitoring and inventory system, ed. D.C. Lutes, R.E. Keane, J.F. Caratti, C.H. Key, N.C. Benson, S. Sutherland, and L.J. Gangi. Fort Collins: USDA Forest Service General Technical Report RMRS-GTR-164-CD, Rocky Mountain Research Station.
Lanorte, A., R. Lasaponara, M. Lovallo, and L. Telesca. 2014. Fisher-Shannon information plane analysis of SPOT/VEGETATION Normalized Difference Vegetation Index (NDVI) time series to characterize vegetation recovery after fire disturbance. International Journal of Applied Earth Observation and Geoinformation 26: 441–446. https://doi.org/10.1016/j.jag.2013.05.008.
Lhermitte, S., J. Verbesselt, W.W. Verstraeten, S. Veraverbeke, and P. Coppin. 2011. Assessing intra-annual vegetation regrowth after fire using the pixel based regeneration index. ISPRS Journal of Photogrammetry and Remote Sensing 66: 17–27. https://doi.org/10.1016/j.isprsjprs.2010.08.004.
Liaw, A., and M. Wiener. 2002. Classification and regression by randomForest. R News 2: 18–22 https://cran.r-project.org/doc/Rnews/Rnews_2002-3.pdf. Accessed Jan 2018.
Littell, J.S., D. McKenzie, D.L. Peterson, and A.L. Westerling. 2009. Climate and wildfire area burned in western US ecoprovinces, 1916-2003. Ecological Applications 19: 1003–1021. https://doi.org/10.1890/07-1183.1.
Liu, Z. 2016. Effects of climate and fire on short-term vegetation recovery in the boreal larch forests of northeastern China. Scientific Reports 6: 37572. https://doi.org/10.1038/srep37572.
Malak, D.A., and J.G. Pausas. 2006. Fire regime and post-fire Normalized Difference Vegetation Index changes in the eastern Iberian peninsula (Mediterranean Basin). International Journal of Wildland Fire 15: 407–413. https://doi.org/10.1071/WF05052.
Masek, J.G., E.F. Vermote, N.E. Saleous, R. Wolfe, F.G. Hall, K.F. Huemmrich, F. Gao, J. Kutler, and T.-K. Lim. 2006. A Landsat surface reflectance dataset for North America, 1990-2000. IEEE Geoscience and Remote Sensing Letters 3: 68–72. https://doi.org/10.1109/LGRS.2005.857030.
McNab, H.W. 1989. Terrain shape index: quantifying effect of minor landforms on tree height. Forest Science 35: 91–104.
Meng, R., P.E. Dennison, C.M. D'Antonio, and M.A. Moritz. 2014. Remote Sensing Analysis of vegetation recovery following short-interval fires in southern California shrublands. PLoS ONE 9: e110637. https://doi.org/10.1371/journal.pone.0110637.
Meng, R., P.E. Dennison, C. Huang, M.A. Moritz, and C. D'Antonio. 2015. Effects of fire severity and post-fire climate on short-term vegetation recovery of mixed-conifer and red fir forests in the Sierra Nevada mountains of California. Remote Sensing of Environment 171: 311–325. https://doi.org/10.1016/j.rse.2015.10.024.
Meng, R., J. Wu, F. Zhao, B.D. Cook, R.P. Hanavan, and S.P. Serbin. 2018. Measuring short-term post-fire forest recovery across a burn severity gradient in a mixed pine-oak forest using multi-sensor remote sensing techniques. Remote Sensing of Environment 210: 282–296. https://doi.org/10.1016/j.rse.2018.03.019.
Minchella, A., F. Del Frate, F. Capogna, S. Anselmi, and F. Manes. 2009. Use of multitemporal SAR data for monitoring vegetation recovery of Mediterranean burned areas. Remote Sensing of Environment 113: 588–597. https://doi.org/10.1016/j.rse.2008.11.004.
Murphy, M.A., J.S. Evans, and A.S. Storfer. 2010. Quantifying Bufo boreas connectivity in Yellowstone National Park with landscape genetics. Ecology 91: 252–261. https://doi.org/10.1890/08-0879.1.
Nemani, R.R., C.D. Keeling, H. Hashimoto, W.M. Jolly, S.C. Piper, C.J. Tucker, R.B. Myneni, and S.W. Running. 2003. Climate-driven increases in global terrestrial net primary production from 1982 to 1999. Science 300: 1560–1563. https://doi.org/10.1126/science.1082750.
Petropoulos, G.P., H.M. Griffiths, and D.P. Kalivas. 2014. Quantifying spatial and temporal vegetation recovery dynamics following a wildfire event in a Mediterranean landscape using EO data and GIS. Applied Geography 50: 120–131. https://doi.org/10.1016/j.apgeog.2014.02.006.
Pickell, P.D., T. Hermosilla, R.J. Frazier, N.C. Coops, and M.A. Wulder. 2016. Forest recovery trends derived from Landsat time series for North American boreal forests. International Journal of Remote Sensing 37: 138–149. https://doi.org/10.1080/2150704X.2015.1126375.
R Core Team. 2017. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing.
Rehfeldt, G.E. 2006. A spline model of climate for the western United States, USDA Forest Service General technical Report RMRS-GTR-165. Fort Collins: Rocky Mountain Research Station.
Rehfeldt, G.E., J.J. Worrall, S.B. Marchetti, and N.L. Crookston. 2015. Adapting forest management to climate change using bioclimate models with topographic drivers. Forestry 88: 528–539. https://doi.org/10.1093/forestry/cpv019.
Riaño, D., E. Chuvieco, S. Ustin, R. Zomer, P. Dennison, D. Roberts, and J. Salas. 2002. Assessment of vegetation regeneration after fire through multitemporal analysis of AVIRIS images in the Santa Monica Mountains. Remote Sensing of Environment 79: 60–71. https://doi.org/10.1016/S0034-4257(01)00239-5.
Roberts, D.W., and S.V. Cooper. 1989. Concepts and techniques of vegetation mapping. Pages 90–96. In Proceedings of a symposium—land classifications based on vegetation: applications for resource management, compilers D.E. Ferguson, P. Morgan, and F.D. Johnson. USDA Forest Service General Technical Report INT-257. Ogden: Intermountain Research Station.
Röder, A., J. Hill, B. Duguy, J.A. Alloza, and R. Vallejo. 2008. Using long time series of Landsat data to monitor fire events and post-fire dynamics and identify driving factors. A case study in the Ayora region (eastern Spain). Remote Sensing of Environment 112: 259–273. https://doi.org/10.1016/j.rse.2007.05.001.
Roy, D.P., V. Kovalskyy, H.K. Zhang, E.F. Vermote, L. Yan, S.S. Kumar, and A. Egorov. 2016. Characterization of Landsat-7 to Landsat-8 reflective wavelength and normalized difference vegetation index continuity. Remote Sensing of Environment 185: 57–70. https://doi.org/10.1016/j.rse.2015.12.024.
Sever, L., J. Leach, and L. Bren. 2012. Remote sensing of post-fire vegetation recovery; a study using Landsat 5 TM imagery and NDVI in north-east Victoria. Journal of Spatial Science 57: 175–191. https://doi.org/10.1080/14498596.2012.733618.
Solans Vila, J.P., and P. Barbosa. 2010. Post-fire vegetation regrowth detection in the Deiva Marina region (Liguria-Italy) using Landsat TM and ETM+ data. Ecological Modelling 221: 75–84. https://doi.org/10.1016/j.ecolmodel.2009.03.011.
Turner, M.G., W.H. Romme, and R.H. Gardner. 1999. Pre-fire heterogeneity, fire severity, and early post-fire plant reestablishment in subalpine forests of Yellowstone National Park, Wyoming. International Journal of Wildland Fire 9: 21–36. https://doi.org/10.1071/WF99003.
van Leeuwen, W.J.D. 2008. Monitoring the effects of forest restoration treatments on post-fire vegetation recovery with MODIS multitemporal data. Sensors 8: 2017–2042. https://doi.org/10.3390/s8032017.
van Leeuwen, W.J.D., G.M. Casady, D.G. Neary, S. Bautista, J.A. Alloza, Y. Carmel, L. Wittenberg, D. Malkinson, and B.J. Orr. 2010. Monitoring post-wildfire vegetation response with remotely sensed time-series data in Spain, USA and Israel. International Journal of Wildland Fire 19: 75–93. https://doi.org/10.1071/WF08078.
van Wagtendonk, J.W., R.R. Root, and C.H. Key. 2004. Comparison of AVIRIS and Landsat ETM+ detection capabilities for burn severity. Remote Sensing of Environment 92: 397–408. https://doi.org/10.1016/j.rse.2003.12.015.
Veraverbeke, S., I. Gitas, T. Katagis, A. Polychronaki, B. Somers, and R. Goossens. 2012. Assessing post-fire vegetation recovery using red-near infrared vegetation indices: accounting for background and vegetation variability. ISPRS Journal of Photogrammetry and Remote Sensing 68: 28–39. https://doi.org/10.1016/j.isprsjprs.2011.12.007.
Verbesselt, J., R. Hyndman, G. Newnham, and D. Culvenor. 2010. Detecting trend and seasonal changes in satellite image time series. Remote Sensing of Environment 114: 106–115. https://doi.org/10.1016/j.rse.2009.08.014.
Vermote, E., C. Justice, M. Claverie, and B. Franch. 2016. Preliminary analysis of the performance of the Landsat 8/OLI land surface reflectance product. Remote Sensing of Environment 185: 46–56. https://doi.org/10.1016/j.rse.2016.04.008.
Vicente-Serrano, S.M., F. Pérez-Cabello, and T. Lasanta. 2011. Pinus halepensis regeneration after a wildfire in a semiarid environment: assessment using multitemporal Landsat images. International Journal of Wildland Fire 20: 195–208. https://doi.org/10.1071/WF08203.
Viedma, O., J. Meliá, D. Segarra, and J. García-Haro. 1997. Modeling rates of ecosystem recovery after fires by using Landsat TM data. Remote Sensing of Environment 61: 383–398. https://doi.org/10.1016/S0034-4257(97)00048-5.
Westerling, A.L., H.G. Hidalgo, D.R. Cayan, and T.W. Swetnam. 2006. Warming and earlier spring increase western US forest wildfire activity. Science 313: 940–943. https://doi.org/10.1126/science.1128834.
White, J.C., M.A. Wulder, T. Hermosilla, N.C. Coops, and G.W. Hobart. 2017. A nationwide annual characterization of 25 years of forest disturbance and recovery for Canada using Landsat time series. Remote Sensing of Environment 194: 303–321. https://doi.org/10.1016/j.rse.2017.03.035.
White, J.D., K.C. Ryan, C.C. Key, and S.W. Running. 1996. Remote sensing of forest fire severity and vegetation recovery. International Journal of Wildland Fire 6: 125–136. https://doi.org/10.1071/WF9960125.
Wittenberg, L., D. Malkinson, O. Beeri, A. Halutzy, and N. Tesler. 2007. Spatial and temporal patterns of vegetation recovery following sequences of forest fires in a Mediterranean landscape, Mt. Carmel Israel. Catena 71: 76–83. https://doi.org/10.1016/j.catena.2006.10.007.
Yang, J., S. Pan, S. Dangal, B. Zhang, S. Wang, and H. Tian. 2017. Continental-scale quantification of post-fire vegetation greenness recovery in temperate and boreal North America. Remote Sensing of Environment 199: 277–290. https://doi.org/10.1016/j.rse.2017.07.022.
Yang, J., H. Tian, B. Tao, W. Ren, S. Pan, Y. Liu, and Y. Wang. 2015. A growing importance of large fires in conterminous United States during 1984-2012. Journal of Geophysical Research 120: 2625–2640. https://doi.org/10.1002/2015JG002965.
Zasada, J.C., R.A. Norum, R.M. Van Veldhuizen, and C.E. Teutsch. 1983. Artificial regeneration of trees and tall shrubs in experimentally burned upland black spruce/feather moss stands in Alaska. Canadian Journal of Forest Research 13: 903–913. https://doi.org/10.1139/x83-120.
Zhu, Z., S. Wang, and C.E. Woodcock. 2015. Improvement and expansion of the Fmask algorithm: cloud, cloud shadow, and snow detection for Landsats 4–7, 8, and Sentinel 2 images. Remote Sensing of Environment 159: 269–277. https://doi.org/10.1016/j.rse.2014.12.014.
The authors thank three anonymous reviewers for their time and helpful reviews.
Research was funded by the Joint Fire Science Program (JFSP-14-1-02-27).
The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.
USDA Forest Service, Rocky Mountain Research Station, 1221 S. Main Street, Moscow, Idaho, 83843, USA
Benjamin C. Bright & Andrew T. Hudak
Oregon State University, College of Earth, Ocean, and Atmospheric Sciences, 104 CEOAS Administration Building, 101 SW 26th Street, Corvallis, Oregon, 97331, USA
Robert E. Kennedy & Justin D. Braaten
Natural Resource Ecology Laboratory, Colorado State University, NESB Building, 1499 Campus Delivery, A215, Fort Collins, Colorado, 80523, USA
Azad Henareh Khalyani
BB and AH conceived and designed the analysis; BB performed the analysis; RK and JB provided Landsat time series data. AHK provided climate data. BB, AH, RK, JB, and AHK wrote the manuscript. All authors read and approved the final manuscript.
Correspondence to Benjamin C. Bright.
Time series of percent recovery of mean percent Normalized Burn Ratio (NBR) by burn severity for each fire. Bars show ±1 standard deviation. NBR recovery rates varied by fire, severity, time since fire, and forest type. Forest type, year of fire, mean average temperature (°C), and mean annual precipitation (mm) are given in the upper left-hand corner of each panel. (TIF 457 kb)
burn severity
landsat
Aspergillus oryzae lipase-catalyzed synthesis of glucose laurate with excellent productivity
Xiao-Sheng Lin1,
Kai-Hua Zhao1,
Qiu-Ling Zhou2,
Kai-Qi Xie2,
Peter J. Halling3 and
Zhen Yang1
Bioresources and Bioprocessing 2016 3:2
© Lin et al. 2016
As nonionic surfactants derived from naturally renewable resources, sugar fatty acid esters (SFAEs) have been widely utilized in food, cosmetic, and pharmaceutical industries.
In this study, six enzymes were screened as catalysts for the synthesis of glucose laurate. Aspergillus oryzae lipase (AOL) and Aspergillus niger lipase (ANL) yielded conversions comparable to those obtained with commercial enzymes such as Novozym 435 and Lipozyme TLIM. The productivity obtained by AOL catalysis in anhydrous 2-methyl-2-butanol (2M2B) (38.7 mmol/L/h and 461.0 μmol/h/g) was much higher than other values reported in the literature. Factors affecting the synthetic reaction were investigated, including water content, enzyme amount, substrate concentrations, and reaction temperature. The process was greatly improved by applying the Box-Behnken design of response surface methodology (RSM). The solubility of glucose was determined in 14 different organic solvents and found to be closely associated with the polarity of the solvent.
Aspergillus oryzae lipase is a promising enzyme capable of efficiently catalyzing the synthesis of sugar fatty acid esters with excellent productivity.
Graphical Abstract:
Aspergillus oryzae lipase-catalyzed synthesis of sugar fatty acid esters with superior productivity
Aspergillus oryzae lipase (AOL)
Aspergillus niger lipase (ANL)
Sugar fatty acid ester (SFAE)
Glucose laurate
Organic media
Response surface methodology (RSM)
Box-Behnken design (BBD)
Sugar fatty acid esters (SFAEs) are nonionic surfactants which can be produced from naturally renewable resources: carbohydrates (e.g., glucose and sucrose) and fatty acids (e.g., lauric and palmitic acids). SFAEs are tasteless, odorless, nontoxic, nonirritant, and biodegradable; their functional properties such as critical micelle concentration (CMC) and hydrophilic–lipophilic balance (HLB) can be altered over a broad range by tuning the constitutive sugar and fatty acid moieties of the sugar esters, and some of them also possess insecticidal (Puterka et al. 2003) and antimicrobial (Ferrer et al. 2005) activities. All these advantageous features have made SFAEs attractive for use in food, cosmetic, and pharmaceutical industries (Ye and Hayes 2014; Gumel et al. 2011; Kobayashi 2011; Chen et al. 2007).
Enzymatic preparation of sugar esters in organic media has been proved to be superior to the currently dominating chemical synthesis in terms of mild reaction conditions, simple operational procedures, high productivity, excellent regioselectivity, and easy product separation (Ye and Hayes 2014; Gumel et al. 2011; Kobayashi 2011; Chen et al. 2007). The synthetic reaction is normally performed in an organic solvent such as tert-butanol and 2-methyl-2-butanol (2M2B) (Ferrer et al. 2000, 2005; Ye and Hayes 2014; Gumel et al. 2011; Kobayashi 2011; Ikeda and Klibanov 1993; Flores et al. 2002; Pöhnlein et al. 2014; Walsh et al. 2009; Cao et al. 1996).
In terms of enzyme selection, among the different lipases (EC 3.1.1.3) that have been tested, Novozym 435 from Novozymes (Candida antarctica lipase B immobilized on acrylic resin) is the most popular one for catalyzing sugar ester synthesis (Ferrer et al. 2005; Ye and Hayes 2014; Gumel et al. 2011; Kobayashi 2011; Flores et al. 2002; Pöhnlein et al. 2014; Cao et al. 1996; Lin et al. 2015). It is therefore desirable to identify other lipases that are inexpensive and catalytically efficient for this application.
Factors affecting the synthetic process include substrate concentrations, enzyme amount, and reaction temperature. The traditional one-variable-at-a-time optimization approach is not sufficient to maximize the conversion: it is laborious and time-consuming, and it cannot guarantee that optimal conditions are identified because possible interactions among the operational factors are ignored. Response surface methodology (RSM) has been applied as a tool for experimental design and optimization in a variety of processes because it enables model building and evaluation of the significance of the factors considered as well as their interactions (Bezerra et al. 2008). With the aid of experimental design via RSM, optimal conditions can be obtained by running only a small number of experimental trials.
In the current study, the synthesis of glucose laurate by transesterification between d-glucose and vinyl laurate in 2M2B was taken as a model reaction to demonstrate that Aspergillus oryzae lipase (AOL) is a promising catalyst for SFAE synthesis, and a Box-Behnken design (BBD) of response surface methodology (RSM) was successfully applied to improve the conversion and productivity.
Novozym 435 and Lipozyme TLIM were purchased from Novozymes (China) Investment Co., Ltd. Lipases from Penicillium expansum (PEL), Rhizopus chinensis (RCL), A. oryzae (AOL) and A. niger (ANL), all produced by spray-drying the concentrated fermentation supernatant with a certain amount of starch added as a thickening agent, were kindly donated by Shenzhen Leveking Bioengineering Co. Ltd., Shenzhen, China. Vinyl laurate was purchased from Sigma-Aldrich China Inc. α-d-Glucose, lauric acid, molecular sieves (4 Å), and all other reagents used were of analytical grade from local manufacturers in China.
Dissolution and solubility of glucose
Different methods of dissolving glucose in organic solvents were tested, including agitation, vortex mixing, sonication, microwave treatment, and their combinations; the one that yielded the fastest dissolution was a combination of vortex mixing and incubation with shaking. Glucose (100 mg) was added to a test tube containing 2.0 ml of a given solvent (dried with molecular sieves), vortex mixed for 5 min, and then shaken in an incubator/shaker at 30 °C and 220 rpm for 12 h. After settling at room temperature (25 °C) for 30 min and centrifugation at 6000 rpm for 5 min, the supernatant was taken for determination of the glucose solubility by the dinitrosalicylic acid (DNS) method (Miller 1959). All solubility tests were performed at least three times, with errors of less than 5 %.
Enzymatic reaction
A typical reaction was carried out by adding 0.27 g glucose (corresponding to 0.3 mol l−1 of the reaction system, but only partially dissolved) to 5.0 ml 2M2B (dried over molecular sieves for over a week prior to use) containing 0.3 M vinyl laurate (totally dissolved) and 1.0 g 4 Å molecular sieves. After vortex mixing for 5 min to maximize the glucose dissolution in the solvent, 0.5 g of the enzyme was added, and the flask was placed in an incubator/shaker with agitation of 300 rpm at 40 °C to start the reaction. Periodically, a 100 µl sample was taken for HPLC analysis (Lin et al. 2015). The conversion was calculated based on the total amount of glucose added to the reaction system. In order to optimize the reaction conditions, the effect of each affecting factor (water content, enzyme amount, reaction temperature and substrate concentrations) was studied independently, i.e., to obtain the optimum for one factor at a time while keeping others constant. The product is 6-O-lauroyl-d-glucopyranose, which has been verified by using HPLC and structural analyses with NMR, IR, and MS (Lin et al. unpublished results). All conversion results were subject to less than 10 % error.
RSM experimental design
Three factors, namely reaction temperature, enzyme amount, and molar ratio of the substrates (VL/Glc), were selected for optimization. The appropriate range for each variable was selected based on the single-factor experiments (see "Enzymatic reaction" section). A 3-factor, 3-level Box-Behnken design (BBD) of response surface methodology (RSM) was then carried out using Design-Expert 8.0.6, a DOE software developed by Stat-Ease, Inc. Experimental results were analyzed by applying the ANOVA (analysis of variance) technique implemented in the Design-Expert software.
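For illustration, the coded design matrix of a 3-factor, 3-level BBD can be constructed directly in R as below; the mapping of the coded levels (−1, 0, +1) to actual enzyme amounts, VL/Glc ratios, and temperatures follows Table 1, and five center runs are assumed so as to reproduce the 17-run layout reported in the Results.

```r
# Coded 3-factor Box-Behnken design: all +/-1 combinations of each factor pair
# with the third factor at its center level, plus replicated center points
# (five center runs give the 17-run design). X1 = enzyme amount (mg),
# X2 = VL/Glc molar ratio, X3 = reaction temperature (deg C).
bbd_coded <- rbind(
  expand.grid(X1 = c(-1, 1), X2 = c(-1, 1), X3 = 0),
  expand.grid(X1 = c(-1, 1), X2 = 0,        X3 = c(-1, 1)),
  expand.grid(X1 = 0,        X2 = c(-1, 1), X3 = c(-1, 1)),
  data.frame(X1 = numeric(5), X2 = 0, X3 = 0)
)
nrow(bbd_coded)   # 17 runs
```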
Solubility of glucose in different organic solvents
Solubility of glucose has been determined in 14 different organic solvents. The sugar dissolved fairly well in alcohols (e.g., 45.2 mM in methanol) while an almost negligible solubility was obtained in all four alkanes tested (hexane, cyclohexane, isooctane, and heptane). This is easily understood when taking the "likes dissolve likes" principle into consideration. Glucose is a polar molecule with a polyol structure; thus, polar solvents are expected to facilitate glucose dissolution. Indeed, the glucose solubilities obtained in our study are well associated with the solvent polarities (Fig. 1a); here the polarity of a solvent is represented by its ET(30) value, the molar transition energy of a negatively solvatochromic pyridinium N-phenolate betaine dye (as a probe molecule) dissolved in the solvent (Reichardt 1994). An excellent correlation can also be observed when plotting the glucose solubility data against the log P values of the solvents (Fig. 1b). Here P is the partition coefficient of a solvent in an octanol/H2O biphasic system, which can be used to represent the hydrophobicity of the solvent (Laane et al. 1987). The correlation between glucose solubility and log P is reasonable, because the hydrophobicity of a solvent always tends to be inversely correlated with its polarity. However, no such relationship was observed in the studies conducted by Cao et al. (1996) and Degn and Zimmermann (2001), who determined the glucose solubility in 9 and 11 organic solvents, respectively.
Relationship between glucose solubility and the polarity (a) and hydrophobicity (b) of the solvent. The polarity [ET (30)] and hydrophobicity (log P) data were obtained from (Lide 2001) and (Laane et al. 1987), respectively
For our reaction involving two substrates of opposite polarity (i.e., the hydrophilic sugar and the hydrophobic fatty acid), an appropriate solvent has to be selected based on a compromise between the two solubilities, while meanwhile the enzyme compatibility and environmental and health risks have also to be considered. Tertiary alcohols such as tert-butanol and 2M2B have been considered as good candidates for this application (Ferrer et al. 2005; Ye and Hayes 2014; Gumel et al. 2011; Kobayashi 2011; Ikeda and Klibanov 1993; Flores et al. 2002; Pöhnlein et al. 2014; Walsh et al. 2009). 2M2B was used throughout this study.
Enzyme screening
Six commercial lipases, including Novozym 435 (Candida antarctica lipase B, CALB) and Lipozyme TLIM (Thermomyces lanuginosus lipase, TLL) from Novozymes and four from Leveking (PEL, RCL, AOL, and ANL), were screened for their activities in catalyzing the synthesis of glucose laurate in 2M2B by comparing the conversion obtained with the addition of 0.5 g of each enzyme (Fig. 2). Clearly, Novozym 435 yielded the highest conversion, followed by Lipozyme TLIM, AOL, and ANL, whereas a negligible amount of the sugar ester product was produced by RCL and PEL. Although Novozym 435 has been the most popular enzyme used in sugar ester synthesis and a few other lipases have also been applied for this application (Ye and Hayes 2014; Gumel et al. 2011; Kobayashi 2011; Ikeda and Klibanov 1993; Ferrer et al. 2000; Walsh et al. 2009; H-Kittikun et al. 2012), our experiments identified two additional lipases (AOL and ANL) that are considerably active in catalyzing SFAE synthesis. AOL was used as the catalyst for the subsequent reactions.
Comparison of the conversions obtained by different lipases in catalyzing the synthesis of glucose laurate in 2M2B. Reaction conditions: 0.27 g glucose (equivalent to 0.3 mol l−1 of the reaction system), 0.3 M vinyl laurate, 5.0 ml 2M2B, 1.0 g molecular sieves, 0.5 g enzyme, 300 rpm, 40 °C
AOL-catalyzed synthetic reaction and its affecting factors
Catalyzed by the lipase AOL, the glucose laurate product (GL) was progressively produced via transesterification of glucose with vinyl laurate (VL), while a by-product, lauric acid (LA), was also generated (Fig. 3). The free lauric acid can be produced from the hydrolysis of both the co-substrate (vinyl laurate) and the product (glucose laurate), as long as there is a trace amount of water present in the reaction system. Lipases are carboxyl-esterases catalyzing hydrolysis or transesterification of esters, following a mechanism similar to the well-established acyl-enzyme mechanism for serine proteases (Reis et al. 2009). During the reaction course, water can compete with glucose for the acyl-enzyme intermediate to generate the free fatty acid. Therefore, the total amount of the three components (GL, VL, and LA) remained fairly constant as the reaction went on (Fig. 3), simply because of the decrease in the amount of the reactant, vinyl laurate (VL), which was accompanied by an increase in the formation of both the product, glucose laurate (GL) and the co-product, lauric acid (LA). Because the formation of glucose laurate reached a plateau after reaction for 5 h, for later experiments conversions obtained at 5 h were reported.
Variation of the concentrations of vinyl laurate (VL), lauric acid (LA), and glucose laurate (GL) during the reaction course. Reaction conditions: 0.27 g glucose (equivalent to 0.3 mol l−1 of the reaction system), 0.3 M vinyl laurate, 5.0 ml 2M2B, 1.0 g molecular sieves, 0.3 g enzyme (AOL), 300 rpm, 40 °C
As shown in Fig. 4a, an increase in water content in the reaction system resulted in a decrease in the formation of glucose laurate. As mentioned above, water can trigger a hydrolysis of both the co-substrate (VL) and the product (GL), thus leading to a lower conversion. In nonaqueous media, an enzyme normally requires an essential amount of water for its activation; but too much water may be harmful to the enzyme leading to its aggregation and eventually to its inactivation (Liu et al. 1991). Therefore, it is necessary to keep the water level in the reaction system as low as possible. Adding molecular sieves has been proven to be a convenient way to remove the water that is present in the reaction system or the water that is produced from the reverse hydrolytic reaction, and both the solvent and the reactants have to be thoroughly dried prior to the reaction.
Effect of water content (a), enzyme amount (b), reaction temperature (c) and substrate concentrations (d and e) on production of glucose laurate. The basic reaction conditions were the same as for Fig. 2, but for each experiment, one condition was altered while all others were kept unchanged
The impact of enzyme amount on the glucose laurate synthesis has also been examined (Fig. 4b). The conversion was raised first and then declined as the enzyme amount increased with an optimum obtained at 0.4 g. Part of the reasons for the later decline may be related to the hydrolysis of the product as mentioned above. Additionally, too high a concentration of the immobilized enzyme makes the mixing of the reaction slurry difficult. Taking the enzyme cost into consideration, using excessive enzyme is counterproductive. In this study, use of 0.4 g of the enzyme in the 5 ml reaction system is recommended.
Other factors affecting the enzymatic reaction include the reaction temperature and substrate concentrations. A range of temperatures (40–80 °C) has been tested, and the optimal reaction temperature was found to be 60 °C (Fig. 4c). In terms of substrate concentrations, when the concentration of vinyl laurate was fixed at 0.3 M (totally dissolved) while the amount of glucose varied in a range of 0–0.4 mol l−1 of the reaction system (most of it was initially undissolved and unreacted), the formation of glucose laurate increased gradually, accompanied by a decrease in the conversion (Fig. 4d). The reduction in conversion with an increase in the applied glucose amount is not surprising, for the reported conversion here was calculated based on the amount of glucose that was applied, while most of this added glucose was initially undissolved and unreacted. Applying the sugar in solid state is favorable for the reaction because its suspended particles continuously replenish the dissolved pool of the substrate as it is consumed by the reaction, thus pushing the reaction forward. On the other hand, when the glucose input was fixed at 0.3 mol l−1 of the reaction system while the vinyl laurate concentration was varied in the range of 0–1.2 M, an optimal conversion was obtained at a VL concentration of 0.6 M (Fig. 4e).
The above single-factor experiments suggested an optimum for each reaction condition: a reaction temperature of 60 °C, 400 mg enzyme, and a 2:1 VL/Glc molar ratio (with 0.3 mol of Glc per liter of the reaction system) in 5.0 ml dried 2M2B. After a reaction was conducted under these conditions for 5 h, a conversion of 50.9 % was obtained, a clear improvement over the conversions obtained previously.
Optimization through RSM
Based on the above preliminary single-factor results, a response surface methodology (RSM) with a three-factor-three-level Box-Behnken design (BBD) was employed for modeling and optimization of the enzymatic synthesis of glucose laurate. The three factors (i.e., enzyme amount, VL/Glc molar ratio and reaction temperature) and their varying levels are listed in Table 1. A total of 17 runs were carried out, among which five were at the central point. The conversion obtained at 5 h was taken as a response parameter for the model.
Table 1 Variables and levels used for the Box-Behnken design: enzyme amount (mg), VL/Glc molar ratio, and reaction temperature (°C)
The conversions obtained by prediction from the model were linearly correlated with the experimentally obtained values with a slope of 0.9488 and an R² value of 0.9488, indicating that the model fits the experimental data fairly well. The model F-value of 14.41 implies that the model is significant. The adequate precision of 13.142 indicates an adequate signal. No statistical evidence of multi-collinearity was found because the VIF (variance inflation factor) values calculated for all the terms included in the model were around 1.0. The conversion can then be characterized by the following polynomial equation:
$$ \begin{aligned} Y ={}& -60.34 + 0.545X_{1} + 6.696X_{2} - 1.131X_{3} + 0.050X_{1}X_{2} + 0.002X_{1}X_{3} \\ & + 0.389X_{2}X_{3} - 0.0009X_{1}^{2} - 10.462X_{2}^{2} + 0.001X_{3}^{2} \end{aligned} $$
where Y is the predicted conversion (%), while X1, X2, and X3 refer to enzyme amount (mg), VL/Glc molar ratio and reaction temperature (°C), respectively.
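To illustrate how the fitted surface can be interrogated, the sketch below encodes the polynomial with the rounded coefficients printed above and searches a grid over an assumed experimental region; because the coefficients are rounded and the grid bounds are assumptions, the grid optimum only approximates the model prediction reported below (63.13 %).

```r
# Fitted second-order model, encoded with the rounded coefficients printed
# above (X1 = enzyme amount in mg, X2 = VL/Glc molar ratio, X3 = temperature in deg C).
predict_conversion <- function(X1, X2, X3) {
  -60.34 + 0.545 * X1 + 6.696 * X2 - 1.131 * X3 +
    0.050 * X1 * X2 + 0.002 * X1 * X3 + 0.389 * X2 * X3 -
    0.0009 * X1^2 - 10.462 * X2^2 + 0.001 * X3^2
}

# Grid search over an assumed experimental region (bounds are illustrative)
grid <- expand.grid(X1 = seq(300, 500, by = 10),   # enzyme amount (mg)
                    X2 = seq(1, 3, by = 0.1),      # VL/Glc molar ratio
                    X3 = seq(50, 70, by = 1))      # temperature (deg C)
grid$Y <- with(grid, predict_conversion(X1, X2, X3))
grid[which.max(grid$Y), ]                          # approximate predicted optimum
```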
A 3D response surface with contour plots is depicted in Fig. 5. A maximal conversion of 63.13 % was predicted by the model under the suggested reaction conditions: 420 mg enzyme, a 2.3:1 VL/Glc molar ratio, and 70 °C. Under these conditions, three tests were conducted and an average conversion of 64.54 % (±0.51 %) was obtained, which is reasonably close to the predicted value. This further confirms the validity and adequacy of the model.
Response surface plot showing the mutual effects of VL/Glc molar ratio with reaction temperature on the conversion obtained in the enzymatic synthesis of glucose laurate in 2M2B
The RSM-mediated optimization has significantly improved the conversion from 25.8 % (Fig. 2) to 64.5 %. Moreover, the productivity obtained in our optimized system is also much higher than those reported in the literature (Ferrer et al. 2000, 2005; Ikeda and Klibanov 1993; Pöhnlein et al. 2014; Yan et al. 1999) (Table 2). It has to be noted, however, that the optimal reaction temperature may not have been reached within the range tested in this study (50–70 °C). It is therefore anticipated that further optimizing the reaction conditions, as well as introducing the reaction into ionic liquid systems (Lin et al. 2015; Yang and Huang 2012), may result in an even more significant improvement. It is worth mentioning that although AOL has been shown to be a promising enzyme for SFAE synthesis in terms of excellent productivity, specific productivity, and shorter reaction time, the conversion obtained by this enzyme is relatively low. One possibility for this may be related to product inhibition, which is under further investigation in our laboratory.
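For reference, the productivity figures quoted in the Abstract (38.7 mmol/L/h and 461.0 μmol/h/g) are consistent with a direct calculation from the optimized conversion, assuming productivity is defined as moles of ester formed per liter of reaction volume per hour, and specific productivity additionally per gram of enzyme, over the 5-h reaction:

```r
# Productivity check using values given in the text: 0.3 mol/l glucose input,
# 5.0 ml reaction volume, 5 h reaction, 420 mg AOL, 64.5 % conversion.
conv   <- 0.645      # conversion
glc    <- 0.3        # glucose input, mol per liter of reaction system
vol_l  <- 0.005      # reaction volume, liters
time_h <- 5          # reaction time, hours
enz_g  <- 0.42       # enzyme amount, grams

prod_mmol_L_h <- conv * glc * 1000 / time_h                  # ~38.7 mmol/L/h
spec_umol_h_g <- conv * glc * vol_l * 1e6 / time_h / enz_g   # ~461 umol/h/g
c(prod_mmol_L_h, spec_umol_h_g)
```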
Table 2. Conversion and productivity obtained in this study compared with those reported in the literature (Ikeda and Klibanov 1993; Yan et al. 1999; Ferrer et al. 2000; Pöhnlein et al. 2014). For each study the table lists the sugar acceptor (e.g., Glc or N-butyryl-glucosamine), the acyl donor (vinyl acrylate, palmitic acid methyl ester, vinyl laurate, or methyl hexanoate), the solvent (tert-butanol or 2M2B), the enzyme (Pseudomonas sp. lipoprotein lipase, Novozym SP525, Humicola lanuginosa lipase, Thermomyces lanuginosus lipase, or Novozym 435), and the reaction conditions and outcomes: sugar input (M), enzyme amount (mg/ml), reaction time (h), conversion (%), productivity (mmol/L/h), and specific productivity (μmol/h/g).
XL conceived of the study and carried out the RSM design. KZ assisted in data analysis and supervised the whole experiment. QZ and KX assisted in experimentation. PJH assisted in result discussion and draft revision. ZY was responsible for draft preparation and submission. All authors read and approved the final manuscript.
This work was supported by the National Natural Science Foundation of China (Grant number 21276159).
College of Life Sciences, Shenzhen Key Laboratory of Microbial Genetic Engineering, Shenzhen University, Shenzhen, 518060, Guangdong, China
College of Life Sciences, Shenzhen Key Laboratory of Marine Bioresources and Ecology, Shenzhen University, Shenzhen, 518060, Guangdong, China
Department of Pure and Applied Chemistry, University of Strathclyde, Glasgow, G1 1XL, UK
Bezerra MA, Santelli RE, Oliveira EP, Villar LS, Escaleira LA (2008) Response surface methodology (RSM) as a tool for optimization in analytical chemistry. Talanta 76:965–977
Cao L, Bornscheuer UT, Schmid RD (1996) Lipase-catalyzed solid phase synthesis of sugar esters. Fett/Lipid 98:332–335
Chen Z-G, Zong M-H, Lou W-Y (2007) Advance in enzymatic synthesis of sugar ester in non-aqueous media. J Mol Catal B Enzym 21:90–95
Degn P, Zimmermann W (2001) Optimization of carbohydrate fatty acid ester synthesis in organic media by a lipase from Candida antarctica. Biotechnol Bioeng 74:483–491
Ferrer M, Cruces MA, Plou FJ, Bernabé M, Ballesteros A (2000) A simple procedure for the regioselective synthesis of fatty acid esters of maltose, leucrose, maltotriose and n-dodecyl maltosides. Tetrahedron 56:4053–4061
Ferrer M, Soliveri J, Plou FJ, López-Cortés N, Reyes-Duarte D, Christensen M, Copa-Patiño JL, Ballesteros A (2005) Synthesis of sugar esters in solvent mixtures by lipases from Thermomyces lanuginosus and Candida antarctica B, and their antimicrobial properties. Enzyme Microb Technol 36:391–398
Flores MV, Naraghi K, Engasser J-M, Halling PJ (2002) Influence of glucose solubility and dissolution rate on the kinetics of lipase catalyzed synthesis of glucose laurate in 2-methyl 2-butanol. Biotechnol Bioeng 78:815–821
Gumel AM, Annuar MSM, Heidelberg T, Chisti Y (2011) Lipase mediated synthesis of sugar fatty acid esters. Process Biochem 46:2079–2090
H-Kittikun A, Prasertsan P, Zimmermann W, Seesuriyachan P, Chaiyaso T (2012) Sugar ester synthesis by thermostable lipase from Streptomyces thermocarboxydus ME168. Appl Biochem Biotechnol 166:1969–1982
Ikeda I, Klibanov AM (1993) Lipase-catalyzed acylation of sugars solubilized in hydrophobic solvents by complexation. Biotechnol Bioeng 42:788–791
Kobayashi T (2011) Lipase-catalyzed syntheses of sugar esters in non-aqueous media. Biotechnol Lett 33:1911–1919
Laane C, Boeren S, Vos K, Veeger C (1987) Rules for optimization of biocatalysis in organic solvents. Biotechnol Bioeng 30:81–87
Lide DR (2001) CRC handbook of chemistry and physics, 82nd edn. CRC Press, Boca Raton
Lin X-S, Wen Q, Huang Z-L, Cai Y-Z, Halling PJ, Yang Z (2015) Impacts of ionic liquids on enzymatic synthesis of glucose laurate and optimization with superior productivity by response surface methodology. Process Biochem 50:1852–1858
Liu WR, Langer RL, Klibanov AM (1991) Moisture-induced aggregation of lyophilized proteins in the solid state. Biotechnol Bioeng 37:177–184
Miller GL (1959) Use of dinitrosalicylic acid reagent for determination of reducing sugar. Anal Chem 31:426–428
Pöhnlein M, Slomka C, Kukharenko O, Gärtner T, Wiemann LO, Sieber V, Syldatk C, Hausmann R (2014) Enzymatic synthesis of amino sugar fatty acid esters. Eur J Lipid Sci Technol 116:423–428
Puterka GJ, Farone W, Palmer T, Barrington A (2003) Structure-function relationships affecting the insecticidal and miticidal activity of sugar esters. J Econ Entomol 96:636–644
Reichardt C (1994) Solvatochromic dyes as solvent polarity indicators. Chem Rev 94:2319–2358
Reis P, Holmberg K, Watzke H, Leser ME, Miller R (2009) Lipases at interfaces: a review. Adv Colloid Interf Sci 147–148:237–250
Walsh MK, Bombyk RA, Wagh A, Bingham A, Berreau LM (2009) Synthesis of lactose monolaurate as influenced by various lipases and solvents. J Mol Catal B Enzym 60:171–177
Yan Y, Bornscheuer UT, Cao L, Schmid RD (1999) Lipase-catalyzed solid-phase synthesis of sugar fatty acid esters: removal of byproducts by azeotropic distillation. Enzyme Microb Technol 25:725–728
Yang Z, Huang Z-L (2012) Enzymatic synthesis of sugar fatty acid esters in ionic liquids. Catal Sci Technol 2:1767–1777
Ye R, Hayes DG (2014) Recent progress for lipase-catalysed synthesis of sugar fatty acid esters. J Oil Palm Res 26:355–365
Age-Specific Income Trends in Europe: The Role of Employment, Wages, and Social Transfers
Bernhard Hammer ORCID: orcid.org/0000-0003-3082-63831,
Sonja Spitzer2 &
Alexia Prskawetz ORCID: orcid.org/0000-0002-2850-66821
Social Indicators Research volume 162, pages 525–547 (2022)
This study analyses age-specific differences in income trends in nine European countries. Based on data from National Accounts and the European Union Statistics on Income and Living Conditions, we quantify age-specific changes in income between 2008 and 2017 and decompose these changes into employment, wages, and public transfer components. Results show that income of the younger age groups stagnated or declined in most countries since 2008, while income of the older population increased. The decomposition analysis indicates that the main drivers of the diverging trends are higher employment among the older population and a strong increase in public pensions, especially for women.
Income of young adults fell behind that of the older population in most European countries between 2008 and 2017, even in countries where employment rates among the young increased (Eurostat, 2020d, c). Previous research has, however, put little emphasis on the extent and the causes of age-specific differences in income trends. The limited evidence focuses on equivalised household income and shows that income of households with children and young adults stagnated or decreased in most of Europe after the financial crisis, while it increased for the older population (Chen et al., 2018; Wittgenstein Centre, 2020). While employment is identified as an important explanation of the age-specific changes in income, other factors have rarely been studied directly. Chen et al. (2018) suggest that besides employment, public transfers are important drivers of generational disparities in income.
Knowledge about age-specific income trends and their determinants is necessary to understand central economic and demographic developments. After the financial crisis, poverty rates increased or stagnated at a high level among young adults and family households in many countries, whereas poverty rates among older adults declined during the same period (Eurostat, 2020b). Such developments require attention, because the economic situation of young adults is a crucial determinant of whether and when they form a family, and of how many children they have. Consequently, the deteriorating economic situation of young adults is considered an important factor behind the fertility declines in several European countries (Matysiak et al., 2020; IMF, 2018; Goldstein et al., 2013). In addition, most European countries experienced a baby boom in the decades after the Second World War, of varying degree and length. The baby-boomers are currently entering retirement or will reach retirement age in the next two decades, which will increase old-age-related spending (European Commission, 2018, 2021) and the demand for higher taxes from the working-age population. The COVID-19 pandemic will further increase the pressure on young generations in the years to come. It is thus of utmost importance to study the drivers of income changes at each life stage in order to understand their relation with social, economic and demographic developments.
Employment is particularly important for understanding age-specific differences in income trends, most notably in Southern Europe. Between 2008 and 2017, employment rates of the 15–39-year-olds decreased by 9 percentage points in Italy, Greece, and Spain, while employment among the population aged 40–59 increased (Eurostat, 2020c). In addition, young people find themselves increasingly entrapped in insecure and temporary work (Barbieri et al., 2019; Garibaldi and Taddei, 2013). This strong deterioration of employment opportunities for the young in Southern Europe is found to be reinforced by labour market institutions that generate a pronounced duality in the labour market: older, permanent employees are strongly protected, while the insecurity created by the flexibilisation of the labour market has been placed entirely on young cohorts (Barbieri, 2011; Chauvel and Schröder, 2014). Employment of the young also decreased in many Western and Northern European countries, however to a much lesser degree than in Southern Europe. By contrast, in most Central and Eastern European (CEE) countries employment increased for young adults, and even more so for older working-age adults (Eurostat, 2020c). Aliaj et al. (2016) show that employment among the population 50+ increased in Belgium, France, Germany, and the Netherlands over the last decades, uninterrupted by the financial crisis, although they observe a reduction in average hours worked. The change in employment rates in percentage points was more favourable for the population aged 40–59 than for the population aged 15–39 in all European countries except the three Baltic countries, Iceland, and Switzerland. Changes in employment rates differ not only by age group, but also by gender: employment among the older working-age population increased much more strongly for women than for men.
Although employment is an important driver of income trends, it does not entirely explain the unequal developments in income across ages. For example, equivalised income of the population aged 65+ increased by five per cent in Italy and seven per cent in Spain, yet without associated increases in employment (Eurostat 2020d). This suggests that, in addition to employment, changes in wages and public transfers are important components to consider when analysing the underlying causes of diverging income trends across age groups. Chen et al. (2018), for example, suggest that in addition to labour market developments, the design of social protection is an important dimension to consider, since it potentially guards the income of older persons while offering little assistance to young individuals.
Despite the important implications of age-specific income trends for societal and demographic development, no previous research, to our knowledge, has systematically analysed these changes in a comparative manner for Europe. Moreover, it remains unclear how employment, wages, and public transfers contribute to changes in income across different generations in the previous decades. Our study aims at closing these gaps by addressing the following research questions based on data from the European system of Accounts (ESA) and the European Statistics on Income and Living Conditions (EU-SILC):
How did aggregate per-capita income and its components change between 2008 and 2017?
How do these changes differ between age groups?
What drives age-specific income trends: changes in employment, wages, or transfer income?
The countries included in the analysis are Austria, Estonia, Greece, France, Italy, Poland, Slovenia, Spain, and Sweden. EU-SILC provides comparable information on individual gross income and on household net income in all EU countries. The nine countries included in our analysis also provide information on individual net income, i.e. on net income from employment, self-employment, and net public benefits. We focus on individual income rather than household income, since the latter potentially hides age-specific changes when various generations live in the same household.
Moreover, the set of countries allows us to study different welfare regimes and various economic developments during the observed period. In the presentation of the results, we group countries based on their developments in aggregate and age-specific income. In particular, we group the Southern European countries together (Italy, Greece and Spain), the Western and Central European countries (Austria, France and Slovenia), and the Eastern and Northern European countries (Poland, Estonia and Sweden). The development in aggregate and age-specific income was similar within these groups of countries, while it differed considerably across these country-groups.
Our paper is divided into three parts, in line with our three research questions. First, we measure total changes in income and public redistribution between 2008 and 2017, using aggregate economic ESA data. Aggregate data are more comprehensive and reliable than survey-based data and thus constitute a benchmark for comparing and verifying the survey-based results. Second, we analyse age-specific changes in income between 2008 and 2017, using micro-level EU-SILC data. In the third part of the paper, we decompose the age-specific changes in income into its main components, thereby identifying how changes in employment, wages and social benefits affect age-specific income trends.
2 Data and Methodology
2.1 Aggregate Income in the European System of Accounts
Data from ESA provide detailed information on the level and type of household sector income as well as on public redistribution (for a detailed description of ESA see European Commission, 2013). The annual sector accounts are available from the Eurostat database (Eurostat, 2020a). Our focus is on four quantities: (1) primary income before public redistribution, (2) payments of taxes and social contributions, (3) receipt of social benefits, and (4) disposable income. Taken together, these four quantities summarize changes in total income, the extent of public redistribution, and its consequence for disposable income. For comparison, we also consider GDP per capita and its changes, because it is the most common measure of economic development.
Primary income in ESA measures income that is generated by direct participation in the production process, before any payment of taxes or social contributions. The largest component of primary income of the household sector is the compensation of employees, which consists of all types of remuneration for work, including social contributions paid by employers. Furthermore, primary income includes asset income, such as interest, dividends, the return to owners of unincorporated enterprises and owner-occupied housing.
The income tax ratio measures income taxes relative to primary income and thus serves as an indicator of the total size of public redistribution. For calculating the income tax ratio, we combine several quantities from ESA and information on the tax structure (European Commission, 2019). In particular, our definition of the tax ratio takes into account taxes on production, taxes on income and wealth, as well as social contributions. Taxes paid on public transfers such as pensions are not included, because national variations in tax systems could bias cross-country comparisons. The concept differs from the more common tax-to-GDP ratio, since our goal is not the measurement of the total tax burden, but to capture all direct taxes that are paid out of primary income. Consequently, taxes on products are not included in the income tax ratio, because they constitute part of consumption expenditure and affect disposable income only indirectly.
The benefit ratio is calculated as the ratio of total cash benefits from the government to primary income of households. It serves as an indicator of the importance of public redistribution via cash transfers. In ESA, the largest type of benefits are social benefits, including pensions, unemployment benefits, family allowances and other types of social benefits. To be consistent with our definition of the tax ratio, we measure benefits net of taxes and social contributions that are paid on these benefits. In addition, the benefit ratio includes other current transfers from the government sector to households.
Disposable income is an indicator of economic wellbeing of households. It is calculated as primary income less taxes plus cash benefits and represents the income of households that can be used for consumption and saving.
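The four quantities can be combined into the two redistribution indicators and disposable income along the lines of the sketch below; the function and variable names are ours, and the exact ESA transactions behind each argument (as well as the correction for taxes paid on benefits) follow our reading of the text rather than a documented formula of the authors.

```python
def redistribution_indicators(primary_income, direct_taxes, taxes_on_benefits,
                              cash_benefits_net, other_gov_transfers):
    """Per-capita redistribution indicators as described in Sect. 2.1 (sketch).

    direct_taxes: taxes on production, income and wealth plus social contributions;
    taxes_on_benefits: taxes and contributions levied on public transfers, which
    are excluded from the income tax ratio; cash_benefits_net: social benefits
    net of taxes; other_gov_transfers: other current transfers from government.
    """
    taxes = direct_taxes - taxes_on_benefits
    benefits = cash_benefits_net + other_gov_transfers
    tax_ratio = taxes / primary_income
    benefit_ratio = benefits / primary_income
    disposable_income = primary_income - taxes + benefits
    return tax_ratio, benefit_ratio, disposable_income
```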
2.2 Age-Specific Income in EU-SILC
The age-specific analysis of income changes is based on EU-SILC data. We distinguish three age groups: the young working-age population at age 20–39, the older working-age population at age 40–59, and the elderly population aged 60+. We opted for 20-year age groups to have sufficient observations for reliable estimates when distinguishing also by type of income and gender. Ages 20–39 coincide with the life stage of early labour market career, family formation and childbearing. The age group 40–59 consists mainly of persons who have already made their family decisions, no longer have care responsibilities for small children, and are active on the labour market. The age border of 60 corresponds to the actual retirement age in most countries; the age group 60+ therefore mainly captures retirees. For each of the three age groups, we calculate the changes in age-specific mean income in real terms between 2008 and 2017. Means are required by the decomposition analysis we apply to individual income as described below. However, we also provide estimates of changes in median income as well as the changes in the first and the third quartile.
All income components are assigned to individuals, including components that are reported at the household level in EU-SILC, such as family benefits, imputed rents, asset income and income of persons younger than 16. The details of allocating household-level income to individuals are given in "Allocation of Household-Level Income to Individuals" in the Appendix. Sensitivity analyses indicated that the assumptions for the distribution of household-level income have little effect on the level and change of age-specific income. First, income at household level accounts for less than 20 percent of total income, with imputed rents as the largest component. Second, the age group of recipients of household-level income can be identified unambiguously in most households. The same is true for gender-specific results, except for Poland, where an increase in the monetary support of families resulted in an increase of income among the 20–39-year-olds. Hence, the gender-specific results for Poland depend on how these increased benefits are allocated within a couple.
Age-specific changes over time are often analysed using an age-period-cohort (APC) approach. An APC approach distinguishes period effects that influence income earners of all ages and birth cohorts, age effects that comprise changes in fundamental life-cycle patterns, and cohort effects that only affect persons born in given years, such as an economic shock that affects all persons newly entering the labour market (De Fraja et al., 2017; Mroz and Savage, 2006). Since the three APC dimensions are perfectly collinear, it is not possible to unambiguously disentangle the distinct effects. Although previous literature has suggested methods to overcome this identification issue, their applications are frequently disputed and their results often unintuitive (Bell, 2020; Fosse and Winship, 2019). The difficulty of distinguishing between cohort and age effects is aggravated by the short time span covered by EU-SILC, which only provides suitable pseudo-panel data for a period of ten years. Since economic shocks, such as the financial crisis or the sovereign debt crisis, affected consecutive birth cohorts in a similar way, it is currently not possible to identify whether the effect is indeed short-lived and affects only a few cohorts, or whether it resulted in a lasting change of the age patterns. We thus refrain from decomposing APC effects, acknowledging that the difference in income trends between age groups reflects both age and cohort effects.
2.3 Decomposition of Income Changes
Changes in mean income between 2008 and 2017 are decomposed to identify the role of employment, wages, and public transfers. This analysis is solely based on income and employment data in EU-SILC. We distinguish between three income components: Per-capita averages of total income (Y) are the sum of average income from employment (YL), public benefits (YB) and other income (YO), which includes for example asset income and inter-household transfers. The sub-components of YO are comparably small, which is the reason for combining them into a single category.
For each age group i, we decompose the percentage change in total income between 2008 and 2017 into the sum of the changes of its components, with \(\varDelta _\%\) referring to changes in percent of total income in 2008 (see Footnote 1):
$$\begin{aligned} \varDelta _\% \text {Y}_i = \varDelta _\% \mathrm {YL}_{i} + \varDelta _\% \text {YB}_{i} + \varDelta _\% \text {YO}_{i} \end{aligned}$$
To identify the effect of changes in employment rates on \(\varDelta _\% Y_i\), we further decompose the components \(\varDelta _\% \mathrm {YL}_{i}\) and \(\varDelta _\% \text {YB}_{i}\), which are both strongly related to employment rates. While employment income accrues exclusively to persons in employment, benefits are directed mainly at the non-employed population, such as pensioners, unemployed persons, or persons on parental leave. Average income from employment can be written as the product of the employment rate (\(l_i\)) and the average income of each employed person (\(\text {yl}_i\)):
$$\begin{aligned} \text {YL}_i = \text {yl}_i*l_i \end{aligned}$$
Likewise, average benefits can be written as product of the share of non-employed persons \((1-l_i)\) and average benefits per non-employed person (\(\text {yb}_i\)):
$$\begin{aligned} \text {YB}_i = \text {yb}_i*(1-l_i) \end{aligned}$$
The total change of \(\mathrm {YL}_i\) and \(\mathrm {YB}_i\) is allocated to each factor according to its relative change, i.e. if income per employed person increases by 10 percent and the employment rate by 5 percent, we would allocate 2/3 of \(\varDelta _\% \text {YL}\) to income per employed person and 1/3 to the employment rate. We can then write the total changes in income as sum of five components:
$$\begin{aligned} \varDelta _\% \text {Y}_i = \varDelta _\% \mathrm {l}_{i} + \varDelta _\% \mathrm {yl}_{i} + \varDelta _\% (1-\text {l}_{i}) + \varDelta _\% \text {yb}_{i} + \varDelta _\% \text {YO}_{i} \end{aligned}$$
The method of decomposing \(\varDelta _\% \mathrm {YL}_{i}\) and \(\varDelta _\% \mathrm {YB}_{i}\) is described in more detail and with an example in "Decomposition of Income Changes" of the Appendix.
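A compact Python sketch of this decomposition is given below (our code, with illustrative variable names rather than EU-SILC variables). It expresses each of the five components of Equation (4) in percent of total income in 2008 and splits the employment-income and benefit changes between their rate and per-head factors using the residual allocation spelled out in the Appendix.

```python
def decompose_income_change(y08, yl08, yb08, l08, y17, yl17, yb17, l17):
    """Five-component decomposition of the change in mean income of one age
    group between 2008 and 2017, in percent of total income in 2008 (sketch).

    y = total income, yl = income from employment, yb = public benefits,
    l = employment rate; other income yo is derived as the remainder.
    """
    yo08, yo17 = y08 - yl08 - yb08, y17 - yl17 - yb17

    # Employment income: split between employment rate and income per employed person.
    d_yl = (yl17 - yl08) / yl08 * 100
    d_l = (l17 - l08) / l08 * 100
    d_yl_per_emp = d_yl - d_l                      # residual allocation

    # Benefits: split between the share of non-employed and benefits per non-employed person.
    d_yb = (yb17 - yb08) / yb08 * 100
    d_nonemp = ((1 - l17) - (1 - l08)) / (1 - l08) * 100
    d_yb_per_nonemp = d_yb - d_nonemp              # residual allocation

    s_yl, s_yb = yl08 / y08, yb08 / y08            # rescale to percent of total income
    components = {
        "employment rate": d_l * s_yl,
        "income per employed person": d_yl_per_emp * s_yl,
        "share of non-employed": d_nonemp * s_yb,
        "benefits per non-employed person": d_yb_per_nonemp * s_yb,
        "other income": (yo17 - yo08) / y08 * 100,
    }
    components["total change"] = sum(components.values())
    return components
```

By construction, the five components sum to the total percentage change in mean income of the age group.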
3.1 Changes in Aggregate Income
The financial crisis and the sovereign debt crisis had a much stronger effect on the income of the household sector than indicated by changes in GDP. While GDP per capita declined from 2008 to 2017 only in Greece and Italy (Column 1 in Table 1), per-capita primary income of households declined in six of the nine analysed countries (Column 2). The decline was most pronounced in Greece, at 32 per cent, followed by Italy and Spain with declines of more than 10 per cent. Primary income decreased slightly in Austria, France and Slovenia and increased slightly in Poland. Only households in Estonia and Sweden had considerably more income per capita in 2017 than in 2008, with an increase of 16 per cent.
The extent of public redistribution increased in all nine countries during the period 2008–2017. The tax ratio stagnated in Spain and Sweden and increased in the other countries (Column 3 of Table 1), with the highest increase of 7 per cent in Greece. Part of the higher taxes was used to finance increasing cash transfers; the benefit ratio increased in all analysed countries. This increase of social benefits relative to primary income was most pronounced in Italy, with an increase of 5 percentage points, and in France and Spain, with increases of 4 percentage points. The changes in the tax ratio and the benefit ratio mainly capture redistribution within the household sector and have only a small effect on the household sector as a whole. Therefore, changes in disposable income largely follow the changes in primary income. Disposable income of the household sector increased in Estonia, Poland and Sweden, and declined in the other six countries.
Table 1 Real income per capita, taxes and benefits and their change during the 2008–2017 period (in 2018 prices).
3.2 Age-Specific Changes
The analysis of income changes by age reveals large disparities between age groups (Table 2). Income of the 20–39-year-olds declined in five of the nine analysed countries, and only in Estonia did it increase by more than 5 per cent. The income trends were more favourable for the older working-age population at age 40–59, for whom income per capita declined only in Greece, Italy, and Spain. However, the gains in income were concentrated in the population 60+, with absolute gains in all countries except Greece; even in Greece, the decline in income was much smaller for the population 60+ than for the prime working-age population. Income of the elderly population also increased in Italy and Spain, despite the strong declines among the working-age population. It increased strongly in Austria, France and Slovenia, while income of the working-age population merely stagnated, with a change of 4 per cent or less over the whole period from 2008 to 2017.
Table 2 Mean individual net income by age
The overall pattern of absolute and relative income losses of the young is similar across the whole income distribution. However, mean values can be strongly influenced by large incomes, and their changes may not be representative for individuals in the middle or at the bottom end of the distribution. We therefore present changes by quartiles in the Appendix, Tables 5, 6 and 7. While the overall findings hold across the income distribution, we observe a pronounced decline of the first quartile among young adults in Greece, Italy and Spain. This decline can be explained by an increase in the share of persons with a low level of employment and low income to more than one quarter of the population, so that this group encompasses the first quartile of the income distribution in 2017.
Differences between ESA (Table 1) and EU-SILC data (Table 2) are due to the fact that EU-SILC captures only part of the income of households. These differences are described in detail in Table 8 in the Appendix. In particular, asset income is poorly captured in EU-SILC, with the exception of France. Since the decline in income per capita in Austria, Italy and Slovenia (Table 1) was largely the result of a decline in asset income, the EU-SILC-based income estimates declined less, or even increased more, than the ESA figures. Furthermore, changes in the survey may also affect age-specific income. The large increase in income of the working-age population in Estonia is partly the result of a better estimation of labour income in EU-SILC; the EU-SILC estimates in 2017 corresponded exactly to the ESA values, while in 2008 the EU-SILC-based estimates corresponded to only 89 percent of the value in ESA. Furthermore, the huge decline in labour income in Greece may be overstated: the estimate of total labour income in EU-SILC was 75 percent of the ESA aggregate in 2017, compared with 89 percent in 2008, suggesting that EU-SILC captured a smaller part of total labour income in 2017. By contrast, the estimates of social benefits in EU-SILC increased from 67 to 77 percent of the ESA value, which may explain part of the decline in income of the young relative to the elderly.
The estimation of standard errors is a challenge in any EU-SILC-based analysis. First, the data lack the sample design variables needed to calculate the correct sampling standard errors. Second, further random errors are introduced through imputation and re-weighting of the data (Goedemé 2013), which is particularly problematic for variables with a high share of imputed values, such as asset income. We calculated the standard errors for the age-specific estimates of income changes using the method suggested by Goedemé (2013) and Trindade and Goedemé (2016), and report the results in Table 9 in the Appendix. The standard errors are particularly high in France, which can be explained by the high coverage of unequally distributed asset income and the consequently much higher dispersion of total income. However, a large part of the income data is imputed rather than sampled, and treating the data as if they emerged from a random sample overestimates the standard errors. So far, we are not aware of a methodology that enables the estimation of standard errors and confidence intervals for EU-SILC variables with a large share of imputed data. To evaluate our point estimates of income, we therefore carefully compare them with the more reliable aggregate data.
3.3 Age-Specific Changes by Gender
The increase in average income in old age is mainly explained by the substantial increase in female labour force participation over the last decades. As a consequence, women earn more and receive higher pensions than in the past, which is visible in the strong increase in income of women aged 60+ (Table 3). Although the change was much less pronounced for men, income of men in the age group 60+ also increased relative to the working-age population. Exceptions are Estonia and Poland, where the income of the older male and female population increased less than the income of the working-age population.
Table 3 Changes in mean net income by age and gender
Most of the age-specific differences in income trends can be explained by changing employment rates (\(\varDelta _\% l_i\)) and an increase in transfer income for the age group 60+ (\(\varDelta _\% \text {YB}_{60+}\)). Despite huge cross-country differences in the extent of income changes, we observe several common patterns (Fig. 1 and the associated Table 4). In all countries, employment among the older working-age population and the population 60+ increased, mainly as a result of higher female employment and later retirement for both genders. Higher employment among older age groups reduced the share of non-employed persons and pension receivers (\(\varDelta _\% (1-l_{60+})\)) and should have reduced the share of pensions in total income. Instead, in almost all countries this effect was offset by an increase in benefits per retiree (\(\varDelta _\% \text {yb}_{60+}\)). Higher employment is the main reason for the increasing income of the older working-age population aged 40–59; higher employment together with the increase in benefits per retiree explains the increasing income of the population 60+.
Changes in income per employee (\(\varDelta _\% \text {yl}_i\)) are an important driver of changes in income of the working-age population. The decline in the income of the young working-age population in Greece, Italy, and Spain is a combined effect of lower employment rates and lower income per employee; the decline for the older working-age population is mainly due to lower income per employee. Likewise, income of the working-age population increased in Sweden and Estonia mainly because of higher income per employee. In Poland, income of the young working-age population increased because of higher benefits.
In general, the other income components (\(\text {YO}_i\)) constitute a small part of household income and explain a small part of the income changes, with the exception of France and Estonia. In France, this pattern reflects the decline in asset income, an income component that is much better captured there than in other countries. The results for Estonia have to be taken with a grain of salt: the decline in other income is mainly due to a decline in imputed rents in EU-SILC, which we regard as unrealistic in its extent and at odds with aggregate data from ESA.
Decomposition of income changes by age
Table 4 Decomposition of income changes 2008–2017 (data associated with Fig. 1)
Understanding age-specific income trends is of central importance in the context of ageing populations and the sustainability of pay-as-you-go pension schemes. Public pension schemes were designed during a period characterised by high growth rates of GDP, baby-boom cohorts entering working age, a decline in the number of dependent children, and an increase in the labour force participation of women. These favourable conditions are, however, quickly waning: the size of the cohorts entering retirement age in the coming years will exceed that of the cohorts entering the labour force by a factor of 1.5, with the exception of France and Sweden (Fig. 2). The change in the age structure of the population will pose a challenge for European economies, in particular for labour markets and public transfer systems. Ageing, its consequences, and possible solutions are therefore widely discussed among policy makers. Only recently, the European Commission presented a green paper to launch a broad policy debate on the challenges and opportunities of Europe's ageing society (European Commission 2021). One of these challenges is the prevention of a further income deterioration of young generations. Our results show that the period from 2008 to 2017 was characterised by stagnating or declining income for the young in many countries, increasing taxes, and increasing social benefits for the elderly population. The pressure on the income of the young will further intensify with the ongoing retirement of the baby-boomers and the consequences of the COVID-19 crisis.
Source: Eurostat
Relative size of birth cohorts in 2020 (avg. size of cohorts aged \(0-24 = 1\)).
Our results also show that population ageing is only one of the challenges for public transfers in the next decades. The pension systems are also pressured by considerable increases in per-capita benefits. Generous pension rules implemented during demographically favourable periods, together with the increase in employment, resulted in higher pension rights, which explains part of the overall increase in transfer income of the age group 60+. Figure 3 shows the changes in age- and cohort-specific employment rates for the example of France, but the pattern looks very similar in the other countries. In particular, female employment rates have increased in the decades after the baby boom. Consequently, the cohorts of women now reaching retirement age are characterized by higher employment over the whole working life than the already retired cohorts. The increases in female employment rates levelled off for younger age groups after 2008, but rising employment rates remain a major driver of higher labour income in late working age and of the increase in public benefits in the age group 60+. The increases in women's pensions are highly desirable, given their low pensions compared with men. They do, however, raise the average pension relative to income from employment and thereby contribute to the difficulties in balancing contributions and benefits.
Change in employment by age and birth cohort, France
While there is no evidence of a straightforward relationship between income and fertility, low income and economic insecurity can explain part of the fertility decline in European countries. The decline in birth rates continued in the years after 2017 and accelerated during the COVID-19 crisis, with a particularly large drop in Spain (Sobotka et al., 2021). Especially the Southern European countries may find themselves in a vicious circle of demographic and economic developments, in which ageing populations result in a shift of income from young to old, which in turn may lower the number of children and accelerate the ageing process.
The better position of young generations in Poland and Estonia is probably a result of the more dynamic development of the economy in these countries, reflected also in higher growth of GDP per capita. Such developments open up work opportunities for younger generations and trigger demand for skills and knowledge that are more widespread among younger persons, e.g. skills regarding information and communications technology, resulting in an increase of their average real wage. An additional factor may be the high net out-migration from these countries, which supports the wages of the remaining population (e.g. Dustmann et al., 2015; Elsner 2013).
In our sample, all countries with declining national income are characterized by a decrease in income of the young relative to the older population. Escaping from the vicious circle mentioned above, however, requires institutions that distribute the costs of economic crises across all age groups rather than concentrating the income losses among the young. A rich literature explores mechanisms to adapt pension systems automatically to a changing economic and demographic environment and to balance contributions and expenses (e.g. Börsch-Supan 2007; Vidal-Meliá et al., 2009). Such mechanisms could also reduce the differences in age-specific income trends by avoiding a decoupling of changes in primary income from changes in pension benefits. In Europe, Sweden is a role model in implementing such stabilizers in the pension system, adjusting pensions to life expectancy and linking them to changes in primary income. The constant share of taxes in this country, compared with the increase in most other countries, suggests that these mechanisms might work as desired (see Table 1). Our work documents age-specific income trends, but further work is needed to identify possible ways towards a sustainable allocation and redistribution of income between generations.
Our study reveals large differences in age-specific income trends in all nine countries analysed. Although the extent of age-specific differences varies greatly across countries, we observe common patterns.
In most countries, the period 2008–2017 was characterized by a stagnation or decline in the income of households and an increase in public-sector redistribution. While GDP per capita decreased only in Greece and Italy between 2008 and 2017, income per capita decreased in six of the nine analysed countries. Only in Estonia and Sweden did income increase significantly, by about 15 percent over the whole period. From 2008 to 2017, income taxes relative to primary income increased in seven of the nine countries and stagnated in two, while benefits relative to primary income increased in all countries.
The age group 20–39 lost income relative to older age groups in all countries except Estonia and Poland; in five of the nine countries, the young lost even in absolute terms. The differences in age-specific income trends are particularly large in Southern Europe. In Italy and Spain, mean income of the population aged 20–39 declined by about 17 percent and that of the older working-age population aged 40–59 by about eight percent, while income increased for the elderly population aged 60+. In Greece, income declined for all age groups, but less so for the older populations. In Austria, France, Slovenia and Sweden, mean income of the population aged 20–39 merely stagnated, while it increased strongly for the population aged 60 and older.
A decomposition analysis revealed that the main drivers of these age-specific differences in income trends are (i) a decline or stagnation of employment rates and income of the 20–39-year-olds, (ii) an increase in employment in the older age groups 40–59 and 60+, and (iii) a strong increase in benefits for the population 60+. The increase in employment and income among the older population is mainly due to increased labour force participation and higher pensions for women.
In summary, this paper revealed important intergenerational disparities in the development of individual income, especially in countries that were hit hard during the previous financial crisis. These findings are crucial with regard to the current COVID-19 pandemic, with its unprecedented societal and economic consequences. For many young Europeans, the pandemic adds to their already precarious economic situation. Knowledge about age-specific income trends may help to find better and generationally balanced answers to economic crises.
The tables included in the paper are also provided as supplementary material in Excel format.
The code for the age-specific analysis and the decomposition is provided as supplementary material to the paper. It requires access to EU-SILC micro-data.
For example, \(\varDelta _\% \mathrm {YL}_i=\frac{\mathrm {YL}_{i,2017} - \mathrm {YL}_{i,2008}}{\mathrm {Y}_{i,2008}}\).
Adult persons are defined as all individuals aged 35 and older. A person aged 16 to 34 is counted as an adult only if he or she is in employment or focuses on domestic work, or if the household contains only persons below age 35 who are not in employment.
Aliaj, A., Flawinne, X., Jousten, A., Perelman, S., & Shi, L. (2016). Old-age employment and hours of work trends: Empirical analysis for four european countries. IZA Journal of European Labor Studies, 5(1), 1–22.
Barbieri, P. (2011). Italy: No country for young men (and women): The Italian way of coping with increasing demands for labour market flexibility and rising welfare problems. In Globalized labour markets and social inequality in Europe (pp. 108–145). Springer.
Barbieri, P., Cutuli, G., Luijkx, R., Mari, G., & Scherer, S. (2019). Substitution, entrapment, and inefficiency? cohort inequalities in a two-tier labour market. Socio-Economic Review, 17(2), 409–431.
Bell, A. (2020). Age period cohort analysis: A review of what we should and shouldn't do. Annals of Human Biology, 47(2), 208–217.
Börsch-Supan, A. (2007). Rational pension reform. The Geneva Papers on Risk and Insurance-Issues and Practice, 32(4), 430–446.
Chauvel, L., & Schröder, M. (2014). Generational inequalities and welfare regimes. Social Forces, 92(4), 1259–1283.
Chen, T., Hallaert, J.-J., Pitt, A., Qu, H., Queyranne, M., Rhee, A., Shabunina, A., Vandenbussche, J., & Yackovlev, I. (2018). Inequality and Poverty across Generations in the European Union. Washington: International Monetary Fund.
De Fraja, G., Lemos, S., Rockey, J., et al. (2017). The wounds that do not heal: The life-time scar of youth unemployment. Centre for Economic Policy Research Discussion Papers 11852.
Dustmann, C., Frattini, T., & Rosso, A. (2015). The effect of emigration from Poland on Polish wages. The Scandinavian Journal of Economics, 117(2), 522–564.
Elsner, B. (2013). Emigration and wages: The EU enlargement experiment. Journal of International Economics, 91(1), 154–163.
European Commission (2013). European System of accounts: ESA 2010. Technical report, European Union.
European Commission (2018). The 2018 Ageing Report: Economic and budgetary projections for the 28 EU Member States (2016-2070). European Economy. Institutional paper 079.
European Commission (2019). Taxation Trends in the European Union: Data for EU-Member States, Iceland and Norway (p. 2019). Luxembourg: Publications Office of the European Union.
European Commission (2021). Green paper on ageing: Fostering solidarity and responsibility between generations.
Eurostat (2020a). Data: Annual Sector Accounts (ESA 2010), non-financial transactions. Table [nasa_10_nf_tr].
Eurostat (2020b). Data: At-risk-of-poverty rate by poverty threshold, age and sex - EU-SILC and ECHP surveys. Table [ilc_li02].
Eurostat (2020c). Data: Employment rates by sex, age and citizenship (%). Table [lfsa_ergan].
Eurostat (2020d). Data: Mean and median income by age and sex - EU-SILC and ECHP surveys. Table [ilc_di03].
Fosse, E., & Winship, C. (2019). Analyzing age-period-cohort data: A review and critique. Annual Review of Sociology, 45, 467–492.
Garibaldi, P., & Taddei, F. (2013). Italy: A dual labour market in transition.
Goedemé, T. (2013). How much confidence can we have in EU-SILC? Complex sample designs and the standard error of the Europe 2020 poverty indicators. Social Indicators Research, 110(1), 89–110.
Goldstein, J., Kreyenfeld, M., Jasilioniene, A., & Örsal, D. D. K. (2013). Fertility reactions to the great recession in europe: Recent evidence from order-specific data. Demographic Research, 29, 85–104.
IMF (2018). World economic outlook 2018: Challenges to steady growth. Technical report, International Monetary Fund.
Matysiak, A., Sobotka, T., & Vignoli, D. (2020). The Great Recession and fertility in Europe: A sub-national analysis. European Journal of Population, 1–36.
Mroz, T. A., & Savage, T. H. (2006). The long-term effects of youth unemployment. Journal of Human Resources, 41(2), 259–293.
Sobotka, T., Jasilioniene, A., Galarza, A. A., Zeman, K., Nemeth, L., & Jdanov, D. (2021). Baby bust in the wake of the COVID-19 pandemic? First results from the new STFF data series. SocArXiv papers, https://doi.org/10.31235/osf.io/mvy62
Törmälehto, V.-M. (2019). Reconciliation of EU Statistics on Income and Living Conditions (EU-SILC) data and national accounts. Eurostat working papers.
Trindade, L. Z., & Goedemé, T. (2016). Notes on updating the EU-SILC UDB sample design variables 2012-2014. Technical report.
Vidal-Meliá, C., del Carmen Boado-Penas, M., & Settergren, O. (2009). Automatic balance mechanisms in pay-as-you-go pension systems. The Geneva Papers on Risk and Insurance-Issues and Practice, 34(2), 287–317.
Wittgenstein Centre (2020). European Demographic Datasheet 2020. Available at: www.populationeurope.org.
This paper uses micro data from the European Union Statistics on Income and Living Conditions (Eurostat).
Open access funding provided by TU Wien (TUW). Our research is supported by JPI-MYBL and funded by the Austrian Federal Ministry of Education, Science and Research and the Jubiläumsfonds of the Austrian National Bank under project no. 18465. JPI-MYBL is supported by J-Age II, which is funded by Horizon 2020, the EU framework programme for research and innovation, under grant agreement 643850
TU Wien, Institute of Statistics and Mathematical Methods in Economics and Wittgenstein Centre for Demography and Global Human Capital (IIASA, OeAW, University of Vienna), Vienna, Austria
Bernhard Hammer & Alexia Prskawetz
University of Vienna, Department of Demography, Wittgenstein Centre for Demography and Global Human Capital (IIASA, OeAW, University of Vienna), Vienna, Austria
Sonja Spitzer
Bernhard Hammer
Alexia Prskawetz
Correspondence to Bernhard Hammer.
Supplementary material 1 (XLSX 82 kb)
Supplementary material 2 (PDF 274 kb)
1.1 Allocation of Household-Level Income to Individuals
Some income components in EU-SILC are only given at household level and need to be assigned to individuals for this analysis. Family benefits are assigned to the parents of the economically dependent children in the household. Since parental leave benefits as part of total family benefits are targeted at the person taking over most of the care responsibilities, we distribute family benefits within couples according to the inverse share of their labour income. If one of the partners has no income because of the engagement in care work, this partner is assumed to receive all family benefits.
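One way to implement the within-couple split is sketched below (our code): each partner receives a share of the family benefits equal to the other partner's share of the couple's labour income, so that a partner without labour income receives everything; the equal split when neither partner has labour income is our own assumption.

```python
def split_family_benefits(benefits, income_a, income_b):
    """Split household family benefits between two partners according to the
    inverse share of their labour income (illustrative sketch)."""
    total = income_a + income_b
    if total == 0:
        return benefits / 2, benefits / 2   # assumption: equal split if neither partner earns
    share_a = income_a / total
    # Partner A receives the share 1 - share_a (i.e. partner B's income share),
    # which reduces to "all benefits" when A has no labour income.
    return benefits * (1 - share_a), benefits * share_a
```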
Imputed rent is regarded as a type of asset income and assigned to the household members who report being responsible for the accommodation. Asset income is assumed to be shared equally among all adults in the household (see Footnote 2). Finally, personal income of persons below 16 is assigned to the 15-year-old members, or to the oldest child if no 15-year-old is present in the household.
We want to emphasize again that these allocation rules have no strong effect on our results. An alternative allocation of household-level income to all adults in equal shares resulted in negligible differences in the total changes of income.
1.2 Decomposition of Income Changes
This section describes the decomposition of the changes in average income from employment \(\varDelta _\% \mathrm {YL}_{i}\) and average benefits \(\varDelta _\% \mathrm {YB}_{i}\) for each age group i, and provides an illustrative example at the end of the section.
The decomposition relies on information about employment during the income reference period (the calendar year preceding the year of the survey). To calculate employment rates, we use the information on economic status in each month (variables PL073–PL076 in EU-SILC). For example, an observation counts as fully employed if he or she reports employment for 12 months; an observation that reports only six months in employment is considered half employed and half non-employed. We refrain from further accounting for the extent of employment, since we do not have information on the exact number of hours worked. Note that individuals who are fully employed may also receive public transfers, e.g. child allowance. Most public transfers, however, target individuals outside employment via pensions, parental leave benefits or unemployment benefits, and these are also the components that account for the income changes over time.
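The following minimal sketch (ours) illustrates this fraction-of-year measure; the assumption that PL073–PL076 simply count the months spent as a full-time employee, part-time employee, full-time self-employed and part-time self-employed person is ours and should be checked against the EU-SILC documentation.

```python
def employment_fraction(pl073=0, pl074=0, pl075=0, pl076=0):
    """Share of the income reference year spent in employment (sketch).

    The four arguments are assumed to be months (0-12) spent as full-time
    employee, part-time employee, full-time self-employed and part-time
    self-employed, respectively.
    """
    months_employed = pl073 + pl074 + pl075 + pl076
    return min(months_employed, 12) / 12.0

print(employment_fraction(pl073=12))           # 1.0: counts as fully employed
print(employment_fraction(pl073=3, pl074=3))   # 0.5: half employed, half non-employed
```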
Remember that in our analysis we measure the change of each income component relative to total income \(Y_i\). For example, \(\varDelta _\% \mathrm {YL}_i=\frac{\mathrm {YL}_{i,2017} - \mathrm {YL}_{i,2008}}{\mathrm {Y}_{i,2008}}\). We can also write \(\varDelta _\% \mathrm {YL}_i\) as percentage change of \(\mathrm {YL}_i\) (indicated by \(\delta _\% \mathrm {YL}_i\)), scaled with the share of \(\mathrm {YL}_i\) in total income: \(\frac{\mathrm {YL}_{i,2017} - \mathrm {YL}_{i,2008}}{\mathrm {YL}_{i,2008}}*\frac{\mathrm {YL}_{i,2008}}{\mathrm {Y}_{i,2008}} = \delta _\% \mathrm {YL}_i*\frac{\mathrm {YL}_{i,2008}}{\mathrm {Y}_{i,2008}}\). Likewise, \(\varDelta _\% \mathrm {YB}_i\) is the percentage change of \(\mathrm {YB}_i\) scaled with its share in total income: \(\varDelta _\% \mathrm {YB}_i = \delta _\% \mathrm {YB}_i*\frac{\mathrm {YB}_{i,2008}}{\mathrm {Y}_{i,2008}}\).
We write \(\mathrm {YL}_i\) as product of employment rate (\(l_i\)) and average income of each employed person (\(\text {yl}_i\)): \(\text {YL}_i = \text {yl}_i*l_i\). Because public benefits are mainly directed at persons not in employment, such as pensioners, unemployed persons or parents on leave, we can write average benefits as product of the share of non-employed persons \((1-l_i)\) and average benefits per non-employed person (\(\text {yb}_i\)): \(\text {YB}_i = \text {yb}_i*(1-l_i)\). Because the decomposition-approach is the same for \(\text {YL}_i\) and \(\text {YB}_i\), we illustrate it now only for income from employment \(\text {YL}_i\).
In continuous time, i.e. if we looked at an infinitesimal time interval, we could calculate the percentage change of \(\text {YL}_i\) as the derivative of its logarithm with respect to time and write it as the sum of the percentage changes of the two factors: \(\frac{d}{{dt}}\ln \mathrm {YL(t)}_i = \frac{d}{{dt}}\ln \left( l(t)_i*\text {yl(t)}_i\right) =\frac{d}{{dt}}\ln l(t)_i + \frac{d}{{dt}}\ln \text {yl(t)}_i\).
Such an approach is not completely correct for our discrete, ten-year interval. The percentage changes \(\delta _\% \text {yl}_i\) and \(\delta _\% \text {l}_i\) do not exactly add up to \(\delta _\% \text {YL}_i\). However, even for our 10-year time interval the difference is very small. Therefore, we approximate the additive contributions of each factor to \(\delta _\% \text {YL}_i\) by their percentage change. To ensure that the components add up, we calculate \(\delta _\% \text {yl}_i\) as residual: \(\delta _\% \text {yl}_i = \delta _\% \text {YL}_i - \delta _\% l_i\). To measure the contributions of each of these factors in terms of total income, we rescale \(\delta _\% \mathrm {yl}_i\) and \(\delta _\% \mathrm {l}_i\) by \(\frac{\text {YL}_{i,2008}}{\text {Y}_{i,2008}}\). Using this procedure also for the benefits, we can write the total changes in income as sum of the five components (Equation 4 in Sect. 2.3):
For example, in Spain the change in average employment income for all adults aged 20+ (the group "Total" in Table 4) was \(\delta _\% \text {YL}_{20+} = -13\) percent, and the employment rate decreased by four percent (\(\delta _\% l_{20+} = -4\)). We approximated the change in income per employed person as the residual: \(\delta _\% \text {yl}_{20+} = \delta _\% \text {YL}_{20+} - \delta _\% l_{20+} = -13 - (-4) = -9\). Because income from employment (\(\text {YL}_{20+}\)) amounted to 57 percent of total income (\(\text {Y}_{20+}\)), we rescaled these numbers to measure their change in terms of total income: \(\varDelta _\% \mathrm {yl}_{20+} = -9*0.57 \approx -5~\mathrm{percent}\) and \(\varDelta _\% \mathrm {l}_{20+} = -4*0.57 \approx -2~\mathrm{percent}\). The two components together add up to the change of about \(-7\) percent in employment income, \(\varDelta _\% \mathrm {YL}_{20+}\).
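The arithmetic of this example can be reproduced in a few lines (our sketch, with the numbers rounded as in the text):

```python
d_YL, d_l = -13.0, -4.0   # percentage change of YL and of the employment rate
share_YL = 0.57           # share of employment income in total income, 2008

d_yl = d_YL - d_l                     # -9: income per employed person (residual)
print(round(d_yl * share_YL, 1))      # about -5 percent of total income
print(round(d_l * share_YL, 1))       # about -2 percent of total income
print(round(d_YL * share_YL, 1))      # about -7 percent: sum of the two components
```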
1.3 Age-Specific Income Changes: Percentiles
See Tables 5, 6 and 7.
Table 5 Net income and its changes 2008–2017, \(1^{st}\) quartile
Table 6 Median net income and its changes 2008–2017
Table 7 Net income and its changes 2008–2017, \(3^{rd}\) quartile
1.4 Explaining the Difference between Aggregate Income in ESA and EU-SILC
Table 8 compares EU-SILC and ESA data by measuring the share of aggregate income that is covered in EU-SILC, by type of income (coverage ratio, CR), and shows the change of the CR between 2008 and 2017. In general, wages and social benefits are similar in both sources. Differences between ESA and EU-SILC are mainly due to the large differences observed for asset income (Törmälehto, 2019). The CR for total income increased considerably in Austria, Italy and France. In Austria and Italy, this increase can be explained by the decline in asset income: since only a small part of asset income is captured in EU-SILC, the decline in this component increases the share of total income that is captured. In both countries, the CR of employment income and social benefits actually decreased. In France, where asset income may be over-covered by EU-SILC (Törmälehto, 2019), the increase in the total CR is explained by an increase in the CR of both employment income and social benefits. In Estonia, the coverage of income from employment and of social benefits increased, but not the coverage of total income; we observe an unrealistically strong decline of imputed rents in EU-SILC, which reduces the coverage of total income. In Greece, the CR of employment income decreased strongly (−14 per cent), while the CR of social benefits increased by 10 per cent.
Table 8 The coverage ratio of EU-SILC over time: aggregate income in ESA compared to EU-SILC
1.5 Standard Errors of Income Changes
See Table 9.
Table 9 Standard errors of income changes
Hammer, B., Spitzer, S. & Prskawetz, A. Age-Specific Income Trends in Europe: The Role of Employment, Wages, and Social Transfers. Soc Indic Res 162, 525–547 (2022). https://doi.org/10.1007/s11205-021-02838-w
Issue Date: July 2022
Generational economy
Intergenerational equity
January 2012, 6(1): 59-78. doi: 10.3934/jmd.2012.6.59
Reducibility of quasiperiodic cocycles under a Brjuno-Rüssmann arithmetical condition
Claire Chavaudret 1, and Stefano Marmi 2,
Département deMathématiques, Université de Nice Sophia-Antipolis, Parc Valrose, 06108 Nice Cedex 02, France
Scuola Normale Superiore, Piazza dei Cavalieri, 7, 56126, Pisa (PI)
Received November 2011 Published May 2012
The arithmetics of the frequency and of the rotation number play a fundamental role in the study of reducibility of analytic quasiperiodic cocycles which are sufficiently close to a constant. In this paper we show how to generalize previous works by L.H. Eliasson which deal with the diophantine case so as to implement a Brjuno-Rüssmann arithmetical condition both on the frequency and on the rotation number. Our approach adapts the Pöschel-Rüssmann KAM method, which was previously used in the problem of linearization of vector fields, to the problem of reducing cocycles.
Keywords: reducibility, quasiperiodic cocycle, KAM method, Brjuno condition, Diophantine condition, rotation number.
Mathematics Subject Classification: Primary: 34C20; Secondary: 37CX.
Citation: Claire Chavaudret, Stefano Marmi. Reducibility of quasiperiodic cocycles under a Brjuno-Rüssmann arithmetical condition. Journal of Modern Dynamics, 2012, 6 (1) : 59-78. doi: 10.3934/jmd.2012.6.59
A. Avila, B. Fayad and R. Krikorian, A KAM scheme for $SL(2,R)$ cocycles with Liouvillean frequencies, Geom. Funct. Anal., 21 (2011), 1001. doi: 10.1007/s00039-011-0135-6.
A. D. Brjuno, An analytic form of differential equations, Math. Notes, 6 (1969), 927. doi: 10.1007/BF01146416.
C. Chavaudret, Strong almost reducibility for analytic and Gevrey quasi-periodic cocycles, to appear in Bull. Soc. Math. France, (2010).
L. H. Eliasson, Floquet solutions for the 1-dimensional quasi-periodic Schrödinger equation, Comm. Math. Phys., 146 (1992), 447. doi: 10.1007/BF02097013.
L. H. Eliasson, Almost reducibility of linear quasi-periodic systems, in, 69 (2001), 679.
A. Giorgilli and S. Marmi, Convergence radius in the Poincaré-Siegel problem, Discrete Contin. Dyn. Syst. Ser. S, 3 (2010), 601.
R. Johnson and J. Moser, The rotation number for almost periodic potentials, Comm. Math. Phys., 84 (1982), 403. doi: 10.1007/BF01208484.
S. Marmi, P. Moussa and J.-C. Yoccoz, The Brjuno functions and their regularity properties, Comm. Math. Phys., 186 (1997), 265. doi: 10.1007/s002200050110.
J. Pöschel, KAM à la R, Regul. Chaotic Dyn., 16 (2011), 17. doi: 10.1134/S1560354710520060.
H. Rüssmann, KAM iteration with nearly infinitely small steps in dynamical systems of polynomial character, Discrete Contin. Dyn. Syst. Ser. S, 3 (2010), 683. doi: 10.3934/dcdss.2010.3.683.
J.-C. Yoccoz, "Petits Diviseurs en Dimension 1", Astérisque, 231 (1995).
L.-S. Young, Lyapunov exponents for some quasi-periodic cocycles, Ergodic Theory Dynam. Systems, 17 (1997), 483. doi: 10.1017/S0143385797079170.
J. Wang and Q. Zhou, Reducibility results for quasiperiodic cocycles with Liouvillean frequency, J. Dynam. Differential Equations, 24 (2011), 61. doi: 10.1007/s10884-011-9235-0.
Claire Chavaudret, Stefano Marmi. Erratum: Reducibility of quasiperiodic cocycles under a Brjuno-Rüssmann arithmetical condition. Journal of Modern Dynamics, 2015, 9: 285-287. doi: 10.3934/jmd.2015.9.285
Surface EMG signals in very late-stage of Duchenne muscular dystrophy: a case study
Joan Lobo-Prat ORCID: orcid.org/0000-0003-4197-13911,
Mariska M.H.P. Janssen2,
Bart F.J.M. Koopman1,
Arno H.A. Stienen1 &
Imelda J.M de Groot2
Robotic arm supports aim at improving the quality of life for adults with Duchenne muscular dystrophy (DMD) by augmenting their residual functional abilities. A critical component of robotic arm supports is the control interface, as it is responsible for the human-machine interaction. Our previous studies showed the feasibility of using surface electromyography (sEMG) as a control interface to operate robotic arm supports in adults with DMD (22-24 years old). However, in the biomedical engineering community there is an often-raised skepticism on whether adults with DMD at the last stage of their disease have sEMG signals that can be measured and used for control.
In this study sEMG signals from Biceps and Triceps Brachii muscles were measured for the first time in a 37-year-old man with DMD (Brooke 6) who lost his arm function 15 years ago. The sEMG signals were measured during maximal and sub-maximal voluntary isometric contractions and evaluated in terms of signal-to-noise ratio and co-activation ratio. Notwithstanding the profound deterioration of the muscles, we found that sEMG signals from both Biceps and Triceps muscles were measurable in this individual, although with a maximum signal amplitude 100 times lower than that of sEMG from healthy subjects. The participant was able to voluntarily modulate the required level of muscle activation during the sub-maximal voluntary isometric contractions. Despite the low sEMG amplitude and a considerable level of muscle co-activation, simulations of an elbow orthosis using the measured sEMG as the driving signal indicated that the sEMG signals of the participant had the potential to provide control of elbow movements.
To the best of our knowledge this is the first time that sEMG signals from a man with DMD at the last-stage of the disease were measured, analyzed and reported. These findings offer promising perspectives to the use of sEMG as an intuitive and natural control interface for robotic arm supports in adults with DMD until the last stage of the disease.
People with Duchenne muscular dystrophy (DMD) lose independent ambulation by the age of ten years, followed by the development of scoliosis and loss of upper extremity function during their teens, and develop severe cardiomyopathies and respiratory problems during their twenties [1]. Life expectancy of people with DMD has substantially improved over the last five decades, due to improvements in care, drugs, and the introduction of home care technology, such as artificial ventilators [2]. As a result, there is currently a considerable group of adults with DMD living with severe physical impairments and a strong dependency on care up to their 30's [3].
Several arm supports that compensate the weight of the arms are commercially available and have shown an increase of independence and quality of life for teenagers with DMD [4]. However, in adults with DMD, the decrease of muscle force combined with an increase of passive joint-stiffness [5], reduces the effectiveness of current arm supports [6, 7]. More advanced robotic arm supports can provide extra assistance, and have the potential to enable adults with DMD to continue performing activities of daily living, increasing their independence and participation in social activities.
In order to operate robotic arm supports, the user needs to communicate his motion intention to the device through a control interface. Currently, the only control interfaces available for adults with DMD are hand joysticks and switches, which are used to control wheelchairs and external robotic arms. We consider that the use of control interfaces that detect the motion intention from physiological signals that are implicitly related to the supported motion can result in a more natural and intuitive interaction with the robotic arm support. In this direction, we have developed and evaluated force- and surface electromyography (sEMG) based control interfaces [8, 9].
The ability for adults with DMD to use sEMG-based control, a well-established control interface in upper-extremity prosthetics [10], depends on the availability and quality of their sEMG signals. EMG signals have been used for decades in DMD patients for diagnosis and carrier detection [11]. These studies are mostly based on invasive needle EMG recordings. Studies with sEMG in boys and men with Duchenne are less common and most of them report measurements of lower extremity, facial or oral muscles [12–14]. From a comprehensive literature review we found that Priez et al. [15], Bowen et al. [16], Kumagai and Yamada [17], Frascarelli et al. [18], Lobo-Prat et al. [9], and Janssen et al. [19] measured sEMG signals from upper extremity muscles in subjects with DMD aged between 5 and 24 years. To the best of our knowledge there are no published studies that report the measurement of upper-extremity sEMG signals in men with DMD older than 24 years, which is the period when robotic arm supports are most needed.
In a previous study [9], we showed that both sEMG- and force-based control interfaces were feasible solutions for the control of elbow movements in adults with DMD (22-24 years old). Force-based control was experienced as fatiguing by all participants, a fact that indicates that sEMG-based control is probably the only viable interface for adults with DMD at the last stage of the disease. However, in the biomedical engineering community there is an often-raised skepticism over whether adults with DMD at the last stage of their disease (Brooke score 6) have sEMG signals that can be measured and used for control. As a consequence, the development of assistive devices for this group of patients receives little attention. We think that this skepticism might be based on a mistaken preconception and that it is thus important to investigate whether sEMG signals from upper-extremity muscles of men with DMD at the last stage of the disease are measurable and can be used for control.
DMD patients at the last stage of their disease are very rare, and getting them involved in any study is difficult and delicate, because they easily get overwhelmed by the exercises. As a consequence, conducting tests with even just a few of these subjects is very unlikely, and general conclusions will have to be drawn from a number of independent studies. We were able to obtain the kind collaboration of a 37-year-old man with DMD for this study to evaluate the sEMG signals from his upper-extremity biceps and triceps muscles. Although results from only one subject are insufficient to draw general conclusions, they are relevant to communicate because of their exceptional nature and will encourage similar studies.
While we hypothesized that the neural activation of the muscle is still measurable in men with DMD who lost their arm function a long time ago (DMD is a disease affecting the contraction of the muscle cells only), we expected their sEMG signals to have a much lower amplitude than those of healthy individuals: the infiltration of fatty and connective tissue in the muscle is known to increase the electrical impedance [20]. The quality of the sEMG signals was evaluated in terms of signal-to-noise ratio (SNR) and co-activation ratio (CAR). Additionally, we evaluated the feasibility of decoding the user's movement intention from the measured sEMG by simulating a sEMG-controlled elbow orthosis.
A 37-year-old man with DMD participated in this study. The participant was classified according to the Brooke upper extremity function scale [21] with a score of 6 (i.e. no arm/hand function was left) and lost his arm function a long time ago: shoulder movements more than 20 years ago, and elbow flexion-extension more than 15 years ago. He was able to control an electric wheelchair with a highly sensitive joystick and several push buttons that were operated using residual motion of the fingers of both hands. No other functional tasks were possible with his arms or hands. The participant also presented joint contractures caused by the disuse of the arms, which severely limited the range of motion of all the upper-extremity joints.
Signal acquisition and processing
The sEMG signals were measured from the Biceps Brachii and Triceps Brachii muscles of the right arm, which originally was the dominant arm. Two 99.9% Ag parallel-bar electrodes (contact: 10 mm x 1 mm each) spaced 10 mm apart (Bagnoli DE-2.1, Delsys; Boston, Massachusetts) were placed in parallel with the muscle fibers following the SENIAM recommendations [22] and manual muscle exploration, after the skin was shaved and scrubbed clean. The signals were amplified with a Delsys Bagnoli-16 Main Amplifier and Conditioning Unit (Delsys; Boston, Massachusetts) with a bandwidth of 20 to 450 Hz and a gain of 1000.
The sEMG signals were measured with a data acquisition card with a sampling frequency of 1 kHz and 16 bit resolution. The offset of the raw sEMG signals was removed on-line using a fourth-order Butterworth high-pass filter with a cutoff frequency of 20 Hz. Subsequently the signals were full-wave rectified and smoothed using a second-order low-pass Butterworth filter with a cutoff frequency of 1 Hz to obtain the signal envelope. The envelopes were used for the visual feedback during the maximal voluntary isometric contractions (MVIC) and sub-maximal voluntary isometric contractions (SVIC), for the analysis of the CAR, and for the simulation of the elbow orthosis.
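As a concrete illustration of this filter chain, the sketch below applies the same stages to a synthetic signal. It is not the authors' original code: the on-line high-pass stage is replaced by zero-phase off-line filtering for simplicity, and all names and the sample data are ours.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0  # sampling frequency in Hz

def semg_envelope(raw):
    # 4th-order Butterworth high-pass at 20 Hz to remove the offset/drift
    b_hp, a_hp = butter(4, 20.0 / (fs / 2.0), btype='high')
    hp = filtfilt(b_hp, a_hp, raw)
    # full-wave rectification
    rect = np.abs(hp)
    # 2nd-order Butterworth low-pass at 1 Hz to obtain the signal envelope
    b_lp, a_lp = butter(2, 1.0 / (fs / 2.0), btype='low')
    return filtfilt(b_lp, a_lp, rect)

# synthetic 3-second channel standing in for a recorded sEMG trace (in mV)
raw = 0.005 * np.random.randn(int(3 * fs))
envelope = semg_envelope(raw)
```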
Measurement protocol
The participant was asked to perform a series of MVICs and SVICs with his biceps and triceps muscles. The MVICs were measured to investigate the maximal signal amplitude, and the SVICs were measured to investigate whether the participant could voluntarily modulate the intensity of the muscle activation. Since the participant had not been able to voluntarily flex and extend his elbow joint for a long time, the examiner passively moved his elbow joint and applied pressure against the intended movement to familiarize the subject with the task. For the MVIC measurements the researcher asked the participant to maximally flex and extend his elbow for three seconds, three times for each muscle. For the SVIC measurements the subject was asked to reach three activation levels (20%, 40% and 80% of MVIC) for three seconds for each muscle. Note that the actual activation levels achieved by the participant were slightly different from these target levels (see "Results" section). With three activation levels, only one run for each level could be performed to avoid overwhelming the participant with too long a session. All measurements were guided with real-time visual feedback of the sEMG envelopes displayed on a computer screen. For the SVICs the researcher pointed at three levels on the computer screen used for visual feedback. Between each muscle contraction the participant rested for five to ten seconds, and between each series of MVIC or SVIC tasks the participant rested for five to ten minutes.
The quality of the sEMG signals was evaluated in terms of the SNR and the CAR. Additionally, the measured sEMG signals were used to drive a simulated elbow orthosis to evaluate the feasibility of using the sEMG signals for control purposes.
The SNR of the biceps (\(SNR_b\)) and of the triceps (\(SNR_t\)) were calculated by taking the ratio between the power of the root-mean-squared (RMS) amplitude of the raw sEMG signal (\(RMS_b\), \(RMS_t\)) during the three MVIC and SVIC measurements, and the power of the RMS amplitude of the raw sEMG signal during a resting period, which represented the noise level (\(RMS_{nb}\), \(RMS_{nt}\)):
$$ {SNR}_{b}=\left(\frac{RMS_{b}}{RMS_{nb}}\right)^{2},\quad {SNR}_{t}=\left(\frac{RMS_{t}}{RMS_{nt}}\right)^{2}. $$
The SNR of the biceps (\({SNR}_{b_{dB}}\)) and of the triceps (\({SNR}_{t_{dB}}\)) expressed in decibels (dB) was calculated using:
$$ {}{SNR}_{b_{dB}}\,=\,10\!\log_{10}\!\left(\!\frac{RMS_{b}}{RMS_{nb}}\!\right)^{2}\!,\ \ {SNR}_{t_{dB}}\,=\,10\!\log_{10}\!\left(\!\frac{RMS_{t}}{RMS_{nt}}\!\right)^{2}\!. $$
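A direct transcription of Eqs. 1-2 into code might look as follows (illustrative only; the array names are ours and the raw segments are assumed to be precut):

```python
import numpy as np

def rms(x):
    # root-mean-squared amplitude of a signal segment
    return np.sqrt(np.mean(np.square(x)))

def snr(raw_contraction, raw_rest):
    # power ratio of the raw sEMG during contraction vs. rest (Eqs. 1-2)
    ratio = (rms(raw_contraction) / rms(raw_rest)) ** 2
    return ratio, 10.0 * np.log10(ratio)
```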
The highest RMS value found within the three repetitions of the MVIC measurement was used to normalize the EMG signals of the biceps and triceps muscles for the CAR calculation.
Co-activation Ratio
Similarly to the SNR, we evaluated the involuntary activation of the antagonistic muscle by calculating the CAR of the biceps (\(CAR_b\)) and triceps (\(CAR_t\)). The CAR was defined as the ratio between the RMS amplitude of the normalized sEMG signal envelope of the antagonist muscle (\(RMS_{Nt}\), \(RMS_{Nb}\)) and the RMS amplitude of the normalized sEMG signal envelope of the agonist muscle (\(RMS_{Nb}\), \(RMS_{Nt}\)) during the three MVIC and SVIC measurements (Eq. 3). A high CAR value indicates a high level of co-activation, while a low CAR value indicates that only a low level of co-activation is present.
$$ {CAR}_{b}=\frac{RMS_{Nt}}{RMS_{Nb}},\quad {CAR}_{t}=\frac{RMS_{Nb}}{RMS_{Nt}} $$
The normalized signal envelopes of the biceps (\(RMS_{Nb}\)) and triceps (\(RMS_{Nt}\)) were calculated by dividing the RMS value of the signal envelope (\(RMS_{Eb}\), \(RMS_{Et}\)) by the RMS value of the signal envelope of the MVIC (\({RMS}_{Eb_{MVIC}}\), \({RMS}_{Et_{MVIC}}\); Eq. 4). Note that the RMS value of the noise envelope (\(RMS_{Enb}\), \(RMS_{Ent}\)) was subtracted from the signal envelopes before calculating \(RMS_{Nb}\) and \(RMS_{Nt}\).
$$ \begin{aligned} {RMS}_{Nb}=\frac{{RMS}_{Eb} - {RMS}_{Enb}}{{RMS}_{{Eb}_{MVIC}}- {RMS}_{Enb}}, \\ {RMS}_{Nt}=\frac{{RMS}_{Et} - {RMS}_{Ent}}{{RMS}_{{Et}_{MVIC}}- {RMS}_{Ent}} \end{aligned} $$
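A possible implementation of Eqs. 3-4, again with hypothetical array names, is sketched below:

```python
import numpy as np

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

def normalized_rms(env, env_mvic, env_rest):
    # noise-corrected envelope RMS normalized by the MVIC envelope RMS (Eq. 4)
    return (rms(env) - rms(env_rest)) / (rms(env_mvic) - rms(env_rest))

def co_activation_ratio(env_agonist, env_antagonist, mvic_ag, mvic_ant, rest_ag, rest_ant):
    # CAR: normalized antagonist activity divided by normalized agonist activity (Eq. 3)
    return (normalized_rms(env_antagonist, mvic_ant, rest_ant) /
            normalized_rms(env_agonist, mvic_ag, rest_ag))
```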
Elbow orthosis simulation
In order to evaluate the feasibility of using the measured sEMG signals for control purposes, we performed an off-line simulation (i.e. open-loop control) of a sEMG-controlled elbow orthosis. This method was chosen because the high intrinsic stiffness and contractures present in the elbow joint of the participant prevented the use of a real robotic elbow orthoses.
The sEMG-based control method implemented in the simulation and described in this subsection was the same as that used in our previous study [9], where elbow movements of adults with DMD were supported with an elbow orthosis. An estimation of the elbow torque (\(\hat {\tau }_{e}\)) was obtained by multiplying the signal envelopes of the biceps and triceps sEMG signals (\(E_b\), \(E_t\)) by a mapping gain (\(K_b\), \(K_t\)) and subsequently subtracting the triceps signal from the biceps signal:
$$ \hat{\tau}_{e}=E_{b} K_{b}-E_{t} K_{t}. $$
Note that a fixed offset resulting from the sEMG signal noise was removed from the signal envelopes to obtain \(E_b\) and \(E_t\). The offset value was calculated by taking the mean value of the signal envelope during three seconds while the participant was relaxed. For the simulation, the signals measured during the MVIC of the biceps and of the triceps (shown in Fig. 1) were concatenated and used as \(E_b\) and \(E_t\). The mapping gains \(K_b = 2\) Nm/mV and \(K_t = 0.72\) Nm/mV were chosen to properly distinguish biceps from triceps activation and to obtain a symmetric elbow flexion-extension movement.
Raw sEMG signals during three maximal voluntary isometric contractions (MVIC) of biceps and triceps muscles. a Agonist activation of biceps in blue; signal of antagonist muscle (triceps) in red. RMS values for each MVIC of the biceps: \(MVIC_{b1} = 0.0021\) mV, \(MVIC_{b2} = 0.0016\) mV, \(MVIC_{b3} = 0.0019\) mV. b Agonist activation of triceps in red; signal of antagonist muscle (biceps) in blue. RMS values for each MVIC of the triceps: \(MVIC_{t1} = 0.0024\) mV, \(MVIC_{t2} = 0.0026\) mV, \(MVIC_{t3} = 0.0025\) mV
The estimated elbow torque was then used as input for an admittance model that rendered the dynamics of a mass-damper system and had as output the elbow angle (\(\theta_e\)):
$$ \theta_{e} (s)=\frac{1}{(I_{v} s^{2}+B_{v} s)}\hat{\tau}_{e} (s), $$
where \(I_v\) and \(B_v\) represent the virtual mass and damping parameters of the admittance model respectively, and s is the Laplace transform variable. Note that in a real elbow orthosis, the elbow angle (\(\theta_e\); or the angular velocity) resulting from Eq. (6) would be used as reference signal for a low-level position (or velocity) controller of the elbow orthosis as in [9]. The simulation was carried out with \(I_v = 4\cdot10^{-3}\) kg m\(^2\) and \(B_v = 1\cdot10^{-3}\) Nm s/rad respectively, which resulted in a motion close to the natural range of the elbow joint. In a real elbow orthosis the interface dynamics and the mapping gains would be chosen to the convenience of the user.
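The open-loop simulation of Eqs. 5-6 can be reproduced, for example, with a simple forward-Euler integration of the mass-damper admittance. The sketch below uses the parameter values quoted in the text, but it is our own reconstruction rather than the authors' implementation, and the envelope arrays are assumed to be precomputed.

```python
import numpy as np

fs = 1000.0               # sampling frequency, Hz
Kb, Kt = 2.0, 0.72        # mapping gains, Nm/mV
Iv, Bv = 4e-3, 1e-3       # virtual inertia (kg m^2) and damping (Nm s/rad)

def simulate_orthosis(Eb, Et):
    """Eb, Et: biceps/triceps envelope arrays in mV (noise offset already removed)."""
    dt = 1.0 / fs
    tau = Eb * Kb - Et * Kt                  # estimated elbow torque, Eq. (5)
    omega = np.zeros_like(tau)               # angular velocity, rad/s
    theta = np.zeros_like(tau)               # elbow angle, rad
    for k in range(1, len(tau)):
        # forward-Euler step of Iv * domega/dt + Bv * omega = tau, Eq. (6)
        domega = (tau[k - 1] - Bv * omega[k - 1]) / Iv
        omega[k] = omega[k - 1] + dt * domega
        theta[k] = theta[k - 1] + dt * omega[k]
    return theta, omega
```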
Maximal voluntary isometric contractions (MVIC)
The raw sEMG signals of the MVICs during three seconds presented maximum amplitudes of 0.01 and 0.015 mV for biceps and triceps muscles respectively (Fig. 1). The RMS values of the sEMG signals of the agonist muscle were higher for the triceps (\(MVIC_t\) in Fig. 1) than for the biceps (\(MVIC_b\) in Fig. 1) in all three repetitions, with an average value of 0.0025 mV for the triceps and 0.0019 mV for the biceps. Also, the mean SNR (Fig. 4 a) of the triceps during MVIC was double that of the biceps (\(SNR_t\): 4.16±0.5 (12.4±1 dB); \(SNR_b\): 2.17±0.5 (6.7±2 dB); Fig. 4 a and b). Both RMS and SNR values were lower for the biceps than for the triceps, which could indicate a higher muscle atrophy of the biceps (infiltration of fatty and connective tissue in the muscle, which degrades the sEMG signal).
Mean CAR values for both muscles during the MVICs were 0.55±0.2 for the biceps and 0.44±0.2 for the triceps (\(CAR_b\) and \(CAR_t\) respectively in Eq. 3), which indicated profound co-activation of the antagonist muscle, even more so for the biceps than for the triceps (Fig. 4 c).
Sub-maximal voluntary isometric contractions (SVIC)
When performing the SVICs, the participant was able to generate sEMG signals that followed the level of effort demanded by the task (Figs. 2 and 3), indicating that the participant could voluntarily modulate the required level of muscle activation. The SNR of the SVICs increased for both biceps and triceps muscles with the increase of SVIC level (Fig. 4 a), as expected because of the rise in signal amplitude. In agreement with the results of the MVICs, the SNRs of the triceps muscle were higher than those of the biceps muscle for all conditions.
Envelope of the sEMG signals during sub-maximal voluntary isometric contractions of biceps muscle. a Envelope of the sEMG signals measured during the 20%, 40% and 80% \(SVIC_b\) of the biceps muscle in blue. Envelope of the antagonist muscle (triceps) in dashed red. b Boxplots of the 3000 data points (i.e. 3 seconds) measured for each of the SVIC levels shown in A. In blue the boxplots of the biceps sEMG signals and in faded red the boxplots of the antagonist muscle (triceps). Note that the noise level of the sEMG signal during relaxation is also shown
Envelope of the sEMG signals during sub-maximal voluntary isometric contractions of triceps muscle. a Envelope of the sEMG signals measured during the 20%, 40% and 80% \(SVIC_t\) of the triceps muscle in red. Envelope of the antagonist muscle (biceps) in dashed blue. b Boxplots of the 3000 data points (i.e. 3 seconds) measured for each of the SVIC levels shown in A. In red the boxplots of the triceps sEMG signals and in faded blue the boxplots of the antagonist muscle (biceps). Note that the noise level of the sEMG signal during relaxation is also shown
The CARs of the biceps muscle were higher than the CARs of the triceps muscle for all levels of SVIC (Fig. 4 c). While the CARs of the triceps muscle presented values close to 0.5 for all SVICs (with the exception of the 20% SVIC level), the CARs of the biceps muscle presented more variability over activation levels: the CAR of the 20% SVIC level was above 1.5, the CARs of the 40% SVIC level and of the MVIC (100%) were close to 0.5, and the CAR of the 80% SVIC level was close to 1.
Figure 5 a and b show the raw sEMG signals and their envelopes, respectively, used as input for the simulation. Figure 5 c shows the estimated muscle torque calculated by multiplying the signal envelope with the mapping gains \(K_b\) and \(K_t\). Note that after applying the mapping gains, the biceps signal is larger than the triceps signal for the first half of the movement and the triceps signal is larger than the biceps signal for the second half of the movement. Figure 5 d shows the estimated elbow torque resulting from the difference between the biceps and triceps muscle torques. Figure 5 e presents the output of the admittance controller and shows that the simulated system responded with a positive angular velocity during the first part of the movement (biceps activation) and a negative angular velocity during the second part (triceps activation). The integral of this angular velocity is shown in Fig. 5 f, which indicates that the simulated orthosis would successfully support an elbow flexion and extension movement of 1 rad in 20 s with a maximum speed of ±0.2 rad/s under the control of the realistic sEMG signals shown in Fig. 5 a. Note that the apparent delay between the muscle activation signal and the initiation of the movement is due to the phase lag of the second-order dynamics of the admittance model with low mass and damping (Eq. 6). In our previous studies [9, 23], we have seen that these kinds of interaction dynamics are usable by adults with DMD.
Signal-to-noise ratio (SNR) and co-activation ratio (CAR) for the biceps and triceps sEMG signals as function of activation level. a Signal-to-noise ratios of the biceps (blue) and triceps (red) sEMG signals during the three SVIC and MVIC. b Signal-to-noise ratios of the biceps (blue) and triceps (red) sEMG signals during the three SVIC and MVIC expressed in decibels (dB). c Co-activation ratios of the biceps (blue) and triceps (red) sEMG signals during the three SVIC and MVIC. Note that the error bar is only shown for the MVIC as this measurement was repeated three times
Simulation of the sEMG-controlled elbow orthosis. a Raw sEMG signals used as input for the simulation, specifically the first (\(MVIC_{b1}\)) and third (\(MVIC_{t3}\)) MVIC attempts of the biceps (blue) and triceps (red). b Envelopes of the raw sEMG signals of the biceps (blue) and triceps (red). c Estimated muscle torque of the biceps (blue) and triceps (red) obtained by multiplying the envelopes by the mapping gains \(K_b\) and \(K_t\). d Estimated elbow torque calculated by subtracting the estimated triceps torque from the estimated biceps torque (Eq. 5). e Angular velocity resulting from the admittance model (Eq. 6). f Elbow angle displacement resulting from the integral of the angular velocity (Eq. 6)
Figures 6 and 7 in Additional file 5 show the results obtained with the same elbow orthosis model but using all three repetitions of the MVICs and of the SVICs as input signal. These results indicate that it is possible to distinguish three flexion movements following the three MVICs or SVICs of the biceps, and three extension movements following the three MVICs or SVICs of the triceps. Differences in angular velocity and displacement of the movements are due to differences in amplitude and duration of each of the MVICs or SVICs signals. Additional files 1, 2, 3, and 4 contain the datasets of sEMG signals used in this study.
Our results revealed the profound deterioration of the upper-extremity muscles of the 37-year-old man with DMD. The maximum amplitudes of the sEMG signals of the participant were 100 times lower than those typical of healthy individuals (i.e. our measurements of 0.01 mV vs. the 1 mV measured in healthy individuals [24]). These low signal amplitudes implied low SNRs. We also found profound involuntary activation of the antagonistic muscle as revealed by the measured CARs, which could be caused by the disuse of the arms. Note that arm immobilizations of 12 h are sufficient to significantly reduce motor performance in healthy subjects [25]. Probably with practice the participant would learn to isolate better the activation of the muscles.
Despite the deterioration of the muscles, we should not overlook the fact that sEMG signals from the biceps and triceps muscles were still measurable in a 37-year-old man with DMD who has presented considerable muscle deterioration for 20 years and completely lost his arm function 15 years ago. Our results also indicate that the participant was able to adjust his voluntary isometric contractions as demanded by the exercises.
The relevance of detecting measurable upper-extremity sEMG signals in adults with DMD at the late stage of the disease extends beyond its clinical interest. sEMG signals can be used to detect the user's motion intention and control rehabilitation or assistive devices, which have the potential to delay the disease progression and increase the quality of life for men with DMD [26, 27]. In this regard, our simulation, which used the measured sEMG signals as input, suggests that, if the high degree of joint stiffness and contractures were not present, the participant could control an active elbow orthosis to perform flexion-extension movements with the same proportional sEMG-based control method used in our previous study [9]. The angular velocity and displacement obtained by the simulation indicated that it is possible to detect the elbow flexion/extension movement intention of the user from the measured sEMG of the biceps and the triceps muscle. Nevertheless, these results need to be regarded with caution since we did not test the performance of the sEMG-based control using a real system.
Currently, the use of robotic elbow orthoses in adults with DMD in the last stage of the disease is not an option due to the high intrinsic stiffness and joint contractures. However, the use of arm supports would allow people with DMD to keep using their arms and therefore contribute to delaying their functional deterioration. We expect that in the future, boys and men with DMD will use arm supports from an early age, which would preserve the range of motion of their joints and allow them to benefit from the use of sEMG-controlled arm supports until the last stage of the disease.
The results of the present case study indicate that sEMG signals from the biceps and triceps muscles were very deteriorated but still measurable in a 37-year-old man with DMD who lost his arm function several years ago. Also, the participant was able to adjust his muscle activation level as demanded by the SVIC tasks. To the best of our knowledge this is the first time that sEMG signals from a man with DMD at the last stage of the disease were measured and reported. Despite the muscle deterioration, the measured signals could be successfully used as input for the control of a simulated elbow orthosis. These results offer promising perspectives for the use of sEMG as an intuitive and natural control interface for assistive devices in adults with DMD until the last stage of the disease, provided that the use of assistive devices from an early stage of the disease reduces joint stiffness and contractures.
Results from only one subject are insufficient to draw general conclusions, but the difficulty of involving participants with DMD in the last stage of the disease makes the inclusion of several patients in one single study highly complicated. Thus, sufficient evidence should come from integrating independent studies performed in different laboratories. We hope that the results presented in this Short Report will start breaking the current general opinion that sEMG signals in DMD patients at the last stage of the disease are too weak to be used for control, and will encourage similar studies in other parts of the world, finally leading to better assistive devices for adults with DMD.
DMD:
Duchenne muscular dystrophy
MVIC:
Maximal voluntary isometric contraction
RMS:
Root mean squared
sEMG:
surface electromyography
SNR:
Signal to noise ratio
SVIC:
Sub-maximal voluntary isometric contraction
Yiu EM, Kornberg AJ. Duchenne muscular dystrophy. J Paediatr Child Health. 2015; 51(8):759–64.
Eagle M, Bourke J, Bullock R, Gibson M, Mehta J, Giddings D, Straub V, Bushby K. Managing Duchenne muscular dystrophy – The additive effect of spinal surgery and home nocturnal ventilation in improving survival. Neuromuscul Disord. 2007; 17(6):470–5.
Rahbek J, Steffensen BF, Bushby K, de Groot IJM. 206th ENMC International Workshop: Care for a novel group of patients – adults with Duchenne muscular dystrophy Naarden, The Netherlands, 23–25 May 2014. Neuromuscul Disord. 2015; 25(9):727–38.
van der Heide LA, Gelderblom GJ, de Witte LP. Effects and effectiveness of dynamic arm supports: a technical review. Am J Phys Med Rehabil/A of Acad Physiatrists. 2015; 94(1):44–62.
Cornu C, Goubel F, Fardeau M. Muscle and joint elastic properties during elbow flexion in Duchenne muscular dystrophy. J Physiol. 2001; 533(2):605–16.
Rahman T, Ramanathan R, Stroud S, Sample W, Seliktar R, Harwin W, Alexander M, Scavina M. Towards the control of a powered orthosis for people with muscular dystrophy. Proc Inst Mech Eng H J Eng Med. 2001; 215(3):267–74.
Ragonesi D, Agrawal SK, Sample W, Rahman T. Quantifying Anti-Gravity Torques for the Design of a Powered Exoskeleton. IEEE Trans Neural Syst Rehabil Eng. 2013; 21(2):283–8.
Lobo-Prat J, Keemink AQ, Stienen AH, Schouten AC, Veltink4 PH, Koopman BF. Evaluation of EMG, force and joystick as control interfaces for active arm supports. J NeuroEngineering Rehabil. 2014; 11(1):68.
Lobo-Prat J, Kooren P, Janssen M, Keemink AQ, Veltink PH, Stienen AH, Koopman BF. Implementation of EMG- and Force-based Control Interfaces in Active Elbow Supports for Men with Duchenne Muscular Dystrophy: a Feasibility Study. IEEE Trans Neural Syst Rehabil Eng. 2016; 24:1179–90.
Schultz AE, Kuiken TA. Neural Interfaces for Control of Upper Limb Prostheses: The State of the Art and Future Possibilities. PM&R. 2011; 3(1):55–67.
Liguori R, Fuglsang-Frederiksen A, Nix W, Fawcett PR, Andersen K. Electromyography in myopathy. Neurophysiol Clin = Clin Neurophysiol. 1997; 27(3):200–3.
Eckardt L, Harzer W. Am J Orthod Dentofac Orthop Off Publ AA Orthodontists, Its Constituent Soc, and the Am Board of Orthodontics. 1996; 110(2):185–90.
Armand S, Mercier M, Watelain E, Patte K, Pelissier J, Rivier F. A comparison of gait in spinal muscular atrophy, type II and Duchenne muscular dystrophy. Gait Posture. 2005; 21(4):369–78.
van den Engel-Hoek L, Erasmus CE, Hendriks JCM, Geurts ACH, Klein WM, Pillen S, Sie LT, de Swart BJM, de Groot IJM. Oral muscles are progressively affected in Duchenne muscular dystrophy: implications for dysphagia treatment. J Neurol. 2013; 260(5):1295–1303.
Priez A, Duchene J, Goubel F. Duchenne muscular dystrophy quantification: a multivariate analysis of surface EMG. Medical Biol Eng C. 1992; 30(3):283–91.
Bowen RC, Seliktar R, Rahman T, Alexander M. Surface EMG and motor control of the upper extremity in muscular dystrophy: a pilot study. In: Proceedings of the 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2001. New York: IEEE: 2001. p. 1220–12232.
Kumagai K, Yamada M. The Clinical Use of Multichannel Surface Electromyography. Pediatr Int. 1991; 33(2):228–37.
Frascarelli M, Rocchi L, Feola I. EMG computerized analysis of localized fatigue in Duchenne muscular dystrophy. Muscle Nerve. 1988; 11(7):757–61.
Janssen MMHP, Harlaar J, de Groot IJM. Surface EMG to assess arm function in boys with DMD: a pilot study. J Electromyogr Kinesiol Off J Int Soc Electrophysiological Kinesiol. 2015; 25(2):323–8.
Kuiken TA, Lowery MM, Stoykov NS. The effect of subcutaneous fat on myoelectric signal amplitude and cross-talk. Prosthetics Orthot Int. 2003; 27(1):48–54.
Brooke MH, Griggs RC, Mendell JR, Fenichel GM, Shumate JB, Pellegrino RJ. Clinical trial in duchenne dystrophy. I, The design of the protocol. Muscle Nerve. 1981; 4(3):186–97.
Hermens HJ, Freriks B, Merletti R, Stegeman D, Blok J, Rau G, Disselhorst-Klug C, Hagg G. European Recommendations for Surface Electromyography. Enschede: Roessingh Research and Development b.v; 1999.
Nizamis K, Lobo-Prat J, Keemink AQL, Carloni R, Stienen AHA, Koopman BFJM. Switching proportional EMG control of a 3d endpoint arm support for people with duchenne muscular dystrophy. In: 2015 IEEE International Conference on Rehabilitation Robotics (ICORR). New York: IEEE: 2015. p. 235–40.
Semmler JG, Tucker KJ, Allen TJ, Proske U. Eccentric exercise increases EMG amplitude and force fluctuations during submaximal contractions of elbow flexor muscles. J Appl Physiol. 2007; 103(3):979–89.
Moisello C, Bove M, Huber R, Abbruzzese G, Battaglia F, Tononi G, Ghilardi MF. Short-Term Limb Immobilization Affects Motor Performance. J Mot Behav. 2008; 40(2):165–76.
Jansen M, van Alfen N, Geurts ACH, de Groot IJM. Assisted bicycle training delays functional deterioration in boys with Duchenne muscular dystrophy: the randomized controlled trial "no use is disuse". Neurorehabil Neural Repair. 2013; 27(9):816–27.
Jansen M, Burgers J, Jannink M, van Alfen N, Groot IJM. Upper Limb Training with Dynamic Arm Support in Boys with Duchenne Muscular Dystrophy: A Feasibility Study. Int J Phys Med Rehabil. 2015; 3(256):2. Accessed 13 Jan 2016.
The authors would like to thank Prof. Peter H. Veltink and Arvid Q.L. Keemink for their support on the data analysis.
This research was supported by the Dutch Technology Foundation STW* (project number: 11832), the Duchenne Parent Project, Spieren voor Spieren, Prinses Beatrix Spierfonds, Johanna Kinderfonds and Rotterdams Kinderrevalidatie Fonds Adriaanstichting, Focal Meditech, OIM Orthopedie, Ambroise and InteSpring.
* which is part of the Netherlands Organisation for Scientific Research (NWO), and which is partly funded by the Ministry of Economic Affairs.
The dataset(s) supporting the conclusions of this article is(are) included within the article (and its additional file(s)).
Department of Biomechanical Engineering, University of Twente, Drienerlolaan 5, Enschede, 7522 NB, The Netherlands
Joan Lobo-Prat, Bart F.J.M. Koopman & Arno H.A. Stienen
Department of Rehabilitation, Radboud University Medical Center, Reinier Postlaan 4, Nijmegen, 6500 HB, The Netherlands
Mariska M.H.P. Janssen & Imelda J.M de Groot
Joan Lobo-Prat
Mariska M.H.P. Janssen
Bart F.J.M. Koopman
Arno H.A. Stienen
Imelda J.M de Groot
JLP, MJ, AS, IG, BK conceived and designed the experiment. JLP was the main editor of the manuscript. MJ, AS and IG contributed to the writing of the manuscript. JLP and MJ performed the experiment. JLP, MJ, AS and IG analyzed the data. MJ and IG obtained the ethical approval. All authors read and approved the final manuscript.
Correspondence to Joan Lobo-Prat.
The Medical Ethics Committee of the Radboud University Nijmegen Medical Center approved the study design, protocols and procedures, and written informed consent was obtained from each subject.
Written informed consent was obtained from the participants for the publication of this report and any accompanying images.
Raw sEMG signals during the biceps MVICs. MAT file containing two vectors of data (30001x1) of the raw sEMG signals of the biceps (EMGb) and the triceps (EMGt). Data was measured at 1 kHz. (MAT 421 kb)
Raw sEMG signals during the triceps MVICs. MAT file containing two vectors of data (30001x1) of the raw sEMG signals of the biceps (EMGb) and the triceps (EMGt). Data was measured at 1 kHz. (MAT 424 kb)
Raw sEMG signals during the biceps SVICs. MAT file containing two vectors of data (59806x1) of the raw sEMG signals of the biceps (EMGb) and the triceps (EMGt). Data was measured at 1 kHz. (MAT 854 kb)
Raw sEMG signals during the triceps SVICs. MAT file containing two vectors of data (59706x1) of the raw sEMG signals of the biceps (EMGb) and the triceps (EMGt). Data was measured at 1 kHz. (MAT 853 kb)
Additional results of the simulation of the sEMG-controlled elbow orthosis. PDF file showing two figures with additional results of the simulation of the sEMG-controlled elbow orthosis. (PDF 899 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Lobo-Prat, J., Janssen, M.M., Koopman, B.F. et al. Surface EMG signals in very late-stage of Duchenne muscular dystrophy: a case study. J NeuroEngineering Rehabil 14, 86 (2017). https://doi.org/10.1186/s12984-017-0292-4
Duchenne
Surface electromyography (sEMG)
Search Results: 1 - 10 of 6705 matches for "Alessandro Bazzi"
Editorial of the Open Journal on Modelling and Simulation [PDF]
Alessandro Bazzi
Open Journal of Modelling and Simulation (OJMSi) , 2013, DOI: 10.4236/ojmsi.2013.13004
Abstract: Editorial of the Open Journal on Modelling and Simulation
Multiradio Resource Management: Parallel Transmission for Higher Throughput?
Alessandro Bazzi, Gianni Pasolini, Oreste Andrisano
EURASIP Journal on Advances in Signal Processing , 2008, DOI: 10.1155/2008/763264
Abstract: Mobile communication systems beyond the third generation will see the interconnection of heterogeneous radio access networks (UMTS, WiMax, wireless local area networks, etc.) in order to always provide the best quality of service (QoS) to users with multimode terminals. This scenario poses a number of critical issues, which have to be faced in order to get the best from the integrated access network. In this paper, we will investigate the issue of parallel transmission over multiple radio access technologies (RATs), focusing the attention on the QoS perceived by final users. We will show that the achievement of a real benefit from parallel transmission over multiple RATs is conditioned to the fulfilment of some requirements related to the kind of RATs, the multiradio resource management (MRRM) strategy, and the transport-level protocol behaviour. All these aspects will be carefully considered in our investigation, which will be carried out partly adopting an analytical approach and partly by means of simulations. In this paper, in particular, we will propose a simple but effective MRRM algorithm, whose performance will be investigated in IEEE802.11a-UMTS and IEEE802.11a-IEEE802.16e heterogeneous networks (adopted as case studies).
Evaluation of Non-Genetic Factors Affecting Birth Weight in Sistani Cattle
Hossein Bazzi
Journal of Animal and Veterinary Advances , 2012, DOI: 10.3923/javaa.2011.3095.3099
Abstract: The study undertaken investigates the effects of some non-genetic factors (sex of calf, year and season of birth, parity and calving difficulty) on birth weight in Sistani cattle. Data were collected on 932 (466 males and 466 females) Sistani calves from the progenies born in the Sistani Cattle Research Station of Sistan and Baluchistan province in Iran during the period from 1989-2007. Analysis of variance indicated that the effects of sex of calf, year and season of birth, parity and calving difficulty, with gestation length as a covariate, on birth weight were significant (p<0.01). The least square mean for birth weight of Sistani calves was found to be 24.143±0.509 kg. The effect of calf sex on birth weight was highly significant (p<0.01). Male calves were 1.935 kg heavier at birth than females; birth weights of male calves were 7-8% heavier than those of female calves. The winter-born calves had the highest (25.168 kg) birth weight. Calves born in early parities were lighter in weight than those born to late-parity dams. The difference between the means for the maximum and minimum years is 6.023 kg. In Sistani cattle, difficult calving occurred in 1.2% of births. First-parity cows exhibited more frequent calving difficulty, whereas among other parities there were no statistically significant differences.
Weight distribution of cosets of small codes with good dual properties
Louay Bazzi
Abstract: The {\em bilateral minimum distance} of a binary linear code is the maximum $d$ such that all nonzero codewords have weights between $d$ and $n-d$. Let $Q\subset \{0,1\}^n$ be a binary linear code whose dual has bilateral minimum distance at least $d$, where $d$ is odd. Roughly speaking, we show that the average $L_\infty$-distance -- and consequently the $L_1$-distance -- between the weight distribution of a random coset of $Q$ and the binomial distribution decays quickly as the bilateral minimum distance $d$ of the dual of $Q$ increases. For $d = \Theta(1)$, it decays like $n^{-\Theta(d)}$. At the other extreme, $d=\Theta(n)$, it decays like $e^{-\Theta(d)}$. It follows that almost all cosets of $Q$ have weight distributions very close to the binomial distribution. In particular, we establish the following bounds. If the dual of $Q$ has bilateral minimum distance at least $d=2t+1$, where $t\geq 1$ is an integer, then the average $L_\infty$-distance is at most $\min\{\left(e\ln{\frac{n}{2t}}\right)^{t}\left(\frac{2t}{n}\right)^{\frac{t}{2} }, \sqrt{2} e^{-\frac{t}{10}}\}$. For the average $L_1$-distance, we conclude the bound $\min\{(2t+1)\left(e\ln{\frac{n}{2t}}\right)^{t} \left(\frac{2t}{n}\right)^{\frac{t}{2}-1},\sqrt{2}(n+1)e^{-\frac{t}{10}}\}$, which gives nontrivial results for $t\geq 3$. We give applications to the weight distribution of cosets of extended Hadamard codes and extended dual BCH codes. Our argument is based on Fourier analysis, linear programming, and polynomial approximation techniques.
The Effects of Some Environmental Factors Affecting on the Weaning Weight of Sistani Beef Calves
Hossein Bazzi, Mahmoud Ghazaghi
Abstract: This experiment analyses the growth of calves of the conservation nucleus for Sistani cattle. Data in this study were obtained from the Sistani cattle Research Station of Sistan and Baluchistan province in Iran. Weaning weights data was available on 372 Sistani beef calves (198 male and 174 female), born between 2003 and 2007. The effects of sire, age of dam, year/season of birth, sex of calf and birth weight was used as a covariate on the 205 days weaning weight which was computed by analysis of variance (GLM). Overall mean of the 205 days weaning weight of all calves was 127.25 kg. According to the age of the dam, the weaning weight increased up to 7 years (with the exception of 6 years) and after the maximum (137.6 kg) decreased. The minimum values were found in the group of 8 years old (97.7 kg) cows. With respect to birth year, the highest weaning weight (141.1 kg) was observed in 2007 and the lowest (101.66 kg) in 2003. The year effect varied but the trend observed is that of an increase in weaning weight with time. For birth season, Spring, Summer, Autumn and Winter, the 205 days weaning weight was 112.2, 115.3, 123.2 and 131 kg, respectively. Male calves reached 121.6 kg and female calves 119.2 kg mean value of the adjusted weaning weight.
Effects of Environmental Factors on Body Weight of Sistani Goat at Different Ages
Abstract: The objective of this study was to evaluate the effect of some non-genetic factors on pre-weaning and post-weaning growth in Sistani goat kids. Data from 81 records (36 males and 45 females) were analyzed. The data were collected from the Institute of Research on Domesticated Animals, Zabol University, in Sistan and Baluchestan province, Iran, in 2008. The effects of dam weight after kidding, sex, birth type and time of weaning on the weights at birth, weaning, 6, 9 and 12 months of age, and on average pre- and post-weaning daily gain, were studied. The average birth weight of male kids was about 3% higher than that of female kids. The overall means of body weight at birth (BW), weaning 3 (WW3), weaning 4 (WW4), 6 (W6), 9 (W9) and 12 (W12) months of age were 1.915, 7.796, 9.900, 15.597, 26.253 and 34.2 kg, respectively. Birth weight averaged 2.398, 2.355, 1.858 and 1.865 kg for single males and females, and twin males and females, respectively. Kids had a faster growth rate from 6-9 months, with a daily gain of 118.4 g day-1, and the average daily gain decreased as age increased from 9-12 months. The average pre-weaning daily gain of females was also higher than that of males, but no significant difference was observed in pre-weaning gain. Heavier dams produced heavier kids (r = 0.319), but later weights had no relation with dam weight. Male kids in comparison with female kids, and single-born kids in comparison with twin-born kids, had higher birth and weaning weights.
Impact of redundant checks on the LP decoding thresholds of LDPC codes
Louay Bazzi, Hani Audah
Abstract: Feldman et al.(2005) asked whether the performance of the LP decoder can be improved by adding redundant parity checks to tighten the LP relaxation. We prove that for LDPC codes, even if we include all redundant checks, asymptotically there is no gain in the LP decoder threshold on the BSC under certain conditions on the base Tanner graph. First, we show that if the graph has bounded check-degree and satisfies a condition which we call asymptotic strength, then including high degree redundant checks in the LP does not significantly improve the threshold in the following sense: for each constant delta>0, there is a constant k>0 such that the threshold of the LP decoder containing all redundant checks of degree at most k improves by at most delta upon adding to the LP all redundant checks of degree larger than k. We conclude that if the graph satisfies a rigidity condition, then including all redundant checks does not improve the threshold of the base LP. We call the graph asymptotically strong if the LP decoder corrects a constant fraction of errors even if the LLRs of the correct variables are arbitrarily small. By building on the work of Feldman et al.(2007) and Viderman(2013), we show that asymptotic strength follows from sufficiently large expansion. We also give a geometric interpretation of asymptotic strength in terms of pseudocodewords. We call the graph rigid if the minimum weight of a sum of check nodes involving a cycle tends to infinity as the block length tends to infinity. Under the assumptions that the graph girth is logarithmic and the minimum check degree is at least 3, rigidity is equivalent to the nondegeneracy property that adding at least logarithmically many checks does not give a constant weight check. We argue that nondegeneracy is a typical property of random check-regular graphs.
Effect of Dye Structure on the Photodegradation Kinetic Using TiO2 Nanoparticles [PDF]
Hawraa Ayoub, Mounir Kassir, Mohammad Raad, Houssein Bazzi, Akram Hijazi
Journal of Materials Science and Chemical Engineering (MSCE) , 2017, DOI: 10.4236/msce.2017.56004
Abstract: In this study the effect of pH, adsorption behavior and the chemical structures of two dyes (Methyl Orange and Bromothymol Blue) on the photodegradation rate constant was investigated. The adsorption isotherm shows that the adsorption amount of dyes on the TiO2 surface is highly related to the pH of the solution and to the pKa of each dye. In acidic medium the adsorption percentage of Methyl Orange on the TiO2 surface was 76% versus 5% for Bromothymol Blue. The kinetic study shows compatibility between the degradation rate constant and the adsorption percentage on the surface. In basic medium the adsorption percentage of Methyl Orange and Bromothymol Blue is similar, while the degradation rate of Methyl Orange is two times faster than that of Bromothymol Blue, which reveals the role of chemical structure in the photodegradation rate.
Pre-Concentration and Determination of Molybdenum in the Rouge River, Michigan, USA, with Graphite Furnace Atomic Absorption Spectrometry
Ali Bazzi, Bo Ra Ye
International Journal of Chemistry , 2012, DOI: 10.5539/ijc.v4n6p54
Abstract: Molybdenum is an essential element to humans because of its role in several enzymes, and its occurrence in natural waters is of significance from environmental and biochemical standpoints. Owing to the low concentration of molybdenum in natural waters, pre-concentration is required prior to its determination with atomic spectroscopic techniques. This paper reports on the pre-concentration and determination of molybdenum in the Rouge River, Michigan, USA with graphite furnace atomic absorption spectrometry (GFAAS). Sample preparation and pre-concentration were performed using ultra-trace analysis methodology in a class 100 clean room laboratory. The molybdenum was pre-concentrated on a Bio-Rad Chelex 100 resin, followed by elution from the resin with ammonia solution. Subsequently, the single-point standard addition method was used, and the absorbance owing to molybdenum was measured at 313.3 nm. An overall concentration factor of ten was realized for the final pre-concentrated volume, and the results from several sampling locations on the four branches of the Rouge River yielded molybdenum concentrations ranging from 1.98 to 4.21 ug·L-1 with an overall average of 2.94 ug·L-1. The precision of the results, based on quintuplet determinations from each sampling site, varied between 6.1 and 8.8% relative standard deviation (%RSD). Although the concentration of molybdenum in the Rouge River is in line with the lower reported molybdenum levels in the US and world rivers, it is higher than the level that arises from natural sources only and therefore has anthropogenic causes.
|
CommonCrawl
|
Vinogradov's method for sums of more than three primes
In Hardy-Littlewood's 1923 paper "Some problems of 'Partitio Numerorum' III" it is proven, assuming a weak version of GRH (namely that there is $\varepsilon>0$ s.t. all zeroes of $L(s,\chi)$ have $\Re(s)<3/4-\varepsilon$), that for all $k\geqslant 3$, when $n\to\infty$ through the integers with the same parity as $k$: $$ r_{k}(n) \sim \frac{2C_k}{(k-1)!}\frac{n^{k-1}}{\log(n)^k}\prod_{\substack{p\mid n \\ p\geqslant 3}} \left(\frac{(p-1)^k+(-1)^k(p-1)}{(p-1)^k -(-1)^k} \right), $$
where $r_{k}(n):= \#\{(p_1,\ldots,p_k)\in\mathbb{P}^k:\sum p_i = n \}$ and $C_k$ is a constant given by: $$ C_k := \prod_{p\geqslant 3} \left(1- \frac{(-1)^k}{(p-1)^k}\right). $$
Roughly 15 years later Vinogradov introduced his influential technique that allowed him to prove this estimate unconditionally for $k=3$. One thing that bothers me is: what about the other $k$?
The Wikipedia article on Goldbach's conjecture has (as of today [08/Oct/2016]) the following passage:
This formula has been rigorously proven to be asymptotically valid for $k \geqslant 3$ from the work of Vinogradov, but is still only a conjecture when $k=2$.$^{\text{[citation needed]}}$
I took a quick look at K. F. Roth & Anne Davenport's translation of Vinogradov's The Method of Trigonometrical Sums in the Theory of Numbers, and Chapter X, "Goldbach's Problem", begins with the following passage:
In the present chapter I give a solution of Goldbach's problem concerning the representability of every sufficiently large odd number $N$ as the sum of three primes, and I establish an asymptotic formula for the number of representations.
The method used here enables one also to solve more general additive problems involving primes, for example the question of representability of large numbers $N$ in the form $$ N = p_1^n + \ldots + p_s^n $$ (Waring's problem for primes). But I do not consider these more general questions here.
Every other source I consulted only talks about the case $k=3$. Does Vinogradov's method for $k=3$ imply the other cases as a corollary? Where can I find more (historical) information about the other $k$ in this Hardy-Littlewood estimate? Thanks in advance!
nt.number-theory reference-request analytic-number-theory circle-method
GH from MO
Alufat
Larger $k$ follow trivially from the case $k=3$. For example, the number of ways of writing $n$ as a sum of $4$ primes is just $\sum_{p\le n} R_3(n-p)$ (maybe divide by $4$ for repeats) where $R_3(n-p)$ is the number of ways of writing $n-p$ as a sum of $3$ primes, and now use Vinogradov. – Lucia Oct 8 '16 at 15:38
@Lucia Hmm, it is not really obvious to me that everything fits perfectly just by looking at this asymptotic expression, but that's a pretty good point, I must confess I had not thought of that! Many thanks. – Alufat Oct 8 '16 at 15:53
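For concreteness, Lucia's reduction can be checked numerically for small $n$: the minimal Python sketch below (brute force, with made-up helper names; it counts ordered tuples, so no division for repeats is needed) verifies $r_4(n)=\sum_{p\le n} r_3(n-p)$.

```python
from itertools import product

def primes_upto(limit):
    """Simple sieve of Eratosthenes returning all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [p for p in range(limit + 1) if sieve[p]]

def r_k(k, n, primes):
    """Number of ordered k-tuples of primes summing to n (brute force)."""
    return sum(1 for t in product(primes, repeat=k) if sum(t) == n)

N = 40
primes = primes_upto(N)
for n in range(8, N + 1):
    direct = r_k(4, n, primes)                                    # r_4(n) counted directly
    via_r3 = sum(r_k(3, n - p, primes) for p in primes if p < n)  # sum over p of r_3(n - p)
    assert direct == via_r3, (n, direct, via_r3)
print("r_4(n) = sum_p r_3(n-p) verified for all n <=", N)
```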
Your question is concerned with the so-called Waring-Goldbach problem. A classic in this topic is Hua Lo Keng's book, Additive theory of prime numbers (Translations of Mathematical Monographs, 13, American Mathematical Society, Providence, R.I. 1965), which focuses on how large the number of summands $s$ should be in terms of $n$ for an asymptotic formula to hold. It is a characteristic feature of the method that if it works for some value of $s$, then it also works for all larger values of $s$. Hua himself derived an asymptotic formula for $s>cn^2\log(2n)$, with some absolute constant $c>0$, and a more concrete lower bound is available for individual $n$'s. The method has been refined in many ways since, but this book is certainly a good starting point. For later developments, and a good overview in general, I warmly recommend the survey by Kumchev-Tolev: see especially Theorem 3 and the subsequent comments there.
GH from MO
Wow, I had no idea how general this problem could get. This survey will save me months of being stuck at the library! Thank you very much :) – Alufat Oct 8 '16 at 16:45
@ChrisTáfula: I am glad I could help! – GH from MO Oct 8 '16 at 19:03
Also in The Hardy-Littlewood Method by Robert C. Vaughan. The important part is that the second edition mentions me. – Will Jagy Oct 9 '16 at 0:29
Where does the "Hardy-Littlewood" conjecture that pi(x+y) < pi(x) + pi(y) originate?
New proofs to major theorems leading to new insights and results?
Asymptotics for the number of ways to sum primes such that the sum is <= n
Two different ways to count Mersenne Primes
Using Vinogradov's theorem for finding prime solutions to a linear equation (an exercise from Vaughan's book)
Are sets with similar asymptotic behavior as the primes necessarily finite additive bases?
Does Zhang's theorem generalize to $3$ or more primes in an interval of fixed length?
Erdös-Turán via Hardy-Littlewood circle method?
Error term for Vinogradov's three prime theorem
Proportion of numbers with prime divisors from restricted set
|
CommonCrawl
|
DNA Structure, Triplet Repeats, and Hereditary Neurological Diseases
Wells, Robert D. 2
Characterization of Thioltransferase from Kale
Sa, Jae-Hoon;Yong, Mi-Young;Song, Byung-Lim;Lim, Chang-Jin 20
Thioltransferase, also known as glutaredoxin, is an enzyme that catalyzes the reduction of a variety of disulfides, including protein disulfides, in the presence of reduced glutathione. Thioltransferase was purified from kale through ammonium sulfate fractionation, DE-52 ion-exchange chromatography, Sephadex G-75 gel filtration, and Q-Sepharose ion-exchange chromatography. Its molecular size was estimated to be about 31,000 daltons on SDS-PAGE. The purified enzyme has an optimum pH of about 8.0 with 2-hydroxyethyl disulfide as a substrate. The enzyme also utilizes L-sulfocysteine, L-cystine, bovine serum albumin, and insulin as substrates in the presence of GSH. The enzyme has $K_m$ values of 0.24-0.67 mM for these substrates. The enzyme was partly inactivated after heating at $80^{\circ}C$ or higher temperature for 30 min. The enzyme was stimulated by various thiol compounds such as reduced glutathione, dithiothreitol, L-cysteine, and $\beta$-mercaptoethanol. This is a second example of a plant thioltransferase which was purified and characterized.
Unusual Allosteric Property of L-alanine Dehydrogenase from Bacillus subtilis
Kim, Soo-Ja;Lee, Woo-Yiel;Kim, Kwang-Hyun 25
Kinetic studies of reactions catalyzed by L-alanine dehydrogenase from Bacillus subtilis in the presence of $Zn^{2+}$ were carried out. The substrate (L-alanine) saturation curve is hyperbolic in the absence of the metal ion, but it becomes sigmoidal when $Zn^{2+}$ is added to the reaction mixture, indicating positive cooperative binding of the substrate in the presence of zinc ion. The cooperativity of substrate binding depends on the zinc ion concentration: the Hill coefficients ($n_H$) varied from 1.0 to 1.95 when the zinc ion concentration varied from 0 to $60\;{\mu}M$. The inhibition of AlaDH by $Zn^{2+}$ is reversible and noncompetitive with respect to $NAD^+$ ($K_i\;=\;5.28{\times}10^{-5}\;M$). $Zn^{2+}$ itself binds to AlaDH with positive cooperativity, and the cooperativity is independent of substrate concentration. The Hill coefficients of substrate binding in the presence of $Zn^{2+}$ are not affected by the enzyme concentration, indicating that $Zn^{2+}$ binding does not change the polymerization-depolymerization equilibria of the enzyme. Among other metal ions, $Zn^{2+}$ appears to be a specific reversible inhibitor inducing conformational change through intersubunit interaction. These results indicate that $Zn^{2+}$ is an allosteric competitive inhibitor and that the substrate, which is non-cooperative per se, excludes $Zn^{2+}$ from its binding site and thus exhibits positive cooperativity. The allosteric mechanism of AlaDH from Bacillus subtilis is consistent with both the MWC and Koshland allosteric models.
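For reference (the standard definition, assumed here rather than restated in the abstract): the Hill coefficient $n_H$ is obtained by fitting initial velocities to
\[ v = \frac{V_{\max}\,[S]^{n_H}}{K^{n_H} + [S]^{n_H}}, \]
so $n_H = 1$ corresponds to an ordinary hyperbolic (non-cooperative) saturation curve, while $n_H > 1$ indicates the sigmoidal, positively cooperative binding reported above.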
Purification and Properties of Phenylalanine Ammonia-lyase from Chinese Cabbage
Lim, Hye-Won;Sa, Jae-Hoon;Kim, Tae-Soo;Park, Eun-Hee;Park, Soo-Sun;Lim, Chang-Jin 31
Phenylalanine ammonia-lyase (PAL; EC 4.3.1.5), the first enzyme in the phenylpropanoid biosynthesis, catalyzes the elimination reaction of ammonium ion from L-phenylalanine. PAL was purified from the cytosolic fraction of Chinese cabbage (Brassica campestris ssp. napus var. pekinensis) through ammonium sulfate fractionation, DEAE-cellulose chromatography, Sephadex G-200 chromatography, and Q-Sepharose chromatography. It consists of four identical subunits, the molecular mass of which was estimated to be about 38,000 daltons on SDS-PAGE. The optimal pH and temperature of the purified enzyme are 8~9 and $45^{\circ}C$, respectively. Its activity is greatly inhibited by $Zn^{2+}$ ion, and strongly activated by caffeic acid. The purified PAL has some different characteristics compared to those obtained with other PALs.
Purification and Characterization of the Catabolic α-Acetolactate Synthase from Serratia marcescens
Joo, Han-Seung;Kim, Soung-Soo 37
The catabolic ${\alpha}$-acetolactate synthase was purified to homogeneity from Serratia marcescens ATCC 25419 using ammonium sulfate fractionation, DEAE-Sepharose, Phenyl-Sepharose, and Hydroxylapatite column chromatography. The native molecular weight of the enzyme was approximately 150 kDa and composed of two identical subunits with molecular weights of 64 kDa each. The N-terminal amino acid sequence of the enzyme was determined to be Ala-Gln-Glu-Lys-Thr-Gly-Asn-Asp-Trp-Gln-His-Gly-Ala-Asp-Leu-Val-Val-Lys-Asn-Leu. It was not inhibited by the branched chain amino acids and sulfometuron methyl herbicide. The optimum pH of the enzyme was around pH 5.5 and the pI value was 6.1. The catabolic ${\alpha}$-acetolactate synthase showed weak immunological relationships with recombinant tobacco ALS, barley ALS, and the valine-sensitive ALS isozyme from Serratia marcescens.
19F NMR Investigation of F1-ATPase of Escherichia coli Using Fluorinated Ligands
Jung, Seun-Ho;Kim, Hyun-Won 44
Asymmetry amongst the nucleotide binding sites of Escherichia coli $F_1$-ATPase was examined using the $^{19}F$ NMR signal from fluorinated analogs of adenine nucleotides bound to the nucleotide binding sites. ADP-$CF_2-{PO_3}^{2-}$ showed no inhibitory effect on $F_1$-ATPase, but ADP-CHF-${PO_3}^{2-}$ (racemic mixture) showed competitive inhibition of $F_1$-ATPase with a $K_i$ of $60\;{\mu}M$. ADP-CHF-${PO_3}^{2-}$ shows only negligible binding to $EF_1$ in the absence of $Mg^{2+}$. With the addition of $Mg^{2+}$ to the medium, the $^{19}F$ resonance of free ADP-CHF-${PO_3}^{2-}$ disappeared and new broad resonances appeared. The appearance of more than two new asymmetric resonances following the binding of ADP-CHF-${PO_3}^{2-}$ to $EF_1$ may indicate that at least one of the isomers showed split resonances. This may suggest that the region between the ${\alpha}$- and ${\beta}$-phosphates of ADP-CHF-${PO_3}^{2-}$ bound to the catalytic sites experiences a different environment at different sites.
Overexpression and Spectroscopic Characterization of a Recombinant Human Tumor Suppressor p16INK4
Lee, Weon-Tae;Jang, Ji-Uk;Kim, Dong-Myeong;Son, Ho-Sun;Yang, Beon-Seok 48
$p16^{INK4}$, which is a 16-kDa polypeptide protein, inhibits the catalytic activity of the CDK4-cyclin D complex to suppress tumor growth. Both unlabeled and isotope-labeled human tumor suppressor $p16^{INK4}$ protein were overexpressed and purified to characterize their biochemical and structural properties. The purified p16 binds to monomeric GST-CDK4 and remains in a monomeric conformation for several weeks at $4^{\circ}C$. The circular dichroism (CD) data indicate that p16 contains a high percentage of ${\alpha}$-helix and that the helix percentage is maximal at a pH value of 7.0. One- and two-dimensional nuclear magnetic resonance (NMR) data suggest that p16 purified from our construct has a unique folded conformation under our experimental conditions and exhibits quite stable conformational characteristics.
Expression and Characterization of CMCax Having β-1,4-Endoglucanase Activity from Acetobacter xylinum
Koo, Hyun-Min;Song, Sung-Hee;Pyun, Yu-Ryang;Kim, Yu-Sam 53
The CMCax gene from Acetobacter xylinum ATCC 23769 was cloned and expressed in E. coli. With this gene, three gene products - mature CMCax, CMCax containing the signal peptide (pre-CMCax), and a glutathione-S-transferase (GST)-CMCax fusion enzyme - were expressed. CMCax and pre-CMCax aggregated into multimeric forms which showed high CMC hydrolysis activity, whereas GST-CMCax was less aggregated and showed lower activity, indicating that oligomerization of CMCax contributes to the cellulose hydrolysis activity to achieve greater efficiency. The enzyme was identified as a $\beta$-1,4-endoglucanase, which catalyzes the cleavage of internal $\beta$-1,4-glycosidic bonds of cellulose. The reaction products from cellopentaose as a substrate, cellobiose and cellotriose, were identified by HPLC. Hydrolysis of cellotetraose by this enzyme was poor, and the reaction products consisted of glucose, cellobiose, and cellotriose in a very low yield. These results suggest that cellopentaose might be the shortest oligosaccharide substrate in terms of the number of glucose units. The optimum pH of CMCax and pre-CMCax was about 4.5, whereas that of GST-CMCax was rather broad, at pH 4.5-8. The physiological significance of the cellulose-hydrolyzing enzyme CMCax, having such low $\beta$-1,4-endoglucanase activity and low optimum pH in cellulose-producing A. xylinum, is not yet clearly known, but it seems to be closely related to the production of cellulose.
Characterization and Epitope Mapping of KI-41, a Murine Monoclonal Antibody Specific for the gp41 Envelope Protein of the Human Immunodeficiency Virus-1
Shin, Song-Yub;Park, Jung-Hyun;Jang, So-Youn;Lee, Myung-Kyu;Hahm, Kyung-Soo 58
In this study, a mouse monoclonal antibody (mAb) against gp41(584-618), the immunodominant epitope peptide, was generated. For this purpose, BALB/c mice were immunized with double-branched multiple antigenic peptides derived from the HIV-1 gp41(584-618) sequence, and antibody-secreting hybridomas were produced by fusion of mouse splenocytes with SP2/0 myeloma cells. One clone producing an antigen-specific mAb, termed KI-41 (isotype IgG1), was identified, whose specific reactivity against gp41(584-618) could be confirmed by ELISA and Western blot analysis. Epitope mapping revealed the recognition site of the mAb KI-41 to be located around the sequence RILAVERYLKDQQLLG, which comprises the N-terminal region within the immunized gp41(584-618) peptide. Since this mAb recognizes this specific epitope within the HIV-1 gp41 without any cross-reactivity to other immunodominant regions in the HIV-2 gp35, KI-41 will provide alternative possibilities in further applications such as the development of indirect or competitive ELISAs for specific antibody detection in HIV-1 infection or for other basic research regarding the role and function of HIV-1 gp41.
Blockage of the Immune Complex-triggered Transmembrane Proximity Between Complement Receptor Type 3 and Microfilaments by Staurosporine and Methyl-2,5-dihydroxycinnamate
Poo, Ha-Ryoung;Lee, Young-Ik;Todd, Robert F. III;Petty, Howard R. 64
Recent studies have suggested that the integrin CR3 participates in the signal transduction pathways of certain GPI-anchored phagocytic receptors including $Fc{\gamma}RIIIB$. One consequence of this functional linkage is an inducible association between CR3 and cortical microfilaments that is triggered by $Fc{\gamma}RIIIB$ binding to immobilized immune complexes (IC). That this signaling event requires the co-expression of $Fc{\gamma}RIIIB$ with CR3 was documented by the use of NIH 3T3 transfectants expressing both CR3 and $Fc{\gamma}RIIIB$ (clone 3-23), CR3 alone (clone 3-19), and $Fc{\gamma}RIIIB$ alone (clone 3-15). Pretreatment of 3-23 cells with protein kinase inhibitors such as staurosporine and methyl 2,5-dihydroxycinnamate (MDHC) blocked IC-stimulated CR3-microfilament proximity without affecting the extent to which $Fc{\gamma}RIIIB$ constrains the lateral membrane mobility of a subset of CR3 on the cell surface (as measured in fluorescence recovery after photobleaching experiments). These data support the idea that CR3 and $Fc{\gamma}RIIIB$ molecules are physically and functionally associated and that ligation of $Fc{\gamma}RIIIB$ triggers CR3-dependent signal transduction.
Biochemical Characterization of 1-Aminocyclopropane-1-Carboxylate Oxidase in Mung Bean Hypocotyls
Jin, Eon-Seon;Lee, Jae-Hyeok;Kim, Woo-Taek 70
The final step in ethylene biosynthesis is catalyzed by the enzyme 1-aminocyclopropane-1-carboxylate (ACC) oxidase. ACC oxidase was extracted from mung bean hypocotyls and its biochemical characteristics were determined. In vitro ACC oxidase activity required ascorbate and $Fe^{2+}$, and was enhanced by sodium bicarbonate. Maximum specific activity (approximately 20 nl ethylene $h^{-1}$ mg $protein^{-1}$) was obtained in an assay medium containing 100 mM MOPS (pH 7.5), $25\;{\mu}M$ $FeSO_4$, 6 mM sodium ascorbate, 1 mM ACC, 5 mM sodium bicarbonate and 10% glycerol. The apparent $K_m$ for ACC was $80{\pm}3\;{\mu}M$. Pretreating mung bean hypocotyls with ethylene increased in vitro ACC oxidase activity twofold. ACC oxidase activity was strongly inhibited by metal ions such as $Co^{2+}$, $Cu^{2+}$, $Zn^{2+}$, and $Mn^{2+}$, and by salicylic acid. Inactivation of ACC oxidase by salicylic acid could be overcome by increasing the $Fe^{2+}$ concentration of the assay medium. The possible mode of inhibition of ACC oxidase activity by salicylic acid is discussed.
An Efficient System for the Expression and Purification of Yeast Geranylgeranyl Protein Transferase Type I
Kim, Hyun-Kyung;Kim, Young-Ah;Yang, Chul-Hak 77
To purify geranylgeranyl protein transferase type I (GGPT-I) efficiently, a gene expression system using the pGEX-4T-1 vector was constructed. The cal1 gene, encoding the ${\beta}$ subunit of GGPT-I, was subcloned into the pGEX-4T-1 vector and co-transformed into E. coli cells harboring the ram2 gene, the ${\alpha}$ subunit gene of GGPT-I. GGPT-I was highly expressed as a fusion protein with glutathione S-transferase (GST) in E. coli and purified to homogeneity by glutathione-agarose affinity chromatography, and the GST moiety was excised by thrombin treatment. The purified yeast GGPT-I showed a dose-dependent increase in transferase activity; its apparent $K_m$ value for an undecapeptide fused with GST (GST-PEP) was $0.66\;{\mu}M$ and the apparent $K_m$ value for geranylgeranyl pyrophosphate (GGPP) was $0.071\;{\mu}M$.
Up-Regulation of Interleukin-4 Receptor Expression by Interleukin-4 and CD40 Ligation via Tyrosine Kinase-Dependent Pathway
Kim, Hyun-Il;So, Eui-Young;Yoon, Suk-Ran;Han, Mi-Young;Lee, Choong-Eun 83
Recently a B cell surface molecule, CD40, has emerged as a receptor mediating a co-stimulatory signal for B cell proliferation and differentiation. To investigate the mechanism of synergy between interleukin-4 (IL-4) and CD40 ligation in B cell activation, we have examined the effect of CD40 cross-linking on IL-4 receptor expression in human B cells using an anti-CD40 antibody. We observed that IL-4 and anti-CD40 both induce IL-4 receptor gene expression with rapid kinetics, resulting in a noticeable accumulation of IL-4 receptor mRNA within 4 h. While IL-4 caused a dose-dependent induction of surface IL-4 receptor expression, the inclusion of anti-CD40 in the IL-4-treated culture further up-regulated the IL-4-induced IL-4 receptor expression, as analyzed by flow cytometry. Pretreatment of B cells with inhibitors of protein tyrosine kinase (PTK) resulted in a significant inhibition of both the IL-4- and anti-CD40-induced IL-4 receptor mRNA levels, while protein kinase C (PKC) inhibitors had no effect. These results suggest that IL-4 and CD40 ligation generate B cell signals which, via PTK-dependent pathways, lead to the synergistic induction of IL-4 receptor gene expression. The rapid induction of IL-4 receptor gene expression through tyrosine kinase-mediated signal transduction by B cell-activating stimuli would provide cells with the capacity for an efficient response to IL-4 in the early phase of IL-4 action, and may in part constitute the molecular basis of the reported anti-CD40 co-stimulatory effect on the IL-4-induced response.
Structural and Dynamic Studies of the Central Segments in the Self-complementary Decamer DNA Duplexes d(ACGTATACGT)2 and d(ACGTTAACGT)2
Park, Jin-Young;Lee, Joon-Hwa;Choi, Byong-Seok 89
The structures of the self-complementary decamer duplexes $d(ACGTATACGT)_2$ (TATA-duplex) and $d(ACGTTAACGT)_2$ (TTAA-duplex) have been obtained in solution by proton NMR spectroscopy and restrained molecular dynamics. The duplexes are essentially B-type, with distortions apparent at the TATA and TTAA steps. These distortions and their effects on dynamics have been investigated by measuring the imino proton exchange times of the base pairs. The unusual opening kinetics of the central A·T base pairs could be correlated with the abnormal structural properties of the corresponding sequences.
Convenient Preparation of Tumor-specific Immunoliposomes Containing Doxorubicin
Nam, Sang-Min;Cho, Jang-Eun;Son, Byoung-Soo;Park, Yong-Serk 95
Two innovative methods to prepare target-sensitive immunoliposomes containing doxorubicin, by coupling monoclonal antibodies (mAbs DH2, SH1) specific to cancer cell surface antigens ($G_{M3}$, $Le^X$), have been developed and are described here. In the first method, liposomes containing N-glutaryl phosphatidylethanolamine (NGPE) were prepared, followed by the encapsulation of doxorubicin; DH2 or SH1 antibodies were then conjugated to NGPE in the liposomes (direct coupling). In the second method, liposomes were prepared with NGPE/mAb conjugates by the detergent dialysis method (conjugate insertion), and doxorubicin was then encapsulated by a proton gradient. The immunoliposomes prepared by both methods were able to bind specifically to the surface of the tumor cells - B16BL6 mouse melanoma cells. The doxorubicin-entrapment efficiencies of liposomes prepared by direct coupling and conjugate insertion were about 98% and 25%, respectively. These types of liposomal formulation are sensitive to target cells, which can be useful for various clinical applications.
Evaluation of Two Types of Biosensors for Immunoassay of Botulinum Toxin
Choi, Ki-Bong;Seo, Won-Jun;Cha, Seung-Hee;Choi, Jung-Do 101
Immunoassay of botulinum toxin (BTX) type B was investigated using two types of biosensors: a light addressable potentiometric sensor (LAPS) and a surface plasmon resonance (SPR) sensor. A urease-tagged, immuno-filtration capture method was used for LAPS. A tag-free, direct-binding, real-time detection method was used for the SPR sensor. The detection limit of the sandwich assay format with LAPS was 10 ng/ml, which was the lowest among the methods tested. SPR has the advantage of being more convenient, because a tag-free direct-binding assay can be used and the reaction time is reduced, despite its lower sensitivity. This result shows that the sandwich assay format with LAPS can be used as an alternative to the BTX mouse bioassay, which is known as the most sensitive method for the detection of BTX.
Properties of the Endonuclease Secreted by Human B Lymphoblastic IM9 Cells
Kwon, Hyung-Joo;Kim, Doo-Sik 106
We have employed a DNA-native-polyacrylamide gel electrophoresis (DNA-native-PAGE) assay system to characterize the enzyme activity of the endonuclease secreted by human B lymphoblastic IM9 cells. Experimental results clearly demonstrated that the endonuclease activity of IM9 cell culture medium is distinct from that of DNase I in the DNA-native-PAGE assay system. Immunoprecipitation analysis using an anti-DNase I antibody showed that the secreted endonuclease is not recognized by the antibody. The activity of the secreted endonuclease was assayed using supercoiled plasmid DNA as a substrate. The pH optimum required for the catalytic activity was determined to be in the range of pH 6.6-7.4. No significant difference in endonuclease secretion was observed upon stimulation of the IM9 cells with interferon-${\gamma}$ or interleukin-$1{\beta}$.
|
CommonCrawl
|
Subject: 54D55
New characterizations of the $S$ topology on the Skorokhod space
Jakubowski, Adam
Electronic Communications in Probability Volume 23, (2018).
On $\mathbf{\mathbf{E}}$-Frames in separable Hilbert Spaces
Dehghan, Mohammad Ali and Talebi, Gholamreza
Banach Journal of Mathematical Analysis Volume 9, Number 3 (2015), 43-74.
SELECTION PRINCIPLES RELATED TO $\alpha_i$-PROPERTIES
Kočinac, Ljubiša D. R.
Taiwanese Journal of Mathematics Volume 12, Number 3 (2008), 561-571.
On Sequentially Compact Subspaces of without the Axiom of Choice
Keremedis, Kyriakos and Tachtsis, Eleftherios
Notre Dame Journal of Formal Logic Volume 44, Number 3 (2003), 175-184.
A Non-Skorohod Topology on the Skorohod Space
Electronic Journal of Probability Volume 2, (1997).
Subsets of $^{\omega}\omega $ and generalized metric spaces.
Burke, D. K. and Davis, S. W.
Pacific Journal of Mathematics Volume 110, Number 2 (1984), 273-281.
A note on linearly ordered net spaces.
Boone, James R.
Pacific Journal of Mathematics Volume 98, Number 1 (1982), 25-35.
Products of generalized metric spaces
Gittings, Raymond F.
Rocky Mountain Journal of Mathematics Volume 9, Number 3 (Summer 1979), 479-498.
Quotient-universal sequential spaces.
Sirois-Dumais, R. and Willard, S.
Pacific Journal of Mathematics Volume 66, Number 1 (1976), 281-284.
Generalizations of the first axiom of countability
Siwiec, Frank
Rocky Mountain Journal of Mathematics Volume 5, Number 1 (Winter 1975), 1-60.
|
CommonCrawl
|
Publication Info.
Wind and Structures
Construction/Transportation > Design/Analysis for Facilities
The WIND AND STRUCTURES, An International Journal, aims at:
- Major publication channel for research in the general area of wind and structural engineering;
- Wider distribution at more affordable subscription rates;
- Faster reviewing and publication for manuscripts submitted.
The main theme of the Journal is the wind effects on structures. Areas covered by the journal include:
- Wind loads and structural response
- Bluff-body aerodynamics
- Computational method
- Wind tunnel modeling
- Local wind environment
- Codes and regulations
- Wind effects on large scale structures
http://www.techno-press.org/papers/ KSCI KCI SCOPUS SCIE
Volume 5 Issue 2_3_4
The subtle effect of integral scale on the drag of a circular cylinder in turbulent cross flow
Younis, Nibras;Ting, David S.K. 463
https://doi.org/10.12989/was.2012.15.6.463 KSCI
The effects of Reynolds number (Re), freestream turbulence intensity (Tu) and integral length scale (${\Lambda}$) on the drag coefficient ($C_d$) of a circular cylinder in cross flow were experimentally studied for $6.45{\times}10^3$ < Re < $1.82{\times}10^4$. With the help of orificed plates, Tu was fixed at approximately 0.5%, 5%, 7% and 9% and the normalized integral length scale (${\Lambda}$/D) was varied from 0.35 to 1.05. Our turbulent results confirmed the general trend of decreasing $C_d$ with increasing Tu. The effectiveness of Tu in reducing $C_d$ is found to lessen with increasing ${\Lambda}$/D. Most interestingly, freestream turbulence of low Tu (${\approx}5\%$) and large ${\Lambda}$/D (${\approx}1.05$) can increase the $C_d$ above the corresponding smooth flow value.
Calculated external pressure coefficients on livestock buildings and comparison with Eurocode 1
Kateris, D.L.;Fragos, V.P.;Kotsopoulos, T.A.;Martzopoulou, A.G.;Moshou, D. 481
Greenhouse-type metal structures are increasingly used in the modern construction of livestock farms because they are less laborious to construct and they provide a more favorable microclimate for the growth of animals compared to conventional livestock structures. A key stress factor for metal structures is the wind. The external pressure coefficient ($c_{pe}$) is used for the calculation of the wind effect on the structures. A high pressure coefficient value leads to an increase of the construction weight and subsequently to an increase in the construction cost. The EC1, in conjunction with EN 13031-1:2001, which is specialized for greenhouses, gives values for this coefficient. This value must satisfy two requirements: the safety of the structure and a reduced construction cost. In this paper, the Navier-Stokes and continuity equations are solved numerically with the finite element method (Galerkin method) in order to simulate the two-dimensional, incompressible, viscous air flow over the vaulted roofs of single-span and twin-span (with eaves) livestock greenhouse structures, with a height of 4.5 m and span lengths of 9.6 and 14 m. The simulation was carried out in a numerical wind tunnel. The numerical results for the pressure coefficients, as well as their distribution, are presented and compared with data from the Eurocodes for wind actions (EC1, EN 13031-1:2001). The results of the numerical experiment were close to the values given by the Eurocodes mainly on the leeward area of the roof, while on the windward area a further segmentation is suggested.
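For context, the way $c_{pe}$ enters the load calculation (the standard Eurocode 1 relation, stated here as background rather than quoted from the paper) is
\[ w_e = q_p(z_e)\, c_{pe}, \]
where $w_e$ is the wind pressure acting on the external surface and $q_p(z_e)$ is the peak velocity pressure at the reference height $z_e$; any reduction in $c_{pe}$ therefore translates directly into lighter, cheaper members.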
Acrosswind aeroelastic response of square tall buildings: a semi-analytical approach based on wind tunnel tests on rigid models
Venanzi, I.;Materazzi, A.L. 495
The present paper is focused on the prediction of the acrosswind aeroelastic response of square tall buildings. In particular, a semi-analytical procedure is proposed based on the assumption that square tall buildings, for reduced velocities corresponding to operational conditions, do not experience vortex shedding resonance or galloping and fall in the range of positive aerodynamic damping. Under these conditions, aeroelastic wind tunnel tests can be unnecessary and the response can be correctly evaluated using wind tunnel tests on rigid models and analytical modeling of the aerodynamic damping. The proposed procedure consists of two phases. First, simultaneous measurements of the pressure time histories are carried out in the wind tunnel on rigid models, in order to obtain the aerodynamic forces. Then, aeroelastic forces are analytically evaluated and the structural response is computed through direct integration of the equations of motion considering the contribution of both the aerodynamic and aeroelastic forces. The procedure, which gives a conservative estimate of the aeroelastic response, has the advantage that aeroelastic tests are avoided, at least in the preliminary design phase.
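A minimal sketch of the kind of direct time integration described above (not the authors' code: a single generalized mode, a velocity-proportional aeroelastic term, and all parameter values are assumptions made for illustration):

```python
import numpy as np

def integrate_sdof(force, m, c_struct, c_aero, k, dt):
    """Direct (semi-implicit Euler) integration of
    m*x'' + (c_struct + c_aero)*x' + k*x = force(t),
    where c_aero stands in for the analytically modeled aerodynamic damping."""
    x = np.zeros(force.size)
    v = np.zeros(force.size)
    c = c_struct + c_aero
    for i in range(force.size - 1):
        a = (force[i] - c * v[i] - k * x[i]) / m
        v[i + 1] = v[i] + a * dt
        x[i + 1] = x[i] + v[i + 1] * dt
    return x

# toy usage: a random stand-in for the generalized acrosswind force from pressure tests
dt = 0.01
t = np.arange(0.0, 60.0, dt)
force = 1e3 * np.random.default_rng(0).standard_normal(t.size)
x = integrate_sdof(force, m=2e5, c_struct=5e3, c_aero=2e3, k=8e6, dt=dt)
print("peak generalized displacement:", np.abs(x).max())
```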
Pressure field of a rotating square plate with application to windborne debris
Martinez-Vazquez, P.;Kakimpa, B.;Sterling, M.;Baker, C.J.;Quinn, A.D.;Richards, P.J.;Owen, J.S. 509
Traditionally, a quasi-steady response has been assumed for the aerodynamic force and moment coefficients acting on a flat plate while 'flying' through the air. Such an assumption has enabled the flight paths of windborne debris to be predicted and an indication of its potential damage to be inferred. In order to investigate this assumption in detail, a series of physical and numerical simulations relating to flat plates subject to autorotation has been undertaken. The physical experiments have been carried out using a novel pressure acquisition technique which provides a description of the pressure distribution on a square plate that was allowed to auto-rotate at different speeds, achieved by modifying the velocity of the incoming flow. The current work has, for the first time, enabled characteristic pressure signals on the surface of an auto-rotating flat plate to be attributed to vortex shedding.
Effect of a through-building gap on wind-induced loading and dynamic responses of a tall building
To, Alex P.;Lam, K.M.;Wong, S.Y.;Xie, Z.N. 531
Many tall buildings possess through-building gaps at middle levels of the building elevation. Some of these floors are used as sky gardens, or refuge floors, through which wind can flow with limited blockage. It has been reported in the literature that through-building gaps can be effective in reducing across-wind excitation of tall buildings. This paper systematically examines the effectiveness of two configurations of a through-building gap, at the mid-height of a tall building, in reducing the wind-induced dynamic responses of the building. The two configurations differ in the pattern of through-building opening on the gap floor, one with opening through the central portion of the floor and the other with opening on the perimeter of the floor around a central core. Wind forces and moments on the building models were measured with a high-frequency force balance from which dynamic building responses were computed. The results show that both configurations of a through-building gap are effective in reducing the across-wind excitation with the one with opening around the perimeter of the floor being significantly more effective. Wind pressures were measured on the building faces with electronic pressure scanners to help understand the generation of wind excitation loading. The data suggest that the through-building gap reduces the fluctuating across-wind forces through a disturbance of the coherence and phase-alignment of vortex excitation.
|
CommonCrawl
|
Given Eigenvectors and Eigenvalues, Compute a Matrix Product (Stanford University Exam)
Suppose that $\begin{bmatrix}
\end{bmatrix}$ is an eigenvector of a matrix $A$ corresponding to the eigenvalue $3$ and that $\begin{bmatrix}
\end{bmatrix}$ is an eigenvector of $A$ corresponding to the eigenvalue $-2$.
Compute $A^2\begin{bmatrix}
\end{bmatrix}$.
(Stanford University Linear Algebra Exam Problem)
Determine Eigenvalues, Eigenvectors, Diagonalizable From a Partial Information of a Matrix
Suppose the following information is known about a $3\times 3$ matrix $A$.
\[A\begin{bmatrix}
\end{bmatrix}=6\begin{bmatrix}
\end{bmatrix},
\quad
A\begin{bmatrix}
-1 \\
\end{bmatrix}, \quad
(a) Find the eigenvalues of $A$.
(b) Find the corresponding eigenspaces.
(c) In each of the following questions, you must give a correct reason (based on the theory of eigenvalues and eigenvectors) to get full credit.
Is $A$ a diagonalizable matrix?
Is $A$ an invertible matrix?
Is $A$ an idempotent matrix?
(Johns Hopkins University Linear Algebra Exam)
$\sqrt[m]{2}$ is an Irrational Number
Prove that $\sqrt[m]{2}$ is an irrational number for any integer $m \geq 2$.
Characteristic Polynomial, Eigenvalues, Diagonalization Problem (Princeton University Exam)
\[\begin{bmatrix}
(a) Find the characteristic polynomial and all the eigenvalues (real and complex) of $A$. Is $A$ diagonalizable over the complex numbers?
(b) Calculate $A^{2009}$.
(Princeton University, Linear Algebra Exam)
The Ideal Generated by a Non-Unit Irreducible Element in a PID is Maximal
Let $R$ be a principal ideal domain (PID). Let $a\in R$ be a non-unit irreducible element.
Then show that the ideal $(a)$ generated by the element $a$ is a maximal ideal.
Idempotent Matrix and its Eigenvalues
Let $A$ be an $n \times n$ matrix. We say that $A$ is idempotent if $A^2=A$.
(a) Find a nonzero, nonidentity idempotent matrix.
(b) Show that eigenvalues of an idempotent matrix $A$ is either $0$ or $1$.
(The Ohio State University, Linear Algebra Final Exam Problem)
In a Principal Ideal Domain (PID), a Prime Ideal is a Maximal Ideal
Let $R$ be a principal ideal domain (PID) and let $P$ be a nonzero prime ideal in $R$.
Show that $P$ is a maximal ideal in $R$.
Equivalent Conditions For a Prime Ideal in a Commutative Ring
Let $R$ be a commutative ring and let $P$ be an ideal of $R$. Prove that the following statements are equivalent:
(a) The ideal $P$ is a prime ideal.
(b) For any two ideals $I$ and $J$, if $IJ \subset P$ then we have either $I \subset P$ or $J \subset P$.
Prime Ideal is Irreducible in a Commutative Ring
Let $R$ be a commutative ring. An ideal $I$ of $R$ is said to be irreducible if it cannot be written as an intersection of two ideals of $R$ which are strictly larger than $I$.
Prove that if $\mathfrak{p}$ is a prime ideal of the commutative ring $R$, then $\mathfrak{p}$ is irreducible.
Ring is a Field if and only if the Zero Ideal is a Maximal Ideal
Let $R$ be a commutative ring.
Then prove that $R$ is a field if and only if $\{0\}$ is a maximal ideal of $R$.
Nilpotent Element a in a Ring and Unit Element $1-ab$
Let $R$ be a commutative ring with $1 \neq 0$.
An element $a\in R$ is called nilpotent if $a^n=0$ for some positive integer $n$.
Then prove that if $a$ is a nilpotent element of $R$, then $1-ab$ is a unit for all $b \in R$.
Rings $2\Z$ and $3\Z$ are Not Isomorphic
Prove that the rings $2\Z$ and $3\Z$ are not isomorphic.
Find All the Values of $x$ so that a Given $3\times 3$ Matrix is Singular
Find all the values of $x$ so that the following matrix $A$ is a singular matrix.
x & x^2 & 1 \\
0 & -1 & 1
Find All Values of $x$ so that a Matrix is Singular
Let \[A=\begin{bmatrix}
1 & -x & 0 & 0 \\
0 & 1 & -x & 0 \\
0 & 0 & 1 & -x \\
0 & 1 & 0 & -1
\end{bmatrix}\] be a $4\times 4$ matrix. Find all values of $x$ so that the matrix $A$ is singular.
Abelian Groups and Surjective Group Homomorphism
Let $G, G'$ be groups. Suppose that we have a surjective group homomorphism $f:G\to G'$.
Show that if $G$ is an abelian group, then so is $G'$.
Subspace of Skew-Symmetric Matrices and Its Dimension
Let $V$ be the vector space of all $2\times 2$ matrices. Let $W$ be a subset of $V$ consisting of all $2\times 2$ skew-symmetric matrices. (Recall that a matrix $A$ is skew-symmetric if $A^{\trans}=-A$.)
(a) Prove that the subset $W$ is a subspace of $V$.
(b) Find the dimension of $W$.
(The Ohio State University Linear Algebra Exam Problem)
Vector Space of Polynomials and a Basis of Its Subspace
Let $P_2$ be the vector space of all polynomials of degree two or less.
Consider the subset in $P_2$
\[Q=\{ p_1(x), p_2(x), p_3(x), p_4(x)\},\] where
\begin{align*}
&p_1(x)=1, &&p_2(x)=x^2+x+1, \\
&p_3(x)=2x^2, &&p_4(x)=x^2-x+1.
\end{align*}
(a) Use the basis $B=\{1, x, x^2\}$ of $P_2$, give the coordinate vectors of the vectors in $Q$.
(b) Find a basis of the span $\Span(Q)$ consisting of vectors in $Q$.
(c) For each vector in $Q$ which is not a basis vector you obtained in (b), express the vector as a linear combination of basis vectors.
A Matrix Representation of a Linear Transformation and Related Subspaces
Let $T:\R^4 \to \R^3$ be a linear transformation defined by
\[ T\left(\, \begin{bmatrix}
x_1 \\
x_2 \\
x_3 \\
x_4
\end{bmatrix} \,\right) = \begin{bmatrix}
x_1+2x_2+3x_3-x_4 \\
3x_1+5x_2+8x_3-2x_4 \\
x_1+x_2+2x_3
\end{bmatrix}.\]
(a) Find a matrix $A$ such that $T(\mathbf{x})=A\mathbf{x}$.
(b) Find a basis for the null space of $T$.
(c) Find the rank of the linear transformation $T$.
A Homomorphism from the Additive Group of Integers to Itself
Let $\Z$ be the additive group of integers. Let $f: \Z \to \Z$ be a group homomorphism.
Then show that there exists an integer $a$ such that
\[f(n)=an\] for any integer $n$.
Inner Product, Norm, and Orthogonal Vectors
Let $\mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3$ be vectors in $\R^n$. Suppose that the vectors $\mathbf{u}_1$, $\mathbf{u}_2$ are orthogonal, that the norm of $\mathbf{u}_2$ is $4$, and that $\mathbf{u}_2^{\trans}\mathbf{u}_3=7$. Find the value of the real number $a$ in $\mathbf{u}_1=\mathbf{u}_2+a\mathbf{u}_3$.
(The Ohio State University, Linear Algebra Exam Problem)
Is the Set of Nilpotent Element an Ideal?
The Quadratic Integer Ring $\Z[\sqrt{5}]$ is not a Unique Factorization Domain (UFD)
Diagonalize a 2 by 2 Matrix if Diagonalizable
Quiz 11. Find Eigenvalues and Eigenvectors/ Properties of Determinants
|
CommonCrawl
|
How should one implement a delegated shared trust protocol?
Consider the following (probably naive) scenario.
Alice, who is very limited in her knowledge of security in general (clueless about securing a private key for example), wishes to delegate certain contractual operations to Trent, an apparent trusted expert in that field.
However, Alice is rightly cautious and is of the opinion that Trent could be compromised (by Eve or Mallory) and so she asks Trent to provide a number of substitutes (Steve, Sam and Sarah) who can fulfil the same role as Trent (if he is away) and also make sure that Trent remains trustworthy (by observing his behaviour). Steve, Sam and Sarah cannot request to talk to Alice directly since all communication directly to her is through Trent.
Fortunately, Alice is protected by Walter, who she trusts completely, who is able to convert her simple password-like authorisation into a one-time only authorisation that Trent, Steve, Sam and Sarah can all see and verify.
From the above, it is apparent that in the normal situation when Alice requires a contract to be issued she provides Trent with the basic details, along with her authorisation via Walter. Trent creates and signs a contract on Alice's behalf (first problem) and then appeals to Steve, Sam or Sarah to counter-sign. Steve, Sam and Sarah all check Trent's public record of contracts to ensure that he is not exceeding agreed boundaries (second problem) and, after checking with Walter, sign if everything is OK. Once all signatures are in place then the contract is valid.
However, problems arise in the abnormal situations when Mallory enters the scene. Mallory may attempt to create a contract by taking Alice's authorisation and altering the provided details. Equally, since Trent is able to create arbitrary contracts, perhaps Mallory could create one that does not require Steve, Sam and Sarah to counter-sign but still appears to originate from Alice.
So my question is this: how can Alice ensure that her contracts can only be issued with her express permission without alteration, but still allow her to delegate the mechanics of this operation to others?
Please forgive the rather basic approach, I imagine that this scenario has been solved, but I'm not sure where I can find the details. Also, there are probably gaping holes which I would appreciate a more experienced eye pointing out.
It can be assumed that a contract requires a private key associated with Alice to enforce non-repudiation. However, if that private key could be hidden from both Trent and Mallory, perhaps by distributing it around Steve, Sam and Sarah that would be extra protection for Alice.
A real world example
The above is a rather generalised version of a problem that has arisen in the Bitcoin SE site (see https://bitcoin.stackexchange.com/q/517/117 for some interesting approaches) but has wider appeal than Bitcoin. However, using Bitcoin as a real world example may help clarifying some of the problems.
First, a quick overview of Bitcoin. Bitcoin is a cryptocurrency that uses transactions to indicate a change in ownership of the underlying amount. It's not necessary to understand how this works, other than that a cryptographic signing protocol (elliptic curve DSA) is used to enforce non-repudiation. This signing protocol requires a public and private key.
So, to the problem. Given that Bitcoin is intended to provide a general-purpose replacement for cash, it will be used by mainstream (non-technical) people - Alice in the above scenario. The vast majority of these people will want to delegate the security of their private key to someone who has put in place strong security measures, and use a password or OpenID-type authentication and authorisation process (Walter).
Those people in charge of the private key (Trent) have the ability to create arbitrary transactions. Assuming those people are never compromised, all is well. However, there have been many cases where servers have been hacked and keys stolen. As Bitcoin grows so that, for example, high-value international trade with suitable escrow becomes a reality, the possibility of intimidation arises (force the owner of the escrow company to copy all the private keys across to Mallory). This is another attack vector on the private key. Since Bitcoin transactions are non-refundable, once the private key is lost all hope of retrieving the bitcoins is lost - it will be very hard for a legal system to help you recover the funds.
An approach to combat this is some form of shared trust protocol so that no single operator has full access to the private key. Instead, a random group (or federation) of co-operating, largely trustworthy operators can collaborate to construct the key and then use it. This protects each individual operator (the Trents) from intimidation because they have no individual control, so it is useless to expend effort compromising or intimidating them.
It might be possible to use oblivious transfer in some way, but I am not sufficiently versed in this area to comment.
public-key protocol-design non-repudiation
Gary Rowe
Is that a tumbleweed I see before me?
– Gary Rowe
I could be wrong, but this sounds very much like a replay attack that could simply be resolved by adding a nonce to Alice's authorisation. A nonce is a one-time (typically integer or timestamp) value that can prevent replay attacks. Is this a replay attack that you are referring to?
@Bill It's more complex than that. Perhaps this related question on the Bitcoin site may shed more light on a problem that would benefit from this protocol: bitcoin.stackexchange.com/q/517/117
@Bill Thanks for your interest in this. I'm not sure your approach is quite on the mark. Assume I'm Alice (or Bob :-) ). I don't have any understanding of crypto at all, so I pay Trent to issue contracts on my behalf using a PKI, in my case Trent is using a combination of SHA-256 and elliptic curve DSA. Unfortunately, Trent gets hacked by Mallory and suddenly my private key is exploited. Disaster! How can I protect myself against this? Trent can't rely on me encrypting my private key with a password since Mallory simply roots the server and monitors cleartext memory.
In light of the comments to my solution, you can see why I was reluctant to post an answer. Not that any answer is incorrect; this other author and I just happen to have an opposing opinion. I hope that you find an answer to your problem. Sincerely.
I found the original question vague/ambiguous but the Bitcoin example is concrete enough to answer. In short, what you want is impossible without relaxing some of your requirements.
One of the many requirements you list is that Alice, essentially, has no signing power and can only perform password-based authentication. This is problematic for a fundamental reason. If $a_i$ is some authentication token (whatever it may be) and $m_i$ is her instructions or transaction, a corrupted Trent can always take $\{a_i,m_i\}$ and replace $m_i$ with $\hat{m}_i$ of his choosing.
There are only two ways to prevent this and both are more-or-less blocked by your requirements:
Alice has a secure channel to a trusted party (Steve/Sarah/Sam).
Alice applies her authentication token to the message: $\{m_i, \mathsf{f}_{a_i}(m_i)\}$. Maybe it is possible to use something weaker than a signature for $\mathsf{f}$ but essentially you want a digital signature (in this case, $a_i$ is a secret but a related value can be used to verify $\mathsf{f}_{a_i}(m_i)$).
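As a minimal illustration of option 2 (a sketch only: here $\mathsf{f}_{a_i}$ is instantiated with HMAC, which requires the verifiers to also hold $a_i$; a true digital signature avoids sharing the secret):

```python
import hmac, hashlib

def authorize(token: bytes, message: bytes) -> bytes:
    """Alice binds her authentication token a_i to her instruction m_i."""
    return hmac.new(token, message, hashlib.sha256).digest()

def verify(token: bytes, message: bytes, tag: bytes) -> bool:
    """Any party holding a_i can detect Trent substituting a different m_i."""
    expected = hmac.new(token, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

tag = authorize(b"alice-one-time-token", b"issue contract: pay 5 BTC to Bob")
assert verify(b"alice-one-time-token", b"issue contract: pay 5 BTC to Bob", tag)
assert not verify(b"alice-one-time-token", b"issue contract: pay 50 BTC to Mallory", tag)
```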
Contra 1, you want all communication relayed through Trent. Contra 2, you seem to object to Alice being able to sign things, however I think there is some room in your scenario to play with this. It seems your main objection to Alice being able to sign things for herself is that she needs to keep a stored private key (e.g., $\mathtt{wallet.dat}$) secure.
However it is possible to derive a signing key from a password using a password-based key derivation function (PBKDF). So an alternative Bitcoin architecture could be password-based, where the signing key is derived from the password every time Alice has a transaction to sign. Neither the password nor the key derived from it is ever stored.
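A minimal sketch of that idea (assumptions: PBKDF2-HMAC-SHA256 as the KDF, the secp256k1 group order used by Bitcoin, and a user-specific salt; a production design would prefer a memory-hard KDF such as scrypt and a high-entropy passphrase):

```python
import hashlib

# order of the secp256k1 group (the curve Bitcoin uses for ECDSA)
SECP256K1_ORDER = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def signing_key_from_password(password: str, salt: bytes, iterations: int = 200_000) -> int:
    """Re-derive the same ECDSA private scalar from the password on every use;
    neither the password nor the derived key is ever written to disk."""
    material = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return int.from_bytes(material, "big") % (SECP256K1_ORDER - 1) + 1  # scalar in [1, n-1]

d = signing_key_from_password("correct horse battery staple", b"alice@example.org")
print(hex(d))  # hand this scalar to any secp256k1 ECDSA implementation as the private key
```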
A drawback is that if she forgets it, just like losing your $\mathtt{wallet.dat}$, it is unrecoverable. Another drawback of using a password (whether to derive a signing key or to log into a service that has signing power) is that it will have less entropy than a randomly generated private key that is not meant to be memorized (just stored).
This is where maybe some distributed entity could come in. They could hold shares of the password to allow recoverability (however if they collude, they could recover it).
Finally, if you are still interested in delegation, observe that if Steve/Sarah/Sam are able to act in the absence of Trent, they have to have sufficient power to sign anything. Since they are never corrupted by the adversary in your threat model, the trivial solution is to cut out Trent from any power except relaying messages (the messages will have to be signed or have some end-to-end encryption for the reasons above). Steve/Sarah/Sam will be a distributed authority, where you trust that no more than a certain threshold $k$ will collude and no more than $\ell$ will be offline/non-responsive. You can then use secret sharing (for recovery) or threshold signatures (for delegating the signing) to set the appropriate number of shares required to perform the operation.
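A bare-bones Shamir sketch of the recovery option (illustrative only; the field choice and parameters are arbitrary, shares are not authenticated, and the threshold-signature variant is not shown):

```python
import random

P = 2**127 - 1  # prime modulus large enough to hold the secret

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares such that any k of them reconstruct it."""
    rng = random.SystemRandom()
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, n=3, k=2)        # Steve, Sam and Sarah each hold one share
assert recover(shares[:2]) == 123456789    # any two of them can reconstruct Alice's secret
```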
PulpSpy
Thanks for a good answer. The reason that all communication goes through Trent is to allow him to build up a customer base and obtain an income from his efforts. Trent must have an incentive to act on Alice's behalf. Steve/Sarah/Sam are there to allow Alice to recover from an absence of Trent (either temporary or permanent). The general idea is that if enough substitutes are present then it is harder to attack the overall network. Finally, if Trent is the initiator, but not implementor, is there a way to ensure random selection of substitutes outside of Trent's control?
Assuming all traffic goes through Trent, either Alice selects (and signs) a list of random substitutes, or you use the message itself (e.g., Alice's name) to select them. An example of the latter is a distributed hash table (DHT). (There are subtleties to using DHT securely. e.g., goo.gl/QqQln goo.gl/ThDlD )
– PulpSpy
A quick search has found this wiki article, which suggests employing a trusted third-party certificate authority in a Public-Key Infrastructure responsible for verifying the identity of a user of the system and issuing a tamper resistant and non-spoofable digital certificate for participants.
Such certificates are signed data blocks stating that this public key belongs to that person, company or other entity. This approach also has weaknesses. For example, the certificate authority issuing the certificate must be trusted to have properly checked the identity of the key-holder and the correctness of the public key when it issues a certificate, and to have made arrangements with all participants to check all certificates before protected communications can begin.
Note that in the (more recent) OpenPGP specification, (a variant of the PGP protocol), trust signatures can be used to support creation of certificate authorities, also. So, whether you use PKI or PGP, the responsibility of security then falls on the certificate authority. If this is compromised, then I suppose nobody can be trusted.
I've removed all the comments under this question as I don't think there was anything positive in the discussion taking place. Just a polite reminder to everyone to keep it constructive and on-topic please.
|
CommonCrawl
|
CCC'17 - Program
Schedule: The slots for contributed talks are 30 minutes (including 5 minutes for questions and changing speakers). In addition there are tutorial sessions by Avi Wigderson on Thursday, Friday, and Saturday, 10:30-12:30. There is also a reception Wednesday at 18:00, a business meeting Thursday at 20:30, a rump session Friday at 18:15, and an excursion Saturday at 16:30.
Venue: All activities except the excursion and banquet take place at the University of Latvia, Raina bulv. 19. The talks are in Auditorium 13 on the 3rd floor; the business meeting and rump session are in Auditorium 16 on the same floor. The banquet takes place in restaurant Andaluzijas suns, Elizabetes iela 83.
Slides: Click the word "slides" in the program to see the presentations that the authors agreed to post on this website.
Proceedings: The official version is openly accessible via LIPIcs.
Wednesday July 5
18:00-20:00 Reception
Thursday July 6
8:30 Random resolution refutations (abstract, slides)
Pavel Pudlák (Czech Academy of Sciences), Neil Thapen (Czech Academy of Sciences)
We study the random resolution refutation system defined in [Buss et al. 2014]. This attempts to capture the notion of a resolution refutation that may make mistakes but is correct most of the time. By proving the equivalence of several different definitions, we show that this concept is robust. On the other hand, if $P \neq NP$, then random resolution cannot be polynomially simulated by any proof system in which correctness of proofs is checkable in polynomial time.
We prove several upper and lower bounds on the width and size of random resolution refutations of explicit and random unsatisfiable CNF formulas. Our main result is a separation between polylogarithmic-width random resolution and quasipolynomial-size resolution, which solves the problem stated in [Buss et al. 2014]. We also prove exponential size lower bounds on random resolution refutations of the pigeonhole principle CNFs, and of a family of CNFs which have polynomial size refutations in constant-depth Frege.
9:00 Graph colouring is hard for algorithms based on Hilbert's Nullstellensatz and Gröbner bases (abstract, slides)
Massimo Lauria (Sapienza - Universita di Roma), Jakob Nordstrom (KTH Royal Institute of Technology)
We consider the natural encoding of the $k$-colouring problem for a given graph as a set of polynomial equations. We prove that there are bounded-degree graphs that do not have legal $k$-colourings but for which the polynomial calculus proof system defined in [Clegg et al. '96, Alekhnovich et al. '02] requires linear degree, and hence exponential size, to establish this fact. This implies a linear degree lower bound for any algorithms based on Gröbner bases solving $k$-colouring problems using this encoding. The same bound applies also for the algorithm studied in a sequence of papers [De Loera et al. '08, '09, '11, '15] based on Hilbert's Nullstellensatz proofs for a slightly different encoding, thus resolving an open problem mentioned, e.g., in [De Loera et al. '09] and [Li et al. '16]. We obtain our results by combining the polynomial calculus degree lower bound for functional pigeonhole principle (FPHP) formulas over bounded-degree bipartite graphs in [Mikša and Nordström '15] with a reduction from FPHP to $k$-colouring derivable by polynomial calculus in constant degree.
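For readers unfamiliar with the encoding, a common choice in this line of work (quite possibly the one meant here, though the abstract does not spell it out) introduces a variable $x_v$ per vertex and, over a field containing the $k$-th roots of unity (with characteristic not dividing $k$), takes
\[ x_v^k - 1 = 0 \ \text{ for every vertex } v, \qquad \sum_{i=0}^{k-1} x_u^{\,i}\, x_v^{\,k-1-i} = 0 \ \text{ for every edge } \{u,v\}, \]
so that a common zero assigns each vertex a $k$-th root of unity and forces adjacent vertices to receive distinct roots, i.e., distinct colours; the system is infeasible exactly when the graph is not $k$-colourable.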
9:30 Representations of monotone Boolean functions by linear programs (abstract, slides)
Mateus De Oliveira Oliveira (University of Bergen), Pavel Pudlák (Czech Academy of Sciences)
We introduce the notion of monotone linear-programming circuits (MLP circuits), a model of computation for partial Boolean functions. Using this model, we prove the following results.
1. MLP circuits are superpolynomially stronger than monotone Boolean circuits.
2. MLP circuits are exponentially stronger than monotone span programs.
3. MLP circuits can be used to provide monotone feasibility interpolation theorems for Lovasz-Schrijver proof systems, and for mixed Lovasz-Schrijver proof systems.
4. The Lovasz-Schrijver proof system cannot be polynomially simulated by the cutting planes proof system. This is the first result showing a separation between these two proof systems.
10:30 Tutorial (part 1)
Operator scaling: theory, applications and connections (abstract)
Avi Wigderson (Institute for Advanced Study)
14:30 A note on amortized branching program complexity (abstract)
Aaron Potechin (Institute for Advanced Study)
In this paper, we show that while almost all functions require exponential size branching programs to compute, for all functions f there is a branching program computing a doubly exponential number of copies of f which has linear size per copy of f. This result disproves a conjecture about non-uniform catalytic computation, rules out a certain type of bottleneck argument for proving non-monotone space lower bounds, and can be thought of as a constructive analogue of Razborov's result that submodular complexity measures have maximum value O(n).
15:00 Derandomizing isolation in space-bounded settings (abstract, slides)
Dieter van Melkebeek (University of Wisconsin-Madison), Gautam Prakriya (University of Wisconsin-Madison)
We study the possibility of deterministic or randomness-efficient isolation in space-bounded models of computation: Can one efficiently reduce instances to equivalent instances that have at most one solution? We present results for branching programs (related to the complexity class NL) and for shallow semi-unbounded circuits (related to the complexity class LogCFL).
A common approach employs small weight assignments that make the solution of minimum weight unique (if a solution exists). The Isolation Lemma and other known procedures use $\Omega(n)$ random bits to generate weights of individual bitlength $O(\log n)$. We develop a derandomized version in both settings that uses $O((\log n)^{3/2})$ random bits and produces weights of bitlength $O((\log n)^{3/2})$ in logarithmic space. The construction allows us to show that every language in NL can be accepted by a nondeterministic machine that runs in polynomial time and $O((\log n)^{3/2})$ space simultaneously, and has at most one accepting computation path on every input. Similarly, we show that every language in LogCFL can be accepted by a nondeterministic machine that runs in polynomial time and $O((\log n)^{3/2})$ space simultaneously when given access to a stack that does not count towards the space bound, and has at most one accepting computation path on every input.
We also show that the existence of somewhat more restricted isolations for branching programs implies that all of NL has branching programs of polynomial size, as well as a similar result for LogCFL.
15:30 The computational complexity of integer programming with alternations (abstract)
Danny Nguyen (UCLA), Igor Pak (UCLA)
We prove that integer programming with three quantifier alternations is NP-complete, even for a fixed number of variables. This complements earlier results by Lenstra and Kannan, which together say that integer programming with at most two quantifier alternations can be done in polynomial time for a fixed number of variables. As a byproduct of the proof, we show that for two polytopes P,Q in R^4, counting the projection of integer points in Q\P is #P-complete. This contrasts the 2003 result by Barvinok and Woods, which allows counting in polynomial time the projection of integer points in P, Q and P\Q.
16:30 On the average-case complexity of MCSP and its variants (abstract, slides)
Shuichi Hirahara (University of Tokyo), Rahul Santhanam (University of Oxford)
We prove various results on the complexity of MCSP (Minimum Circuit Size Problem) and the related MKTP (Minimum Kolmogorov Time-Bounded Complexity Problem):
- We observe that under standard cryptographic assumptions, MCSP has a {\it pseudorandom self-reduction}. This is a new notion we define by relaxing the notion of a random self-reduction to allow queries to be pseudorandom rather than uniformly random. As a consequence we derive a weak form of an average-case to worst case reduction for (a promise version of) MCSP. Our result also distinguishes MCSP from natural $\mathsf{NP}$-complete problems, which are not known to have average-case to worst-case reductions. Indeed, it is known that strong forms of average-case to worst-case reductions for $\mathsf{NP}$-complete problems collapse the Polynomial Hierarchy.
- We prove the first non-trivial formula size lower bounds for MCSP by showing that MCSP requires nearly quadratic-size De Morgan formulas.
- We show average-case superpolynomial size lower bounds for MKTP against $\AC^0[p]$ for any prime $p$.
- We show the hardness of MKTP on average under assumptions that have been used in much recent work, such as Feige's assumptions, Alekhnovich's assumption and the Planted Clique conjecture. In addition, MCSP is hard under Alekhnovich's assumption. Using a version of Feige's assumption against co-nondeterministic algorithms that has been conjectured recently, we provide evidence for the first time that MKTP is not in $\mathsf{coNP}$. Our results suggest that it might be worthwhile to focus on the {\it average-case hardness} of MKTP and MCSP when approaching the question of whether these problems are $\mathsf{NP}$-hard.
17:00 Easiness amplification and uniform circuit lower bounds (abstract, slides)
Cody Murray (MIT), Ryan Williams (MIT)
We present new consequences of the assumption that time-bounded algorithms can be ``compressed'' with non-uniform circuits. Our main contribution is an ``easiness amplification'' lemma for circuits. One instantiation of the lemma says: if $n^{1+\varepsilon}$-time, $\tilde{O}(n)$-space computations have $n^{1+o(1)}$ size (non-uniform) circuits for some $\varepsilon > 0$, then every problem solvable in \emph{polynomial} time and $\tilde{O}(n)$ space has $n^{1+o(1)}$ size (non-uniform) circuits as well. This amplification has several consequences:
* An easy problem without small LOGSPACE-uniform circuits. For all $\varepsilon > 0$, we give a natural decision problem {\sc General Circuit $n^{\varepsilon}$-Composition} that is solvable in $n^{1+\varepsilon}$ time, but we prove that \emph{arbitrary} poly-time and log-space preprocessing cannot produce $n^{1+o(1)}$-size circuits for the problem. This shows that there are problems solvable in $n^{1+\varepsilon}$ time which are not in LOGSPACE-uniform $n^{1+o(1)}$ size, the first result of its kind. We show that our lower bound is non-relativizing, by exhibiting an oracle relative to which the result is false.
* Problems without low-depth LOGSPACE-uniform circuits. For all $\varepsilon > 0$, $1 < d < 2$, and $d' < d$ we give another natural circuit composition problem computable in $\tilde{O}(n^{1+\varepsilon})$ time, or in $O((\log n)^d)$ space (though not necessarily simultaneously) that we prove does not have $SPACE[(\log n)^{d'}]$-uniform circuits of $\tilde{O}(n)$ size and $O((\log n)^{d'})$ depth. For NP-hard problems, we show SAT does not have circuits of $\tilde{O}(n)$ size and $\log^{2-o(1)} n$ depth that can be constructed in $\log^{2-o(1)} n$ space.
* A strong circuit complexity amplification. For every $\varepsilon > 0$, we give a natural problem {\sc Circuit $n^{\varepsilon}$-Composition} and show that if it has $\tilde{O}(n)$-size circuits (uniform or not), then every problem solvable in $2^{O(n)}$ time and $2^{O(\sqrt{n}\log n)}$ space (simultaneously) has $2^{O(\sqrt{n} \log n)}$-size circuits (uniform or not). We also show the same consequence holds assuming SAT has $\tilde{O}(n)$-size circuits.
As a corollary, if $n^{1.1}$ time computations (or $O(n)$ nondeterministic time computations) have $\tilde{O}(n)$-size circuits, then all problems in exponential time and subexponential space (such as quantified Boolean formulas) have significantly subexponential-size circuits. This is a new connection between the relative circuit complexities of easy and hard problems.
17:30 PPSZ for general k-SAT — making Hertli's analysis simpler and 3-SAT faster (abstract, slides)
Dominik Scheder (Shanghai Jiaotong University), John Steinberger (Tsinghua University)
The currently fastest known algorithm for k-SAT is PPSZ named after its inventors Paturi, Pudlak, Saks, and Zane. Analyzing its running time is much easier for input formulas with a unique satisfying assignment. In this paper, we achieve three goals. First, we simplify Hertli's analysis for input formulas with multiple satisfying assignments. Second, we show a "translation result": if you improve PPSZ for k-CNF formulas with a unique satisfying assignment, you will immediately get a (weaker) improvement for general k-CNF formulas. Combining this with a result by Hertli from 2014, in which he gives an algorithm for Unique-3-SAT slightly beating PPSZ, we obtain an algorithm beating PPSZ for general 3-SAT, thus obtaining the so far best known worst-case bounds for 3-SAT.
18:00 Break
20:30 Business meeting
Friday July 7
8:30 Noise stability is computable and approximately low-dimensional (abstract, slides)
Anindya De (Northwestern University), Elchanan Mossel (MIT), Joe Neeman (UT Austin)
Questions of noise stability play an important role in hardness of approximation in computer science as well as in the theory of voting. In many applications, the goal is to find an optimizer of noise stability among all possible partitions of $\R^n$ for $n \geq 1$ to $k$ parts with given Gaussian measures $\mu_1,\ldots,\mu_k$. We call a partition $\epsilon$-optimal, if its noise stability is optimal up to an additive $\epsilon$. In this paper, we give an explicit, computable function $n(\epsilon)$ such that an $\epsilon$-optimal partition exists in $\R^{n(\epsilon)}$. This result has implications for the computability of certain problems in non-interactive communication, which are addressed in a subsequent work.
9:00 From weak to strong LP gaps for all CSPs (abstract, slides)
Mrinalkanti Ghosh (TTI Chicago), Madhur Tulsiani (TTI Chicago)
We study the approximability of constraint satisfaction problems (CSPs) by linear programming (LP) relaxations.
We show that for every CSP, the approximation obtained by a basic LP relaxation is no weaker than the approximation obtained using relaxations given by $\Omega(\frac{\log n}{\log \log n})$ levels of the Sherali-Adams hierarchy on instances of size $n$.
It was proved by Chan et al. [FOCS 2013] (and recently strengthened by Kothari et al. [STOC 2017]) that for CSPs, any polynomial size LP extended formulation is no stronger than relaxations obtained by a super-constant number of levels of the Sherali-Adams hierarchy.
Combining this with our result also implies that any polynomial size LP extended formulation is no stronger than simply the \emph{basic} LP, which can be thought of as the base level of the Sherali-Adams hierarchy. This essentially gives a dichotomy result for approximation of CSPs by polynomial size LP extended formulations.
Using our techniques, we also simplify and strengthen the result by Khot et al. [STOC 2014] on (strong) approximation resistance for LPs. They provided a necessary and sufficient condition under which $\Omega(\log \log n)$ levels of the Sherali-Adams hierarchy cannot achieve an approximation better than a random assignment. We simplify their proof and strengthen the bound to $\Omega(\frac{\log n}{\log \log n})$ levels.
9:30 Query-to-communication lifting for P^NP (abstract, slides)
Mika Göös (Harvard University), Pritish Kamath (MIT), Toniann Pitassi (University of Toronto), Thomas Watson (University of Memphis)
We prove that the $\P^\NP$-type query complexity (alternatively, decision list width) of any boolean function $f$ is quadratically related to the $\P^\NP$-type communication complexity of a lifted version of $f$. As an application, we show that a certain ``product'' lower bound method of Impagliazzo and Williams (CCC 2010) fails to capture $\P^\NP$ communication complexity up to polynomial factors, which answers a question of Papakonstantinou, Scheder, and Song (CCC 2014).
14:30 Improved bounds for quantified derandomization of constant-depth circuits and polynomials (abstract, slides)
Roei Tell (Weizmann Institute of Science)
This work studies the question of \emph{quantified derandomization}, which was introduced by Goldreich and Wigderson (STOC 2014). The generic quantified derandomization problem is the following: For a circuit class $\mathcal{C}$ and a parameter $B=B(n)$, given a circuit $C\in\mathcal{C}$ with $n$ input bits, decide whether $C$ rejects all of its inputs, or accepts all but $B(n)$ of its inputs. In the current work we consider three settings for this question. In each setting, we bring closer the parameter setting for which we can unconditionally construct relatively fast quantified derandomization algorithms, and the ``threshold'' values (for the parameters) for which any quantified derandomization algorithm implies a similar algorithm for standard derandomization.
For {\bf constant-depth circuits}, we construct an algorithm for quantified derandomization that works for a parameter $B(n)$ that is only \emph{slightly smaller} than a ``threshold'' parameter, and is significantly faster than the best currently-known algorithms for standard derandomization. On the way to this result we establish a new derandomization of the switching lemma, which significantly improves on previous results when the width of the formula is small. For {\bf constant-depth circuits with parity gates}, we lower a ``threshold'' of Goldreich and Wigderson from depth five to depth four, and construct algorithms for quantified derandomization of a remaining type of layered depth-$3$ circuit that they left as an open problem. We also consider the question of constructing hitting-set generators for multivariate {\bf polynomials over large fields that vanish rarely}, and prove two lower bounds on the seed length of such generators.
Several of our proofs rely on an interesting technique, which we call the \emph{randomized tests} technique. Intuitively, a standard technique to deterministically find a ``good'' object is to construct a simple deterministic test that decides the set of good objects, and then ``fool'' that test using a pseudorandom generator. We show that a similar approach works also if the simple deterministic test is replaced with a \emph{distribution over simple tests}, and demonstrate the benefits in using a distribution instead of a single test.
15:00 Bounded independence plus noise fools products (abstract, slides)
Elad Haramaty (Harvard University), Chin Ho Lee (Northeastern University), Emanuele Viola (Northeastern University)
Let D be a b-wise independent distribution over {0,1}^m. Let E be the "noise" distribution over {0,1}^m where the bits are independent and each bit is 1 with probability η/2. We study which tests f: {0,1}^m → [-1,1] are Ɛ-fooled by D+E, i.e., |E[f(D+E)] - E[f(U)]| ≤ Ɛ where U is the uniform distribution. We show that D+E Ɛ-fools product tests f: ({0,1}^n)^k → [-1,1] given by the product of k bounded functions on disjoint n-bit inputs with error Ɛ = k (1-η)^{Ω(b^2/m)}, where m = nk and b ≥ n. This bound is tight when b = Ω(m) and η ≥ (log k)/m. For b ≥ m^{2/3} log m and any constant η the distribution D+E also 0.1-fools log-space algorithms.
We develop two applications of this type of results. First, we prove communication lower bounds for decoding noisy codewords of length m split among k parties. For Reed-Solomon codes of dimension m/k where k = O(1), communication Ω(ηm) - O(log m) is required to decode one message symbol from a codeword with ηm errors, and communication O(ηm log m) suffices. Second, we obtain pseudorandom generators. We can Ɛ-fool product tests f: ({0,1}^n)^k → [-1,1] under any permutation of the bits with seed lengths 2n + ~O(k^2 log (1/Ɛ)) and O(n) + ~O(\sqrt{nk \log 1/Ɛ}). Previous generators had seed lengths ≥ nk/2 or ≥ n \sqrt{nk}.
15:30 Tight bounds on The Fourier spectrum of AC0 (abstract, slides)
Avishay Tal (Institute for Advanced Study)
We show that $AC^0$ circuits on $n$ variables with depth $d$ and size $m$ have at most $2^{-\Omega(k/\log^{d-1} m)}$ of their Fourier mass at level $k$ or above. Our proof builds on a previous result by H{\aa}stad (SICOMP, 2014) who proved this bound for the special case $k=n$. Our result improves the seminal result of Linial, Mansour and Nisan (JACM, 1993) and is tight up to the constants hidden in the $\Omega$ notation.
As an application, we improve Braverman's celebrated result (JACM, 2010). Braverman showed that any $r(m,d,\varepsilon)$-wise independent distribution $\varepsilon$-fools $AC^0$ circuits of size $m$ and depth $d$, for $$r(m,d,\varepsilon) = O(\log (m/\varepsilon))^{2d^2+7d+3}.$$ Our improved bounds on the Fourier tails of $AC^0$ circuits allows us to improve this estimate to $$r(m,d,\varepsilon) = O(\log(m/\varepsilon))^{3d+3}.$$ In contrast, an example by Mansour (appearing in Luby and Velickovic's paper - Algorithmica, 1996) shows that there is a $\log^{d-1}(m) \cdot \log(1/\varepsilon)$-wise independent distribution that does not $\varepsilon$-fool $AC^0$ circuits of size $m$ and depth $d$. Hence, our result is tight up to the factor $3$ in the exponent.
16:30 Complexity-theoretic foundations of quantum supremacy experiments (abstract, slides)
Scott Aaronson (UT Austin), Lijie Chen (Tsinghua University)
In the near future, there will likely be special-purpose quantum computers with 40-50 high-quality qubits. This paper lays general theoretical foundations for how to use such devices to demonstrate "quantum supremacy": that is, a clear quantum speedup for some task, motivated by the goal of overturning the Extended Church-Turing Thesis as confidently as possible.
First, we study the hardness of sampling the output distribution of a random quantum circuit, along the lines of a recent proposal by the Quantum AI group at Google. We show that there's a natural average-case hardness assumption, which has nothing to do with sampling, yet implies that no polynomial-time classical algorithm can pass a statistical test that the quantum sampling procedure's outputs do pass. Compared to previous work---for example, on BosonSampling and IQP---the central advantage is that we can now talk directly about the observed outputs, rather than about the distribution being sampled.
Second, in an attempt to refute our hardness assumption, we give a new algorithm, inspired by Savitch's Theorem, for simulating a general quantum circuit with n qubits and m gates in polynomial space and m^O(n) time. We then discuss why this and other known algorithms fail to refute our assumption.
Third, resolving an open problem of Aaronson and Arkhipov, we show that any strong quantum supremacy theorem---of the form "if approximate quantum sampling is classically easy, then the polynomial hierarchy collapses"---must be non-relativizing. This sharply contrasts with the situation for exact sampling.
Fourth, refuting a conjecture by Aaronson and Ambainis, we show that the Fourier Sampling problem achieves a constant versus linear separation between quantum and randomized query complexities.
Fifth, in search of a "happy medium" between black-box and non-black-box arguments, we study quantum supremacy relative to oracles in P/poly. Previous work implies that, if one-way functions exist, then quantum supremacy is possible relative to such oracles. We show, conversely, that some computational assumption is needed: if SampBPP=SampBQP and NP is in BPP, then quantum supremacy is impossible relative to oracles with small circuits.
17:00 Stochasticity in algorithmic statistics for polynomial time (abstract, slides)
Alexey Milovanov (National Research University Higher School of Economics), Nikolay Vereshchagin (National Research University Higher School of Economics)
A fundamental notion in Algorithmic Statistics (without resource bounds) is that of a stochastic object, for which there is a simple plausible explanation. Informally, a probability distribution is a plausible explanation for $x$ if it looks likely that $x$ was drawn at random with respect to that distribution. In this paper, we suggest three definitions of a plausible statistical hypothesis for Algorithmic Statistics with polynomial time bounds, which are called acceptability, plausibility and optimality. Roughly speaking, a probability distribution $\mu$ is called an acceptable explanation for $x$, if $x$ possesses all properties decidable by short programs in a short time and shared by almost all objects (with respect to $\mu$). Plausibility is a similar notion, however this time we require that $x$ possess properties $T$ decidable even by long programs in a short time and shared by almost all objects. To compensate the increase in program length, we strengthen the notion of `almost all'---the longer the program recognizing the property is, the more objects must share the property. Finally, a probability distribution $\mu$ is called an optimal explanation for $x$ if $\mu(x)$ is large (close to $2^{-\K^{\poly}(x)}$).
Almost all our results hold under some plausible complexity theoretic assumptions. Our main result states that for acceptability and plausibility there are infinitely many non-stochastic objects, which do not have simple plausible (acceptable) explanations. We explain why we need assumptions---our main result implies that P $\ne$ PSPACE. In the proof of that result, we use the notion of an elusive set, which is interesting in its own right. Using elusive sets, we show that the distinguishing complexity of a string $x$ can be super-logarithmically less than the conditional complexity of $x$ with condition $r$ for almost all $r$ (for polynomial time bounded programs). Such a gap was known before, however only in the case when both complexities are conditional, or both complexities are unconditional.
It follows from the definition that plausibility implies acceptability and optimality. We show that there are objects that have simple acceptable but implausible and non-optimal explanations. We prove that for strings whose distinguishing complexity is close to Kolmogorov complexity (with polynomial time bounds) plausibility is equivalent to optimality for all simple distributions, which fact can be considered as a justification of the Maximal Likelihood Estimator.
17:30 Conspiracies between learning algorithms, circuit lower bounds and pseudorandomness (abstract, slides)
Igor Carboni Oliveira (Charles University Prague), Rahul Santhanam (University of Oxford)
We prove several results giving new and stronger connections between learning theory, circuit complexity and pseudorandomness. Let $\mathfrak{C}$ be any typical class of Boolean circuits, and $\mathfrak{C}[s(n)]$ denote $n$-variable $\mathfrak{C}$-circuits of size $\leq s(n)$. We show:\\
\noindent \textbf{Learning Speedups.} If $\mathfrak{C}[\mathsf{poly}(n)]$ admits a randomized weak learning algorithm under the uniform distribution with membership queries that runs in time $2^n/n^{\omega(1)}$, then for every $k\geq 1$ and $\varepsilon>0$ the class $\mathfrak{C}[n^k]$ can be learned to high accuracy in time $O(2^{n^\varepsilon})$. There is $\varepsilon > 0$ such that $\mathfrak{C}[2^{n^{\varepsilon}}]$ can be learned in time $2^n/n^{\omega(1)}$ if and only if $\mathfrak{C}[\mathsf{poly}(n)]$ can be learned in time $2^{(\log n)^{O(1)}}$.\\
\noindent \textbf{Equivalences between Learning Models.} We use learning speedups to obtain equivalences between various randomized learning and compression models, including sub-exponential time learning with membership queries, sub-exponential time learning with membership and equivalence queries, probabilistic function compression and probabilistic average-case function compression.\\
\noindent \textbf{A Dichotomy between Learnability and Pseudorandomness.} In the non-uniform setting, there is non-trivial learning for $\mathfrak{C}[\mathsf{poly}(n)]$ if and only if there are no exponentially secure pseudorandom functions computable in $\mathfrak{C}[\mathsf{poly}(n)]$.\\
\noindent \textbf{Lower Bounds from Nontrivial Learning.} If for each $k \geq 1$, (depth-$d$)-$\mathfrak{C}[n^k]$ admits a randomized weak learning algorithm with membership queries under the uniform distribution that runs in time $2^n/n^{\omega(1)}$, then for each $k\geq 1$, $\mathsf{BPE} \nsubseteq$ (depth-$d$)-$\mathfrak{C}[n^k]$. If for some $\varepsilon > 0$ there are $\mathsf{P}$-natural proofs useful against $\mathfrak{C}[2^{n^{\varepsilon}}]$, then $\mathsf{ZPEXP} \nsubseteq \mathfrak{C}[\mathsf{poly}(n)]$.\\
\noindent \textbf{Karp-Lipton Theorems for Probabilistic Classes.} If there is a $k > 0$ such that $\mathsf{BPE} \subseteq \mathtt{i.o.}\mathsf{Circuit}[n^k]$, then $\mathsf{BPEXP} \subseteq \mathtt{i.o.}\mathsf{EXP}/O(\mathsf{log}\,n)$. If $\mathsf{ZPEXP} \subseteq \mathtt{i.o.}\mathsf{Circuit}[2^{n/3}]$, then $\mathsf{ZPEXP} \subseteq \mathtt{i.o.}\mathsf{ESUBEXP}$.\\
\noindent \textbf{Hardness Results for $\mathsf{MCSP}$.} All functions in non-uniform $\mathsf{NC}^1$ reduce to the Minimum Circuit Size Problem via truth-table reductions computable by $\mathsf{TC}^0$ circuits. In particular, if $\mathsf{MCSP} \in \mathsf{TC}^0$ then $\mathsf{NC}^1 = \mathsf{TC}^0$.
18:15 Rump session
Saturday July 8
8:30 A quadratic lower bound for homogeneous algebraic branching programs (abstract, slides)
Mrinal Kumar (Rutgers University)
An algebraic branching program (ABP) is a directed acyclic graph, with a start vertex $s$, and end vertex $t$ and each edge having a weight which is an affine form in $\F[x_1, x_2, \ldots, x_n]$. An ABP computes a polynomial in a natural way, as the sum of weights of all paths from $s$ to $t$, where the weight of a path is the product of the weights of the edges in the path. An ABP is said to be homogeneous if the polynomial computed at every vertex is homogeneous. In this paper, we show that any homogeneous algebraic branching program which computes the polynomial $x_1^n + x_2^n + \ldots + x_n^n$ has at least $\Omega(n^2)$ vertices (and hence edges).
To the best of our knowledge, this seems to be the first non-trivial super-linear lower bound on the number of vertices for a general \emph{homogeneous} ABP and slightly improves the known lower bound of $\Omega(n\log n)$ on the number of edges in a general (possibly \emph{non-homogeneous}) ABP, which follows from the classical results of Strassen~\cite{Strassen73} and Baur \& Strassen~\cite{BS83}.
Our proof is quite simple, and also gives an alternate and unified proof of an $\Omega(n\log n)$ lower bound on the size of a \emph{homogeneous} arithmetic circuit (follows from~\cite{Strassen73, BS83}), and an $n/2$ lower bound ($n$ over $\mathbb{R}$) on the determinantal complexity of an explicit polynomial~\cite{mr04, CCL10, Yabe15}. These are currently the best lower bounds known for these problems for any explicit polynomial, and were originally proved nearly two decades apart using seemingly different proof techniques.
9:00 On algebraic branching programs of small width (abstract, slides)
Karl Bringmann (Max-Planck-Institut für Informatik), Christian Ikenmeyer (Max-Planck-Institut für Informatik), Jeroen Zuiddam (CWI)
In 1979 Valiant showed that the complexity class VP_e of families with polynomially bounded formula size is contained in the class VP_s of families that have algebraic branching programs (ABPs) of polynomially bounded size. Motivated by the problem of separating these classes we study the topological closure VP_e-bar, i.e. the class of polynomials that can be approximated arbitrarily closely by polynomials in VP_e. We describe VP_e-bar with a strikingly simple complete polynomial (in characteristic different from 2) whose recursive definition is similar to the Fibonacci numbers. Further understanding this polynomial seems to be a promising route to new formula lower bounds.
Our methods are rooted in the study of ABPs of small constant width. In 1992 Ben-Or and Cleve showed that formula size is polynomially equivalent to width-3 ABP size. We extend their result (in characteristic different from 2) by showing that approximate formula size is polynomially equivalent to approximate width-2 ABP size. This is surprising because in 2011 Allender and Wang gave explicit polynomials that cannot be computed by width-2 ABPs at all! The details of our construction lead to the aforementioned characterization of VP_e-bar.
As a natural continuation of this work we prove that the class VNP can be described as the class of families that admit a hypercube summation of polynomially bounded dimension over a product of polynomially many affine linear forms. This gives the first separations of algebraic complexity classes from their nondeterministic analogs.
9:30 Reconstruction of full rank algebraic branching programs (abstract, slides)
Neeraj Kayal (Microsoft Research India), Vineet Nair (Indian Institute of Science), Chandan Saha (Indian Instituite of Science), Sebastien Tavenas (University of Savoie Mont Blanc)
An algebraic branching program (ABP) A can be modelled as a product expression X_1.X_2...X_d, where X_1 and X_d are 1 x w and w x 1 matrices respectively, and every other X_k is a w x w matrix; the entries of these matrices are linear forms in m variables over a field F (which we assume to be either Q or a field of characteristic poly(m)). The polynomial computed by A is the entry of the 1 x 1 matrix obtained from the product \prod_{k=1}^{d} X_k. We say A is a full rank ABP if the w^2.(d-2) + 2w linear forms occurring in the matrices X_1,X_2,...,X_d are F-linearly independent. Our main result is a randomized reconstruction algorithm for full rank ABPs: Given blackbox access to an m-variate polynomial f of degree at most m, the algorithm outputs a full rank ABP computing f if such an ABP exists, or outputs `no full rank ABP exists' (with high probability). The running time of the algorithm is polynomial in m and \beta, where \beta is the bit length of the coefficients of f. The algorithm works even if X_k is a w_{k-1} x w_k matrix (with w_0 = w_d = 1), and \vecw = (w_1, ..., w_{d-1}) is unknown.
The result is obtained by designing a randomized polynomial time equivalence test for the family of iterated matrix multiplication polynomial IMM_{\vecw,d}, the (1,1)-th entry of a product of d rectangular symbolic matrices whose dimensions are according to \vecw \in N^{d-1}. At its core, the algorithm exploits a connection between the irreducible invariant subspaces of the Lie algebra of the group of symmetries of a polynomial f that is equivalent to IMM_{\vecw,d} and the `layer spaces' of a full rank ABP computing f. This connection also helps determine the group of symmetries of IMM_{\vecw,d} and show that IMM_{\vecw,d} is characterized by its group of symmetries.
14:30 Trading information complexity for error (abstract, slides)
Yuval Dagan (Technion), Yuval Filmus (Technion), Hamed Hatami (McGill University), Yaqiao Li (McGill University)
We consider the standard two-party communication model. The central problem studied in this article is how much one can save in information complexity by allowing an error. Our major result is that the epsilon-error randomized communication complexity of set disjointness is n[C_DISJ - Theta(h(epsilon))] + o(n), where C_DISJ ≈ 0.4827 is the constant of set disjointness found by Braverman et al.
15:00 Augmented index and quantum streaming algorithms for DYCK(2) (abstract, slides)
Ashwin Nayak (University of Waterloo), Dave Touchette (University of Waterloo and Perimeter Institute)
We show how two recently developed quantum information theoretic tools can be applied to obtain lower bounds on quantum information complexity. We also develop new tools with potential for broader applicability, and use them to establish a lower bound on the quantum information complexity for the Augmented Index function on an easy distribution. This approach allows us to handle superpositions rather than distributions over inputs, the main technical challenge faced previously. By providing a quantum generalization of the argument of Jain and Nayak~[IEEE TIT'14], we leverage this to obtain a lower bound on the space complexity of multi-pass, unidirectional quantum streaming algorithms for the DYCK(2) language.
15:30 Separating quantum communication and approximate rank (abstract, slides)
Anurag Anshu (National University of Singapore), Shalev Ben-David (MIT), Ankit Garg (MSR New England), Rahul Jain (National University of Singapore and MajuLab), Robin Kothari (MIT), Troy Lee (Nanyang Technological University, Centre for Quantum Technologies, and MajuLab)
One of the best lower bound methods for the quantum communication complexity of a function H (with or without shared entanglement) is the logarithm of the approximate rank of the communication matrix of H. This measure is essentially equivalent to the approximate gamma_2 norm and generalized discrepancy, and subsumes several other lower bounds. All known lower bounds on quantum communication complexity in the general unbounded-round model can be shown via the logarithm of approximate rank, and it was an open problem to give any separation at all between quantum communication complexity and the logarithm of the approximate rank.
In this work we provide the first such separation: We exhibit a total function H with quantum communication complexity almost quadratically larger than the logarithm of its approximate rank. We construct H using the communication lookup function framework of Anshu et al. (FOCS 2016) based on the cheat sheet framework of Aaronson et al. (STOC 2016). From a starting function F, this framework defines a new function H=F_G. Our main technical result is a lower bound on the quantum communication complexity of F_G in terms of the discrepancy of F, which we do via quantum information theoretic arguments. We show the upper bound on the approximate rank of F_G by relating it to the Boolean circuit size of the starting function F.
16:00 Optimal quantum sample complexity of learning algorithms (abstract, slides)
Srinivasan Arunachalam (CWI), Ronald de Wolf (CWI and University of Amsterdam)
In learning theory, the {VC dimension} of a concept class \C is the most common way to measure its ``richness.'' A fundamental result says that the number of examples needed to learn an unknown target concept c\in\C under an unknown distribution D, is tightly determined by the VC dimension d of the concept class \C. Specifically, in the PAC model
\Theta\Big(\frac{d}{\eps} + \frac{\log(1/\delta)}{\eps}\Big)
examples are necessary and sufficient for a learner to output, with probability 1-\delta, a hypothesis h that is \eps-close to the target concept c (measured under D). In the related {agnostic} model, where the samples need not come from a c\in\C, we know that
\Theta\Big(\frac{d}{\eps^2} + \frac{\log(1/\delta)}{\eps^2}\Big)
examples are necessary and sufficient to output an hypothesis h \in \C whose error is at most \eps worse than the error of the best concept in \C.
Here we analyze quantum sample complexity, where each example is a coherent quantum state. This model was introduced by Bshouty and Jackson, who showed that quantum examples are more powerful than classical examples in some fixed-distribution settings. However, Atici and Servedio, improved by Zhang, showed that in the PAC setting (where the learner has to succeed for every distribution), quantum examples cannot be much more powerful: the required number of quantum examples is
\Omega\Big(\frac{d^{1-\eta}}{\eps} + d + \frac{\log(1/\delta)}{\eps}\Big)\mbox{ for arbitrarily small constant }\eta>0.
Our main result is that quantum and classical sample complexity are in fact equal up to constant factors in both the PAC and agnostic models. We give two proof approaches. The first is a fairly simple information-theoretic argument that yields the above two classical bounds and yields the same bounds for quantum sample complexity up to a \log(d/\eps) factor. We then give a second approach that avoids the log-factor loss, based on analyzing the behavior of the ``Pretty Good Measurement'' on the quantum state identification problems that correspond to learning. This shows classical and quantum sample complexity are equal up to constant factors for every concept class \C.
16:30 Excursion
20:00 Banquet
Sunday July 9
8:30 Settling the query complexity of non-adaptive junta testing (abstract, slides)
Xi Chen (Columbia University), Rocco A. Servedio (Columbia University), Li-Yang Tan (TTI Chicago), Erik Waingarten (Columbia University), Jinyu Xie (Columbia University)
We prove that any non-adaptive algorithm that tests whether an unknown Boolean function f:{0,1}^n -> {0,1} is a k-junta or \eps-far from every k-junta must make \widetilde{\Omega}(k^{3/2}/\eps) many queries for a wide range of parameters k and \eps. Our result dramatically improves previous lower bounds from [BGSMdW13, STW15], and is essentially optimal given Blais's non-adaptive junta tester from [Bla08], which makes \widetilde{O}(k^{3/2})/\eps queries. Combined with the adaptive tester of [Bla09] which makes O(k log k + k/\eps) queries, our result shows that adaptivity enables polynomial savings in query complexity for junta testing.
9:00 An adaptivity hierarchy theorem for property testing (abstract, slides)
Clément Canonne (Columbia University), Tom Gur (Weizmann Institute of Science)
The notion of adaptivity can play a crucial role in property testing. In particular, there exist properties for which there is an exponential gap between the power of adaptive testing algorithms, wherein each query may be determined by the answers received to prior queries, and their non-adaptive counterparts, in which all queries are independent of answers obtained from previous queries.
In this work, we investigate the role of adaptivity in property testing at a finer level. We first quantify the degree of adaptivity of a testing algorithm by considering the number of "rounds of adaptivity" it uses. More accurately, we say that a tester is k-(round) adaptive if it makes queries in k rounds, where the queries in the (i+1)'st round may depend on the answers obtained in the previous i rounds. Then, we ask the following question:
"Does the power of testing algorithms smoothly grow with the number of rounds of adaptivity?"
We provide a positive answer to the foregoing question by proving an adaptivity hierarchy theorem for property testing. Specifically, our main result shows that for every n and 0 <= k <= n^{0.99} there exists a property P_{n,k} of functions for which (1) there exists a k-adaptive tester for P_{n,k} with query complexity ~O(k), yet (2) any (k-1)-adaptive tester for P_{n,k} must make Omega(n) queries. In addition, we show that such a qualitative adaptivity hierarchy can be witnessed for testing natural properties of graphs.
9:30 Distribution testing lower bounds via reductions from communication complexity (abstract, slides)
Eric Blais (University of Waterloo), Clément Canonne (Columbia University), Tom Gur (Weizmann Institute of Science)
We present a new methodology for proving distribution testing lower bounds, establishing a connection between distribution testing and the simultaneous message passing (SMP) communication model. Extending the framework of Blais, Brody, and Matulef [BBM12], we show a simple way to reduce (private-coin) SMP problems to distribution testing problems. This method allows us to prove several new distribution testing lower bounds, as well as to provide simple proofs of known lower bounds. Our main result is concerned with testing identity to a specific distribution p, given as a parameter. In a recent and influential work, Valiant and Valiant [VV14] showed that the sample complexity of the aforementioned problem is closely related to the 2/3-quasinorm of p. We obtain alternative bounds on the complexity of this problem in terms of an arguably more intuitive measure and using simpler proofs. More specifically, we prove that the sample complexity is essentially determined by a fundamental operator in the theory of interpolation of Banach spaces, known as Peetre's K-functional. We show that this quantity is closely related to the size of the effective support of p (loosely speaking, the number of supported elements that constitute the vast majority of the mass of p). This result, in turn, stems from an unexpected connection to functional analysis and refined concentration of measure inequalities, which arise naturally in our reduction.
10:00 Exponentially small soundness for the direct product Z-test (abstract, slides)
Irit Dinur (Weizmann Institute of Science), Inbal Livni Navon (Weizmann Institute of Science)
Given a function f:[N]^k\rightarrow[M]^k, the Z-test is a three query test for checking if a function f is a direct product, namely if there are functions g_1,\dots g_k:[N]\to[M] such that f(x)=(g_1(x_1),\dots g_k(x_k)) for every input x\in [N]^k. This test was introduced by Impagliazzo et al. (SICOMP 2012), who showed that if the test passes with probability \epsilon > \exp(-\sqrt k) then f is poly(\epsilon) close to a direct product function in some precise sense. It remained an open question whether the soundness of this test can be pushed all the way down to \exp(-k) (which would be optimal). This is our main result: we show that whenever f passes the Z test with probability \epsilon > \exp(-k), there must be a global reason for this: namely, f must be close to a product function on some poly(\epsilon) fraction of its domain. Towards proving our result we analyze the related (two-query) V-test, and prove a ``restricted global structure'' theorem for it. Such theorems were also proven in previous works on direct product testing in the small soundness regime. The most recent work, by Dinur and Steurer (CCC 2014), analyzed the V test in the exponentially small soundness regime. We strengthen the conclusion of that theorem by moving from an ``in expectation'' statement to a stronger ``concentration of measure'' type of statement, which we prove using hyper-contractivity. This stronger statement allows us to proceed to analyze the Z test. We analyze two variants of direct product tests. One for functions on tuples, as above, and another for functions on sets, f:\binom{[N]}{k}\to[M]^k. The work of Impagliazzo et al. was actually focused only on functions of the latter type, i.e. on sets. We prove exponentially small soundness for the Z-test for both variants. Although the two appear very similar, the analysis for tuples is more tricky and requires some additional ideas.
11:00 On the polynomial parity argument complexity of the combinatorial Nullstellensatz (abstract, slides)
Aleksandrs Belovs (University of Latvia), Gabor Ivanyos (Hungarian Academy of Sciences), Youming Qiao (University of Technology Sydney), Miklos Santha (CNRS, Université Paris Diderot, and National University of Singapore), Siyi Yang (National University of Singapore)
The complexity class PPA consists of NP-search problems which are reducible to the parity principle in undirected graphs. It contains a wide variety of interesting problems from graph theory, combinatorics, algebra and number theory, but only a few of these are known to be complete in the class. Before this work, the known complete problems were all discretizations or combinatorial analogues of topological fixed point theorems.
Here we prove the PPA-completeness of two problems of radically different style. They are PPA-Circuit CNSS and PPA-Circuit Chevalley, related respectively to the Combinatorial Nullstellensatz and to the Chevalley-Warning Theorem over the two-element field F2. The input of these problems contains PPA-circuits, which are arithmetic circuits with special symmetric properties that assure that the polynomials computed by them always have an even number of zeros. In the proof of the result we relate the multilinear degree of the polynomials to the parity of the maximal parse subcircuits that compute monomials with maximal multilinear degree, and we show that the maximal parse subcircuits of a PPA-circuit can be paired in polynomial time.
11:30 An exponential lower bound for homogeneous depth-5 circuits over finite fields (abstract, slides)
Mrinal Kumar (Rutgers University), Ramprasad Saptharishi (TIFR Bombay)
In this paper, we show exponential lower bounds for the class of homogeneous depth-$5$ circuits over all small finite fields. More formally, we show that there is an explicit family $\{P_d : d \in \N\}$ of polynomials in $\VNP$, where $P_d$ is of degree $d$ in $n = d^{O(1)}$ variables, such that over all finite fields $\F_q$, any homogeneous depth-$5$ circuit which computes $P_d$ must have size at least $\exp(\Omega_q(\sqrt{d}))$.
To the best of our knowledge, this is the first super-polynomial lower bound for homogeneous depth-5 circuits for any field $\F_q \neq \F_2$.
Our proof builds upon the ideas developed on the way to proving lower bounds for homogeneous depth-$4$ circuits~\cite{gkks13, FLMS13, KLSS, KS14} and for non-homogeneous depth-$3$ circuits over finite fields~\cite{grigoriev98, gr00}. Our key insight is to look at the space of shifted partial derivatives of a polynomial as a space of functions from $\F_q^n \rightarrow \F_q$ as opposed to looking at them as a space of formal polynomials, combined with a tighter analysis of the lower bound of Kumar and Saraf~\cite{KS14}.
12:00 Complete derandomization of identity testing and reconstruction of read-once formulas (abstract, slides 1, slides 2)
Daniel Minahan (University of Michigan), Ilya Volkovich (University of Michigan)
In this paper we study the identity testing problem of \emph{arithmetic read-once formulas} (ROF) and some related models. A read-once formula is a formula (a circuit whose underlying graph is a tree) in which the operations are $\set{+,\times}$ and such that every input variable labels at most one leaf. We obtain the first polynomial-time deterministic identity testing algorithm that operates in the black-box setting for read-once formulas, as well as some other related models. As an application, we obtain the first polynomial-time deterministic reconstruction algorithm for such formulas. Our results are obtained by improving and extending the analysis of the algorithm of \cite{ShpilkaVolkovich15a}.
12:30 Greedy strikes again: A deterministic PTAS for commutative rank of matrix spaces (abstract, slides)
Markus Blaeser (Saarland University), Gorav Jindal (Max-Planck-Institute for Informatics), Anurag Pandey (Max-Planck-Institute for Informatics)
We consider the problem of commutative rank computation of a given matrix space, $\mathcal{B}\subseteq\mathbb{F}^{n\times n}$. The problem is fundamental, as it generalizes several computational problems from algebra and combinatorics. For instance, checking if the commutative rank of the space is $n$, subsumes problems such as testing perfect matching in graphs and identity testing of algebraic branching programs. An efficient deterministic computation of the commutative rank is a major open problem, although there is a simple and efficient randomized algorithm for it. Recently, there has been a series of results on computing the non-commutative rank of matrix spaces in deterministic polynomial time. Since the non-commutative rank of any matrix space is at most twice the commutative rank, one immediately gets a deterministic $\frac{1}{2}$-approximation algorithm for the computation of the commutative rank. This leads to a natural question of whether this approximation ratio can be improved. In this paper, we answer this question affirmatively.
We present a deterministic $\textrm{PTAS}$ for computing the commutative rank of a given matrix space. More specifically, given a matrix space $\mathcal{B}\subseteq\F^{n\times n}$ and a rational number $\epsilon > 0$, we give an algorithm, that runs in time $O(n^{4+\frac{3}{\epsilon}})$ and computes a matrix $A\in\mathcal{B}$ such that the rank of $A$ is at least $(1-\epsilon)$ times the commutative rank of $\mathcal{B}$. The algorithm is the natural greedy algorithm. It always takes the first set of $k$ matrices that will increase the rank of the matrix constructed so far until it does not find any improvement, where the size of the set $k$ depends on $\epsilon$.
13:00 End of the conference
Topcoder SRM 774
By hmehta, history, 12 months ago,
Topcoder SRM 774 is scheduled to start at 12:00 UTC -5, Jan 11, 2020. Registration is now open in the Web Arena or Applet and will close 5 minutes before the match begins.
Thanks to lg5293 and misof for writing the problems and coordinating the round.
This is the first SRM of Stage 2 of the TCO20 Algorithm Tournament and TCO20 Regional Events Qualification.
Stage 2 TCO20 Points Leaderboard | Topcoder Java Applet | Upcoming SRMs and MMs
Some Important Links
Match Results (To access match results, rating changes, challenges, failure test cases)
Problem Archive (To access previous problems with their categories and success rates)
Problem Writing (To access everything related to problem writing at Topcoder)
Algorithm Rankings (To access Algorithm Rankings)
Editorials (To access recent Editorials)
#topcoder, srm #774, #tco20
square1001
12 months ago, # |
Happy new year! The first SRM in 2020 :)
It's nice to see Lewin / lg5293 as the writer of an SRM! The last time was about 2 years ago, but it's still one of my favorites.
Gentle Reminder: The match begins in 1hr 30mins
IWantToBeNutellaSomeday
How to solve land splitter?:(
Xellos
12 months ago, # ^ |
The cost is (sum of squares of starting sizes - sum of squares of goal sizes) / 2. You just need to find the maximum of $$$\sum x_i^2$$$ for any $$$A \le x_i \le B$$$, $$$\sum x_i = N$$$.
royappa
Oh, right. Thank you.
So, how to solve landsplitter? :-)
You've got a hint, you could try finishing it yourself.
Next hint: you never want two $$$x_i$$$ in the interval $$$(A, B)$$$ because moving them away from each other improves the sum of squares. Now there are several $$$O(N)$$$ solutions.
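A minimal Python sketch of one such O(N) solution (added for illustration; the exact problem statement isn't reproduced in this thread, so the assumed task is: split a plot of size N into integer pieces whose sizes all lie in [A, B], paying x*y for every cut of a piece of size x+y into x and y):

```python
def min_split_cost(n, a, b):
    # Total cost of a full split = (n^2 - sum of squares of final pieces) / 2,
    # so we maximize the sum of squares over all feasible piece counts k.
    best_sq = -1
    k_min = (n + b - 1) // b              # fewest pieces: all of size b
    k_max = n // a                        # most pieces: all of size a
    for k in range(k_min, k_max + 1):
        if k * a > n or k * b < n:        # k pieces cannot sum to n
            continue
        if a == b:
            sq = k * a * a
        else:
            # Per the hint above: at most one piece strictly inside (a, b);
            # take as many pieces of size b as possible, the rest of size a.
            cnt_b = min(k, (n - k * a) // (b - a))
            if cnt_b == k:
                sq = k * b * b
            else:
                rest = n - cnt_b * b - (k - cnt_b - 1) * a   # lies in [a, b]
                sq = cnt_b * b * b + (k - cnt_b - 1) * a * a + rest * rest
        best_sq = max(best_sq, sq)
    return None if best_sq < 0 else (n * n - best_sq) // 2
```

For example, min_split_cost(10, 3, 5) returns 25 (split 10 into 5 + 5 with a single cut of cost 25).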
zed_b
Is there a book or set of tutorials to help you prepare for such problems or is this supposed to be basic stuff?
"Subtract x from one element, add x to another, see how the optimised function changes" is supposed to be basic stuff. It works for a lot of problems.
For continuous variables and arbitrary conditions, there's something called Lagrange multipliers that gives you a set of equations that any extremum point (that isn't on the border of the allowed region) must satisfy.
As Xellos is telling us, it does involve basic stuff. For example, if you break it into two pieces then they have to be equal like he said, as moving them apart increases the cost.
I had to prove that differently, if we break N into two pieces N = X + (N-X), then the cost is X*(N-X) = N*X-X^2. You can take derivative: N-2*X and set that to zero to minimize, solving that gives us X = N/2 as the lowest cost if you split into two pieces. That too is basic calculus.
The difficulty I have is that these tasks are a linked series of basic facts and it's hard to know how to link them together to solve the whole problem. Anyway, that's where the learning comes in, so it's good. Thanks for the hints, Xellos!
Thank you, but why is that the cost?
misof
An insight for LandSplitter: Imagine that the land you are dividing is a complete graph on N vertices. The tax you pay when you cut into pieces is equal to the number of edges you cut. Thus, minimizing the total cost is the same as maximizing the number of edges that remain uncut in the end.
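To spell that argument out (a short derivation added here for clarity, in the thread's notation): splitting the complete graph on $$$N$$$ vertices into cliques of sizes $$$x_1, \dots, x_m$$$ cuts exactly $$$\binom{N}{2} - \sum_i \binom{x_i}{2} = \frac{N^2 - \sum_i x_i^2}{2}$$$ edges (using $$$\sum_i x_i = N$$$), since each cut of a piece of size $$$x+y$$$ severs exactly $$$x \cdot y$$$ edges and every cut edge is counted once. This is precisely the cost formula quoted above, and it explains why minimizing the total tax is the same as maximizing $$$\sum_i x_i^2$$$.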
r0ck1n70sh
Can anyone tell how this problem could be solved in O(1)? This has done that.
aryanc403
Why topcoder doesn't allow people with the non-positive score to hack?
Probably so that you couldn't have a random new account (totally not a smurf) who challenges everyone in the room on 100 tests.
Really? I thought it's "negative" instead. We are able to challenge (we don't say "hack" in topcoder) for having 0.00 pts, I guess.
Um_nik
Do we say "f*ck you moron" in topcoder?
-is-this-fft-
...what?
My score was -25pts and I got this popup. I was denied hacks.
BSBandme
Possibly the number of failed challenge attempts for a user is mistakenly stored in an integer. If somebody failed 1 << 31 times, they would get a positive score :-p
Lol Jatana got more points from 250 by challenging than by solving but got challenged on it too.
How to solve Div1 1000?
I did something like "repeatedly get the non-decided longest path and assign values of arithmetic progression", but it failed in system test.
Here's a simple counter case
Answer should be {0, 1/4, 5/8, 1/4, 1/2, 3/4, 1}, but yours gives {0, 1/3, 2/3, 1/4, 1/2, 3/4, 1} instead.
Instead of longest path, choose a path where (difference in endpoints) / (length of path) is minimized. We can show that this quantity only increases at each iteration which helps prove correctness.
Hi, will the editorial for this SRM be published? (Because in the link you provided the last editorial is from SRM 771.) Thanks.
ritik_patel05
What was the intended solution for Div2 500 points? Thanks.
Primes are quite dense, so anything reasonable that goes through all possibilities will work. (I.e., iterate over all possibilities of how many digits you add, where you add them and what they are. Stop once you find a prime.)
For a short correct solution, special-case N=0 and N=10^9, and for any other input N check the numbers N000 through N999.
(The first sequence of 1000+ consecutive composite numbers occurs only somewhere after 10^15. Our numbers are smaller than 10^12, so there is certainly at least one prime among them. See Prime gap (Wikipedia))
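A short Python sketch along those lines (a hypothetical reconstruction of the task, since the exact statement isn't quoted here; the assumed goal is to append at most three decimal digits to the right of n so that the result is prime, and return any such number):

```python
def is_prime(n):
    # Trial division suffices for this sketch: every candidate is below
    # 10**13, so divisors only need to be checked up to about 3.2 * 10**6.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def append_to_prime(n):
    # Try appending 0, 1, 2 or 3 digits; by the prime-gap argument above,
    # some 3-digit suffix always works for 0 < n < 10**9.  (Whether zero
    # appended digits is allowed is an assumption of this sketch.)
    for k in range(4):
        for suffix in range(10 ** k):
            cand = n * 10 ** k + suffix
            if is_prime(cand):
                return cand
    return None
```

A faster primality test (e.g., Miller–Rabin) could be dropped in for is_prime, but primes are dense enough that this naive version already finds an answer after a handful of candidates on average.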
sh_maestro
The last editorial is for SRM 771. What about editorials for 772, 773 and 774 — are they going to be made available?
We are working on it. Should have them up within a week.
forcdc
11 days ago, # ^ |
hmehta it's literally been a year, no editorial for 774? :-(
Improved user similarity computation for finding friends in your location
Georgios Tsakalakis & Polychronis Koutsakis (ORCID: orcid.org/0000-0002-4168-0888)
Recommender systems are most often used to predict possible ratings that a user would assign to items, in order to find and propose items of possible interest to each user. In our work, we are interested in a system that will analyze user preferences in order to find and connect people with common interests that happen to be in the same geographical area, i.e., a "friend" recommendation system. We propose an algorithm, EgoSimilar+, which is shown to achieve superior performance against a number of well-known similarity computation methods from the literature. The algorithm adapts ideas and techniques from the recommender systems literature and the skyline queries literature and combines them with our own ideas on the importance and utilization of item popularity.
The diversity of social networks makes the problem of correctly estimating user preferences essential for personalized applications [1]. Most recommender systems suggest items of possible interest to their users by employing collaborative filtering to predict the attractiveness of an item for a specific user, based on the user's previous ratings and the ratings of "similar" users. In this work, rather than focusing on possible items of interest for a user, we are interested in designing an algorithm that will utilize user ratings in order to recommend one user to another as a possible friend.
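For context, a standard user-based collaborative-filtering predictor of this kind (a textbook formula, not one taken from this paper) estimates user $u$'s rating of item $i$ as $\hat{r}_{u,i} = \bar{r}_u + \frac{\sum_{v \in N(u)} \mathrm{sim}(u,v)\,(r_{v,i} - \bar{r}_v)}{\sum_{v \in N(u)} |\mathrm{sim}(u,v)|}$, where $N(u)$ is the set of users most similar to $u$, $\mathrm{sim}(u,v)$ is a similarity measure such as those discussed below, and $\bar{r}_u$ denotes user $u$'s mean rating.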
This paper continues our recent work [2], where we presented the architectural design, the functional requirements and the user interface of eMatch [3], an Android application which was inspired by the idea of finding people with common interests in the same geographical area. Close friendship is a measure of trust between individuals [4] and friends have the tendency to share common interests and activities, as has been shown in multiple studies in the literature, starting with important work on personality similarities and friendship dating from the 1970s [5, 6] and continuing until today [7]. In terms of social networks, people selectively establish social links with those who are similar to them, and their attitudes, beliefs and behavioral propensities are affected by their social ties [8, 9]. In eMatch, in order to compare people's interests, users rate as many as nine interest categories: {Movies, Music, Books, Games, Sports, Science, Shopping, Food, Travel}, while they can add items to each one of them and rate those items. For example, a user could rate the category "Sports" with "7" on a scale of 1 to 10, and add to this category the item "football" with rating "9". Based on this type of rating, the application's algorithm computes the matching between users in order to suggest potential friends.
Location information is used in eMatch only for practical reasons, i.e., to locate potential friends in the same area, and not to track users on a map and reveal their location as other applications do. The goal of eMatch is to enable potential friends to meet and introduce themselves to each other, but only if they wish to do so. The user's location is considered private and sensitive information and is treated that way. The only information that is public is the matching percentage for all pairs of "Visible" users inside the geographical area. In this way the individual's privacy is preserved. More information can be found in [2].
The work in that paper introduced EgoSimilar, an algorithm which computes the similarity between users and is implemented in eMatch. Based on a dataset of 57 users, EgoSimilar was found to outperform two of the most well-known similarity measures, the Pearson Correlation and the Cosine Similarity, in regard to the most significant metrics used in our study. Unlike other approaches in the literature, EgoSimilar takes into account the popularity of the rated items in its computations.
The main contributions of the present work are as follows. We collected a much larger number of completed questionnaires (286 in total) from users of eMatch and evaluated EgoSimilar again, in order to study whether the conclusions of the work in [2] were confirmed. After confirming again that EgoSimilar outperforms the Pearson Correlation and the Cosine Similarity, we added several more similarity measures to our new study, one of which was found to outperform EgoSimilar. For this reason, we substantially changed EgoSimilar by adapting ideas and techniques from the recommender systems literature and the skyline queries literature. The new algorithm, EgoSimilar+, presented in this paper for the first time, is compared against several of the most well-known similarity computation methods from the literature and is shown to outperform all of them in regard to identifying existing friendships.
The rest of the paper is structured as follows. In "Related work" section we discuss related work in the field. "EgoSimilar" section briefly presents the original EgoSimilar algorithm. In "Evaluation of Egosimilar" section we evaluate EgoSimilar versus other similarity computation methods. "EgoSimilar+" section presents EgoSimilar+ and discusses the ideas and the motivation behind the new algorithm. "Evaluation of EgoSimilar+" section presents the results with the use of EgoSimilar+. Finally, "Conclusions and future work" section presents the conclusions of our study and the next steps in our work.
The Youhoo application [10] is the closest to eMatch among all current iOS and Android applications related to finding friends in an area near the user. Its goal is to create circles of people with common interests in an area. However, Youhoo profiles are created from Facebook, so users who do not use Facebook are excluded from the application, and users who wish their Youhoo profile to differ from their Facebook profile cannot achieve this. Additionally, the circles of people with common interests created by the application are quite generic or one-dimensional, e.g., students at the same university, people working in the same field, fans of a specific singer. On the contrary, eMatch computes the match between users based on the whole profile that the users wish to share through the application, and of course allows users to create a profile that is independent from any other application. Other social networking applications like GeoSocials [11] and Jiveocity [12] simply present to the user other users in the same location, without any recommendation on whether they would be a good match as friends.
Other proposals from the literature for friend recommendation focus on link prediction utilizing node proximity [13], on recommending which Twitter users to follow based on user-generated content which indicates profile similarity [14], on recommending friends according to the degree to which a friend satisfies the target user's unfulfilled informational need [15], and on selecting a community of users that can meet the specific requirements of an application [16]. The work in [16] differs from our work not only because of its different goal, but also because it computes a metric based on a binary characterization of users' interest in a specific item (interested/not interested) which does not include information on the degree of user interest for the item; it also does not consider common interest categories between users as our work does, therefore related interests are considered to be completely different.
The authors in [17] use ranking functions to propose a method that represents people's preferences in a metric space, where it is possible to define a kernel-based similarity function; they then use clustering to discover significant groups with homogeneous states. The authors point out the success of the Pearson Correlation and the Cosine Similarity for comparing the rating vectors of different users, and they use the Cosine Similarity in their work. As will be shown in our results, our proposed similarity computation approach outperforms both the Pearson Correlation and the Cosine Similarity. Also, the proposed class separation technique in [17], which utilizes Support Vector Machines, is computationally complex and leads the authors to avoid using K-means clustering, in order to decrease the computational complexity of combining K-means with their technique.
EgoSimilar
In this section we present our "matching" algorithm from [2], EgoSimilar, which computes the similarity between users based on their interests and preferences. We also briefly present all the other widely used methods for assessing similarity that we will compare to EgoSimilar. All approaches were implemented in eMatch in order to find potential friends based on user ratings. These algorithms run on the server side in order to keep the computational cost contained. Running them on the smartphone would be inefficient, as well as battery- and time-consuming, since it would constantly require data transfers via the mobile internet and a large number of calculations to be executed on the device.
For the matching algorithm to run on the server, the mobile device must have Internet access and at least one location provider activated. The device should also periodically store (e.g., every 10 min) the user's geographical location.
EgoSimilar takes the following rationale into account:
The matching is done in an "egocentric" way because each user should search for friends based on his/her own criteria and interests. Thus, the matching percentage between two users that appears on each user's screen will most likely be different. Hence, if for example user X has one active category of interest while user Y has five, the matching percentage (X, Y) will be based on that one category, while the matching percentage (Y, X) will be based on all five, leading to different results on each user's screen.
More popular items (popular in the sense that they are rated positively or negatively by many users) should not affect matching results as much as less popular items do, if users "agree" on them. The reason is that even if users share, e.g., a favorable opinion on a very well-known band, book, movie, etc., this does not really give a substantial hint that their tastes match in general. A similar case regarding a relatively unknown band/book/movie gives a much stronger indication of common interests. This was also pointed out in [18], where it was explained that the presence of popular objects that meet the general interest of a broad spectrum of audience may introduce weak relationships between users and adversely influence the correct ranking of candidate objects. The work in [18], however, is different from ours, as it begins with the construction of a user similarity network from historical data, in order to calculate scores for candidate objects. Our work in this paper focuses on recommending people as potential friends, not items of interest, and no historical data is relevant due to the nature of our study.
The rating choices of users are on a scale from 1 to 10. Consequently the maximum rating difference will be 9 and the weight of one unit in rating difference will be 1/9 ≈ 0.11. This weight is included in the computation of the similarity between users.
The steps followed by eMatch in computing the matching between users are described below. The first three steps are followed regardless of the matching computation method, which is implemented in step 4.
Let X be the user who runs the application; the matching is therefore done according to X's tastes.
1. Check if the user's location is stored. If not, inform the user; otherwise go to the next step.
2. Find users that are in close geographical proximity to user X.
3. Find all the active interest categories of user X.
4. The matching in EgoSimilar is computed as follows: for each user Y found in step 2, calculate
$$ \text{Matching}(X,Y) = \frac{1}{k_X}\sum_{c=1}^{k_X}\left[ w_1\left[1 - 0.11\,d_1(X,Y,c)\right] + \frac{w_2}{n_X^c}\sum_{i=1}^{n_X^c}\left[1 - 0.11\,d_2(X,Y,c,i)\right]\right] \quad (1) $$
where kX is the number of active categories of user X, kX ∈ [1, 9]; w1 is the weight attributed to the general rating of a category; w2 is the weight of the ratings of all individual items of a category. In our case, w1 should be smaller than w2, as we consider the "general" matching of users (e.g., both of them loving movies) to be of smaller importance, since their specific tastes in that category may differ significantly or even completely. The exact values of w1 and w2 are discussed in "Evaluation of EgoSimilar" section; n_X^c is the number of items user X has inserted in category c; d1(X,Y,c) is a function which computes the absolute difference in ratings between users X and Y for the c-th activated category of user X. If user Y has deactivated the specific category, then we set (1 − 0.11·d1(X,Y,c)) = 0; d2(X,Y,c,i) is associated with the i-th item inserted by user X in the c-th activated category and denotes the distance of the ratings of users X and Y for the specific item.
If user Y has not rated this item, then we set (1 − 0.11·d2(X,Y,c,i)) = 0,
Otherwise d2(X,Y,c,i) is calculated, taking into account the popularity of the specific item, as follows:
Compute d2(X,Y,c,i).
Let m be the number of users that have inserted this item, and n be the number of users that have inserted items in the c-th activated category of user X. Then, the popularity weight of the specific item is defined as W_i^c(X) = m/n. An item is assumed to be popular if W_i^c(X) > 0.5, which means that more than half of the users that "voted" for this category have inserted the specific item (with either a negative or a positive rating).
d2(X,Y,c,i) is adapted with respect to the popularity of the item and the rationale explained above, as follows:
If (W_i^c(X) > 0.5 AND d2(X,Y,c,i) < 5), then
d2(X,Y,c,i) = d2(X,Y,c,i) + Wchange·d2(X,Y,c,i)
else if (W_i^c(X) > 0.5 AND d2(X,Y,c,i) ≥ 5), then
d2(X,Y,c,i) = d2(X,Y,c,i)
else if (W_i^c(X) ≤ 0.5 AND d2(X,Y,c,i) < 5), then
d2(X,Y,c,i) = d2(X,Y,c,i) − Wchange·d2(X,Y,c,i)
else if (W_i^c(X) ≤ 0.5 AND d2(X,Y,c,i) ≥ 5), then
d2(X,Y,c,i) = d2(X,Y,c,i) + Wchange·d2(X,Y,c,i)
This states that when an item is popular and the ratings of users are close, then this item should not affect matching results as much as less popular items do. Therefore, the distance of the ratings between users X and Y must be increased in order to decrease their matching. This increase is implemented via the Wchange weight, the value of which is discussed in "Evaluation of EgoSimilar" section.
If, however, the item is not popular and the ratings of users are close, then this item should affect matching results more than popular items do. Accordingly, the distance of the ratings between users X and Y must be decreased in order to increase their matching. This is implemented once again via the Wchange weight.
Similarly, in the case where the item is not popular and the ratings of users are not close, we infer that this is an indication of users that do not have common interests. So, by increasing the distance of their ratings, their matching is decreased.
The complexity of the algorithm is: Ο(pqr), where p is the number of the users, q is the number of categories (in our case, nine) and r is the maximum number of items inserted in one of the categories.
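To make the computation above concrete, the following is a minimal Python sketch of Eq. (1) together with the popularity adjustment of d2. The data layout (nested dictionaries keyed by category and item), the helper names and the default weights are illustrative assumptions, not the actual eMatch implementation.

def egosimilar_matching(x_profile, y_profile, item_popularity, w1=0.25, w2=0.75, w_change=0.3):
    # x_profile / y_profile: {category: {"rating": int, "items": {item: int}}}, None if deactivated
    # item_popularity: {category: {item: W}} with W = m/n as defined above
    active = [c for c in x_profile if x_profile[c] is not None]
    total = 0.0
    for c in active:
        y_cat = y_profile.get(c)
        # category-level term with distance d1; zero if Y has deactivated the category
        if y_cat is not None:
            d1 = abs(x_profile[c]["rating"] - y_cat["rating"])
            cat_term = w1 * (1 - 0.11 * d1)
        else:
            cat_term = 0.0
        # item-level terms with distance d2, adjusted by item popularity
        items = x_profile[c]["items"]
        item_sum = 0.0
        for item, x_rating in items.items():
            if y_cat is None or item not in y_cat["items"]:
                continue  # (1 - 0.11 * d2) = 0 when Y has not rated the item
            d2 = abs(x_rating - y_cat["items"][item])
            w = item_popularity[c][item]
            if w > 0.5 and d2 < 5:
                d2 += w_change * d2   # popular item, close ratings: weaken the match
            elif w <= 0.5 and d2 < 5:
                d2 -= w_change * d2   # unpopular item, close ratings: strengthen the match
            elif w <= 0.5 and d2 >= 5:
                d2 += w_change * d2   # unpopular item, distant ratings: weaken the match
            item_sum += 1 - 0.11 * d2
        if items:
            total += cat_term + (w2 / len(items)) * item_sum
        else:
            total += cat_term
    return total / len(active) if active else 0.0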
Evaluation of EgoSimilar
For our preliminary evaluation we wanted to confirm whether the results presented in [2] would stand for a much larger dataset and to investigate whether EgoSimilar would also excel in comparison with several additional similarity computation measures.
We collected data from 286 users (in comparison to the 57 users in our previous work), ages 18–40. Of the 286 participants, 272 had at least one connection (i.e., were friends) with a person from our dataset in real life. The collected information consisted of the activation/deactivation of the 9 interest categories, the Ratings for all active categories and the Ratings for the individual items in all the active categories. The items rated in each category were either new insertions by the users or as many of the default items as the users wished to rate. The mean rating given by the users was 6.6 and the standard deviation 2.5. These statistics confirmed the tendency shown in [2], of users mainly rating items that they like instead of taking the time to also add several items that they dislike. The details of the dataset are presented in Table 1.
Table 1 Dataset
The reason we chose to collect data mainly from groups of friends, a choice which carries a bias in the dataset and the results, was that in this way it would be feasible to evaluate whether the similarity computation methods would be able to "discover", through higher matching values, existing friendships.
To compare the results we ran the K-means clustering algorithm, each time with a different similarity computation measure (EgoSimilar and five other measures that are presented later in this section). We derived results for a number of clusters K ranging from 5 to 20 in order to evaluate how (and if) the number of clusters influences the user matching. K-means is preferred in comparison to new efficient methods like the one proposed in [19], because we do not want to have predefined classes in our system; classes (clusters) need to change based on the users who find themselves in the same area. Also, contrary to several approaches where it is useful to have weighted information incorporated into similarity scores (e.g. [20]), in our system all users should have equal weights when computing their similarity.
The following metrics and parameters were used in our study (the abbreviations are also presented in Table 2 for ease of reading):
Table 2 Abbreviations
Average friends' placement (AFP). This is arguably the most important metric of all in terms of evaluating the quality of a similarity metric, as it refers to the order in which "matching users" appear on the user's screen, in decreasing percentages. A user would obviously consider first the users with whom he/she has the highest matching, regardless of the actual matching percentage (unless the matching percentage is very low even for the "top matched" user, which would be discouraging). Most importantly, in this matching list we would expect existing friends to be placed "low", i.e., to appear among the top matching choices. Therefore, we can study whether our approach outperforms other similarity computation methods in placing existing friends higher on the list, as existing friends should have similar interests [5, 6]. The similarity computation method that performs better in finding actual friends would be expected to outperform the others in finding potential friends as well. (A small computational sketch of this metric is given right after this list.)
N1: the number of users in the cluster.
N2: the number of users in the cluster that have a network (i.e., they are connected with at least one other user, who may be in that cluster or in another one).
N3: the number of users in the cluster that are connected in reality as friends.
Average valid connections (AVC): for each user in a cluster we computed the percentage of their connections that are included in the specific cluster, and derived the average percentage.
Average matching (AM): This is the average matching percentage of all users of the specific cluster.
Average matching of connected users (AMC): This is the average matching percentage of all the connected users of the cluster.
Average matching of not connected users (AMnC): This is the average matching percentage of all the users of the cluster who are not connected.
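As referenced in the AFP entry above, the following Python fragment illustrates how that metric can be computed: for every user, all other users are sorted by decreasing matching percentage and the 1-based positions at which the user's real-life friends appear are averaged. The input layout is an assumption for illustration purposes only.

def average_friends_placement(matching, friends):
    # matching: {user: {other_user: matching percentage}}
    # friends:  {user: set of that user's real-life friends}
    positions = []
    for user, scores in matching.items():
        ranked = sorted(scores, key=scores.get, reverse=True)
        for friend in friends.get(user, set()):
            if friend in scores:
                positions.append(ranked.index(friend) + 1)  # 1-based placement
    return sum(positions) / len(positions) if positions else float("nan")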
We have used several additional similarity measures in our study and implemented them in eMatch in order to compare them against EgoSimilar. These similarity measures include the Pearson Correlation and the Cosine Similarity [21], which were also used in [2] and were found to provide inferior results to EgoSimilar for the smaller dataset of 57 users. The other similarity measures that we used in the present work are:
The Jaccard Index [22], also known as the Jaccard similarity coefficient, which is a statistic used for comparing the similarity and diversity of two sample sets. The Jaccard coefficient measures similarity between finite sample sets and is defined as the size of the intersection divided by the size of the union of two sample sets, as depicted in Eq. (2) below
$$ J(A,B) = \frac{|A \cap B|}{|A \cup B|} \quad (2) $$
where A and B denote the two sample sets.
The π coefficient [23], which is calculated as:
$$ \pi = \frac{p_o - p_e}{1 - p_e} \quad (3) $$
where p_o is defined as the observed agreement between two raters who each classify items into a set of M mutually exclusive categories, and p_e is defined as the expected agreement by chance.
The κ coefficient [24], which is similar to the π coefficient and is again defined by Eq. (3). The difference between the two coefficients lies in the way the expected agreement p_e is computed. In the π coefficient, both annotators are assumed to classify items into a category via the same probability distribution, whereas the κ coefficient does not make this assumption (hence each annotator is assumed to have a different probability distribution). As explained in [25], when using the π coefficient any differences in the observed distributions of users' judgements are considered to be noise in the data. When using the κ coefficient these differences are considered to be related to the biases of the users.
By "agreement" in the case of the π and κ coefficients, and by "intersection of sets" in the case of the Jaccard index, we are referring to two users giving the same rating for a category or for an item within a category.
Finally, in order to examine the results of the above measures, users were separated into groups via the K-means clustering algorithm [26], using the matching percentages derived by each of the similarity computation approaches. The procedure will always terminate, but K-means does not necessarily find the optimal configuration. A disadvantage of K-means is its sensitivity to the random initialization of cluster centroids; generally initial centroids should be "far apart". We addressed this issue by using different centroids and computing average results over 10 independent runs. Later in the paper, in the evaluation of our new algorithm EgoSimilar+ in "Evaluation of EgoSimilar+" section, we focused on finding the appropriate number of clusters by utilizing silhouettes [27].
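A sketch of this clustering step: each user is represented by the vector of his/her matching percentages with all other users (as produced by one of the similarity computation methods), and K-means is run with several random initializations. The paper averages its evaluation metrics over 10 independent runs; the fragment below simply shows one clustering call with multiple restarts, using scikit-learn purely for illustration (the implementation actually used in eMatch is not stated here).

import numpy as np
from sklearn.cluster import KMeans

def cluster_users(matching_matrix, k, restarts=10):
    # matching_matrix: (n_users x n_users) array of matching percentages
    X = np.asarray(matching_matrix, dtype=float)
    km = KMeans(n_clusters=k, n_init=restarts)  # keeps the best of `restarts` random initializations
    return km.fit_predict(X)                    # one cluster label per user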
For space economy purposes and in order to focus on the most important contributions of this study, we will only present here a summary of the results of the new evaluation of EgoSimilar.
Our results were derived for the following sets of weights: (w1,w2) = (0.25, 0.75), (0.5, 0.5), (0.75, 0.25) and for wchange = 0.3, a value which was shown for both the larger dataset and the smaller one in [2] to provide the overall best results across all similarity computation methods. It did not provide the best results in all cases for EgoSimilar, which often had better results for values of 0.1 or 0.2, but for fairness and uniformity purposes we are showing the results for wchange = 0.3. We should emphasize again that we are interested in w1 < w2 as our work is focused on achieving a more specific (items-oriented) matching between users than a more generic (categories-based) one. However, we also experimented with the cases where w1 = w2 and w1 > w2 in order to study the behavior of the different similarity computation methods.
Our results showed that in regard to the comparison between EgoSimilar, the Pearson Correlation and the Cosine Similarity, there were no changes in the conclusions for this larger dataset when compared with the small dataset in our previous work. More specifically, EgoSimilar outperforms both methods in terms of distinguishing between already connected and not already connected users (i.e., already connected users have a higher matching percentage). In regard to the average friends' placement, EgoSimilar also continues to outperform the Cosine Similarity and the Pearson Correlation, as in [2], by placing existing friends "lower" (i.e., closer to the top) in the users' matching list. The reason that EgoSimilar excels is that both the Cosine Similarity and Pearson Correlation only examine the current ratings of each category/item, by each of the two users. EgoSimilar, however, tries to be more sophisticated by using weights to take advantage of the popularity of the rated items.
Tables 3 and 4 present the average results for all similarity computation methods for the number of clusters K taking all values between 5 and 20.
Table 3 Average matching difference between connected and not-connected users
Table 4 Average friends' placement
The results indicate that:
EgoSimilar outperforms all similarity computation methods in terms of distinguishing between already connected and not already connected users for all (w1, w2) weights.
EgoSimilar also outperforms all similarity computation methods in terms of the average friends' placement, which is the most important metric in our study, as explained above, for (w1,w2) = (0.5, 0.5) and (w1,w2) = (0.75, 0.25).
However, EgoSimilar is outperformed in terms of the average friends' placement by all similarity computation methods except the Cosine Similarity and the Pearson Correlation for (w1,w2) = (0.25, 0.75). This is an important negative result: our focus is on placing larger importance on individual items when making friend suggestions, so EgoSimilar should be able to identify existing friend connections by placing them "lower" in each user's list.
All similarity computation methods place existing friends on average at around the 36–39% mark (positions 102 to 110 out of 285 users). This means that on average existing friends are placed close to the middle of each user's list, whereas we would expect them to be placed near the top. Once again, this is a negative result, which in this case applies to all similarity computation methods used in our study.
We should also note that we attempted to create groups of ratings (i.e., {1–2}, {3–5}, {6–8}, {9–10}) to avoid cases where users may like or dislike an item almost equally but a small difference in their ratings would cause the similarity computation to miss the common predilection. In that case, we considered that users "agree" if they give ratings that belong to the same group, as defined above. We found, however, that this choice improved the results of the π and κ coefficients only slightly (by about 0.6% in the results of Table 3 and by about 3–4 positions in terms of the average friends' placement shown in Table 4). Hence, in the rest of the paper we kept the standard definition of "agreement" for the π and κ coefficients.
EgoSimilar+
The results of our evaluation of EgoSimilar against all other similarity computation methods showed that the premise of EgoSimilar is promising but was not enough to help our proposed approach excel overall, and in particular in the cases which were of the most interest for our work on eMatch.
Therefore, we first focused on understanding the reasons why EgoSimilar is outperformed by the new similarity computation methods added to our study (Jaccard index, π coefficient, κ coefficient) for the case of (w1,w2) = (0.25, 0.75). All three similarity computation methods that outperformed EgoSimilar focus on computing the exact agreement between users, whereas EgoSimilar computes distance. Therefore, the results in this part of our work seem to indicate that, even though it is rarer, exact agreement on items leads to better results in identifying existing (and hence also possible future) friendships, especially when the element of chance agreement is removed, as is the case for the π and κ coefficients. The improved results achieved by computing exact agreement can be, at least partially, attributed to the fact that some items are essentially categories in themselves, e.g., "football" (in the category Sports) or "pop" (in the category Music); such items can lead to exact agreement between users more often than a more specific football-related or pop-related item.
The results in Table 4 show that the use of the κ coefficient achieves the best results among all other similarity computation methods and outperforms EgoSimilar for (w1,w2) = (0.25, 0.75). The distinctive feature of the κ and π coefficients in comparison to the Jaccard index is the removal of chance agreement from the observed agreement and the distinctive feature of the κ coefficient in comparison to the π coefficient is the "acceptance" of differences in user ratings as being related to the biases of the users, instead of noise.
Based on the above, we decided to create an improved version of EgoSimilar, which we name EgoSimilar+. This new version incorporates the following differences from the original algorithm:
We added the calculation of biases into EgoSimilar+. Similarly to [28], a first-order approximation of the bias involved in a rating R_ui is
$$ B_{ui} = \mu + B_u + B_i \quad (4) $$
where u represents the user and i represents the item. The bias involved in rating R_ui is denoted by B_ui and accounts for the user and item effects. The overall average rating is denoted by μ, while the parameters B_u and B_i indicate the observed deviations of user u and item i, respectively. An example from [28] of how adding biases works: suppose that we want a first-order estimate of user Joe's rating of the movie Titanic. Suppose that the average rating for all movies, μ, is 7.4/10, that Titanic is rated better than average and tends to be rated 1 star above the average, and that Joe is a critical user who tends to rate 0.6 stars below the average. Then the estimate of Titanic's rating by Joe would be 7.4 + 1 − 0.6 = 7.8/10.
We added biases in order to estimate the users' ratings for items that the user had not rated although the items belonged to the user's favorite categories (categories rated 7/10 or higher by the user). A small computational sketch of this step, together with the non-dominated/dominated split described next, is given after this list.
In recent literature on databases, skyline query processing has received very significant attention [29,30,31]. Skyline queries find within a database the set of points that are not dominated by any other point. An n-dimensional point is not dominated by another point if it is not worse than that point in (n − 1) dimensions and is better in at least one dimension. We adapted the idea of choosing non-dominated objects into EgoSimilar+. More specifically, after adding biases as explained above, dividing users into clusters and calculating user matching, EgoSimilar+ creates, for each user, two sets of potential friends. Set A contains the non-dominated potential friends, who are shown in descending matching percentage order, and Set B contains the dominated potential friends, who are again shown in descending matching percentage order. In order to identify the non-dominated potential friends, we use Eq. (1) and calculate user matching in each category and that category's items. A non-dominated potential friend of user X is one who does not have a smaller matching percentage with X than any other user in 8 interest categories and is better than all other potential friends in at least one interest category.
It should be noted that a potential friend of user X in set B may have a higher overall matching percentage than a potential friend of X in set A. This can happen if the user in set B has a high matching percentage with X in a specific category but is dominated in another category. Still, the fact that users in set A are not dominated leads us to place them "lower" (closer to the top) in user X's matching list.
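The following Python sketch combines the two additions described in the list above: (i) the bias estimate of Eq. (4), used to fill in ratings for unrated items in a user's favourite categories, and (ii) the split of a user's potential friends into a non-dominated Set A and a dominated Set B. All names and data layouts are illustrative assumptions; the dominance test below uses the standard skyline (Pareto) definition, into which the paper's exact criterion over the interest categories can be substituted.

def bias_estimate(mu, user_bias, item_bias):
    # Eq. (4): B_ui = mu + B_u + B_i; e.g., bias_estimate(7.4, -0.6, 1.0) == 7.8
    return mu + user_bias + item_bias

def split_non_dominated(per_category_matching):
    # per_category_matching: {candidate: [matching with X in each interest category]}
    candidates = list(per_category_matching)

    def dominates(a, b):
        va, vb = per_category_matching[a], per_category_matching[b]
        return all(p >= q for p, q in zip(va, vb)) and any(p > q for p, q in zip(va, vb))

    set_a = [c for c in candidates
             if not any(dominates(other, c) for other in candidates if other != c)]
    set_b = [c for c in candidates if c not in set_a]
    # within each set, candidates would then be sorted by overall matching percentage
    return set_a, set_b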
Evaluation of EgoSimilar+
As mentioned in "Evaluation of EgoSimilar" sectiom, in order to find the appropriate number of clusters to use for K-means clustering in our dataset, we utilized silhouettes [27]. Silhouettes are a widely-used graphical aid for the interpretation and validation of cluster analysis. A silhouette shows which objects lie well within their cluster and which ones are merely somewhere in between clusters. If we take any user i of a cluster A we define as a(i) the average dissimilarity of i to all other objects of A, and as b(i) the dissimilarity of i to all objects of the second-best cluster then the silhouette s(i) is computed as:
$$ s(i) = \frac{b(i) - a(i)}{\max\{a(i),\, b(i)\}} \quad (5) $$
By "dissimilarity" in our study we are referring to the Euclidean distance between user vectors.
Equation (5) indicates that the best possible clustering (i.e., s(i) close to 1) is achieved when the "within" dissimilarity a(i) is much smaller than the "between" dissimilarity b(i). In this case user i is well clustered. When s(i) is close to zero, a(i) and b(i) are approximately equal and hence it is not clear to which of the two clusters user i should be assigned. When s(i) is close to −1, the clustering is erroneous, as user i is closer to the second-best cluster and should have been assigned to it.
Silhouettes are especially useful because they can help identify cases where we have set k to be too low or too high; in both cases s(i) would be low, in the first due to a high a(i) and in the second due to a low b(i).
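A minimal sketch of the silhouette computation of Eq. (5), using the Euclidean distance between user vectors as the dissimilarity, as stated above. To select K one would repeat the clustering for each candidate K and keep the value with the highest average silhouette (scikit-learn's silhouette_score performs an equivalent computation); the function below is illustrative only.

import numpy as np

def average_silhouette(X, labels):
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise Euclidean distances
    scores = []
    for i in range(n):
        same = labels == labels[i]
        other_clusters = set(labels.tolist()) - {labels[i]}
        if same.sum() <= 1 or not other_clusters:
            continue  # s(i) is undefined for singleton clusters or a single overall cluster
        a = dist[i, same & (np.arange(n) != i)].mean()
        b = min(dist[i, labels == c].mean() for c in other_clusters)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores)) if scores else 0.0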
To find the appropriate K we studied the 286 users of our dataset and clustered them with K ranging from 1 to 143, i.e., up to the case where we would have on average two users in each cluster.
Figure 1 shows the average silhouette for all objects (users) in the dataset, for different values of K (average over 10 independent runs for each value).
Fig. 1 Average silhouette for EgoSimilar+
Our results are qualitatively similar to those in [32], where, for a different problem, it was again shown through silhouettes that increasing K up to a point increases the probability of a user being in the best possible cluster, while the general quality of the solution decreases if K grows too large. We derived the best silhouette when K is around 25, i.e., for an average of 11–12 users per cluster. The best K values when using all other similarity computation methods in eMatch were in the range [20, 23], and for each method we used its best K for the results that follow, in order to make a fair comparison.
In order to make a fair comparison between EgoSimilar+ and the other similarity computation methods, we initially applied to the other methods the same new ideas that we implemented in EgoSimilar+, which are presented in "EgoSimilar+" section. However, the first idea, adding biases, leads to less exact agreement between users, and this in turn led to worse results for the κ coefficient, the π coefficient and the Jaccard index. The addition of biases had a negligible effect on the Pearson Correlation and Cosine Similarity results. Therefore, for fairness reasons we implemented for all five similarity computation methods only the second idea, of finding and presenting first the non-dominated potential friends.
Figure 2 presents the matching difference between connected and not-connected users for the "best" version of each similarity computation method (best K, plus the addition of whichever new idea or ideas improve the method's results). EgoSimilar+ is shown not only to excel once again, but also to significantly improve on the results of EgoSimilar. Especially for the case that is of the most interest to us, i.e., (w1,w2) = (0.25, 0.75), EgoSimilar+ shows a 34% improvement over EgoSimilar in distinguishing between already connected and not already connected users.
Fig. 2 Average matching difference between connected and not-connected users
The actual matching percentages between users in each cluster vary for all similarity computation methods between 50 and 70%, with the exception of the Cosine Similarity, which, when used in eMatch, shows an average matching percentage larger than 80% between the users in most clusters. However, the actual matching percentage is of little value. The only substantial effect that it might have, especially in the case of not connected users, is that a quantitatively higher percentage might make a user more inclined to communicate with another user. What is truly substantial is the order in which "matching users" appear on the user's screen, in decreasing percentages (high to low), where, as will be explained below, EgoSimilar+ clearly outperforms all the other similarity computation methods.
Table 5 presents the average friends' placement results for EgoSimilar+ and the other similarity computation methods, again all of them in their "best" version. It is clear from the results presented in the Table that:
Table 5 Average friends' placement for the best version of all methods
EgoSimilar+ now outperforms all similarity computation methods in terms of the average friends' placement, which is the most important metric in our study, for all values of (w1,w2), including (w1,w2) = (0.25, 0.75) which is the most important case of our study, as explained earlier.
The improvement of EgoSimilar+ through the use of the two new ideas (adding biases, finding and promoting non-dominated potential friends) is very substantial, leading it to place existing friends on average at around the 29% mark (position 83 out of 285). This improves our confidence in the quality of friend recommendations that EgoSimilar+ can make. EgoSimilar+ also clearly excels against all other similarity computation methods for weights (w1,w2) = (0.5, 0.5) and (w1,w2) = (0.75, 0.25); however, its improvement over EgoSimilar is not as large, since the critical factor in its improvement is the addition of biases to estimate unknown ratings for items, and in the above cases item similarity has a smaller weight than in the (w1,w2) = (0.25, 0.75) case.
Figure 3 further shows visually the percentage improvement provided by EgoSimilar+ in terms of the average friends' placement in comparison to all other similarity computation measures. For (w1,w2) = (0.25, 0.75) this improvement ranges between 8.5% and 25.5%, and even for (w1,w2) = (0.75, 0.25) the smallest improvement is still 4.5%.
Fig. 3 Percentage of improvement offered by EgoSimilar+ in average friends' placement in comparison to all other similarity computation measures
Table 6 presents the same type of results as Table 5, with the difference that the same value of K for the K-means clustering was used for all the experiments, instead of the best K for each method. The conclusions drawn from Fig. 2 and Table 5 hold once again for the results of Table 6, which show that EgoSimilar+ outperforms all other similarity computation methods in regard to the average friends' placement in the users' matching list.
Table 6 Average friends' placement for K = 40
Conclusions and future work
We have presented and proposed a user similarity computation algorithm, EgoSimilar+, with the aim of using it to find and connect people with common interests in the same geographical area. The algorithm is incorporated into a mobile application that serves as a "friend" recommendation system. EgoSimilar+ adapts ideas and techniques from the recommender systems literature and the skyline queries literature and combines them with our own ideas on the importance and utilization of item popularity. Our proposed algorithm is compared against five well-known similarity computation methods from the literature and is shown to excel in comparison with all of them, improving their results by 4.5–25.5% in terms of identifying true friends based on their interests.
The idea for eMatch, and hence the need for an algorithm like EgoSimilar+, was created by the fact that the contemporary way of life leads a large number of people to spend much time away from home, often alone among strangers. Therefore, it makes sense for them to connect right on the spot with someone close by who shares their interests. This is a decision that can be made quickly with the help of an intelligent application, as opposed to decisions regarding finding possible life partners, which would usually require much more thought and study from the user (other applications focus on this area). Even at home, however, users spend a large amount of time using their mobile devices. Therefore, even users who want to take their time with evaluating possible friends will have the opportunity to do so.
One limitation of the existing work is the fact that the extended dataset is still relatively small. In future work, we will use EgoSimilar+ in large datasets from other sources in order to provide recommendations for users/items and we will compare it once again against benchmark similarity computation methods. We also intend to incorporate semantic similarity computation algorithms into eMatch, to further improve the clustering and the implicit (via the matching percentage) friendship recommendations. The use of such algorithms is important, so that relevant concepts, names and items will be linked automatically by the application (e.g., soccer and football, or soccer and Manchester United). The incorporation of spell check software is also important, in order to avoid spelling errors that can cause the algorithm to miss a commonly liked or disliked item by two users.
Oommen BJ, Yazidi A, Granmo O-C (2012) An adaptive approach to learning the preferences of users in a social network using weak estimators. J Inf Process Syst 8:191–212
Athanasopoulou G, Koutsakis P (2015) eMatch: an android application for finding friends in your location. Mob Inf Syst J. Article ID 463791
Athanasopoulou G (2013) https://androidappsapk.co/detail-ematch-com-tuc-ematch/. Accessed 06 Nov 2018
Farrahi K, Zia K (2017) Trust reality-mining: evidencing the role of friendship for trust diffusion. Hum Cent Comput Inf Sci 7:4
Duck SW, Craig G (1978) Personality similarity and the development of friendship: a longitudinal study. Br J Soc Clin Psychol 17:237–242
Werner C, Parmelee P (1979) Similarity of activity preferences among friends: those who play together stay together. Soc Psychol Quart 42:62–66
Han X, Wang L, Crespi N, Park S, Cuevas A (2015) Alike people, alike interests? Inferring interest similarity in online social networks. Decis Support Syst 69:92–106
Lee D (2015) Personalizing information using users' online social networks: a case study of CiteULike. J Inf Process Syst 11:1–21
Souri A, Hosseinpour S, Rahmani AM (2018) Personality classification based on profiles of social networks' users and the five-factor model of personality. Hum Cent Comput Inf Sci 8:24
Youhoo (2018) http://appcrawlr.com/android/youhoo. Accessed 06 Nov 2018
GeoSocials (2018) http://appcrawlr.com/android/geosocials. Accessed 06 Nov 2018
Jiveocity (2018) http://appcrawlr.com/android/jiveocity. Accessed 06 Nov 2018
Liben-Nowell D, Kleinberg J (2007) The link prediction problem for social networks. J Assoc Inf Sci Technol 58:1019–1031
Hannon J, Bennett M, Smyth B (2010) Recommending Twitter users to follow using content and collaborative filtering approaches. In: Paper presented at the 4th ACM conference on recommender systems (RecSys), Barcelona; 2010
Wan S et al (2013) Informational friend recommendation in social media. In: Paper presented at the 36th international ACM SIGIR conference on research and development in information retrieval (SIGIR), Dublin; 2013
Han X et al (2016) CSD: a multi-user similarity metric for community recommendation in online social networks. Expert Syst Appl 53:14–26
Diez J, del Coz JJ, Luaces O, Bahamonde A (2008) Clustering people according to their preference criteria. Expert Syst Appl 34:1274–1284
Gan M, Jiang R (2013) Constructing a user similarity network to remove adverse influence of popular objects for personalized recommendation. Expert Syst Appl 40:4044–4053
Hwang D, Kim D (2017) Nearest neighbor based prototype classification preserving class regions. J Inf Process Syst 13:1345–1357
Wu J et al (2017) Weighted local Naïve Bayes link prediction. J Inf Process Syst 13:914–927
Mekouar L, Iraqi Y, Boutaba R (2012) An analysis of peer similarity for recommendations in P2P systems. Multimedia Tools Appl 60:277–303
Jaccard P (1908) Nouvelles Recherches Sur la Distribution Florale. Bulletin de la Societe Vaudoise des Sciences Naturelles 44:223–270
Scott WA (1955) Reliability of content analysis: the case of nominal scale coding. Public Opin Quart 19:321–325
Cohen J (1960) A Coefficient of agreement for nominal scales. Educ Psychol Measur 20:37–46
Di Eugenio B, Glass M (2004) The kappa statistic: a second look. Comput Linguistics 30:95–101
Forgy EW (1965) Cluster analysis of multivariate data: efficiency versus interpretability of classifications. Biometrics 21:768–769
Rousseeuw PJ (1987) Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J Comput Appl Math 20:53–65
Koren Y, Bell R, Volinsky C (2009) Matrix factorization techniques for recommender systems. Computer 42:42–49
Borzsony S, Kossman D, Stocker K (2001) The skyline operator. In: Paper presented at the 17th international conference on data engineering (ICDE), Heidelberg; 2001
Papadias D, Tao Y, Fu G, Seeger B (2005) Progressive skyline computation in database systems. ACM Trans Database Syst 30:41–82
Zhang K et al (2017) Probabilistic skyline on incomplete data. In: Paper presented at the 26th ACM international conference on information and knowledge management (CIKM), Singapore; 2017
Thuillier E, Moalic L, Lamrous S, Caminada A (2018) Clustering weekly patterns of human mobility through mobile phone data. IEEE Trans Mob Comput 17:817–830
GT analysed the extended dataset and produced and analysed the results of the evaluation of EgoSimilar. PK collected the extended dataset and analysed the results of the evaluation of EgoSimilar. PK also designed and evaluated EgoSimilar+. Both authors read and approved the final manuscript.
The authors wish to sincerely thank the developer of eMatch, Georgia Athanasopoulou, for her valuable help during the time that this work was conducted.
The datasets used and analysed in this study are available from the corresponding author on reasonable request.
This was not a funded research project.
School of Electrical and Computer Engineering, Technical University of Crete, Chania, Greece
Georgios Tsakalakis
School of Engineering and Information Technology, Murdoch University, Science and Computing Building 245, SC1.012, 90 South Street, Murdoch, WA, 6150, Australia
Polychronis Koutsakis
Correspondence to Polychronis Koutsakis.
Tsakalakis, G., Koutsakis, P. Improved user similarity computation for finding friends in your location. Hum. Cent. Comput. Inf. Sci. 8, 36 (2018). https://doi.org/10.1186/s13673-018-0160-7
Accepted: 26 November 2018
Alzheimer's Disease and Speech Background
The Art and Science of Machine Intelligence (2020-01-01): 107-135 , January 01, 2020
By Land, Walker H., Jr.; Schaffer, J. David
Developing a possible diagnostic test for Alzheimer's disease (AD) based on speech is used throughout this book to illustrate the application of the machine learning methods we describe. This application has many of the characteristics typical of such tasks: one has some idea that a set of features characterizing samples that one possesses might be able to classify the objects into two or more classes. Lacking sufficient depth of understanding of how this might be caused, one goes searching for useful patterns in the data—a fishing expedition.
This chapter provides background material on AD for the reader interested in how it is defined, what is known about its underlying pathology, how it is usually diagnosed in the clinic, and some of its known effects on speech. We then quickly summarize previous attempts at this task, older ones doing linguistic operations largely by hand, and more current attempts using computers. We then provide details of the speech samples we have collected, and how an array of speech features was extracted from them fully automatically, including speech to text with punctuation. Brief comments are included on issues related to experimental design.
Back Matter - The Art and Science of Machine Intelligence
The Art and Science of Machine Intelligence (2020-01-01) , January 01, 2020
By Land Jr., Walker H.; Schaffer, J. David
The Art and Science of Machine Intelligence
Background on Genetic Algorithms
The Art and Science of Machine Intelligence (2020-01-01): 1-44 , January 01, 2020
This chapter introduces evolutionary computation/genetic algorithms starting at a high level. It uses the schema sampling theorem to provide an intuitive understanding for how evolution, operating on a population of chromosomes (symbol strings), will produce offspring that contain variants of the symbol patterns in the more fit parents each generation, and shows how the recombination operators will be biased for and against some patterns. The No Free Lunch (NFL) theorem of Wolpert and Macready for optimization search algorithms has shown that over the space of all possible problems, there can be no universally superior algorithm. Hence, it is incumbent on any algorithm to attempt to identify the domain of problems for which it is effective and try to identify its strengths and limitations. In the next section, we introduce Eshelman's CHC genetic algorithm and recombination operators that have been developed for bit string and integer chromosomes. After showing its strengths particularly in dealing with some of the challenges for traditional genetic algorithms, its limitations are also shown. The final section takes up the application of CHC to subset selection problems, a domain of considerable utility for many machine learning applications. We present a series of empirical tests that lead us to the index chromosome representation and the match and mix set-subset size (MMX_SSS) recombination operator that seem well suited for this domain. Variants are shown for when the size of the desired subset is known and when it is not known. We apply this algorithm in later chapters to the feature subset selection problem that is key to our application of developing a speech-based diagnostic test for dementia.
Front Matter - The Art and Science of Machine Intelligence
Bayesian Probabilistic Neural Network (BPNN)
The purpose of this chapter is to introduce a different way of implementing Bayes theorem using a distributed, parallel algorithm first introduced by Specht (1990), which he named the probabilistic neural network (PNN). We have discussed in Chap. 6 the difficulties in constructing Bayesian networks using both the data-driven and expert-driven approaches. A significant advantage of the Bayesian PNN (BPNN) is that the node edge architecture is theoretically predetermined by the Parzen (1962)-Cacoullos (1966) theoretical formulation.
Specifically, this chapter covers the following topics.
It develops the mathematical formulation for the PNN.
It demonstrates that the normal PNN can be configured as an optimal Bayesian classifier (BPNN).
It shows how Parzen's theorem maps into Cacoullos's theorem.
It provides an illustrative toy example, showing a BPNN analysis for two classes and nine samples (four benign and five malignant) for a twofold cross-validation analysis.
It shows how to develop the optimal standard deviation or variance (sigma) value for the Gaussian density function (a significant problem) and discusses PNN training methods.
It provides a BPNN application to the Alzheimer's speech data.
The Generalized Regression Neural Network Oracle
The Art and Science of Machine Intelligence (2020-01-01): 77-105 , January 01, 2020
In this chapter, we describe what are best characterized as complex adaptive systems and give several mixture of expert systems as examples of these complex systems. This background discussion is followed by three theoretical sections covering the topics of kernel-based probability estimation systems, a generalized neural network example, and a derivation of an ensemble combination and finally, a two-view ensemble combination. A summary of the equations describing the oracle follows these sections for those readers who do not want to work through all that mathematics. The next section introduces Receiver Operator Characteristic (ROC) analysis, a popular method for quantitatively assessing the performance of learning classifier systems. Next is the definition of "trouble-makers", and how they were discovered, followed by a discussion of the development of two hybrids: an Evolutionary Programming-Adaptive boosting (EP-AB) and a Generalized Regression Neural Network (GRNN) oracle for the purpose of demonstrating the existence of the trouble-makers by using an ROC measure of performance analysis. That discussion is followed by a detailed discussion of how to perform and evaluate an ROC analysis as well as a detailed practice example for those readers not familiar with this measure of performance technology. This chapter concludes with a research study on how to use the oracle to establish if the data sample size is adequate to accurately meet a 95% confidence interval imposed on the variance (or standard deviation) for the oracle. This is an important research study as very little effort is generally put into establishing the correct data set size for accurate, predictable, and repeatable performance results.
Classical Bayesian Theory and Networks
By their very nature, Bayesian networks (BN) represent cause-effect relationships by their parent-child structure. One can provide an observation of some events and then execute a Bayesian network with this information to ascertain the estimated probabilities of other events. Another significant advantage is that they can make very good estimates in the presence of missing information, which means that they will make the most accurate estimate with whatever information (or knowledge) is available and will provide these results in a computationally efficient manner as well.
This chapter comprises three separate sections. The first develops some of the basic probability concepts on which classical Bayes theory is based. The second develops Bayes theorem and describes several examples using Bayes theorem. The third addresses classical methods for constructing the Bayesian network structure.
This chapter introduces what we believe is a novel approach, allowing a trained classifier system to "know what it doesn't know" and to use this procedure to allow predictions for new cases to be flagged as unreliable if they lie in a region of feature space where the classifier knows its knowledge is likely to be faulty. We show how this approach may also be used to compare alternative classifiers trained on the same data in terms of their uncertainly areas. The better classifier is the one with the smaller uncertainty area. We illustrate three approaches to define these uncertainty areas, and we are quite sure additional research may identify newer and likely better ways. This work is significantly less mature than that of previous chapters; it represents a very recent insight, and the methods described are quite heuristic. We look forward to others taking this approach in more rigorous directions.
The Support Vector Machine
The Art and Science of Machine Intelligence (2020-01-01): 45-76 , January 01, 2020
Three separate sections comprise this chapter. The first presents an overview of statistical learning theory (SLT) as applied to machine learning. The topics covered are empirical or true risk minimization, the risk minimization principle (RMP), theoretical concept of risk minimization, function f0(X) that minimizes the expected (or true) risk, asymptotic consistency or uniform convergence, an example of the generalized bound for binary classification and finally, how are learning machines formed.
The second part of the chapter covers the theory of support vector machine (SVM) learning theory and develops solutions for the following three classes of problems:
Linear separable systems
Linear non-separable systems
And non-linear, non-separable systems
It then introduces the topic of kernels, what they are, and how they might be chosen. A brief pointer is provided to the SVM literature available on the web.
These sections are followed by a sketch of how the SVM may be hybridized with the GA for feature subset selection and points the way to the value of further hybridization with an ensemble approach, the topic of the next chapter.
Hybrid GA-SVM-Oracle Paradigm
In this chapter, we show how the methods described in previous chapters can be effectively combined. The application is the supervised learning of an accurate classifier. An example is characterized by a vector of features, and one presumes more features than needed, so some form of feature subset selection is needed. We describe what is often called a "wrapper approach" in which the feature subset selection algorithm (CHC in this case) repeatedly calls a learning classifier system (SVM in this case) to compute an evaluation of each proffered feature set. We then describe an approach to data normalization that allows new cases that need to be classified in the future to have feature values outside the range observed in the training set. The two measures of desired performance, classification accuracy and subset size, are shown both evolving in the desired directions. We introduce an extension of the conventional area under the ROC curve metric (Sect. 3.7) that has desirable properties for tasks of this type. Then we apply this hybrid approach to our task of discovering a speech-based diagnostic test for dementia, including aspects of the cross-validation experimental design and schemes for picking the winning feature sets. Finally, we show how the multiple discovered feature sets may be effectively combined using the GRNN oracle. This revealed some cases that we call trouble-makers, discussed in Chap. 3. These provide examples for the new method presented in Chap. 9 that allows a classifier system to know what it doesn't know.
Machine Intelligence Mixture of Experts and Bayesian Networks
In Chap. 6, we introduced Bayesian networks and some of the traditional methods for designing them from data and pointed out some of the challenges with these approaches. In this chapter, we introduce a much simpler approach that escapes some of these challenges. This approach, we admit, will clearly not be an adequate substitute for all BN applications, but may well be adequate for many classification tasks. It uses a simple three-tier network, with the central node (or nodes) for the classification, an input layer for variables that influence the probability of membership in each class, and an output layer for variables whose probability is influenced by the class. We did not invent this approach, but we show how the MI methods in previous chapters, particularly feature subset selection, can greatly simplify the task of specifying the BN topology by reducing the available features to a small and fairly uncorrelated set. This does two things. By reducing the number of features, we reduce the data requirements, perhaps to manageable levels. By finding uncorrelated features, the assumptions of the simple three-tier design are less likely to be violated.
We illustrate the approach applied to four tasks. Two use the full three-tier design, and two others use reduced forms. The BN for each is compared to other MI methods previously used on these four data sets. We show how the conditional probabilities needed for the BN are estimated from the data. While none of the examples provided show the BN outperforming other MI methods, we offer it in the hope that the approach may demonstrate good performance in some future applications.
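As a rough illustration of how the conditional probabilities for such a network might be estimated by counting, the sketch below builds a table P(feature value | class) with add-one smoothing; the variable names, data and smoothing choice are assumptions made for the example, not details taken from the chapter.

```python
# Minimal sketch: estimate P(feature value | class) from labelled samples by
# counting, with add-one (Laplace) smoothing so unseen values get non-zero mass.
from collections import Counter, defaultdict

def estimate_cpt(samples, feature_values, alpha=1.0):
    """samples: iterable of (class_label, feature_value) pairs."""
    counts = defaultdict(Counter)
    for cls, val in samples:
        counts[cls][val] += 1
    cpt = {}
    for cls, c in counts.items():
        total = sum(c.values()) + alpha * len(feature_values)
        cpt[cls] = {v: (c[v] + alpha) / total for v in feature_values}
    return cpt

# Hypothetical data: a single discretised speech feature against two classes.
data = [("dementia", "low"), ("dementia", "low"), ("dementia", "high"),
        ("control", "high"), ("control", "high")]
print(estimate_cpt(data, feature_values=["low", "high"]))
```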
Frequency and Stylistic Variability of Discourse Markers in Nigerian English
Corpus Pragmatics (2019-07-10): 1-23 , July 10, 2019
By Unuabonah, Foluke Olayinka
This paper examines the choice, frequency and stylistic variability of discourse markers in Nigerian English, using the International Corpus of English-Nigeria. Three types of discourse markers (elaborative, contrastive and inferential) were examined in the Nigerian corpus and compared with the International Corpus of English-Great Britain from a variational pragmatic approach. The results were subjected to a log-likelihood test and a paired-sample t test. They indicate significant differences both in the overall frequency of discourse markers in Nigerian English and British English and in the stylistic variability of these markers in the two corpora. Nigerian English speakers use elaborative and contrastive discourse markers less frequently than British English speakers, but utilise inferential discourse markers more frequently. Moreover, speakers of Nigerian English use a reduced inventory of discourse markers compared to British English speakers and exhibit distinct preference patterns for a few individual discourse markers. The paper also identifies the rise of a new discourse marker, moreso/more so, in Nigerian English, which is used differently from its adverbial form. There were also differences and similarities in the stylistic variability of the discourse markers across the two varieties, which may be dependent on the status of Nigerian English as a second language and the influence of British English on Nigerian English.
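For readers unfamiliar with the statistic, the sketch below shows a Dunning-style log-likelihood (G2) comparison of one item's frequency in two corpora; the counts and corpus sizes are invented for illustration and are not taken from the ICE-Nigeria or ICE-GB data.

```python
# Minimal sketch of a log-likelihood (G2) comparison of one item's frequency
# in two corpora of known size (Dunning-style keyness statistic).
import math

def log_likelihood(freq1, size1, freq2, size2):
    expected1 = size1 * (freq1 + freq2) / (size1 + size2)
    expected2 = size2 * (freq1 + freq2) / (size1 + size2)
    g2 = 0.0
    for observed, expected in ((freq1, expected1), (freq2, expected2)):
        if observed > 0:
            g2 += observed * math.log(observed / expected)
    return 2 * g2

# e.g. a marker occurring 950 times in a 400k-word corpus vs. 620 times in
# another 400k-word corpus; values above 3.84 are significant at p < 0.05.
print(round(log_likelihood(950, 400_000, 620, 400_000), 2))
```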
Interpolation in Extensions of First-Order Logic
Studia Logica (2019-07-06): 1-30 , July 06, 2019
By Gherardi, Guido; Maffezioli, Paolo; Orlandelli, Eugenio
We prove a generalization of Maehara's lemma to show that the extensions of classical and intuitionistic first-order logic with a special type of geometric axioms, called singular geometric axioms, have Craig's interpolation property. As a corollary, we obtain a direct proof of interpolation for (classical and intuitionistic) first-order logic with identity, as well as interpolation for several mathematical theories, including the theory of equivalence relations, (strict) partial and linear orders, and various intuitionistic order theories such as apartness and positive partial and linear orders.
Artificial imagination, imagine: new developments in digital scholarly editing
International Journal of Digital Humanities (2019-07-04) 1: 137-140 , July 04, 2019
By Hulle, Dirk
Editing social media: the case of online book discussion
By Boot, Peter
Online book discussion is a popular activity on weblogs, specialized book discussion sites, booksellers' sites and elsewhere. These discussions are important for research into literary reception and should be made and kept accessible for researchers. This article asks what an archive of online book discussion should and could look like, and how we could describe such an archive in terms of some of the central concepts of textual scholarship: work, document, text, transcription and variant. What could an approach along the lines of textual scholarship mean for such a collection? If such a collection holds many pieces of information that would not usually be considered text (such as demographic information about contributors), could we still call such a collection an edition, and could we call editing the activity of preparing such a collection? The article introduces some of the relevant (Dutch-language) sites, and summarizes their properties (among others: they are dynamic and vulnerable, they contain structured data and are very large) from the perspective of creating a research collection. It discusses the interpretation of some essential terms of textual studies in this context, and briefly lists a number of components that a digital edition of these sites might or should contain. It argues that such a collection is the result of scholarly work and should not be considered as 'just' a web archive.
The 'assertive edition'
By Vogeler, Georg
The paper describes the special interest among historians in scholarly editing and the resulting editorial practice in contrast to the methods applied by pure philological textual criticism. The interest in historical 'facts' suggests methods the goal of which is to create formal representations of the information conveyed by the text in structured databases. This can be achieved with RDF representations of statements extracted from the text, by automatic information extraction methods, or by hand. The paper suggests the use of embedded RDF representations in TEI markup, following the practice in several recent projects, and it concludes with a proposal for a definition of the 'assertive edition'.
On edited archives and archived editions
By Dillen, Wout
Building on a longstanding terminological discussion in the field of textual scholarship, this essay explores the archival and editorial potential of the digital scholarly edition. Following Van Hulle and Eggert, the author argues that in the digital medium these traditionally distinct activities now find the space they need to complement and reinforce one another. By critically examining some of the early and more recent theorists and adaptors of this relatively new medium, the essay aims to shed a clearer light on some of its strengths and pitfalls. To conclude, the essay takes the discussion further by offering a broader reflection on the difficulties of providing a 'definitive' archival base transcription of especially handwritten materials, questioning if this should be something to aspire to for the edition in the first place.
Textuality in 3D: three-dimensional (re)constructions as digital scholarly editions
By Schreibman, Susan; Papadopoulos, Costas
3D (re)constructions of heritage sites and Digital Scholarly Editions face similar needs and challenges and have many concepts in common, although they are expressed differently. 3D (re)constructions, however, lack a framework for addressing them. The goal of this article is not to create a single or the lowest common denominator to which both DSEs and 3D models subscribe, nor is it to reduce 3D to one scholarly editing tradition. It is rather to problematise the development of a model by borrowing concepts and values from editorial scholarship in order to enable public-facing 3D scholarship to be read in the same way that scholarly editions are by providing context, transmission history, and transparency of the editorial method/decision-making process.
Charles Chesnutt and the case for hybrid editing
By Browner, Stephanie P.; Price, Kenneth M.
In the context of a specific hybrid project—a digital archive and a print edition of the complete works of American writer Charles W. Chesnutt (1858–1932)—we consider three issues: (1) the value of print editions, notwithstanding the flexibility, capaciousness, and accessibility of digital editions; (2) the distinct affordances of digital editing in general and in this case; and (3) the challenges of a hybrid approach, and the possibility of supplementing the now standard digital approach to rendering paper manuscripts (high quality scans and TEI-compliant transcriptions) with approaches borrowed from print and print aesthetics.
What future for digital scholarly editions? From Haute Couture to Prêt-à-Porter
By Pierazzo, Elena
Digital scholarly editions are expensive to make and to maintain. As such, they prove unattainable for less established scholars such as early-career researchers and PhD students, or indeed anyone without access to significant funding. One solution could be to create tools and platforms able to provide a publishing framework for digital scholarly editions that requires neither a high-tech skillset nor big investment. I call this type of edition "Prêt-à-Porter", to be distinguished from "haute couture" editions which are tailored to the specific needs of specific scholars. I argue that both types of editions are necessary for a healthy scholarly environment.
The Charles Harpur Critical Archive
By Schmidt, Desmond; Eggert, Paul
This is a history of and a technical report on the Charles Harpur Critical Archive (CHCA), which has been in preparation since 2009. Harpur was a predominantly newspaper poet in colonial New South Wales from the 1830s to the 1860s. Approximately 2700 versions of his 700 poems in newspaper and manuscript form have been recovered. In order to manage the complexity of his often heavily revised manuscripts, traditional encoding in XML-TEI, with its known difficulties in handling overlapping structures and complex revisions, was rejected. Instead, the transcriptions were split into simplified versions and layers of revision. Markup describing textual formats was stored externally using properties that may freely overlap. Both markup and the versions and layers were merged into multi-version documents (MVDs) to facilitate later comparison, editing and searching. This reorganisation is generic in design and should be reusable in other editorial projects.
From graveyard to graph
By Bleeker, Elli; Buitendijk, Bram; Haentjens Dekker, Ronald
Technological developments in the field of textual scholarship have led to a renewed focus on textual variation. Variants are liberated from their peripheral place in appendices or footnotes and are given a more prominent position in the (digital) edition of a work. But what constitutes an informative and meaningful visualisation of textual variation? The present article takes the visualisation of collation software output as its point of departure, examining several visualisations of collation output that contain a wealth of information about textual variance. The newly developed collation software HyperCollate is used as a touchstone to study the issue of representing textual information to advance literary research. The article concludes with a set of recommendations for evaluating different visualisations of collation output.
Opening the book: data models and distractions in digital scholarly editing
By Cummings, James
This article argues that editors of scholarly digital editions should not be distracted by underlying technological concerns except when these concerns affect the editorial tasks at hand. It surveys issues in the creation of scholarly digital editions and the open licensing of resources and addresses concerns about underlying data models and vocabularies, such as the Guidelines of the Text Encoding Initiative. It calls for solutions which promote the collaborative creation, annotation, and publication of scholarly digital editions. The article draws a line between issues with which editors of scholarly digital editions should concern themselves and issues which may only prove to be distractions.
Tracking the evolution of translated documents: revisions, languages and contaminations
By Barabucci, Gioele
Dealing with documents that have changed through time requires keeping track of additional metadata, for example the order of the revisions. This small issue explodes in complexity when these documents are translated. Even more complicated is keeping track of the parallel evolution of a document and its translations. The fact that this extra metadata has to be encoded in formal terms in order to be processed by computers has forced us to reflect on issues that are usually overlooked or, at least, not actively discussed and documented: How do I record which document is a translation of which? How do I record that this document is a translation of that specific revision of another document? And what if a certain translation has been created using one or more intermediate translations with no access to the original document? In this paper we address all these issues, starting from first principles and incrementally building towards a comprehensive solution. This solution is then distilled in terms of formal concepts (e.g., translation, abstraction levels, comparability, division in parts, addressability) and abstract data structures (e.g., derivation graphs, revisions-alignment tables, source-document tables, source-part tables). The proposed data structures can be seen as a generalization of the classical evolutionary trees (e.g., stemma codicum), extended to take into account the concepts of translation and contamination (i.e., multiple sources). The presented abstract data structures can easily be implemented in any programming language and customized to fit the specific needs of a research project.
Copies and facsimiles
By Dahlström, Mats
The concepts of original and copy, of source and facsimile, always convey particular understandings of the process of reproducing documents. This essay is an analysis of these concepts, in particular copies and facsimiles, framed within the context of digital reproduction. The activities and cases discussed are picked from two areas: digital scholarly editing and cultural heritage digitization performed by research libraries. The conceptual analysis draws on three fields of scholarly inquiry: scholarly editing, library and information science, and philosophical aesthetics.
Exercises in modelling: textual variants
By Spadini, Elena
The article presents a model for annotating textual variants. The annotations made can be queried in order to analyse and find patterns in textual variation. The model is flexible, allowing scholars to set the boundaries of the readings, to nest or concatenate variation sites, and to annotate each pair of readings; furthermore, it organizes the characteristics of the variants in features of the readings and features of the variation. After presenting the conceptual model and its applications in a number of case studies, this article introduces two implementations in logical models: namely, a relational database schema and an OWL 2 ontology. While the scope of this article is a specific issue in textual criticism, its broader focus is on how data is structured and visualized in digital scholarly editing.
Jan Dejnožka, The Concept of Relevance and the Logic Diagram Tradition
Studia Logica (2019-06-27): 1-5 , June 27, 2019
By Bellucci, Francesco
Computational text analysis within the Humanities: How to combine working practices from the contributing fields?
Language Resources and Evaluation (2019-06-26): 1-38 , June 26, 2019
By Kuhn, Jonas
This position paper is based on a keynote presentation at the COLING 2016 Workshop on Language Technology for Digital Humanities in Osaka, Japan. It departs from observations about working practices in Humanities disciplines following a hermeneutic tradition of text interpretation versus the method-oriented research strategies in Computational Linguistics (CL). The respective praxeological traditions are quite different. Yet more and more researchers are willing to open up towards truly transdisciplinary collaborations, trying to exploit advanced methods from CL within research that ultimately addresses questions from the traditional Humanities disciplines and the Social Sciences. The article identifies two central workflow-related issues for this type of collaborative project in the Digital Humanities (DH) and Computational Social Science: (1) a scheduling dilemma, which affects the point in the course of the project when specifications of the core analysis task are fixed (as early as possible from the computational perspective, but as late as possible from the Humanities perspective) and (2) the subjectivity problem, which concerns the degree of intersubjective stability of the target categories of analysis. CL methodology demands high inter-annotator agreement and theory-independent categories, while the categories in hermeneutic reasoning are often tied to a particular interpretive approach (viz. a theory of literary interpretation) and may bear a non-trivial relation to a reader's pre-understanding. Building a comprehensive methodological framework that helps overcome these issues requires considerable time and patience. The established computational methodology has to be gradually opened up to more hermeneutically oriented research questions; resources and tools for the relevant categories of analysis have to be constructed. This article does not call into question that well-targeted efforts along this path are worthwhile. Yet, it makes the following additional programmatic point regarding directions for future research: It might be fruitful to explore—in parallel—the potential lying in DH-specific variants of the concept of rapid prototyping from Software Engineering. To get an idea of how computational analysis of some aspect of text might contribute to a hermeneutic research question, a prototypical analysis model is constructed, e.g., from related data collections and analysis categories, using transfer techniques. While the initial quality of analysis may be limited, the idea of rapid probing allows scholars to explore how the analysis fits in an actual workflow on the target text data and it can thus provide early feedback for the process of refining the modeling. If the rapid probing method can indeed be incorporated in a hermeneutic framework to the satisfaction of well-disposed Humanities scholars, a swifter exploration of alternative paths of analysis would become possible. This may generate considerable additional momentum for transdisciplinary integration. It is as yet too early to point to truly Humanities-oriented examples of the proposed rapid probing technique. To nevertheless make the programmatic idea more concrete, the article uses two experimental scenarios to argue how rapid probing might help addressing the scheduling dilemma and the subjectivity problem respectively. The first scenario illustrates the transfer of complex analysis pipelines across corpora; the second one addresses rapid annotation experiments targeting character mentions in literary text.
Enrico Martino, Intuitionistic Proof Versus Classical Truth: The Role of Brouwer's Creative Subject in Intuitionistic Mathematics, Springer, 2018
By Fletcher, Peter
A Reanalysis of the Uses of Can and Could: A Corpus-Based Approach
Corpus Pragmatics (2019-06-18): 1-23 , June 18, 2019
By Whitty, Lauren
This study offers an in-depth analysis of the English modal auxiliaries CAN and COULD, using both spoken and written components of the British National Corpus. An examination of previous corpus-based studies of the modal auxiliaries CAN and COULD highlights discrepancies in the terminology utilised and the main categories associated with CAN and COULD, as well as insufficient surrounding context for a confident categorisation and a lack of clarity in explanations for classification. Based on findings from a new investigation of these modal auxiliaries in the BNC, I argue for a wider range of usage categories for CAN and COULD. The categories identified here differ from those reported in previous studies, as the present study differentiates categories of use beyond the traditional distinction between 'ability', 'possibility' and 'permission'. This study offers transparency on categorical criteria and the usage category assigned to individual tokens and demonstrates that expanded context is an essential requirement in the semantic and pragmatic (re)analysis of corpus data.
SenseDefs: a multilingual corpus of semantically annotated textual definitions
By Camacho-Collados, Jose; Delli Bovi, Claudio; Raganato, Alessandro; Navigli, Roberto
Definitional knowledge has proved to be essential in various Natural Language Processing tasks and applications, especially when information at the level of word senses is exploited. However, the few sense-annotated corpora of textual definitions available to date are of limited size: this is mainly due to the expensive and time-consuming process of annotating a wide variety of word senses and entity mentions at a reasonably high scale. In this paper we present SenseDefs, a large-scale high-quality corpus of disambiguated definitions (or glosses) in multiple languages, comprising sense annotations of both concepts and named entities from a wide-coverage unified sense inventory. Our approach for the construction and disambiguation of this corpus builds upon the structure of a large multilingual semantic network and a state-of-the-art disambiguation system: first, we gather complementary information of equivalent definitions across different languages to provide context for disambiguation; then we refine the disambiguation output with a distributional approach based on semantic similarity. As a result, we obtain a multilingual corpus of textual definitions featuring over 38 million definitions in 263 languages, and we publicly release it to the research community. We assess the quality of SenseDefs's sense annotations both intrinsically and extrinsically on Open Information Extraction and Sense Clustering tasks.
An error analysis for image-based multi-modal neural machine translation
Machine Translation (2019-06-15) 33: 155-177 , June 15, 2019
By Calixto, Iacer; Liu, Qun
In this article, we conduct an extensive quantitative error analysis of different multi-modal neural machine translation (MNMT) models which integrate visual features into different parts of both the encoder and the decoder. We investigate the scenario where models are trained on an in-domain training data set of parallel sentence pairs with images. We analyse two different types of MNMT models that use global and local image features: the former encode an image globally, i.e. there is one feature vector representing an entire image, whereas the latter encode spatial information, i.e. there are multiple feature vectors, each encoding different portions of the image. We conduct an error analysis of translations generated by different MNMT models as well as text-only baselines, where we study how multi-modal models compare when translating both visual and non-visual terms. In general, we find that the additional multi-modal signals consistently improve translations, even more so when using simpler MNMT models that use global visual features. We also find that not only are translations of terms with a strong visual connotation improved, but almost all kinds of errors decrease when using multi-modal models.
A user study of neural interactive translation prediction
By Knowles, Rebecca; Sanchez-Torron, Marina; Koehn, Philipp
Machine translation (MT) on its own is generally not good enough to produce high-quality translations, so it is common to have humans intervening in the translation process to improve MT output. A typical intervention is post-editing (PE), where a human translator corrects errors in the MT output. Another is interactive translation prediction (ITP), which involves an MT system presenting a translator with translation suggestions they can accept or reject, actions the MT system then uses to present them with new, corrected suggestions. Both Macklovitch (2006) and Koehn (2009) found ITP to be an efficient alternative to unassisted translation in terms of processing time. So far, phrase-based statistical ITP has not yet proven to be faster than PE (Koehn 2009; Sanchis-Trilles et al. 2014; Underwood et al. 2014; Green et al. 2014; Alves et al. 2016; Alabau et al. 2016). In this paper we present the results of an empirical study on translation productivity in ITP with an underlying neural MT system (NITP). Our results show that over half of the professional translators in our study translated faster with NITP compared to PE, and most preferred it over PE. We also examine differences between PE and ITP in other translation productivity indicators and translators' reactions to the technology.
A product and process analysis of post-editor corrections on neural, statistical and rule-based machine translation output
Machine Translation (2019-06-15) 33: 61-90 , June 15, 2019
By Koponen, Maarit; Salmi, Leena; Nikulin, Markku
This paper presents a comparison of post-editing (PE) changes performed on English-to-Finnish neural (NMT), rule-based (RBMT) and statistical machine translation (SMT) output, combining a product-based and a process-based approach. A total of 33 translation students acted as participants in a PE experiment providing both post-edited texts and edit process data. Our product-based analysis of the post-edited texts shows statistically significant differences in the distribution of edit types between machine translation systems. Deletions were the most common edit type for the RBMT, insertions for the SMT, and word form changes as well as word substitutions for the NMT system. The results also show significant differences in the correctness and necessity of the edits, particularly in the form of a large number of unnecessary edits in the RBMT output. Problems related to certain verb forms and ambiguity were observed for NMT and SMT, while RBMT was more likely to handle them correctly. Process-based comparison of effort indicators shows a slight increase of keystrokes per word for NMT output, and a slight decrease in average pause length for NMT compared to RBMT and SMT in specific text blocks. A statistically significant difference was observed in the number of visits per sub-segment, which is lower for NMT than for RBMT and SMT. The results suggest that although different types of edits were needed to outputs from NMT, RBMT and SMT systems, the difference is not necessarily reflected in process-based effort indicators.
Interactive adaptive SMT versus interactive adaptive NMT: a user experience evaluation
By Daems, Joke; Macken, Lieve
Neural machine translation is increasingly being promoted and introduced in the field of translation, but research into its applicability for post-editing by human translators and its integration within existing translation tools is limited. In this study, we compare the quality of SMT and NMT output of the commercially-available interactive and adaptive translation environment Lilt, as well as the translation process of professional translators working with both versions of the tool, their preference for SMT vs. NMT for post-editing, and their attitude towards such an interactive and adaptive translation tool compared to their usual translation environments.
Applying data mining and machine learning techniques for sentiment shifter identification
By Rahimi, Zeinab; Noferesti, Samira; Shamsfard, Mehrnoush
Sentiment shifters, as a set of words and expressions that can affect text polarity, play a fundamental role in opinion mining. However, the limited ability of current automated opinion mining systems in handling shifters is a major challenge. This paper presents three novel and efficient methods for identifying sentiment shifters in reviews in order to improve the overall accuracy of opinion mining systems: two data mining based algorithms and a machine learning based algorithm. The data mining algorithms do not need shifter-tagged datasets. They use weighted association rule mining (WARM) for finding frequent patterns representing sentiment shifters from a domain-specific and a general corpus. These patterns include different kinds of shifter words such as shifter verbs and quantifiers and are able to handle both local and long-distance shifters. The items in WARM for the two designed methods are in the form of dependency relations and SRL arguments of sentences, respectively. Secondly, we implemented a supervised machine learning system based on semantic features of sentences for shifter identification and polarity classification. This method obviously needs a shifter-tagged dataset for shifter identification. We tested our proposed algorithms on the polarity classification task for two domains: a specific domain (drug reviews) and a general domain. Experiments demonstrate that (1) the extracted shifters improve the performance of the polarity classification, (2) the proposed data mining methods outperform other implemented methods in shifter identification, and (3) the proposed semantic based machine learning method has the best efficiency among all implemented methods in polarity classification.
The Spoken Wikipedia Corpus collection: Harvesting, alignment and an application to hyperlistening
By Baumann, Timo; Köhn, Arne; Hennig, Felix
Spoken corpora are important for speech research, but are expensive to create and do not necessarily reflect (read or spontaneous) speech 'in the wild'. We report on our conversion of the preexisting and freely available Spoken Wikipedia into a speech resource. The Spoken Wikipedia project unites volunteer readers of Wikipedia articles. There are initiatives to create and sustain Spoken Wikipedia versions in many languages and hence the available data grows over time. Thousands of spoken articles are available to users who prefer a spoken over the written version. We turn these semi-structured collections into structured and time-aligned corpora, keeping the exact correspondence with the original hypertext as well as all available metadata. Thus, we make the Spoken Wikipedia accessible for sustainable research. We present our open-source software pipeline that downloads, extracts, normalizes and text–speech aligns the Spoken Wikipedia. Additional language versions can be exploited by adapting configuration files or extending the software if necessary for language peculiarities. We also present and analyze the resulting corpora for German, English, and Dutch, which presently total 1005 h and grow at an estimated 87 h per year. The corpora, together with our software, are available via http://islrn.org/resources/684-927-624-257-3/ . As a prototype usage of the time-aligned corpus, we describe an experiment about the preferred modalities for interacting with information-rich read-out hypertext. We find alignments to help improve user experience and factual information access by enabling targeted interaction.
Special issue: selected papers from LREC 2016
By Ide, Nancy; Calzolari, Nicoletta
Multi-modal indicators for estimating perceived cognitive load in post-editing of machine translation
Machine Translation (2019-06-15) 33: 91-115 , June 15, 2019
By Herbig, Nico; Pal, Santanu; Vela, Mihaela; Krüger, Antonio; Genabith, Josef
In this paper, we develop a model that uses a wide range of physiological and behavioral sensor data to estimate perceived cognitive load (CL) during post-editing (PE) of machine translated (MT) text. By predicting the subjectively reported perceived CL, we aim to quantify the extent of demands placed on the mental resources available during PE. This could for example be used to better capture the usefulness of MT proposals for PE, including the mental effort required, in contrast to the mere closeness to a reference perspective that current MT evaluation focuses on. We compare the effectiveness of our physiological and behavioral features individually and in combination with each other and with the more traditional text and time features relevant to the task. Many of the physiological and behavioral features have not previously been applied to PE. Based on the data gathered from ten participants, we show that our multi-modal measurement approach outperforms all baseline measures in terms of predicting the perceived level of CL as measured by a psychological scale. Combinations of eye-, skin-, and heart-based indicators enhance the results over each individual measure. Additionally, adding PE time improves the regression results further. An investigation of correlations between the best performing features, including sensor features previously unexplored in PE, and the corresponding subjective ratings indicates that the multi-modal approach takes advantage of several weakly to moderately correlated features to combine them into a stronger model.
Post-editing neural machine translation versus phrase-based machine translation for English–Chinese
Machine Translation (2019-06-15) 33: 9-29 , June 15, 2019
By Jia, Yanfang; Carl, Michael; Wang, Xiangling
This paper aims to shed light on the post-editing process of the recently-introduced neural machine translation (NMT) paradigm. Using simple and more complex texts, we first evaluate the output quality from English to Chinese phrase-based statistical (PBSMT) and NMT systems. Nine raters assess the MT quality in terms of fluency and accuracy and find that NMT produces higher-rated translations than PBSMT for both texts. Then we analyze the effort expended by 68 student translators during HT and when post-editing NMT and PBSMT output. Our measures of post-editing effort are all positively correlated for both NMT and PBSMT post-editing. Our findings suggest that although post-editing output from NMT is not always significantly faster than post-editing PBSMT, it significantly reduces the technical and cognitive effort. We also find that, in contrast to HT, post-editing effort is not necessarily correlated with source text complexity.
The Systematic Adaptation of Violence Contexts in the ISIS Discourse: A Contrastive Corpus-Based Study
Corpus Pragmatics (2019-06-15) 3: 173-203 , June 15, 2019
By Abdelzaher, Esra' Moustafa
Categorizing an act as 'violence', 'resistance', 'defense' or 'punishment' depends on the context within which the act occurs. Concerned with the context of violence, this study adopts a pragmatic approach to focus on the techniques ISIS uses to project participants involved in the context of violence, in corpora representing the Arabic and English ISIS discourses in 2014 and 2015. FrameNet is employed to identify the typical participants in the violence contexts/frames. Results show that the projection of participants in violent events systematically varies according to the addressed society ISIS targets. ISIS depends on priming historical conflicts to urge hostile encounter against socio-linguistically and historically diverse enemies. Addressing the Arab world in which the Shia/Sunni conflict is central, ISIS wages the war mainly against Safawi armies. However, the ISIS English discourse basically activates the Crusader/Muslim war. Overall, four expandable designations are constantly used to label any ISIS enemy: 'crusader' for Christian enemies; 'Murtadd' for Sunni Muslim opponents; 'Safawi' for Shia adversaries and 'Kuffar/disbelievers' for religiously undefined groups.
Evaluation of the impact of controlled language on neural machine translation compared to other MT architectures
By Marzouk, Shaimaa; Hansen-Schirra, Silvia
Many studies have shown that the application of controlled languages (CL) is an effective pre-editing technique to improve machine translation (MT) output. In this paper, we investigate whether this also holds true for neural machine translation (NMT). We compare the impact of applying nine CL rules on the quality of NMT output as opposed to that of rule-based, statistical, and hybrid MT by applying three methods: error annotation, human evaluation, and automatic evaluation. The analyzed data is a German corpus-based test suite of technical texts that have been translated into English by five MT systems (a neural, a rule-based, a statistical, and two hybrid MT systems). The comparison is conducted in terms of several quantitative parameters (number of errors, error types, quality ratings, and automatic evaluation metrics scores). The results show that CL rules positively affect rule-based, statistical, and hybrid MT systems. However, CL does not improve the results of the NMT system. The output of the neural system is mostly error-free both before and after CL application and has the highest quality in both scenarios among the analyzed MT systems showing a decrease in quality after applying the CL rules. The qualitative discussion of the NMT output sheds light on the problems that CL causes for this kind of MT architecture.
Studia Logica (2019-06-15) 107: 581-582 , June 15, 2019
By Freytes, H.
Editors' foreword to the special issue on human factors in neural machine translation
Machine Translation (2019-06-15) 33: 1-7 , June 15, 2019
By Castilho, Sheila; Gaspari, Federico; Moorkens, Joss; Popović, Maja; Toral, Antonio
Jan von Plato, Saved from the Cellar. Gerhard Gentzen's Shorthand Notes on Logic and the Foundations of Mathematics
By Rezuş, Adrian
By Merlussi, Pedro
Introduction to the Special Issue on Logic and the Foundations of Game and Decision Theory (LOFT12)
By Bonanno, Giacomo; Hoek, Wiebe; Perea, Andrés
The Use of the English Progressive Form in Discourse: An Analysis of a Corpus of Interview Data
By Frazier, Stefan; Koo, Hahn
The form and meaning of the English progressive have received a great deal of attention in linguistic and corpus-linguistic literature. The pragmatic/discourse uses of the progressive, however, are much less comprehensively understood from a corpus-linguistic perspective, largely because of the limitations of machine reading of discourse contexts. This paper uses a corpus-linguistic approach to categorizing and counting the distributions of some pragmatic functions for the English present and past progressive, replicating some, but challenging other, results of prior analyses of the progressive. Specifically, we address the use of the progressive in prompt/response sequences, in referring to co-text and its extent of agreement or disagreement with that co-text, in providing narrative background, and in interpreting prior discourse. We also provide several semantic features summarized through the corpus. The analyzed data comes from a corpus of 12.2 million words of spoken data—interviews from U.S. television and radio programs—and are selected via a systematic method. We finally discuss pedagogical and other implications.
The Monodic Fragment of Propositional Term Modal Logic
By Padmanabha, Anantha; Ramanujam, R.
We study term modal logics, where modalities can be indexed by variables that can be quantified over. We suggest that these logics are appropriate for reasoning about systems of unboundedly many reasoners and define a notion of bisimulation which preserves the propositional fragment of term modal logics. We also show that the propositional fragment is already undecidable but that its monodic fragment (formulas using only one free variable in the scope of a modality) is decidable, and expressive enough to include interesting assertions.
Post-editing neural machine translation versus translation memory segments
By Sánchez-Gijón, Pilar; Moorkens, Joss; Way, Andy
The use of neural machine translation (NMT) in a professional scenario implies a number of challenges despite growing evidence that, in language combinations such as English to Spanish, NMT output quality has already outperformed statistical machine translation in terms of automatic metric scores. This article presents the result of an empirical test that aims to shed light on the differences between NMT post-editing and translation with the aid of a translation memory (TM). The results show that NMT post-editing involves less editing than TM segments, but this editing appears to take more time, with the consequence that NMT post-editing does not seem to improve productivity as may have been expected. This might be due to the fact that NMT segments show a higher variability in terms of quality and time invested in post-editing than TM segments that are 'more similar' on average. Finally, results show that translators who perceive that NMT boosts their productivity actually performed faster than those who perceive that NMT slows them down.
Logics for Moderate Belief-Disagreement Between Agents
By Chen, Jia; Pan, Tianqun
A moderate belief-disagreement between agents on proposition p means that one agent believes p and the other agent does not. This paper presents two logical systems, $$\mathbf {MD}$$ and $$\mathbf {MD}^D$$ , that describe moderate belief-disagreement, and shows, using possible worlds semantics, that $$\mathbf {MD}$$ is sound and complete with respect to arbitrary frames, and $$\mathbf {MD}^D$$ is sound and complete with respect to serial frames. Syntactically, the logics are monomodal, but two doxastic accessibility relations are involved in their semantics. The notion of moderate belief-disagreement, which is in accordance with the understanding of belief-disagreement in everyday life, is an epistemic one related to multiagent situations, and $$\mathbf {MD}$$ and $$\mathbf {MD}^D$$ are two epistemic logics.
The Dynamics of Epistemic Attitudes in Resource-Bounded Agents
By Balbiani, Philippe; Fernández-Duque, David; Lorini, Emiliano
The paper presents a new logic for reasoning about the formation of beliefs through perception or through inference in non-omniscient resource-bounded agents. The logic distinguishes the concept of explicit belief from the concept of background knowledge. This distinction is reflected in its formal semantics and axiomatics: (i) we use a non-standard semantics putting together a neighborhood semantics for explicit beliefs and relational semantics for background knowledge, and (ii) we have specific axioms in the logic highlighting the relationship between the two concepts. Mental operations of perceptive type and inferential type, having effects on epistemic states of agents, are primitives in the object language of the logic. At the semantic level, they are modelled as special kinds of model-update operations, in the style of dynamic epistemic logic. Results about axiomatization, decidability and complexity for the logic are given in the paper.
Dynamic Epistemic Logics of Diffusion and Prediction in Social Networks
By Baltag, Alexandru; Christoff, Zoé; Rendsvig, Rasmus K.; Smets, Sonja
We take a logical approach to threshold models, used to study the diffusion of opinions, new technologies, infections, or behaviors in social networks. Threshold models consist of a network graph of agents connected by a social relationship and a threshold value which regulates the diffusion process. Agents adopt a new behavior/product/opinion when the proportion of their neighbors who have already adopted it meets the threshold. Under this diffusion policy, threshold models develop dynamically towards a guaranteed fixed point. We construct a minimal dynamic propositional logic to describe the threshold dynamics and show that the logic is sound and complete. We then extend this framework with an epistemic dimension and investigate how information about more distant neighbors' behavior allows agents to anticipate changes in behavior of their closer neighbors. Overall, our logical formalism captures the interplay between the epistemic and social dimensions in social networks.
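A minimal sketch of the diffusion dynamics described above is given below, assuming an undirected network stored as an adjacency dict; the toy graph, seed set and threshold are invented for illustration.

```python
# Minimal sketch of threshold-model diffusion: agents adopt once the proportion
# of adopting neighbours meets the threshold; adoption is never revoked, so the
# synchronous update is guaranteed to reach a fixed point.
def diffuse(neighbours, seeds, threshold=0.5):
    adopted = set(seeds)
    while True:
        new = {agent for agent, ns in neighbours.items()
               if agent not in adopted and ns
               and sum(n in adopted for n in ns) / len(ns) >= threshold}
        if not new:
            return adopted  # fixed point reached
        adopted |= new

# Toy network: adoption spreads along the chain a-b-c-d.
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(sorted(diffuse(graph, seeds={"a"}, threshold=0.5)))  # ['a', 'b', 'c', 'd']
```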
Word Order, Heaviness, and Animacy
By Imamura, Satoshi
In experiments, previous studies have observed that the choice of word order in Japanese is influenced not only by heaviness but also by animacy. To be more concrete, the heavy constituent tends to precede the light one and the animate referent tends to come before the inanimate one. Yet, the present corpus analysis demonstrates that word order changes are not motivated by animacy. This discrepancy can be accounted for by supposing that animacy has an impact on word order only when the effects of other factors are neutralized. It is possible that the effects of animacy are so weak that they work only in psycholinguistic experiments because other factors are controlled. On the other hand, the effects of animacy are not observed in this corpus study probably because other factors are not controlled in actual examples. Thus, I propose that researchers should utilize both naturalistic and experimental evidence in order to draw reasonable conclusions about the grammatical aspects of language.
"Believe My Word Dear Father that You Can't Pick Up Money Here as Quick as the People at Home Thinks It": Exploring Migration Experiences in Irish Emigrants' Letters
By Avila-Ledesma, Nancy E.
This paper aims to investigate the conceptualisation of migration experiences in the personal correspondence exchanged between Irish emigrants to the United States, Australia and New Zealand and their significant others in Ireland between 1840 and 1930. In doing so, the study proposes a corpus-pragmatic examination of the words land and situation in order to elucidate the various ways in which the concepts of migration, enhancement of social standing and belonging are linguistically and pragmatically constructed in epistolary discourse. Using the Word Sketch function of the Sketch Engine corpus tool, the quantitative analysis involves examining the collocational behaviour of land and situation in both datasets. Secondly, a qualitative examination of the linguistic patterns is conducted in order to compare and contrast migration experiences in Australia/New Zealand and the USA and ascertain the extent to which specific migration experiences influenced Irish emigrants' emotional attitudes towards departure and life abroad. The collocational analyses of land(s) and situation(s) highlight two main themes in the Australian letters: (1) settlerism and the search for restoration of social status and (2) the role of letter writing as a means for sense-making. In contrast, the USA data unveils a contradictory and rather negative image of America that couples with an acute homesickness. Finally, the study discusses the pragmatic functions homesickness may have served to encourage or discourage emigration in rural Ireland.
The DialogBank: dialogues with interoperable annotations
By Bunt, Harry; Petukhova, Volha; Malchanau, Andrei; Fang, Alex; Wijnhoven, Kars
This paper presents the DialogBank, a new language resource consisting of dialogues with gold standard annotations according to the ISO 24617-2 standard. Some of these dialogues have been taken from existing corpora and have been re-annotated, offering the possibility to compare annotations according to different schemes; others have been newly annotated directly according to the standard. The ISO standard annotations in the DialogBank make use of three alternative representation formats, which are shown to be interoperable. The (re-)annotation brought certain deficiencies and limitations of the ISO standard to light, which call for considering possible revisions and extensions, and for exploring the possible integration of dialogue act annotations with other semantic annotations.
Current Issues in Intercultural Pragmatics
By Lee, Cynthia
Extending cluster-based ensemble learning through synthetic population generation for modeling disparities in health insurance coverage across Missouri
Journal of Computational Social Science (2019-06-10): 1-21 , June 10, 2019
By Mueller, Erik D.; Sandoval, J. S. Onésimo; Mudigonda, Srikanth P.; Elliott, Michael
In a previous study, Mueller et al. (ISPRS Int J Geo-Inf 8(1):13, 2019), presented a machine learning ensemble algorithm using K-means clustering as a preprocessing technique to increase predictive modeling performance. As a follow-on research effort, this study seeks to test the previously introduced algorithm's stability and sensitivity, as well as present an innovative method for the extraction of localized and state-level variable importance information from the original dataset, using a nontraditional method known as synthetic population generation. Through iterative synthetic population generation with similar underlying statistical properties to the original dataset and exploration of the distribution of health insurance coverage across the state of Missouri, we identified variables that contributed to decisions for clustering, variables that contributed most significantly to modeling health insurance distribution status throughout the state, and variables that were most influential in optimizing model performance, having the greatest impact on change-in-mean-squared-error (MSE) measurements. Results suggest that cluster-based preprocessing approaches for machine learning algorithms can result in significantly increased performance, and also demonstrate how synthetic populations can be used for performance measurement to identify and test the extent to which variable statistical properties within a dataset can vary without resulting in significant performance loss.
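The cluster-then-model idea can be sketched as follows, assuming scikit-learn; the regressor choice, cluster count and synthetic data are illustrative stand-ins rather than the authors' exact configuration.

```python
# Minimal sketch of K-means preprocessing for an ensemble of regressors:
# partition the data, fit one model per cluster, and route predictions
# through the cluster assignment.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

class ClusteredEnsemble:
    def __init__(self, k=3):
        self.kmeans = KMeans(n_clusters=k, n_init=10, random_state=0)
        self.models = {}

    def fit(self, X, y):
        labels = self.kmeans.fit_predict(X)
        for c in np.unique(labels):
            self.models[c] = RandomForestRegressor(random_state=0).fit(
                X[labels == c], y[labels == c])
        return self

    def predict(self, X):
        labels = self.kmeans.predict(X)
        return np.array([self.models[c].predict(row.reshape(1, -1))[0]
                         for c, row in zip(labels, X)])

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
y = np.sin(X[:, 0]) + X[:, 1] ** 2
model = ClusteredEnsemble(k=3).fit(X[:200], y[:200])
print("held-out MSE:", round(mean_squared_error(y[200:], model.predict(X[200:])), 3))
```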
Digitization of data for a historical medical dictionary
By Norri, Juhani; Junkkari, Marko; Poranen, Timo
What are known as specialized or specialist dictionaries are much more than lists of words and their definitions with occasional comments on things such as synonymy and homonymy. That is to say, a particular specialist term may be associated with many other concepts, including quotations, different senses, etymological categories, semantic categories, superordinate and subordinate terms in the terminological hierarchy, spelling variants, and references to background sources discussing the exact meaning and application of the term. The various concepts, in turn, form networks of mutual links, which makes the structure of the background concepts demanding to model when designing a database structure for this type of dictionary. The Dictionary of medical vocabulary in English, 1375–1550 is a specialized historical dictionary that covers the vast medical lexicon of the centuries examined. It comprises over 12,000 terms, each of them associated with a host of background concepts. Compiling the dictionary took over 15 years. The process started with an analysis of hand-written manuscripts and early printed books from different sources and ended with the electronic dictionary described in the present paper. Over these years, the conceptual structure, database schema, and requirements for essential use cases were iteratively developed. In our paper, we introduce the conceptual structure and database schema modelled for implementing an electronic dictionary that involves different use cases such as term insertion and linking a term to related concepts. The achieved conceptual model, database structure, and use cases provide a general framework for reference-oriented specialized dictionaries, including ones with a historical orientation.
Historical corpora meet the digital humanities: the Jerusalem Corpus of Emergent Modern Hebrew
By Rubinstein, Aynat
The paper describes the creation of the first open access multi-genre historical corpus of Emergent Modern Hebrew, made possible by implementation of digital humanities methods in the process of corpus curation, encoding, and dissemination. Corpus contents originate in the Ben-Yehuda Project, an open access repository of Hebrew literature online, and in digital images curated from the collections of the National Library of Israel, a selection of which have been transcribed through a dedicated crowdsourcing task that feeds back into the library's online catalog. Texts in the corpus are encoded following best practices in the digital humanities, including markup of metadata that enables time-sensitive research, linguistic and other, of the corpus. Evaluation of morphological analysis based on Modern Hebrew language models is shown to distinguish between genres in the historical variety, highlighting the importance of ephemeral materials for linguistic research and for potential collaboration with libraries and cultural institutions in the process of corpus creation. We demonstrate the use of the corpus in diachronic linguistic research and suggest ways in which the association it provides between digital images and texts can be used to support automatic language processing and to enhance resources in the digital humanities.
Spanish corpora for sentiment analysis: a survey
Language Resources and Evaluation (2019-05-31): 1-38 , May 31, 2019
By Navas-Loro, María; Rodríguez-Doncel, Víctor
Corpora play an important role when training machine learning systems for sentiment analysis. However, Spanish is underrepresented in these corpora, as most primarily include English texts. This paper describes 20 Spanish-language text corpora—collected to support different tasks related to sentiment analysis, ranging from polarity to emotion categorization. We present a brand-new framework for the characterization of corpora. This includes a number of features to help analyze resources at both corpus level and document level. This survey—besides depicting the overall landscape of corpora in Spanish—supports sentiment analysis practitioners with the task of selecting the most suitable resources.
Pairwise similarity of jihadist groups in target and weapon transitions
Journal of Computational Social Science (2019-05-27): 1-26 , May 27, 2019
By Campedelli, Gian Maria; Bartulovic, Mihovil; Carley, Kathleen M.
Tactical decisions made by jihadist groups can have extremely negative impacts on societies. Studying the characteristics of their attacks over time is therefore crucial to extract relevant knowledge on their operational choices. In light of this, the present study employs transition networks to construct trails and analyze the behavioral patterns of the world's five most active jihadist groups using open access data on terror attacks from 2001 to 2016. Within this frame, we propose Normalized Transition Similarity (NTS), a coefficient that captures groups' pairwise similarity in terms of transitions between different temporally ordered sequences of states. For each group, these states respectively map attacked targets, employed weapons, and targets and weapons combined together with respect to the entire sequence of attacks. Analyses show a degree of stability of results among a number of pairs of groups across all trails. With this regard, Al Qaeda and Al Shabaab exhibit the highest NTS scores, while the Taliban and Al Qaeda prove to be the most different groups overall. Finally, potential policy implications and future work directions are also discussed.
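The abstract does not give the formula for NTS, so the sketch below is only a stand-in: it builds first-order transition matrices from two temporally ordered target sequences and compares them with cosine similarity over the flattened matrices; the example sequences are invented.

```python
# Illustrative stand-in (not the NTS definition): first-order transition
# matrices estimated from two ordered sequences of states, compared with
# cosine similarity.
import numpy as np

def transition_matrix(sequence, states):
    index = {s: i for i, s in enumerate(states)}
    M = np.zeros((len(states), len(states)))
    for a, b in zip(sequence, sequence[1:]):
        M[index[a], index[b]] += 1
    row_sums = M.sum(axis=1, keepdims=True)
    return np.divide(M, row_sums, out=np.zeros_like(M), where=row_sums > 0)

def similarity(seq1, seq2):
    states = sorted(set(seq1) | set(seq2))
    v1 = transition_matrix(seq1, states).ravel()
    v2 = transition_matrix(seq2, states).ravel()
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

# Invented sequences of attacked target types for two hypothetical groups.
group_a = ["police", "civilians", "police", "military", "civilians"]
group_b = ["police", "civilians", "police", "civilians", "military"]
print(round(similarity(group_a, group_b), 3))
```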
The role of space and place in social media communication: two case studies of policy perspectives
By Sharag-Eldin, Adiyana; Ye, Xinyue; Spitzberg, Brian; Tsou, Ming-Hsiang
The study of how space and place intersect with social policy is still nascent but developing rapidly. As two exemplars of the potential that such research offers, the objective of this review is to integrate the research collected during recent studies of fracking and the death penalty. The primary disciplinary value of this review is to demonstrate the spatial value of communication and social media studies. This study adopts a communication-based theoretical framework as a lens to guide methodological choices in analyzing public perceptions. The social media application from Twitter is used as the engine to capture opinions of social media users engaging public controversies. This review locates connections in the literature between geographers/spatial scientists and communication media theorists.
Rosser Provability and Normal Modal Logics
Studia Logica (2019-05-16): 1-21 , May 16, 2019
By Kurahashi, Taishi
In this paper, we investigate Rosser provability predicates whose provability logics are normal modal logics. First, we prove that there exists a Rosser provability predicate whose provability logic is exactly the normal modal logic $$\mathsf{KD}$$ . Secondly, we introduce a new normal modal logic $$\mathsf{KDR}$$ which is a proper extension of $$\mathsf{KD}$$ , and prove that there exists a Rosser provability predicate whose provability logic includes $$\mathsf{KDR}$$ .
Completeness and Cut-Elimination for First-Order Ideal Paraconsistent Four-Valued Logic
By Kamide, Norihiro; Zohar, Yoni
In this study, we prove the completeness and cut-elimination theorems for a first-order extension F4CC of Arieli, Avron, and Zamansky's ideal paraconsistent four-valued logic known as 4CC. These theorems are proved using Schütte's method, which can simultaneously prove completeness and cut-elimination.
Automatic detection and correction of discourse marker errors made by Spanish native speakers in Portuguese academic writing
By Sepúlveda-Torres, Lianet; Sanches Duran, Magali; Aluísio, Sandra Maria
Discourse markers are words and expressions (such as: firstly, then, for example, because, as a result, likewise, in comparison, in contrast) that explicitly state the relational structure of the information in the text, i.e. signalling a sequential relationship between the current message and the previous discourse. Using these markers improves the cohesion and coherence of texts, facilitating reading comprehension. Although often included in tools that support the rhetoric structuring of texts, discourse markers have hardly been explored in writing support tools for learners of a second language. However, learners of a second language, including those at advanced levels, have trouble producing these lexical items, frequently replacing them with items from their native language or with literal translations of items in their own language, which often do not result in proper lexical items in the second language. In addition, students learn a single marker per function and use it repeatedly, producing monotonous texts. With the aim of contributing to reducing these difficulties, this paper presents a lexicon that will be used to support the task of automatically detecting and correcting discourse marker errors. Several heuristics have been evaluated to generate different types of errors. Automatic translation methods were used to semi-automatically compile the lexicon used in these heuristics. Similarity measures were also combined with these heuristics to correct discourse marker errors. The evaluated methods proved to be suitable for the task of identifying some types of discourse marker errors and can potentially identify many others, as long as new lexical inputs are incorporated into them.
Dialogue analysis: a case study on the New Testament
By Yeung, Chak Yan; Lee, John
There has been much research on the nature of dialogues in the Bible. While the research literature abounds with qualitative analyses of these dialogues, these are rarely corroborated by statistics from the entire text. In this article, we leverage a corpus of annotated direct speech in the New Testament, as well as recent advances in automatic speaker and listener identification, to present a quantitative study on dialogue structure in the Gospels. The contributions of this article are three-fold. First, we quantify a variety of features that are widely used in characterizing dialogue structure—including dialogue length, turn length, and the initiation and conclusion of a dialogue—and show how they distinguish between different Gospels. Second, we compare our statistics with qualitative comments in the New Testament research literature, and extend them to cover the entirety of the Gospels. Most significantly, we gauge the feasibility of applying our approach to other literary works, by measuring the number of errors that would be introduced by automatically identified dialogues, speakers and listeners.
Truthmakers and Normative Conflicts
By Anglberger, Albert; Korbmacher, Johannes
By building on work by Kit Fine, we develop a sound and complete truthmaker semantics for Lou Goble's conflict tolerant deontic logic $$\mathbf {BDL}$$ .
Gender and Discipline: Intensifier Variation in Academic Lectures
Corpus Pragmatics (2019-05-02): 1-14, May 02, 2019
By Liu, Chen-Yu
Many studies have explored differences in the use of intensifiers by people of different genders, but few have focused on additional compounding variables that may affect gendered intensifier use. This study thus explored the effects of academics' gender and subject area on intensifier use in their lectures, as well as the interactions between these two variables. Significant differences were found in the use of intensifiers between genders and between academic disciplines, with male lecturers using significantly more intensifiers than female ones did, and significantly fewer intensifiers occurring in the hard sciences than the soft ones. Of the two variables, discipline was more influential on intensifier variation than gender was. The interaction data, meanwhile, indicated that although male lecturers in both disciplinary groups used intensifiers frequently, their language behaviors were affected far more by the norms of the lecture genre than by discipline. The female lecturers' intensifier usage, on the other hand, was substantially influenced by both gender and discipline. Taken as a whole, these findings reveal that, although gender is undeniably an important factor in it, intensifier variation is more likely to be explicated through research on the interactive effects of gender and other variables.
Restoring Arabic vowels through omission-tolerant dictionary lookup
Language Resources and Evaluation (2019-04-25): 1-65, April 25, 2019
By Neme, Alexis Amid; Paumier, Sébastien
Vowels in Arabic are optional orthographic symbols written as diacritics above or below letters. In Arabic texts, typically more than 97 percent of written words do not explicitly show any of the vowels they contain; that is to say, depending on the author, genre and field, less than 3 percent of words include any explicit vowel. Although numerous studies have been published on the issue of restoring the omitted vowels in speech technologies, little attention has been given to this problem in papers dedicated to written Arabic technologies. In this research, we present Arabic-Unitex, an Arabic Language Resource, with emphasis on vowel representation and encoding. Specifically, we present two dozen rules formalizing a detailed description of vowel omission in written text. They are typographical rules integrated into large-coverage resources for morphological annotation. For restoring vowels, our resources are capable of identifying words in which the vowels are not shown, as well as words in which the vowels are partially or fully included. By taking into account these rules, our resources are able to compute and restore for each word form a list of compatible fully vowelized candidates through omission-tolerant dictionary lookup. In our previous studies, we have proposed a straightforward encoding of taxonomy for verbs (Neme in Proceedings of the international workshop on lexical resources (WoLeR) at ESSLLI, 2011) and broken plurals (Neme and Laporte in Lang Sci, 2013, http://dx.doi.org/10.1016/j.langsci.2013.06.002). While traditional morphology is based on derivational rules, our description is based on inflectional ones. The breakthrough lies in the reversal of the traditional root-and-pattern Semitic model into pattern-and-root, giving precedence to patterns over roots. The lexicon is built and updated manually and contains 76,000 fully vowelized lemmas. It is then inflected by means of finite-state transducers (FSTs), generating 6 million forms. The coverage of these inflected forms is extended by formalized grammars, which accurately describe agglutinations around a core verb, noun, adjective or preposition. A laptop needs one minute to generate the 6 million inflected forms in a 340-MB flat file, which is compressed in 2 min into 11 MB for fast retrieval. Our program performs the analysis of 5000 words/second for running text (20 pages/second). Based on these comprehensive linguistic resources, we created a spell checker that detects any invalid/misplaced vowel in a fully or partially vowelized form. Finally, our resources provide a lexical coverage of more than 99 percent of the words used in popular newspapers, and restore vowels in words (out of context) simply and efficiently.
Efficient document alignment across scenarios
Machine Translation (2019-04-24): 1-33, April 24, 2019
By Azpeitia, Andoni; Etchegoyhen, Thierry
We present and evaluate an approach to document alignment meant for efficiency and portability, as it relies on automatically extracted lexical translations and simple set-theoretic operations for the computation of document-level similarity. We compare our approach to the state of the art on a variety of alignment scenarios, showing that it outperforms alternative document-alignment methods in the vast majority of cases, on both parallel and comparable corpora. We also explore several forms of simple component optimisation to evaluate the potential for improvement of the core method, and describe several successful optimisation paths that lead to significant improvements over strong baselines. The proposed approach constitutes an effective and easy to deploy method to perform accurate document alignment across scenarios, with the potential to improve the creation of parallel corpora.
Inner-Model Reflection Principles
Studia Logica (2019-04-20): 1-23, April 20, 2019
By Barton, Neil; Caicedo, Andrés Eduardo; Fuchs, Gunter; Hamkins, Joel David; Reitz, Jonas; Schindler, Ralf
We introduce and consider the inner-model reflection principle, which asserts that whenever a statement $$\varphi (a)$$ in the first-order language of set theory is true in the set-theoretic universe V, then it is also true in a proper inner model $$W\subsetneq V$$ . A stronger principle, the ground-model reflection principle, asserts that any such $$\varphi (a)$$ true in V is also true in some non-trivial ground model of the universe with respect to set forcing. These principles each express a form of width reflection in contrast to the usual height reflection of the Lévy–Montague reflection theorem. They are each equiconsistent with ZFC and indeed $$\Pi _2$$ -conservative over ZFC, being forceable by class forcing while preserving any desired rank-initial segment of the universe. Furthermore, the inner-model reflection principle is a consequence of the existence of sufficient large cardinals, and lightface formulations of the reflection principles follow from the maximality principle MP and from the inner-model hypothesis IMH. We also consider some questions concerning the expressibility of the principles.
Building the first comprehensive machine-readable Turkish sign language resource: methods, challenges and solutions
By Eryiğit, Gülşen; Eryiğit, Cihat; Karabüklü, Serpil; Kelepir, Meltem; Özkul, Aslı; Pamay, Tuğba; Torunoğlu-Selamet, Dilara; Köse, Hatice
This article describes the procedures employed during the development of the first comprehensive machine-readable Turkish Sign Language (TiD) resource: a bilingual lexical database and a parallel corpus between Turkish and TiD. In addition to sign-language-specific annotations (such as non-manual markers, classifiers and buoys) following the recently introduced TiD knowledge representation (Eryiğit et al. 2016), the parallel corpus also contains annotations of dependency relations, which makes it the first parallel treebank between a sign language and an auditory-vocal language.
A hybrid approach for paraphrase identification based on knowledge-enriched semantic heuristics
By Mohamed, Muhidin; Oussalah, Mourad
In this paper, we propose a hybrid approach for sentence paraphrase identification. The proposal addresses the problem of evaluating sentence-to-sentence semantic similarity when the sentences contain a set of named-entities. The essence of the proposal is to distinguish the computation of the semantic similarity of named-entity tokens from the rest of the sentence text. More specifically, this is based on the integration of word semantic similarity derived from WordNet taxonomic relations, and named-entity semantic relatedness inferred from Wikipedia entity co-occurrences and underpinned by Normalized Google Distance. In addition, the WordNet similarity measure is enriched with word part-of-speech (PoS) conversion aided with a Categorial Variation database (CatVar), which enhances the lexico-semantics of words. We validated our hybrid approach using two different datasets; Microsoft Research Paraphrase Corpus (MSRPC) and TREC-9 Question Variants. In our empirical evaluation, we showed that our system outperforms baselines and most of the related state-of-the-art systems for paraphrase detection. We also conducted a misidentification analysis to disclose the primary sources of our system errors.
Exploiting languages proximity for part-of-speech tagging of three French regional languages
By Magistry, Pierre; Ligozat, Anne-Laure; Rosset, Sophie
This paper presents experiments in part-of-speech tagging of low-resource languages. It addresses the case when no labeled data in the targeted language and no parallel corpus are available. We only rely on the proximity of the targeted language to a better-resourced language. We conduct experiments on three French regional languages. We try to exploit this proximity with two main strategies: delexicalization and transposition. The general idea is to learn a model on the (better-resourced) source language, which will then be applied to the (regional) target language. Delexicalization is used to deal with the difference in vocabulary, by creating abstract representations of the data. Transposition consists in modifying the target corpus to be able to use the source models. We compare several methods and propose different strategies to combine them and improve the state-of-the-art of part-of-speech tagging in this difficult scenario.
On Principal Congruences in Distributive Lattices with a Commutative Monoidal Operation and an Implication
Studia Logica (2019-04-15) 107: 351-374, April 15, 2019
By Jansana, Ramon; San Martín, Hernán Javier
In this paper we introduce and study a variety of algebras that properly includes integral distributive commutative residuated lattices and weak Heyting algebras. Our main goal is to give a characterization of the principal congruences in this variety. We apply this description in order to study compatible functions.
Analyticity, Balance and Non-admissibility of Cut in Stoic Logic
By Bobzien, Susanne; Dyckhoff, Roy
This paper shows that, for the Hertz–Gentzen Systems of 1933 (without Thinning), extended by a classical rule T1 (from the Stoics) and using certain axioms (also from the Stoics), all derivations are analytic: every cut formula occurs as a subformula in the cut's conclusion. Since the Stoic cut rules are instances of Gentzen's Cut rule of 1933, from this we infer the decidability of the propositional logic of the Stoics. We infer the correctness for this logic of a "relevance criterion" and of two "balance criteria", and hence (in contrast to one of Gentzen's 1933 results) that a particular derivable sequent has no derivation that is "normal" in the sense that the first premiss of each cut is cut-free. We also infer that Cut is not admissible in the Stoic system, based on the standard Stoic axioms, the T1 rule and the instances of Cut with just two antecedent formulae in the first premiss.
A Categorical Equivalence for Stonean Residuated Lattices
By Busaniche, Manuela; Cignoli, Roberto; Marcos, Miguel Andrés
We follow the ideas given by Chen and Grätzer to represent Stone algebras and adapt them for the case of Stonean residuated lattices. Given a Stonean residuated lattice, we consider the triple formed by its Boolean skeleton, its algebra of dense elements and a connecting map. We define a category whose objects are these triples and suitably defined morphisms, and prove that we have a categorical equivalence between this category and that of Stonean residuated lattices. We compare our results with other works and show some applications of the equivalence.
A Duality for Involutive Bisemilattices
By Bonzio, Stefano; Loi, Andrea; Peruzzi, Luisa
We establish a duality between the category of involutive bisemilattices and the category of semilattice inverse systems of Stone spaces, using Stone duality from one side and the representation of involutive bisemilattices as Płonka sum of Boolean algebras, from the other. Furthermore, we show that the dual space of an involutive bisemilattice can be viewed as a GR space with involution, a generalization of the spaces introduced by Gierz and Romanowska equipped with an involution as additional operation.
Jody Azzouni, The Rule-Following Paradox and Its Implications for Metaphysics, Springer (Synthese Library Series No. 382), 2017, pp. viii + 124, ISBN: 978-3-319-49060-1 (Hardcover) $99.99; (Softcover) $89.99; (eBook) $69.99
By Colomina, Juan J.
Intermediate Logics Admitting a Structural Hypersequent Calculus
By Lauridsen, Frederik M.
We characterise the intermediate logics which admit a cut-free hypersequent calculus of the form $$\mathbf {HLJ} + \mathscr {R}$$ , where $$\mathbf {HLJ}$$ is the hypersequent counterpart of the sequent calculus $$\mathbf {LJ}$$ for propositional intuitionistic logic, and $$\mathscr {R}$$ is a set of so-called structural hypersequent rules, i.e., rules not involving any logical connectives. The characterisation of this class of intermediate logics is presented both in terms of the algebraic and the relational semantics for intermediate logics. We discuss various—positive as well as negative—consequences of this characterisation.
A Deterministic Weakening of Belnap–Dunn Logic
By Ma, Minghui; Lin, Yuanlei
A deterministic weakening $$\mathsf {DW}$$ of the Belnap–Dunn four-valued logic $$\mathsf {BD}$$ is introduced to formalize the acceptance and rejection of a proposition at a state in a linearly ordered informational frame with persistent valuations. The logic $$\mathsf {DW}$$ is formalized as a sequent calculus. The completeness and decidability of $$\mathsf {DW}$$ with respect to relational semantics are shown in terms of normal forms. From an algebraic perspective, the class of all algebras for $$\mathsf {DW}$$ is described, and found to be a subvariety of Berman's variety $$\mathcal {K}_{1,2}$$ . Every linearly ordered frame is logically equivalent to its dual algebra. It is proved that $$\mathsf {DW}$$ is the logic of a nine-element distributive lattice with a negation. Moreover, $$\mathsf {BD}$$ is embedded into $$\mathsf {DW}$$ by Glivenko's double-negation translation.
Rasiowa–Sikorski Deduction Systems with the Rule of Cut: A Case Study
By Leszczyńska-Jasion, Dorota; Ignaszak, Mateusz; Chlebowski, Szymon
This paper presents Rasiowa–Sikorski deduction systems (R–S systems) for logics $$\mathsf {CPL}$$ , $$\mathsf {CLuN}$$ , $$\mathsf {CLuNs}$$ and $$\mathsf {mbC}$$ . For each of the logics two systems are developed: an R–S system that can be supplemented with admissible cut rule, and a $$\mathbf {KE}$$ -version of R–S system in which the non-admissible rule of cut is the only branching rule. The systems are presented in a Smullyan-like uniform notation, extended and adjusted to the aims of this paper. Completeness is proved by the use of abstract refutability properties which are dual to consistency properties used by Fitting. Also the notion of admissibility of a rule in an R–S-system is analysed.
Studying the history of the Arabic language: language technology and a large-scale historical corpus
By Belinkov, Yonatan; Magidow, Alexander; Barrón-Cedeño, Alberto; Shmidman, Avi; Romanov, Maxim
Arabic is a widely-spoken language with a long and rich history, but existing corpora and language technology focus mostly on modern Arabic and its varieties. Therefore, studying the history of the language has so far been mostly limited to manual analyses on a small scale. In this work, we present a large-scale historical corpus of the written Arabic language, spanning 1400 years. We describe our efforts to clean and process this corpus using Arabic NLP tools, including the identification of reused text. We study the history of the Arabic language using a novel automatic periodization algorithm, as well as other techniques. Our findings confirm the established division of written Arabic into Modern Standard and Classical Arabic, and confirm other established periodizations, while suggesting that written Arabic may be divisible into still further periods of development.
Approaching terminological ambiguity in cross-disciplinary communication as a word sense induction task: a pilot study
By Mennes, Julie; Pedersen, Ted; Lefever, Els
Cross-disciplinary communication is often impeded by terminological ambiguity. Hence, cross-disciplinary teams would greatly benefit from using a language technology-based tool that allows for the (at least semi-) automated resolution of ambiguous terms. Although no such tool is readily available, an interesting theoretical outline of one does exist. The main obstacle for the concrete realization of this tool is the current lack of an effective method for the automatic detection of the different meanings of ambiguous terms across different disciplinary jargons. In this paper, we set up a pilot study to experimentally assess whether the word sense induction technique of 'context clustering', as implemented in the software package 'SenseClusters', might be a solution. More specifically, given several sets of sentences coming from a cross-disciplinary corpus containing a specific ambiguous term, we verify whether this technique can classify each sentence in accordance to the meaning of the ambiguous term in that sentence. For the experiments, we first compile a corpus that represents the disciplinary jargons involved in a project on Bone Tissue Engineering. Next, we conduct two series of experiments. The first series focuses on determining appropriate SenseClusters parameter settings using manually selected test data for the ambiguous target terms 'matrix' and 'model'. The second series evaluates the actual performance of SenseClusters using randomly selected test data for an extended set of target terms. We observe that SenseClusters can successfully classify sentences from a cross-disciplinary corpus according to the meaning of the ambiguous term they contain. Hence, we argue that this implementation of context clustering shows potential as a method for the automatic detection of the meanings of ambiguous terms in cross-disciplinary communication.
The Explosion Calculus
By Arndt, Michael
A calculus for classical propositional sequents is introduced that consists of a restricted version of the cut rule and local variants of the logical rules. Employed in the style of proof search, this calculus explodes a given sequent into its elementary structural sequents—the topmost sequents in a derivation thus constructed—which do not contain any logical constants. Some of the properties exhibited by the collection of elementary structural sequents in relation to the sequent they are derived from, uniqueness and unique representation of formula occurrences, will be discussed in detail. Based on these properties it is suggested that a collection of elementary structural sequents constitutes the purely structural representation of the sequent from which it is obtained.
Digitising Swiss German: how to process and study a polycentric spoken language
By Scherrer, Yves; Samardžić, Tanja; Glaser, Elvira
Swiss dialects of German are, unlike many dialects of other standardised languages, widely used in everyday communication. Despite this fact, automatic processing of Swiss German is still a considerable challenge due to the fact that it is mostly a spoken variety and that it is subject to considerable regional variation. This paper presents the ArchiMob corpus, a freely available general-purpose corpus of spoken Swiss German based on oral history interviews. The corpus is a result of a long design process, intensive manual work and specially adapted computational processing. We first present the modalities of access of the corpus for linguistic, historic and computational research. We then describe how the documents were transcribed, segmented and aligned with the sound source. This work involved a series of experiments that have led to automatically annotated normalisation and part-of-speech tagging layers. Finally, we present several case studies to motivate the use of the corpus for digital humanities in general and for dialectology in particular.
Personal stories matter: topic evolution and popularity among pro- and anti-vaccine online articles
Journal of Computational Social Science (2019-04-09): 1-14, April 09, 2019
By Xu, Zhan
People tend to read health articles that have gone viral online. A large portion of online popular vaccine articles are against vaccines, which lead to increased exemption rates and recent outbreaks of vaccine-preventable diseases. Since anti-vaccine articles' themes and persuasive strategies change fast, their effects on viewers' behaviors may change over time. This study examined how pro- and anti-vaccine topics and public interests have changed from 2007 to 2017. Computational methods (e.g., topic modeling) were used to analyze 923 online vaccine articles and over 4 million shares, reactions, and comments that they have received on social media. Pro-vaccine messages (PVMs) that used personal stories received the most heated discussion online and pure scientific knowledge received the least attention. PVMs that present vaccine disagreements and limitations were not popular. These findings indicate the importance of narratives and directly attacking opposing arguments in health message design. Anti-vaccine messages (AVMs) that discussed flu shots and government conspiracy received the most attention. Since April 2015, even though more PVMs appeared online, AVMs, especially those about vaccine damage, were increasingly more popular than PVMs. Some social events and disease outbreaks might contribute to the popularity of AVMs. Newly emerged anti-vaccine topics (e.g., false rumors of CDC conspiracy) should be noted. This study shows that certain topics can be more popular online and can potentially reach a larger population. It also reveals the evolution of vaccine-related topics and public's interest. Findings can help to design effective interventions and develop programs to track and combat misinformation.
From 0 to 10 million annotated words: part-of-speech tagging for Middle High German
By Schulz, Sarah; Ketschik, Nora
By building a part-of-speech (POS) tagger for Middle High German, we investigate strategies for dealing with a low resource, diverse and non-standard language in the domain of natural language processing. We highlight various aspects such as the data quantity needed for training and the influence of data quality on tagger performance. Since the lack of annotated resources poses a problem for training a tagger, we exemplify how existing resources can be adapted fruitfully to serve as additional training data. The resulting POS model achieves a tagging accuracy of about 91% on a diverse test set representing the different genres, time periods and varieties of MHG. In order to verify its general applicability, we evaluate the performance on different genres, authors and varieties of MHG, separately. We explore self-learning techniques which yield the advantage that unannotated data can be utilized to improve tagging performance on specific subcorpora.
Beyond lexical frequencies: using R for text analysis in the digital humanities
By Arnold, Taylor; Ballier, Nicolas; Lissón, Paula; Tilton, Lauren
This paper presents a combination of R packages—user contributed toolkits written in a common core programming language—to facilitate the humanistic investigation of digitised, text-based corpora. Our survey of text analysis packages includes those of our own creation (cleanNLP and fasttextM) as well as packages built by other research groups (stringi, readtext, hyphenatr, quanteda, and hunspell). By operating on generic object types, these packages unite research innovations in corpus linguistics, natural language processing, machine learning, statistics, and digital humanities. We begin by extrapolating on the theoretical benefits of R as an elaborate gluing language for bringing together several areas of expertise and compare it to linguistic concordancers and other tool-based approaches to text analysis in the digital humanities. We then showcase the practical benefits of an ecosystem by illustrating how R packages have been integrated into a digital humanities project. Throughout, the focus is on moving beyond the bag-of-words, lexical frequency model by incorporating linguistically-driven analyses in research.
The invention and dissemination of the spacer gif: implications for the future of access and use of web archives
International Journal of Digital Humanities (2019-04-08) 1: 71-84, April 08, 2019
By Owens, Trevor; Thomas, Grace Helen
Over the last two decades publishing and distributing content on the Web has become a core part of society. This ephemeral content has rapidly become an essential component of the human record. Writing histories of the late 20th and early 21st century will require engaging with web archives. The scale of web content and of web archives presents significant challenges for how research can access and engage with this material. Digital humanities scholars are advancing computational methods to work with corpora of millions of digitized resources, but to fully engage with the growing content of two decades of web archives, we now require methods to approach and examine billions, ultimately trillions, of incongruous resources. This article approaches one seemingly insignificant, but fundamental, aspect in web design history: the use of tiny transparent images as a tool for layout design, and surfaces how traces of these files can illustrate future paths for engaging with web archives. This case study offers implications for future methods allowing scholars to engage with web archives. It also prompts considerations for librarians and archivists in thinking about web archives as data and the development of systems, qualitative and quantitative, through which to make this material available.
A landscape of data – working with digital resources within and beyond DARIAH
International Journal of Digital Humanities (2019-04-08) 1: 113-131, April 08, 2019
By Kálmán, Tibor; Ďurčo, Matej; Fischer, Frank; Larrousse, Nicolas; Leone, Claudio; Mörth, Karlheinz; Thiel, Carsten
The way researchers in the arts and humanities disciplines work has changed significantly. Research can no longer be done in isolation, as an increasing number of digital tools and certain types of knowledge are required to deal with research material. Research questions are scaled up and we see the emergence of new infrastructures to address this change. The DigitAl Research Infrastructure for the Arts and Humanities (DARIAH) is an open international network of researchers within the arts and humanities community, which revolves around the exchange of experiences and the sharing of expertise and resources. These resources comprise not only digitised material, but also a wide variety of born-digital data, services and software, tools, learning and teaching materials. The sustaining, sharing and reuse of resources involves many different parties and stakeholders and is influenced by a multitude of factors in which research infrastructures play a pivotal role. This article describes how DARIAH tries to meet the requirements of researchers from a broad range of disciplines within the arts and humanities that work with (born-)digital research data. It details approaches situated in specific national contexts in an otherwise large heterogeneous international scenario and gives an overview of ongoing efforts towards a convergence of social and technical aspects.
Born digital preservation of e-lit: a live internet traversal of Sarah Smith's King of Space
By Schiller, Nicholas; Grigar, Dene
Sarah Smith's King of Space, published in 1991, is the first work of science fiction produced as electronic literature. Released on a 3.5-in. floppy disk and requiring a Macintosh computer running System Software 7.0-MacOS 9x, it is now inaccessible to scholars interested in early digital literary forms, particularly of science fiction by women authors. Because this work is interactive and involves animations, images, sound, and words, preserving it requires an approach that retains as much of these experiences as possible for future audiences. To accomplish this task, our lab––the Electronic Literature Lab at Washington State University, Vancouver––used the Pathfinders methodology developed by Grigar and Stuart Moulthrop, adding to it Live Stream play-throughs on YouTube promoted through social media channels. This essay outlines our process and discusses the potential of this methodology for preserving other kinds of multimedia and interactive work.
From time theft to time stamps: mapping the development of digital forensics from law enforcement to archival authority
By Rogers, Corinne
The field of digital forensics seems at first glance quite separate from archival work and digital preservation. However, professionals in both fields are trusted to attest to the identity and integrity of digital documents and traces – they are regarded as experts in the acquisition, interpretation, description and presentation of that material. Archival science and digital forensics evolved out of practice and grew into established professional disciplines by developing theoretical foundations, which then returned to inform and standardize that practice. They have their roots in legal requirements and law enforcement. A significant challenge to both fields, therefore, is the identification of records (archival focus) and evidence (digital forensics focus) in digital systems, establishing their contexts, provenance, relationships, and meaning. This paper traces the development of digital forensics from practice to theory and presents the parallels with archival science.
Web 25. Histories from the first 25 years of the world wide web
By Mechant, Peter
Born-digital archives
International Journal of Digital Humanities (2019-04-08) 1: 1-11, April 08, 2019
By Ries, Thorsten; Palkó, Gábor
Web archives as a data resource for digital scholars
International Journal of Digital Humanities (2019-04-08) 1: 85-111, April 08, 2019
By Vlassenroot, Eveline; Chambers, Sally; Pretoro, Emmanuel; Geeraert, Friedel; Haesendonck, Gerald; Michel, Alejandra; Mechant, Peter
The aim of this article is to provide an exploratory analysis of the landscape of web archiving activities in Europe. Our contribution, based on desk research, and complemented with data from interviews with representatives of European heritage institutions, provides a descriptive overview of the state-of-the-art of national web archiving in Europe. It is written for a broad interdisciplinary audience, including cultural heritage professionals, IT specialists and managers, and humanities and social science researchers. The legal, technical and operational aspects of web archiving and the value of web archives as born-digital primary research resources are both explored. In addition to investigating the organisations involved and the scope of their web archiving programmes, the curatorial aspects of the web archiving process, such as selection of web content, the tools used and the provision of access and discovery services are also considered. Furthermore, general policies related to web archiving programmes are analysed. The article concludes by offering four important issues that digital scholars should consider when using web archives as a historical data source. Whilst recognising that this study was limited to a sample of only nine web archives, this article can nevertheless offer some useful insights into the technical, legal, curatorial and policy-related aspects of web archiving. Finally, this paper could function as a stepping stone for more extensive and qualitative research.
Anarchive as technique in the Media Archaeology Lab | building a one Laptop Per Child mesh network
By striegl, libi; Emerson, Lori
The Media Archaeology Lab (MAL) at the University of Colorado at Boulder (U.S.A.) acts as both an archive and a site for what the authors describe as 'anarchival' practice-based research and research creation. 'Anarchival' indicates research and creative activity enacted as a complement to an existing, stable archive. In researching the One Laptop Per Child Initiative, by way of a donation of XO laptops, the MAL has devised a modular process which could be used by other research groups to investigate the gap between the intended use and the affordances of any given piece of technology.
The .txtual condition, .txtual criticism and .txtual scholarly editing in Spanish philology
By Vauthier, Bénédicte
The impact of new technologies on the writing process is not new at all. This digital revolution first resulted in the appearance of new text formats and the development of an ad hoc literary theory. In the Anglo-American area, this revolution made philologists and heritage institutions reflect on the need to develop formats for the study, edition and long-term preservation of these new kinds of digital texts. What explains the delay that can be observed in these disciplines in Europe? Why can we say that digital forensics and media archaeology (Kirschenbaum) are not transnational disciplines? In this paper, I assess the impact of the .txtual condition in Europe and in the Anglo-American area. Moreover, I contrast these conclusions with the answers given by three emblematic writers of the 'new Spanish narrative' to a survey about ways of managing and preserving digital files.
Shannon Entropy of 0.922, 3 Distinct Values
Given a string of values $AAAAAAAABC$, the Shannon Entropy in log base $2$ comes to $0.922$. From what I understand, in base $2$ the Shannon Entropy rounded up is the minimum number of bits in binary to represent a single one of the values.
Taken from the introduction on this wikipedia page:
https://en.wikipedia.org/wiki/Entropy_%28information_theory%29
So, how can three values be represented by one bit? $A$ could be $1$, $B$ could be $0$; but how could you represent $C$?
Sean C
The entropy you've calculated isn't really for the specific string but, rather, for a random source of symbols that generates $A$ with probability $\tfrac{8}{10}$, and $B$ and $C$ with probability $\tfrac1{10}$ each, with no correlation between successive symbols. The calculated entropy for this distribution, $0.922$, means that you can't represent strings generated from this distribution using less than $0.922$ bits per character, on average.
It might be quite hard to develop a code that will achieve this rate.* For example, Huffman coding would allocate codes $0$, $10$ and $11$ to $A$, $B$ and $C$, respectively, for an average of $1.2$ bits per character. That's quite far from the entropy, though still a good deal better than the naive encoding of two bits per character. Any attempt at a better coding will probably exploit the fact that even a run of ten consecutive $A$s is more likely (probability $0.107$) than a single $B$.
* Turns out that it isn't hard to get as close as you want – see the other answers!
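As a quick sanity check of the two numbers above (the $0.922$-bit entropy and the $1.2$-bit Huffman average), here is a short Python sketch; it assumes nothing beyond the distribution $\Pr[A]=8/10$, $\Pr[B]=\Pr[C]=1/10$ and the code $0$, $10$, $11$:

    import math

    # Source distribution from the question: P(A) = 8/10, P(B) = P(C) = 1/10.
    probs = {"A": 0.8, "B": 0.1, "C": 0.1}

    # Shannon entropy in bits per symbol.
    entropy = -sum(p * math.log2(p) for p in probs.values())
    print(round(entropy, 3))                 # 0.922

    # Average length of the single-symbol Huffman code A -> 0, B -> 10, C -> 11.
    lengths = {"A": 1, "B": 2, "C": 2}
    avg_bits = sum(probs[s] * lengths[s] for s in probs)
    print(round(avg_bits, 3))                # 1.2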
David Richerby
Here is a concrete encoding that can represent each symbol in less than 1 bit on average:
First, split the input string into pairs of successive characters (e.g. AAAAAAAABC becomes AA|AA|AA|AA|BC). Then encode AA as 0, AB as 100, AC as 101, BA as 110, CA as 1110, BB as 111100, BC as 111101, CB as 111110, CC as 111111. I've not said what happens if there is an odd number of symbols, but you can just encode the last symbol using some arbitrary encoding; it doesn't really matter when the input is long.
This is a Huffman code for the distribution of independent pairs of symbols, and corresponds to choosing $n = 2$ in Yuval's answer. Larger $n$ would lead to even better codes (approaching the Shannon entropy in the limit, as he mentioned).
The average number of bits per symbol pair for the above encoding is $$\frac{8}{10} \cdot \frac{8}{10} \cdot 1 + 3 \cdot \frac{8}{10} \cdot \frac{1}{10} \cdot 3 + \frac{1}{10} \cdot \frac{8}{10} \cdot 4 + 4 \cdot \frac{1}{10} \cdot \frac{1}{10} \cdot 6 = 1.92$$ i.e. $1.92/2 = 0.96$ bits per symbol, not that far from the Shannon entropy actually for such a simple encoding.
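The same calculation can be re-derived in a few lines of Python; this sketch simply re-computes the $1.92$ bits per pair from the codeword lengths listed above, assuming independent symbols:

    # Codeword lengths of the pair code described above.
    pair_len = {"AA": 1, "AB": 3, "AC": 3, "BA": 3, "CA": 4,
                "BB": 6, "BC": 6, "CB": 6, "CC": 6}
    p = {"A": 0.8, "B": 0.1, "C": 0.1}

    bits_per_pair = sum(p[a] * p[b] * pair_len[a + b] for a in p for b in p)
    print(round(bits_per_pair, 2))       # 1.92
    print(round(bits_per_pair / 2, 2))   # 0.96 bits per symbol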
nomadictype
Let $\mathcal{D}$ be the following distribution over $\{A,B,C\}$: if $X \sim \mathcal{D}$ then $\Pr[X=A] = 4/5$ and $\Pr[X=B]=\Pr[X=C]=1/10$.
For each $n$ we can construct prefix codes $C_n\colon \{A,B,C\}^n \to \{0,1\}^*$ such that $$ \lim_{n\to\infty} \frac{\operatorname*{\mathbb{E}}_{X_1,\ldots,X_n \sim \mathcal{D}}[|C_n(X_1,\ldots,X_n)|]}{n} = H(\mathcal{D}). $$
In words, if we encode a large number of independent samples from $\mathcal{D}$, then on average we need $H(\mathcal{D}) \approx 0.922$ bits per sample. Intuitively, the reason we can do with less than one bit is that each individual sample is quite likely to be $A$.
This is the real meaning of entropy, and it shows that computing the "entropy" of a string $A^8BC$ is a rather pointless exercise.
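A minimal sketch of this block-coding idea: build a Huffman code over blocks of $n$ symbols drawn from $\mathcal{D}$ and watch the bits per symbol drop from $1.2$ at $n=1$ toward $H(\mathcal{D}) \approx 0.922$ as $n$ grows (the exact rates below are illustrative of the trend):

    import heapq, itertools, math

    p = {"A": 0.8, "B": 0.1, "C": 0.1}
    entropy = -sum(q * math.log2(q) for q in p.values())   # ~0.922

    def huffman_lengths(dist):
        """Codeword lengths of a Huffman code for the given distribution."""
        heap = [(prob, i, (sym,)) for i, (sym, prob) in enumerate(dist.items())]
        heapq.heapify(heap)
        lengths = dict.fromkeys(dist, 0)
        tie = len(heap)
        while len(heap) > 1:
            p1, _, group1 = heapq.heappop(heap)
            p2, _, group2 = heapq.heappop(heap)
            for s in group1 + group2:
                lengths[s] += 1          # each merge adds one bit to these codewords
            heapq.heappush(heap, (p1 + p2, tie, group1 + group2))
            tie += 1
        return lengths

    for n in (1, 2, 3, 4):
        blocks = {"".join(b): math.prod(p[s] for s in b)
                  for b in itertools.product(p, repeat=n)}
        lens = huffman_lengths(blocks)
        rate = sum(blocks[b] * lens[b] for b in blocks) / n
        print(n, round(rate, 3))         # 1.2, 0.96, ~0.93, ... approaching 0.922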
For numerical modeling of river flows, a water elevation is typically required at the upstream boundary. Yet water elevation in natural environmental systems is often unknown and has to be estimated. Improper elevation estimation, however, can generate nonphysical results. In FLOW-3D v11.1, which has just been released, users now have the option of having boundary water elevations dynamically adapt to the conditions inside the domain. This can be achieved through rating curves provided by the user; in the absence of a rating curve, the solver can dynamically adjust the elevation to vary smoothly with the conditions inside the fluid domain. These variations may be further constrained to certain Froude regimes or absolute elevation bounds.
Figure 1. Rating curve for John Creek at Sycamore from USGS
Rating curves
Rating curves define elevation variations at a given location in a river reach according to inflow rates at that location. A relationship between elevation and volume flow rate is established by physical measurements at a particular cross section of the river. Rating curves for rivers in the United States are available from the USGS (U. S. Geological Survey). A typical rating curve will have volume flow rate on the X-axis and elevation on the Y-axis (Figure 1).
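A minimal sketch of how such a tabulated curve is typically applied: given a volume flow rate, interpolate the corresponding elevation between the measured points. The (flow rate, elevation) pairs below are invented for illustration and are not taken from the Figure 1 curve.

    # Hypothetical rating-curve table: flow rate (m^3/s) vs. water elevation (m).
    flow_rates = [10.0, 50.0, 100.0, 250.0, 500.0]
    elevations = [100.2, 100.9, 101.5, 102.6, 103.8]

    def elevation_from_flow(q, qs=flow_rates, zs=elevations):
        """Piecewise-linear rating-curve lookup; clamps outside the measured range."""
        if q <= qs[0]:
            return zs[0]
        if q >= qs[-1]:
            return zs[-1]
        for q0, q1, z0, z1 in zip(qs, qs[1:], zs, zs[1:]):
            if q0 <= q <= q1:
                return z0 + (z1 - z0) * (q - q0) / (q1 - q0)

    print(elevation_from_flow(180.0))   # ~102.09 m for this made-up table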
Natural inlets
In cases where the inflow rate is known but a rating curve is unavailable, a natural boundary condition can be selected in the FLOW-3D model setup interface. At a given cross-section, a given specific energy can correspond to two possible depths. This follows from the nonlinear relationship between specific energy and depth (see the equation below), which, for a given discharge and sufficient specific energy, admits two positive roots. The two mathematical depths manifest physically as supercritical and subcritical flow depths. When the equation has a single (repeated) solution, the flow is critical.
$$E=\frac{q^{2}}{2gy^{2}}+y$$
Here, E is the specific energy, q is the unit discharge, g is acceleration due to gravity and y is the height of fluid. Graphically, the specific energy and depth relationship can be seen in Figures 2-4.
Figure 2. Changes to E-y curve, changing q
Figure 3. Possibility of two flow depths (supercritical and subcritical) for the same value of specific energy
Figure 4. Flow depth can be critical (yc) for a unique value of depth and specific energy. In this case, flow is neither subcritical nor supercritical.
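The two-depth behavior can be reproduced numerically. The sketch below (plain Python with illustrative numbers, not FLOW-3D code) solves the specific energy equation for the supercritical and subcritical depths by bisection, bracketing each root with the critical depth $y_c = (q^2/g)^{1/3}$:

    import math

    G = 9.81  # gravitational acceleration, m/s^2

    def specific_energy(y, q):
        """Specific energy for unit discharge q and flow depth y."""
        return y + q**2 / (2.0 * G * y**2)

    def critical_depth(q):
        """Depth at which the specific energy is minimized for a given q."""
        return (q**2 / G) ** (1.0 / 3.0)

    def _bisect(f, lo, hi, iters=200, tol=1e-10):
        flo = f(lo)
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            fmid = f(mid)
            if abs(fmid) < tol:
                return mid
            if (flo < 0) == (fmid < 0):
                lo, flo = mid, fmid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    def alternate_depths(E, q):
        """Supercritical and subcritical depths for specific energy E, or None
        if E is below the minimum specific energy (no physical solution)."""
        yc = critical_depth(q)
        if E < specific_energy(yc, q):
            return None
        f = lambda y: specific_energy(y, q) - E
        y_super = _bisect(f, 1e-6, yc)   # shallow, fast branch
        y_sub = _bisect(f, yc, E)        # deep, slow branch (always below E)
        return y_super, y_sub

    # Illustrative numbers: q = 2 m^2/s, E = 2 m.
    print(critical_depth(2.0))           # ~0.74 m
    print(alternate_depths(2.0, 2.0))    # roughly (0.35, 1.95) m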
Applying new boundary conditions
A rating curve can only be defined for volume flow rate and pressure boundary conditions in FLOW-3D v11.1. For volume flow rate type boundary conditions, instantaneous elevations are calculated using the rating curve to find the elevation corresponding to the flow rate. For a pressure type boundary condition, the volume flow rate is calculated by the solver and elevation is calculated using the rating curve. Rating curves can be applied at both upstream and downstream boundaries. It is important to note that an incorrect rating curve can result in nonphysical flow fluctuations.
Natural boundary conditions can only be defined at the inlet. Flow categories can be defined from one of the following:
Supercritical flow (y<yc)
Subcritical flow (y>yc)
Critical flow (y=yc)
Automatic flow regime (calculated by the solver)
The user can define maximum and minimum limits of elevation for any of these flows. If the depth for a particular flow regime violates the maximum and minimum limits of elevation, the latter will take precedence.
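Purely as a conceptual illustration of how regime selection and elevation limits interact (this is not the FLOW-3D interface or its internals, and it reuses the specific-energy helpers sketched above), the precedence of the user-defined limits could be expressed as:

    def boundary_depth(E, q, regime="automatic", y_min=None, y_max=None, froude=1.0):
        """Pick a depth for the requested regime, then let the user limits
        take precedence. Assumes E is above the critical minimum."""
        if regime == "critical":
            y = critical_depth(q)
        elif regime == "supercritical":
            y = alternate_depths(E, q)[0]
        elif regime == "subcritical":
            y = alternate_depths(E, q)[1]
        else:  # "automatic": an illustrative rule based on a supplied Froude number
            y_super, y_sub = alternate_depths(E, q)
            y = y_super if froude > 1.0 else y_sub
        if y_min is not None:
            y = max(y, y_min)            # limits override the regime choice
        if y_max is not None:
            y = min(y, y_max)
        return y

    # Ask for a subcritical depth but cap it at 1.5 m: the cap wins.
    print(boundary_depth(2.0, 2.0, regime="subcritical", y_max=1.5))   # 1.5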
Sample simulation results
Simulation 1 shows a river reach with a natural inlet applied to a volume flow rate boundary condition at the left boundary, and a rating curve applied to a pressure boundary condition at the outlet on the right boundary. The evolution of water elevation is shown for the upstream and downstream boundaries simultaneously. The elevations at the boundaries vary smoothly, without fluctuations or nonphysical behavior. This new development in FLOW-3D v11.1 therefore allows more natural variations of the water level for environmental applications.
Evolution of water elevation in a river reach with natural boundary condition at the inlet and a rating curve at the outlet.
News for February 2020
Despite a wealth of February conference deadlines, papers were fairly sparse. We found two EDIT: three EDIT EDIT: four papers for the month of February; please share if we missed any relevant papers.
Monotone probability distributions over the Boolean cube can be learned with sublinear samples, by Ronitt Rubinfeld and Arsen Vasilyan (arXiv). By now, it is well known that assuming an (unknown) distribution enjoys some sort of structure can lead to more efficient algorithms for learning and testing. Often one proves that the structure permits a convenient representation, and exploits this representation to solve the problem at hand. This paper studies the learning of monotone distributions over the Boolean hypercube. The authors exploit and extend a structural statement about monotone Boolean functions by Blais, Håstad, Servedio, and Tan, using it to provide sublinear algorithms for estimating the support size, distance to uniformity, and the distribution itself.
Locally Private Hypothesis Selection, by Sivakanth Gopi, Gautam Kamath, Janardhan Kulkarni, Aleksandar Nikolov, Zhiwei Steven Wu, and Huanyu Zhang (arXiv). Given a collection of \(k\) distributions and a set of samples from one of them, can we identify which distribution it is? This paper studies this problem (and an agnostic generalization of it) under the constraint of local differential privacy. The authors show that this problem requires \(\Omega(k)\) samples, in contrast to the \(O(\log k)\) complexity in the non-private model. Furthermore, they give \(\tilde O(k)\)-sample upper bounds in various interactivity models.
Efficient Distance Approximation for Structured High-Dimensional Distributions via Learning, by Arnab Bhattacharyya, Sutanu Gayen, Kuldeep S. Meel, and N. V. Vinodchandran (arXiv). Given samples from two distributions, can you estimate the total variation distance between them? This paper gives a framework for solving this problem for structured distribution classes, including Ising models, Bayesian networks, Gaussians, and causal models. The approach can be decomposed properly learning the distributions, followed by estimating the distance between the two hypotheses. Challenges arise when densities are hard to compute exactly.
Profile Entropy: A Fundamental Measure for the Learnability and Compressibility of Discrete Distributions, by Yi Hao and Alon Orlitsky (arXiv). The histogram of a dataset is the collection of frequency counts of domain elements. The profile of a dataset can be succinctly described as the histogram of the histogram. Recent works have shown that, in some sense, discarding information about your dataset by looking solely at the profile can be beneficial for certain problems in which it is "universal". This work explores two new quantities, the entropy and dimension of the profile, which turn out to play a key role in quantifying the performance of estimators based on the profile.
This entry was posted in Monthly digest on March 7, 2020 by Gautam Kamath.
We hit the mother-lode of property testing papers this month. Stick with us, as we cover 10 (!) papers that appeared online in November.
EDIT: We actually have 11 papers, check out Optimal Adaptive Detection of Monotone Patterns at the bottom.
Testing noisy linear functions for sparsity, by Xue Chen, Anindya De, and Rocco A. Servedio (arXiv). Given samples from a noisy linear model \(y = w\cdot x + \mathrm{noise}\), test whether \(w\) is \(k\)-sparse, or far from being \(k\)-sparse. This is a property testing version of the celebrated sparse recovery problem, whose sample complexity is well-known to be \(O(k\log n)\), where the data lies in \(\mathbb{R}^n\). This paper shows that the testing version of the problem can be solved (tolerantly) with a number of samples independent of \(n\), assuming technical conditions: the coordinates of \(x\) are i.i.d. and non-Gaussian, and the noise distribution is known to the algorithm. Surprisingly, all these conditions are needed; otherwise the dependence on \(n\) is \(\tilde \Omega(\log n)\), essentially the same as for the recovery problem.
Pan-Private Uniformity Testing, by Kareem Amin, Matthew Joseph, Jieming Mao (arXiv). Differentially private distribution testing has now seen significant study, in both the local and central models of privacy. This paper studies a distribution testing in the pan-private model, which is intermediate: the algorithm receives samples one by one in the clear, but it must maintain a differentially private internal state at all time steps. The sample complexity turns out to be qualitatively intermediate to the two other models: testing uniformity over \([k]\) requires \(\Theta(\sqrt{k})\) samples in the central model, \(\Theta(k)\) samples in the local model, and this paper shows that \(\Theta(k^{2/3})\) samples are necessary and sufficient in the pan-private model.
Almost Optimal Testers for Concise Representations, by Nader Bshouty (ECCC). This work gives a unified approach for testing for a plethora of different classes which possess some sort of sparsity. These classes include \(k\)-juntas, \(k\)-linear functions, \(k\)-terms, various types of DNFs, decision lists, functions with bounded Fourier degree, and much more.
Unified Sample-Optimal Property Estimation in Near-Linear Time, by Yi Hao and Alon Orlitsky (arXiv). This paper presents a unified approach for estimating several distribution properties with both near-optimal time and sample complexity, based on piecewise-polynomial approximation. Some applications include estimators for Shannon entropy, power sums, distance to uniformity, normalized support size, and normalized support coverage. More generally, results hold for all Lipschitz properties, and consequences include high-confidence property estimation (outperforming the "median trick") and differentially private property estimation.
Testing linear-invariant properties, by Jonathan Tidor and Yufei Zhao (arXiv). This paper studies property testing of functions which are in a formal sense, definable by restrictions to subspaces of bounded degree. This class of functions is a broad generalization of testing whether a function is linear, or a degree-\(d\) polynomial (for constant \(d\)). The algorithm is the oblivious one, which simply repeatedly takes random restrictions and tests whether the property is satisfied or not (similar to the classic linearity test of BLR, along with many others).
Approximating the Distance to Monotonicity of Boolean Functions, by Ramesh Krishnan S. Pallavoor, Sofya Raskhodnikova, Erik Waingarten (ECCC). This paper studies the following fundamental question in tolerant testing: given a Boolean function on the hypercube, test whether it is \(\varepsilon'\)-close or \(\varepsilon\)-far from monotone. It is shown that there is a non-adaptive polynomial query algorithm which can solve this problem for \(\varepsilon' = \varepsilon/\tilde \Theta(\sqrt{n})\), implying an algorithm which can approximate distance to monotonicity up to a multiplicative \(\tilde O(\sqrt{n})\) (addressing an open problem by Sesh). They also give a lower bound demonstrating that improving this approximating factor significantly would necessitate exponentially-many queries. Interestingly, this is proved for the (easier) erasure-resilient model, and also implies lower bounds for tolerant testing of unateness and juntas.
Testing Properties of Multiple Distributions with Few Samples, by Maryam Aliakbarpour and Sandeep Silwal (arXiv). This paper introduces a new model for distribution testing. Generally, we are given \(n\) samples from a distribution which is either (say) uniform or far from uniform, and we wish to test which is the case. The authors here study the problem where we are given a single sample from \(n\) different distributions which are either all uniform or far from uniform, and we wish to test which is the case. By additionally assuming a structural condition in the latter case (it is argued that some structural condition is necessary), they give sample-optimal algorithms for testing uniformity, identity, and closeness.
Random Restrictions of High-Dimensional Distributions and Uniformity Testing with Subcube Conditioning, by Clément L. Canonne, Xi Chen, Gautam Kamath, Amit Levi, and Erik Waingarten (ECCC, arXiv). By now, it is well-known that testing uniformity over the \(n\)-dimensional hypercube requires \(\Omega(2^{n/2})\) samples — the curse of dimensionality quickly makes this problem intractable. One option is to assume that the distribution is product, which causes the complexity to drop to \(O(\sqrt{n})\). This paper instead assumes one has stronger access to the distribution — namely, one can receive samples conditioned on being from some subcube of the domain. With this, the paper shows that the complexity drops to the near-optimal \(\tilde O(\sqrt{n})\) samples. The related problem of testing whether a distribution is either uniform or has large mean is also considered.
Property Testing of LP-Type Problems, by Rogers Epstein, Sandeep Silwal (arXiv). An LP-Type problem (also known as a generalized linear program) is an optimization problem sharing some properties with linear programs. More formally, they consist of a set of constraints \(S\) and a function \(\varphi\) which maps subsets of \(S\) to some totally ordered set, such that \(\varphi\) possesses monotonicity and locality properties. This paper considers the problem of testing whether \(\varphi(S) \leq k\), or whether at least an \(\varepsilon\)-fraction of constraints in \(S\) must be removed for \(\varphi(S) \leq k\) to hold. This paper gives an algorithm with query complexity \(O(\delta/\varepsilon)\), where \(\delta\) is a dimension measure of the problem. This is applied to testing problems for linear separability, smallest enclosing ball, smallest intersecting ball, smallest volume annulus. The authors also provide lower bounds for some of these problems as well.
Near-Optimal Algorithm for Distribution-Free Junta Testing, by Xiaojin Zhang (arXiv). This paper presents an (adaptive) algorithm for testing juntas, in the distribution-free model with one-sided error. The query complexity is \(\tilde O(k/\varepsilon)\), which is nearly optimal. Algorithms with this sample complexity were previously known under the uniform distribution, or with two-sided error, but this is the first paper to achieve it in the distribution-free model with one-sided error.
Optimal Adaptive Detection of Monotone Patterns, by Omri Ben-Eliezer, Shoham Letzter, Erik Waingarten (arXiv). Consider the problem of testing whether a function has no monotone increasing subsequences of length \(k\), versus being \(\varepsilon\)-far from having this property. Note that this is a generalization of testing whether a function is monotone (decreasing), which corresponds to the case \(k = 2\). This work shows that the adaptive sample complexity of this problem is \(O_{k,\varepsilon}(\log n)\), matching the lower bound for monotonicity testing. This is in comparison to the non-adaptive sample complexity, which is \(O_{k,\varepsilon}((\log n)^{\lfloor \log_2 k\rfloor})\). In fact, the main result provides a certificate of being far, in the form of a monotone increasing subsequence of length \(k\).
This entry was posted in Monthly digest on December 6, 2019 by Gautam Kamath.
A comparatively slow month, as summer draws to a close: we found three papers online. Please let us know if we missed any! (Edit: And we added two papers missed from June.)
Testing convexity of functions over finite domains, by Aleksandrs Belovs, Eric Blais, and Abhinav Bommireddi (arXiv). This paper studies the classic problem of convexity testing, and proves a number of interesting results on the adaptive and non-adaptive complexity of this problem in single- and multi-dimensional settings. In the single-dimensional setting on domain \([n]\), they show that adaptivity doesn't help: the complexity will be \(O(\log n)\) in both cases. However, in the simplest two-dimensional setting, a domain of \([3] \times [n]\), they give a polylogarithmic upper bound in the adaptive setting, but a polynomial lower bound in the non-adaptive setting, showing a strong separation. Finally, they provide a lower bound for \([n]^d\) which scales exponentially in the dimension. This leaves open the tantalizing open question: is it possible to avoid the curse of dimensionality when testing convexity?
Testing Isomorphism in the Bounded-Degree Graph Model, by Oded Goldreich (ECCC). This work investigates the problem of testing isomorphism of graphs, focusing on the special case when the connected components are only polylogarithmically large (the general bounded-degree case is left open). One setting to consider is when one graph is given explicitly as input, and we must query a second graph to test whether the two are isomorphic. This can be shown to be equivalent (up to polylogarithmic factors) to testing (from queries) whether a sequence is a permutation of a reference sequence. In turn, this can be shown to be equivalent to the classic distribution testing question of testing (from samples) whether a distribution is equal to some reference distribution. The same sequence of equivalences almost works for the case where there is no reference graph/sequence/distribution, but we only have query/query/sample access to the objects. The one exception is that the reduction from testing distributions to testing whether a sequence is a permutation doesn't go through, due to challenges involving sampling with and without replacement. However, the author still shows the lower bound which would be implied by such a reduction, by adapting Valiant's proof for the distribution testing problem to this case.
Learning Very Large Graphs with Unknown Vertex Distributions, by Gábor Elek (arXiv). In this note, the author studies a variant of distribution-free property testing on graphs, in which (roughly) neighboring vertices have probabilities of bounded ratio, and a query reveals this ratio. Applications to local graph algorithms and connections to dynamical systems are also discussed.
EDIT: We apparently missed two papers from June — the first paper was accepted to NeurIPS 2019, the second to COLT 2019.
The Broad Optimality of Profile Maximum Likelihood, by Yi Hao and Alon Orlitsky (arXiv). Recently, Acharya, Das, Orlitsky, and Suresh (ICML 2017) showed that the Profile Maximum Likelihood (PML) estimator enables a unified framework for estimating a number of distribution properties, including support size, support coverage, entropy, and distance to uniformity, obtaining estimates which are competitive with the best possible. The approach is rather clean: simply estimate the PML of the distribution (i.e., the maximum likelihood distribution of the data, if the labels are discarded and only the multiplicities of elements are kept), and apply the plug-in estimator (i.e., if you want to estimate entropy, compute the entropy of the resulting PML distribution). The present work shows that PML is even more broadly applicable — such an approach applies to any property which is additive, symmetric, and appropriately Lipschitz. They also show specific results for many other properties which have been considered in the past, including Rényi entropy, distribution estimation, and identity testing.
Sample-Optimal Low-Rank Approximation of Distance Matrices, by Piotr Indyk, Ali Vakilian, Tal Wagner, David Woodruff (arXiv). Getting a rank \(k\) approximation of an \(n \times m\) matrix \(M\) is about as classic a problem as it gets. Suppose we wanted a running time of \(O(n+m)\), which is sublinear in the matrix size. In general, this is not feasible, since there could be a single large entry that dominates the matrix norm. This paper studies the case where the matrix is itself a distance matrix. So there is an underlying point set in a metric space, and the \((i, j)\)th entry of \(M\) is the distance between the \(i\)th and \(j\)th points. Previous work showed the existence of \(O((n+m)^{1+\gamma})\) time algorithms (for arbitrarily small constant \(\gamma > 0\), with polynomial dependence on \(k\) and error parameters). This work gives an algorithm that runs in \(\widetilde{O}(n+m)\) time. The main idea is to sample the rows and columns according to row/column norms.
News for May 2019
We were able to find four new papers for May 2019 — as usual, please let us know if we missed any!
EDIT: We did, in fact, miss one paper, which is the bottom one listed below.
On Local Testability in the Non-Signaling Setting, by Alessandro Chiesa, Peter Manohar, and Igor Shinkar (ECCC). This paper studies testability of a certain generalization of (distributions over) functions, known as \(k\)-non-signalling functions, objects which see use in hardness of approximation and delegation of computation. Prior work by the authors show the effectiveness of the linearity test in this setting, leading to the design of PCPs. On the other hand, in this work, the authors show that two types of bivariate tests are ineffective in revealing low-degree structure of these objects.
Computing and Testing Small Vertex Connectivity in Near-Linear Time and Queries, by Danupon Nanongkai, Thatchaphol Saranurak, and Sorrachai Yingchareonthawornchai (arXiv). This work, apparently simultaneous with the one by Forster and Yang that we covered last month, also studies the problem of locally computing cuts in a graph. The authors also go further, and study approximation algorithms for the same problems. Inspired by the connections to property testing in the work of Forster and Yang, they apply these approximation algorithms to get even more query-efficient algorithms for the problems of testing \(k\)-edge- and \(k\)-vertex-connectivity.
Testing Graphs against an Unknown Distribution, by Lior Gishboliner and Asaf Shapira (arXiv). This paper studies graph property testing, under the vertex-distribution-free (VDF) model, as recently introduced by Goldreich. In the VDF model, rather than the ability to sample a random node, the algorithm has the ability to sample a node from some unknown distribution, and must be accurate with respect to the same distribution (reminiscent of the PAC learning model). In Goldreich's work, it was shown that every property which is testable in the VDF model is semi-hereditary. This work strengthens this statement and proves a converse, thus providing a characterization: a property is testable in the VDF model if and only if it is both hereditary and extendable. These descriptors roughly mean that the property is closed under both removal and addition of nodes (with the choice of addition of edges in the latter case). This is a far simpler characterization than that of properties which are testable in the standard model, which is a special case of the VDF model.
Private Identity Testing for High-Dimensional Distributions, by Clément L. Canonne, Gautam Kamath, Audra McMillan, Jonathan Ullman, and Lydia Zakynthinou (arXiv). This work continues a recent line on distribution testing under the constraint of differential privacy. The settings of interest are multivariate distributions: namely, product distributions over the hypercube and Gaussians with identity covariance. An application of a statistic of CDKS, combined with a Lipschitz extension from the set of datasets likely to be generated by such structured distributions, gives a sample-efficient algorithm. A time-efficient version of this extension is also provided, at the cost of some loss in the sample complexity. Some tools of independent interest include reductions between Gaussian mean and product uniformity testing, balanced product identity to product uniformity testing, and an equivalence between univariate and "extreme" product identity testing.
Testing Bipartitness in an Augmented VDF Bounded-Degree Graph Model, by Oded Goldreich (arXiv). Another work on the vertex-distribution-free (VDF) model, as described above. In this one, Goldreich considers an augmentation of the model, where the algorithm is further allowed to query the probability of each node. With this augmentation, he gives \(\tilde O(\sqrt{n})\)-time algorithms for testing bipartiteness and cycle-free-ness, where \(n\) is the "effective support" of the distribution. That is, \(n\) is the number of nodes in the graph after discarding the nodes with minimal probability until \(\varepsilon/5\) mass is removed.
This entry was posted in Monthly digest on June 10, 2019 by Gautam Kamath.
News for January 2019
Minimax Testing of Identity to a Reference Ergodic Markov Chain, by Geoffrey Wolfer and Aryeh Kontorovich (arXiv). This work studies distributional identity testing on Markov chains from a single trajectory, as recently introduced by Daskalakis, Dikkala, and Gravin: we wish to test whether a Markov chain is equal to some reference chain, or far from it. This improves on previous work by considering a stronger distance measure than before, and showing that the sample complexity only depends on properties of the reference chain (which we are trying to test identity to). It additionally proves instance-by-instance bounds (where the sample complexity depends on properties of the specific chain we wish to test identity to).
Almost Optimal Distribution-free Junta Testing, by Nader H. Bshouty (arXiv). This paper provides a \(\tilde O(k/\varepsilon)\)-query algorithm with two-sided error for testing if a Boolean function is a \(k\)-junta (that is, its value depends only on \(k\) of its variables) in the distribution-free model (where distance is measured with respect to an unknown distribution from which we can sample). This complexity is a quadratic improvement over the \(\tilde O(k^2)/\varepsilon\)-query algorithm of Chen, Liu, Servedio, Sheng, and Xie. This complexity is also near-optimal, as shown in a lower bound by Saglam (which we covered back in August).
Exponentially Faster Massively Parallel Maximal Matching, by Soheil Behnezhad, MohammadTaghi Hajiaghayi, and David G. Harris (arXiv). The authors consider maximal matching in the Massively Parallel Computation (MPC) model. They show that one can compute a maximal matching in \(O(\log \log \Delta)\)-rounds, with \(O(n)\) space per machine. This is an exponential improvement over the previous works, which required either \(\Omega(\log n)\) rounds or \(n^{1 + \Omega(1)}\) space per machine. Corollaries of their result include approximation algorithms for vertex cover, maximum matching, and weighted maximum matching.
This entry was posted in Monthly digest on February 8, 2019 by Gautam Kamath.
As October draws to a close, we are left with four new papers this month.
Testing Matrix Rank, Optimally, by Maria-Florina Balcan, Yi Li, David P. Woodruff, Hongyang Zhang (arXiv). This work investigates the problem of non-adaptively testing matrix properties, in both the standard query model and the more general sensing model, in which the algorithm may query the component-wise inner product of the matrix with "sensing" matrices. It proves tight upper and lower bounds of \(\tilde \Theta(d^2/\varepsilon)\) for the query model, and shows that the dependence on \(\varepsilon\) can be eliminated in the sensing model. Furthermore, they introduce a bounded entry model for testing of matrices, in which the entries have absolute value bounded by 1, and in which they prove various bounds for testing stable rank, Schatten-\(p\) norms, and SVD entropy.
Testing Halfspaces over Rotation-Invariant Distributions, by Nathaniel Harms (arXiv). This paper studies the problem of testing from samples whether an unknown Boolean function over the hypercube is a halfspace. The algorithm requires \(\tilde O(\sqrt{n}/\varepsilon^{7})\) random samples (a dependence on \(n\) which is tight up to logarithmic factors) and works for any rotation-invariant distribution, generalizing previous works that require the distribution to be Gaussian or uniform.
Testing Graphs in Vertex-Distribution-Free Models, by Oded Goldreich (ECCC). While distribution-free testing has been well-studied in the context of Boolean functions, it has not been significantly studied in the context of testing graphs. In this context, distribution-free roughly means that the algorithm can sample nodes of the graph according to some unknown distribution \(D\), and must be accurate with respect to the measure assigned to nodes by the same distribution. The paper investigates various properties which may be tested with a size-independent number of queries, including relationships with the complexity of testing in the standard model.
A Theory-Based Evaluation of Nearest Neighbor Models Put Into Practice, by Hendrik Fichtenberger and Dennis Rohde (arXiv). In the \(k\)-nearest neighbor problem, we are given a set of points \(P\), and the answer to a query \(q\) is the set of the \(k\) points in \(P\) which are closest to \(q\). This paper considers the following property testing formulation of the problem: given a set of points \(P\) and a graph \(G = (P,E)\), is each point \(p \in P\) connected to its \(k\)-nearest neighbors, or is it far from being a \(k\)NN graph? The authors prove upper and lower bounds on the complexity of this problem, which are both sublinear in the number of points \(n\).
This entry was posted in Monthly digest on November 5, 2018 by Gautam Kamath.
Three papers this month close out Summer 2018.
Test without Trust: Optimal Locally Private Distribution Testing, by Jayadev Acharya, Clément L. Canonne, Cody Freitag, and Himanshu Tyagi (arXiv). This work studies distribution testing in the local privacy model. While private distribution testing has recently been studied, requiring that the algorithm's output is differentially private with respect to the input dataset, local privacy has this requirement for each individual datapoint. The authors prove optimal upper and lower bounds for identity and independence testing, using a novel public-coin protocol named RAPTOR which can outperform any private-coin protocol.
Testing Graph Clusterability: Algorithms and Lower Bounds, by Ashish Chiplunkar, Michael Kapralov, Sanjeev Khanna, Aida Mousavifar, and Yuval Peres (arXiv). This paper studies the problem of testing whether a graph is \(k\)-clusterable (based on the conductance of each cluster), or if it is far from all such graphs — this is a generalization of the classical problem of testing whether a graph is an expander. It manages to solve this problem under weaker assumptions than previously considered. Technically, prior work embedded a subset of the graph into Euclidean space and clustered based on distances between vertices. This work uses richer geometric structure, including angles between the points, in order to obtain stronger results.
Near log-convexity of measured heat in (discrete) time and consequences, by Mert Saglam (ECCC). Glancing at the title, it might not be clear how this paper relates to property testing. The primary quantity of study is \(m_t = uS^tv\), where \(u, v\) are positive unit vectors and \(S\) is a symmetric substochastic matrix. This quantity can be viewed as the heat measured at vector \(v\), after letting the initial configuration \(u\) evolve according to \(S\) for \(t\) time steps. The author proves an inequality which roughly states \(m_{t+2} \geq t^{1-\varepsilon} m_t^{1 + 2/t}\), which can be used as a type of log-convexity statement. Surprisingly, this leads to lower bounds for the communication complexity of the \(k\)-Hamming distance problem, which in turn leads to optimal lower bounds for the complexity of testing \(k\)-linearity and \(k\)-juntas.
Six papers for May, including new models, hierarchy theorems, separation results, resolution of conjectures, and a lot more fun stuff. A lot of things to read this month!
Lower Bounds for Tolerant Junta and Unateness Testing via Rejection Sampling of Graphs, by Amit Levi and Erik Waingarten (ECCC). This paper proves a number of new lower bounds for tolerant testing of Boolean functions, including non-adaptive \(k\)-junta testing and adaptive and non-adaptive unateness testing. Combined with upper bounds for these and related problems, these results establish a separation between the complexity of tolerant and non-tolerant testing for natural properties of Boolean functions, which had so far been elusive. As a technical tool, the authors introduce a new model for testing graph properties, termed the rejection sampling model. In this model, the algorithm queries a subset \(L\) of the vertices, and the oracle will sample an edge uniformly at random and output the intersection of the edge endpoints with the query set \(L\). The cost of an algorithm is measured as the sum of the query sizes. In order to prove the above lower bounds (in the standard model), they show a non-adaptive lower bound for testing bipartiteness (in their new model).
Hierarchy Theorems for Testing Properties in Size-Oblivious Query Complexity, by Oded Goldreich (ECCC). This work proves a hierarchy theorem for properties which are independent of the size of the object, and depend only on the proximity parameter \(\varepsilon\). Roughly, for essentially every function \(q : (0,1] \rightarrow \mathbb{N}\), there exists a property for which the query complexity is \(\Theta(q(\varepsilon))\). Such results are proven for Boolean functions, dense graphs, and bounded-degree graphs. This complements hierarchy theorems by Goldreich, Krivelevich, Newman, and Rozenberg, which give a hierarchy which depends on the object size.
Finding forbidden minors in sublinear time: a \(O(n^{1/2+o(1)})\)-query one-sided tester for minor closed properties on bounded degree graphs, by Akash Kumar, C. Seshadhri, and Andrew Stolman (ECCC). At the core of this paper is a sublinear algorithm for the following problem: given a graph which is \(\varepsilon\)-far from being \(H\)-minor free, find an \(H\)-minor in the graph. The authors provide a (roughly) \(O(\sqrt{n})\) time algorithm for such a task. As a concrete example, given a graph which is far from being planar, one can efficiently find an instance of a \(K_{3,3}\) or \(K_5\) minor. Using the graph minor theorem, this implies analogous results for any minor-closed property, nearly resolving a conjecture of Benjamini, Schramm and Shapira.
Learning and Testing Causal Models with Interventions, by Jayadev Acharya, Arnab Bhattacharyya, Constantinos Daskalakis, and Saravanan Kandasamy (arXiv). This paper considers the problem of learning and testing on causal Bayesian networks. Bayesian networks are a type of graphical model defined on a DAG, where each node has a distribution defined based on the value of its parents. A causal Bayesian network further allows "interventions," where one may set nodes to have certain values. This paper gives efficient algorithms for learning and testing the distribution of these models, with \(O(\log n)\) interventions and \(\tilde O(n/\varepsilon^2)\) samples per intervention.
Property Testing of Planarity in the CONGEST model, by Reut Levi, Moti Medina, and Dana Ron (arXiv). It is known that, in the CONGEST model of distributed computation, deciding whether a graph is planar requires a linear number of rounds. This paper considers the natural property testing relaxation, where we wish to determine whether a graph is planar, or \(\varepsilon\)-far from being planar. The authors show that this relaxation allows one to bypass this linear lower bound, obtaining an \(O(\log n \cdot \mathrm{poly}(1/\varepsilon))\) algorithm, complemented by an \(\Omega(\log n)\) lower bound.
Flexible models for testing graph properties, by Oded Goldreich (ECCC). Usually when testing graph properties, we assume that the vertex set is \([n]\), implying that we can randomly sample nodes from the graph. However, this assumes that the tester knows the value of \(n\), the number of nodes. This note suggests more "flexible" models, in which the number of nodes may be unknown, and we are only given random sampling access. While possible definitions are suggested, this note contains few results, leaving the area ripe for investigation of the power of these models.
This entry was posted in Monthly digest on June 8, 2018 by Gautam Kamath.
February had a flurry of conference deadlines, and they seem to have produced six papers for us to enjoy, including three on estimating symmetric properties of distributions.
Locally Private Hypothesis Testing, by Or Sheffet (arXiv). We now have a very mature understanding of the sample complexity of distributional identity testing — given samples from a distribution \(p\), is it equal to, or far from, some model hypothesis \(q\)? Recently, several papers have studied this problem under the additional constraint of differential privacy. This paper strengthens the privacy constraint to local privacy, where each sample is locally noised before being provided to the testing algorithm.
Distribution-free Junta Testing, by Xi Chen, Zhengyang Liu, Rocco A. Servedio, Ying Sheng, and Jinyu Xie (arXiv). Testing whether a function is a \(k\)-junta is very well understood — when done with respect to the uniform distribution. In particular, the adaptive complexity of this problem is \(\tilde \Theta(k)\), while the non-adaptive complexity is \(\tilde \Theta(k^{3/2})\). This paper studies the more challenging task of distribution-free testing, where the distance between functions is measured with respect to some unknown distribution. The authors show that, while the adaptive complexity of this problem is still polynomial (at \(\tilde O(k^2)\)), the non-adaptive complexity becomes exponential: \(2^{\Omega(k/3)}\). In other words, there's a qualitative gap between the adaptive and non-adaptive complexity, which does not appear when testing with respect to the uniform distribution.
The Vertex Sample Complexity of Free Energy is Polynomial, by Vishesh Jain, Frederic Koehler, and Elchanan Mossel (arXiv). This paper studies the classic question of estimating (the logarithm of) the partition function of a Markov Random Field, a highly-studied topic in theoretical computer science and statistical physics. As the title suggests, the authors show that the vertex sample complexity of this quantity is polynomial. In other words, randomly subsampling a \(\mathrm{poly}(1/\varepsilon)\)-size graph and computing its free energy gives a good approximation to the free energy of the overall graph. This is in contrast to more general graph properties, for which the vertex sample complexity is super-exponential in \(1/\varepsilon\).
Entropy Rate Estimation for Markov Chains with Large State Space, by Yanjun Han, Jiantao Jiao, Chuan-Zheng Lee, Tsachy Weissman, Yihong Wu, and Tiancheng Yu (arXiv). Entropy estimation is now quite well-understood when one observes independent samples from a discrete distribution — we can get by with a barely-sublinear sample complexity, saving a logarithmic factor compared to the support size. This paper shows that these savings can also be enjoyed in the case where we observe a sample path of observations from a Markov chain.
Local moment matching: A unified methodology for symmetric functional estimation and distribution estimation under Wasserstein distance, by Yanjun Han, Jiantao Jiao, and Tsachy Weissman (arXiv). Speaking more generally of the above problem: there has been significant work into estimating symmetric properties of distributions, i.e., those which do not change when the distribution is permuted. One natural method for estimating such properties is to estimate the sorted distribution, then apply the plug-in estimator for the quantity of interest. The authors give an improved estimator for the sorted distribution, improving on the results of Valiant and Valiant.
INSPECTRE: Privately Estimating the Unseen, by Jayadev Acharya, Gautam Kamath, Ziteng Sun, and Huanyu Zhang (arXiv). One final work in this area — this paper studies the estimation of symmetric distribution properties (including entropy, support size, and support coverage), but this time while maintaining differential privacy of the sample. By using estimators for these tasks with low sensitivity, one can additionally obtain privacy at little or no additional cost over the non-private sample complexity.
A quiet month, with only two papers. Perhaps the calm before the storm? Please let us know in the comments if something slipped under our radar.
Agreement tests on graphs and hypergraphs, by Irit Dinur, Yuval Filmus, and Prahladh Harsha (ECCC). This work looks at agreement tests and agreement theorems, which argue that if one checks if a number of local functions agree, then there exists a global function which agrees with most of them. This work extends previous work on direct product testing to local functions of higher degree, which corresponds to agreement tests on graphs and hypergraphs.
Testing Conditional Independence of Discrete Distributions, by Clément L. Canonne, Ilias Diakonikolas, Daniel M. Kane, and Alistair Stewart (arXiv). This paper focuses on testing whether a bivariate discrete distribution has independent marginals, conditioned on the value of a tertiary discrete random variable. More precisely, given realizations of \((X, Y, Z)\), test if \(X \perp Y \mid Z\). Unconditional independence testing (corresponding to the case when \(Z\) is constant) has been extensively studied by the community, with tight upper and lower bounds showing that the sample complexity has two regimes, depending on the tradeoff between the support size and the accuracy desired. This paper gives upper and lower bounds for this more general problem, showing a rich landscape depending on the relative value of the parameters.
scale a number between a range [duplicate]
How to normalize data to 0-1 range? (8 answers)
How to normalize data between -1 and 1? (2 answers)
Proper way to scale feature data (1 answer)
Normalize sample data for clustering (2 answers)
What's the difference between Normalization and Standardization? (5 answers)
I have been trying to build a system that can scale a number so that it falls between two bounds. I have been stuck on the mathematical part of it.
What I'm thinking is: let's say the number 200 is to be normalized so that it falls within a range, say 0 to 0.66, or 0.66 to 1, or 1 to 1.66. The range is variable as well.
normalization scales
Saneesh B
Do you know the (theoretical) minimum and maximum of the original values? Or can we use the entire range of available values to obtain these values (e.g. would you accept a max(x) and min(x) in 'the maths')? – IWS May 23 '17 at 9:07
yes, That would involve a couple of additional loops but can be found. – Saneesh B May 23 '17 at 9:08
There are many more duplicates. Here's a search: stats.stackexchange.com/…. – whuber♦ May 23 '17 at 13:39
Your scaling will need to take into account the possible range of the original number. There is a difference if your 200 could have been in the range [200,201] or in [0,200] or in [0,10000].
So let
$r_{\text{min}}$ denote the minimum of the range of your measurement
$r_{\text{max}}$ denote the maximum of the range of your measurement
$t_{\text{min}}$ denote the minimum of the range of your desired target scaling
$t_{\text{max}}$ denote the maximum of the range of your desired target scaling
$m\in[r_{\text{min}},r_{\text{max}}]$ denote your measurement to be scaled
$$ m\mapsto \frac{m-r_{\text{min}}}{r_{\text{max}}-r_{\text{min}}}\times (t_{\text{max}}-t_{\text{min}}) + t_{\text{min}}$$
will scale $m$ linearly into $[t_{\text{min}},t_{\text{max}}]$ as desired.
To go step by step,
$ m\mapsto m-r_{\text{min}}$ maps $m$ to $[0,r_{\text{max}}-r_{\text{min}}]$.
Next, $$ m\mapsto \frac{m-r_{\text{min}}}{r_{\text{max}}-r_{\text{min}}} $$
maps $m$ to the interval $[0,1]$, with $m=r_{\text{min}}$ mapped to $0$ and $m=r_{\text{max}}$ mapped to $1$.
Multiplying this by $(t_{\text{max}}-t_{\text{min}})$ maps $m$ to $[0,t_{\text{max}}-t_{\text{min}}]$.
Finally, adding $t_{\text{min}}$ shifts everything and maps $m$ to $[t_{\text{min}},t_{\text{max}}]$ as desired.
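For concreteness, here is a small Python sketch of the mapping above (my own code, not part of the original answer; the function name and the example numbers are arbitrary):

```python
def rescale(m, r_min, r_max, t_min, t_max):
    """Linearly map a measurement m from [r_min, r_max] into [t_min, t_max]."""
    return (m - r_min) / (r_max - r_min) * (t_max - t_min) + t_min

# The question's number 200, assuming it was measured on [0, 10000]:
print(rescale(200, 0, 10000, 0, 0.66))   # -> 0.0132
print(rescale(200, 0, 10000, 1, 1.66))   # -> 1.0132
```

Note that the result depends on the assumed original range: the same 200 measured on [0, 200] would map to the top of the target range instead.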
Stephan Kolassa
Great explanation! I'm in a scenario, where I want to minimize $m$ and want $m$ to take the value of $t_\text{min}$ if $m=r_\text{max}$ and to take the value of $t_\text{max}$ if $m=r_\text{min}$. Thanks to your explanation, this was easy to achieve: Simply swap the numerator to $r_\text{min} - m$ and add $t_\text{max}$ instead of $t_\text{min}$ in the end: $$ m\mapsto \frac{r_{\text{min}}-m}{r_{\text{max}}-r_{\text{min}}}\times (t_{\text{max}}-t_{\text{min}}) + t_{\text{max}}$$ – CGFoX Feb 13 '20 at 8:19
In general, to scale your variable $x$ into a range $[a,b]$ you can use: $$ x_{normalized} = (b-a)\frac{x - min(x)}{max(x) - min(x)} + a $$
drgxfs
Technical feasibility of decrypting https by replacing the computer's PRNG
Intel has an on-chip RdRand function which supposedly bypasses the normally used entropy pool for /dev/urandom and directly injects output. Now rumors are circulating that Intel works together with the NSA... and knowing that PRNGs are important for cryptography is enough to get this news spreading.
I personally don't believe this is true, so this is entirely hypothetical: Let's assume that indeed RdRand does what news says it does and that it indeed outputs randomness into a place where applications and libraries would look for cryptographically secure randomness.
How feasible is it that the chip's manufacturer can predict the output of this PRNG, given that it passed the tests of the people who adopted this RdRand instruction in kernels?
If the chip's manufacturer can predict the output of the PRNG to some extent, how feasible is it that they can decrypt any https traffic between two systems using their chips? (Or anything else requiring randomness, https is only an example.)
My reason for asking: http://cryptome.org/2013/07/intel-bed-nsa.htm
As said, I don't believe everything written here, but I find it very interesting to discuss the possibility technically.
tls randomness pseudo-random-generator cryptographic-hardware backdoors
Luc
I just removed all comments because they were not related to the topic of the question, or about cryptography at all. Please constrain yourself to clarifications and similar comments about the question. – Paŭlo Ebermann Jul 15 '13 at 18:33
This question appears to be off-topic because it is about Hardware backdooring of a system. Probably belongs elsewhere. – minar Jul 18 '13 at 6:13
Steve Blank thinks it's feasible, in fact, it sounds like he'd be surprised if there weren't a back door: steveblank.com/2013/07/15/… – Kinnard Hockenhull Jul 23 '13 at 3:00
1 - How feasible is it that the chip's manufacturer can predict the output of this PRNG, given that it passed the tests of the people who adopted this RdRand instruction in kernels?
A strong stream cipher's output is random and unpredictable to anyone not knowing the key. See where this is heading? Just because something looks random doesn't mean it's random.
2 - If the chip's manufacturer can predict the output of the PRNG to some extent, how feasible is it that they can decrypt any https traffic between two systems using their chips? (Or anything else requiring randomness, https is only an example.)
If you can predict the PRNG you can basically predict the secrets used for key exchange, and from that deduce the shared secret. Then you can simply decrypt the communication.
orlp
I think it wouldn't be that hard for hardware experts to reverse-engineer the RdRand algorithm so as to establish whether it is a legitimate TRNG or is doing something strange like generating some kind of keystream to introduce a backdoor (publishing their research, of course). Though again, as said in the question's link, entropy pool poisoning is rather difficult to exploit. – Thomas Jul 14 '13 at 4:29
@Thomas I heard that Linux uses RdRand directly in some places instead of just mixing it into the pool. In that case you don't need to poison the entropy pool. – CodesInChaos Jul 14 '13 at 8:53
Hmm I think I get it. I was thinking "how'd you make it seem random while knowing the output" and an encryption algorithm is the obvious answer. But then how do you know how far into the output stream the PRNG is? Well you don't really need to, deducing that is much faster (just reproduce the output stream and test everything) than cracking true randomness. Use the chip's serial + a static salt as first input, then make the PRNG's state persistent between reboots. That could probably totally work. Waiting for other answers, but I'll accept soon if none are added. Thanks for your answer! – Luc Jul 14 '13 at 13:16
Can this be tested? Is there a way to know if this is true? Can some test suite be written which can be run on our machines to test if RdRand function is doing something nefarious? – notthetup Jul 14 '13 at 15:06
@notthetup Yes and no. It might be possible that hardware experts are capable of opening the chip and study the circuit and deduce from that what RdRand does, but this is extremely hard to do due to the small scale (not to mention expensive). But if we assume Intel does have nefarious purposes with RdRand and built it around a cipher of which they know the key, then no we can't detect it from a test suite, if the used cipher is strong. A strong cipher is explicitly designed to be indistinguishable from random noise - in fact, it's considered to be broken if it is. – orlp Jul 14 '13 at 16:51
Have you heard of the strange story of Dual_EC_DRBG? A random number generator suggested and endorsed by the government that exhibits some very suspicious properties.
http://www.schneier.com/blog/archives/2007/11/the_strange_sto.html
From that article:
This is how it works: There are a bunch of constants -- fixed numbers -- in the standard used to define the algorithm's elliptic curve. These constants are listed in Appendix A of the NIST publication, but nowhere is it explained where they came from.
What Shumow and Ferguson showed is that these numbers have a relationship with a second, secret set of numbers that can act as a kind of skeleton key. If you know the secret numbers, you can predict the output of the random-number generator after collecting just 32 bytes of its output. To put that in real terms, you only need to monitor one TLS internet encryption connection in order to crack the security of that protocol. If you know the secret numbers, you can completely break any instantiation of Dual_EC_DRBG.
So the short answer is yes, it is possible to create a random number generating algorithm that has exploitable weaknesses, in particular weakness that only the creator of the algorithm may be able to exploit.
Joshua Kolden
This attack reminds me of the attack on the Netscape browser's PRNG back in the 1990s: cs.berkeley.edu/~daw/papers/ddj-netscape.html Today, people better understand the need for a secure PRNG, so it should be better tested. But all it would take is some espionage and skullduggery, and a weak algorithm could be dropped into the chip after testing. Unless a production chip is retested, it would go unnoticed. It's like the old joke: how do you really know when a random number generator is broken? – John Deters Jul 15 '13 at 22:03
Also it's worth noting that a similar method has the added property that even the chip manufacturer would not necessarily know they were introducing a back door. In their eyes they are just using a government approved algorithm, and would naturally assume it is secure. – Joshua Kolden Jul 16 '13 at 2:10
Dual_EC_DRBG wasn't exploitable because of some weakness only the author knew how to exploit. In fact, as soon as it was released people concluded that its very design was such that kleptography was possible. – forest Jan 9 '19 at 11:21
As nightcracker correctly stated, any strong cryptographic PRNG will produce a stream of numbers that pass statistical tests.
However, the manufacturer has some constraints:
Independent tests will be performed on multiple processors that are set up in an identical manner, so each processor must produce different outputs.
Any given processor must produce a different output stream on each power up.
A simple scheme would be to use the processor serial number as an input to the PRNG to ensure different processors had different outputs, and have an undisclosed non-volatile register (e.g. a power-on counter) to ensure each boot was different.
A scheme such as this would probably resist any attempts at analysis using only its outputs: a standard cryptographic PRNG with a global secret (common across all processors), processor ID, and power-on counter as inputs. At this point, a large-scale surveillance infrastructure, on observing a new user, would only have to search a space of a few million possible processor IDs, plus a few hundred or a few thousand possible boot counts. This could all be easily precomputed too, so it would be very practical to hook into a surveillance infrastructure with today's computing power. (Once a user's processor ID and boot count have been identified, it is of course much easier to keep track of them than to do a full search each time.)
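To make the search-space argument concrete, here is a deliberately simplified Python sketch (my own illustration of the idea, not Intel's design or any real construction). The generator's only varying inputs are a serial number and a boot counter, mixed with a manufacturer-wide secret; its output looks random to anyone without the secret, yet with the secret one observed block suffices to brute-force the small (serial, boot) space:

```python
import hashlib
from itertools import product

GLOBAL_SECRET = b"manufacturer-wide secret"   # hypothetical

def prng_block(serial: int, boot: int, counter: int) -> bytes:
    """One 32-byte output block of the hypothetical backdoored generator."""
    msg = (GLOBAL_SECRET + serial.to_bytes(4, "big")
           + boot.to_bytes(4, "big") + counter.to_bytes(4, "big"))
    return hashlib.sha256(msg).digest()   # statistically random-looking

def recover_state(first_block: bytes, max_serial: int, max_boot: int):
    """Knowing the secret, enumerate the (serial, boot) space from one block."""
    for serial, boot in product(range(max_serial), range(max_boot)):
        if prng_block(serial, boot, 0) == first_block:
            return serial, boot   # all future output is now predictable
    return None
```

With a few million serial numbers times a few thousand boot counts, this loop is entirely precomputable, which is the point made above; without the secret, the outputs are indistinguishable from random.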
However, the odds are that Intel aren't betting their international sales solely on another fab not having the inclination to open up their chip and check this (e.g. ARM would have a strong incentive to identify such foul play). Update: but they could be compelled by the government to put such a back door in whether it is in their commercial interests or not. Update 2: They, or their fab, could also use stealthy dopant-level modifications to make it extremely hard to detect the modifications, even by someone with Intel-like capabilities (see the first case study, Chapter 3 in the referenced paper).
I'm not an expert in microprocessor hardware, so can't comment on techniques that might introduce biases or other predictable features without being detected. One possible backdoor might be to severely constrain the next requested output from RdRand only after performing a computation such as would be needed to verify the authenticity of a certificate signed by one of a set of long-lived root CAs (perhaps China's CNNIC would be a useful candidate?).
Being able to predict that the output of RdRand is within a searchable subset of possible outputs doesn't alone mean an attacker could break the system - it depends how that output is used. For example if the consuming application uses it as just another optional input to its entropy pool, then being able to predict that input means the user is no better than without RdRand, but equally is not worse off.
CodesInChaos points out that Linux has used RdRand directly at times; Intel are also encouraging direct use of the instruction. So it is not unreasonable to imagine a browser or other TLS client that uses output from RdRand as its sole source of entropy. If this is the case then an observer who can predict the output from RdRand can indeed compromise your security.
Most cryptosystems fail if the entropy input can be predicted, including SSL/TLS.
To pick a couple of examples in use by popular websites from the many possible TLS key exchange options:
My TLS connection to gmail currently uses Ephemeral Elliptic Curve Diffie-Hellman (ECDHE; I believe this is Google's default these days if your browser supports it). If an observer can enumerate the possible random numbers used by my browser, then the observer knows my secret key $d$, so can compute the shared secret $x_k$ by calculating $x_k = dQ$, where Q is the Google server's ephemeral public key (and vice versa if the observer can predict Google's secret key). $x_k$ is used as the premaster secret - the secret from which all other secrets are derived, so obtaining it breaks all of the assurances that TLS aims to provide.
Your TLS connection to Wikipedia uses an RSA based key exchange, as that's all they support (e.g. mine is currently TLS_RSA_WITH_RC4_128_SHA). With these ciphersuites, the premaster secret is generated by the client using its random number generator, and sent to the server. Being able to predict the random number directly gives an attacker the secrets they need.
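To illustrate the first (ECDHE) example, here is a short sketch using the Python cryptography package (my own example; the curve and variable names are arbitrary, and recent versions of the package are assumed). Anyone who can predict the client's ephemeral private scalar can recompute the premaster secret from data visible on the wire:

```python
from cryptography.hazmat.primitives.asymmetric import ec

# Client and server generate ephemeral key pairs; the client's randomness
# comes from the RNG under suspicion.
client_priv = ec.generate_private_key(ec.SECP256R1())
server_priv = ec.generate_private_key(ec.SECP256R1())

# Normal handshake: premaster secret from own private key + peer's public key.
premaster = client_priv.exchange(ec.ECDH(), server_priv.public_key())

# An observer who predicted the client's private scalar d rebuilds the key
# and derives the same secret using only the server's public key.
d = client_priv.private_numbers().private_value
observer_priv = ec.derive_private_key(d, ec.SECP256R1())
assert observer_priv.exchange(ec.ECDH(), server_priv.public_key()) == premaster
```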
Michael
Useful information, thanks for your answer! And you mention a very legitimate question: What's in it for Intel besides becoming good mates with the NSA? Merely being good mates doesn't bring them much profit. Yet they could still say it's secure crypto, and it is as long as nobody knows the initial state. They might just get away with it in the corporate world. But I'm just guessing really. – Luc Jul 14 '13 at 22:54
@Luc, can you say "government contracts"? Sure, I knew you could. – John Deters Jul 15 '13 at 22:37
I am the designer of the random number generator that is behind the Intel RdRand instruction.
It isn't feasible, and we cannot predict the output. It passes the tests because it is a cryptographically secure random number generator being fed by a 2.5 Gbps TRNG source.
The manufacturer would not intercept traffic at this point. The plaintext is present on the system. The attacker would attack the place in the system where the plaintext resides, e.g., in the network stack where the encryption/decryption of the link cipher takes place, or in the key establishment code. This is more the realm of traditional software vulnerability attacks. There's no need to pull off the very difficult task of injecting known non-random numbers into an RNG and trying to make sure it gets used on the right cycle by the right instruction in the right bit of the software such that you can reverse engineer the key.
David Johnston
Just playing devils advocate: A cryptographically secure RNG being fed about no input (or the current timestamp + device serial number or such) would also pass the same tests, wouldn't it? – Paŭlo Ebermann Sep 10 '13 at 19:19
@PaŭloEbermann Yep, it would. A backdoored RNG would be as simple as combining a very weak (say 32-bit) random value with a master key. Without knowledge of the master key, the output seems completely random and is unbreakable. With knowledge of the master key, the output has a keyspace of only $2^{32}$. There would be no way to tell based on the output only if that were the case. – forest Dec 4 '18 at 11:21
There are two ways I can see for the RNG to be cooked. (For the record, I don't see any reason at all to suspect this of Intel, but I also think prudent cryptographic design requires us to think through what would happen if our RNG were flawed or backdoored.)
First, your RNG might not have enough entropy. That's what got the Netscape RNG many years ago, and also what is apparently behind all those RSA keys with shared prime factors that Lenstra et al. and Heninger et al. found a couple of years ago. And it's what got the Taiwanese smart cards that have been in the news more recently. If you don't get enough entropy, then you will not get any security. You can imagine the Intel RNG having some kind of flaw or intentional weakness where it never gets more than, say, 40 bits of entropy, and then generates outputs using CTR-DRBG with AES (from SP 800-90).
Second, your RNG could have an actual trapdoor or an unintentional weakness. That's what's alleged to have happened with the Dual EC DRBG in SP 800-90. If an attacker knows the relationship between P and Q, and sees one full DRBG output, he can recover the future state of the DRBG and predict every future output. You can imagine something like this--the Intel RNG uses CTR-DRBG, but if it wired all but 40 bits of the key to some known values, then the outputs would pass statistical tests, but would be weak to an attacker who knew those bits.
In either case, if you used the RNG outputs directly, you would be vulnerable. In the first case, anything you do with the RNG outputs that doesn't add in some other entropy will be vulnerable, since the attacker can just guess the entropy and then generate everything himself. In the second case, you'd need to give the attacker a little output to run his attack on, but (depending on details of the backdoor) a couple IVs for the encryption algorithm might be enough to leak the secret.
The best way to avoid both of these potential attacks is to combine the Intel RNG outputs with OS-collected entropy. There are several ways to do this, but I think the best one is something you can find in SP 800-90--you can seed an RNG, and then keep generating outputs with prediction resistance.
a. Use /dev/random to get at least 128 bits of entropy, concatenate that with a few outputs from RDRAND, and use the result to seed a software instance of CTR-DRBG using AES.
b. Each time you get a request to generate some bits of output, you do the following:
(i) Get an output from RDRAND.
(ii) Reseed the CTR-DRBG instance using that output as the new seed material. (The previous state's entropy is preserved by reseeding, so this can't weaken what you already had.)
(iii) Generate your outputs.
If /dev/random gives you 128 bits of entropy and CTR-DRBG is secure, then you could let your attacker choose every value you get from RDRAND, and he couldn't make your random numbers any less secure.
If RDRAND is good, then the attacker could choose the bits you get from /dev/random and your random numbers would still be secure.
Alternatively, you could simply XOR RDRAND outputs with /dev/urandom outputs. It's easy to see that this can't be any weaker than the stronger of the two, as long as neither one is able to predict the other's values. That's discussed in the draft SP 800-90C.
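As a minimal sketch of the XOR-combiner option (my own code; rdrand_bytes is a stand-in for however the application actually reads the hardware RNG, simulated here so the example runs):

```python
import os

def rdrand_bytes(n: int) -> bytes:
    # Placeholder for n bytes of RDRAND output; simulated with os.urandom here.
    return os.urandom(n)

def combined_bytes(n: int) -> bytes:
    a = os.urandom(n)      # OS-collected entropy (/dev/urandom)
    b = rdrand_bytes(n)    # hardware RNG output
    # XOR combiner: the result is unpredictable as long as at least one of the
    # two inputs is unpredictable and they are generated independently.
    return bytes(x ^ y for x, y in zip(a, b))

print(combined_bytes(16).hex())
```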
Disclaimer: I'm one of the authors of the 800-90 standards, and the designer of CTR-DRBG.
$\begingroup$ "you could let your attacker choose every value you get from RDRAND, and he couldn't make your random numbers any less secure" is not true because the CPU can see your /dev/random pool. See DJB's blog.cr.yp.to/20140205-entropy.html $\endgroup$ – Navin Mar 14 '17 at 18:24
Differential connectivity of splicing activators and repressors to the human spliceosome
Martin Akerman1,2,
Oliver I. Fregoso1,3,6,
Shipra Das1,
Cristian Ruse1,7,
Mads A. Jensen1,8,
Darryl J. Pappin1,
Michael Q. Zhang4,5 &
Adrian R. Krainer1
Genome Biology volume 16, Article number: 119 (2015)
During spliceosome assembly, protein-protein interactions (PPI) are sequentially formed and disrupted to accommodate the spatial requirements of pre-mRNA substrate recognition and catalysis. Splicing activators and repressors, such as SR proteins and hnRNPs, modulate spliceosome assembly and regulate alternative splicing. However, it remains unclear how they differentially interact with the core spliceosome to perform their functions.
Here, we investigate the protein connectivity of SR and hnRNP proteins to the core spliceosome using probabilistic network reconstruction based on the integration of interactome and gene expression data. We validate our model by immunoprecipitation and mass spectrometry of the prototypical splicing factors SRSF1 and hnRNPA1. Network analysis reveals that a factor's properties as an activator or repressor can be predicted from its overall connectivity to the rest of the spliceosome. In addition, we discover and experimentally validate PPIs between the oncoprotein SRSF1 and members of the anti-tumor drug target SF3 complex. Our findings suggest that activators promote the formation of PPIs between spliceosomal sub-complexes, whereas repressors mostly operate through protein-RNA interactions.
This study demonstrates that combining in-silico modeling with biochemistry can significantly advance the understanding of structure and function relationships in the human spliceosome.
The major spliceosome is a biological machine that excises >99 % of human introns. It is composed of approximately 150–300 proteins [1–3], depending on the stage of the splicing reaction and the affinity of proteins for their pre-mRNA substrates [2]. A subset of proteins associate with small nuclear RNAs (snRNAs) to form five small nuclear ribonucleoprotein complexes (snRNPs): U1, U2, U4, U5, and U6. The snRNPs, together with other proteins, constitute the catalytic core of the spliceosome [2, 3]. The spliceosome forms step-wise on the pre-mRNA [2], through sequential rearrangements in which various protein and RNP complexes form and disassemble distinct protein-protein interactions (PPIs), in addition to RNA-RNA and RNA-protein interactions. These transformations, some of which require ATP hydrolysis, are the driving force of splicing catalysis [2, 3].
The structural plasticity of the spliceosome makes it susceptible to regulation, allowing for the skipping or inclusion of alternative exons or exon segments [2], known as alternative splicing. More than 90 % of human primary transcripts undergo alternative splicing [4, 5]. Splicing efficiency and alternative splicing regulation are controlled by trans-acting splicing factors, which bind to cis-acting elements on the pre-mRNA to either activate or repress the selection of particular splice sites [6].
SR proteins [7] and hnRNPs [8] are two important families of splicing factors. The SR proteins SRSF1-7 typically activate exon inclusion through sequence-specific binding to exonic enhancers [7]. SRSF9-11 share sequence and structure similarity with the rest of the SR family, but they uncharacteristically act as repressors [7]. The hnRNPs are also diverse: a recent study [9] addressing the sequence specificity and splicing activity of five hnRNPs using high-throughput techniques, concluded that hnRNPF, H1, M, and U are primarily activators, whereas hnRNPA1 and A2B1 are primarily repressors.
The regulation of alternative splicing by activators and repressors has been studied by a variety of methods, revealing RNA-binding patterns, cooperative effects, and regulatory targets of particular splicing factors. Although the functions of these factors can be studied in isolation, activators and repressors must work coordinately with the core spliceosome machinery responsible for constitutive and alternative splicing [9–14].
To understand the contextual differences shaping the behavior of activators and repressors, we assembled and studied the PPI networks of all SR proteins and hnRNPs. We conducted a top-down study in three stages: first, we predicted PPIs in the human spliceosome through a probabilistic model that integrates annotated PPIs with gene-expression microarray profiles; second, we implemented the resulting interactome network to investigate the connectivity of SR proteins and hnRNPs to the rest of the spliceosome; and third, we validated the structure of the network by performing immunoprecipitation and mass spectrometry (IP-MS) of two prototypical splicing factors: the activator SRSF1 and the repressor hnRNPA1.
By regarding spliceosomal PPIs as probabilistic (rather than deterministic) events, our model uncovered novel information about the involvement of SR proteins and hnRNPs in splicing regulation. We found that a splicing factor's property as an activator or repressor can be predicted from its overall connectivity to the spliceosome. Whereas activators (from either the SR or hnRNP families) form several PPIs showing prominent centrality in the spliceosome, repressors are peripheral, and therefore loosely connected to other spliceosomal proteins. We confirmed these observations through IP-MS, and demonstrated that many hnRNPA1 interactions are RNA-dependent, whereas SRSF1 does not require RNA to remain bound to spliceosomal proteins. We discovered that SRSF1 forms multiple PPIs with the early-acting U2-snRNP-specific SF3 complex, which we confirmed by in-vitro pull-down experiments. Finally, by combining our data with previously reported co-regulatory interactions, we demonstrate that hnRNPs are distributed in at least two highly interconnected clusters forming regulatory collaborations, consistent with the large cooperativity and functional interchangeability among proteins of this family.
A probabilistic model of the human spliceosome
The amount of high-quality yeast two-hybrid (Y2H) data has grown remarkably in the last two decades [15], as has the number of analytical methods to interpret PPI networks. Probabilistic modeling is an increasingly popular approach to interrogate PPI data, allowing the integration of diverse types of evidence to prioritize biological associations and demote spurious PPIs [16–18]. To investigate the differential connectivity and relative network occupancy of spliceosomal proteins, we modeled PPIs in the spliceosome as probabilistic events, and built a Bayesian probability model using transitivity and co-expression as supporting evidence (Fig. 1 and Additional file 1). In graph theory, transitivity (also known as clustering coefficient) measures the extent to which a pair of nodes in a network share common interactions with other nodes [19]. This concept was successfully applied to study the organization of other biological networks, such as metabolic networks [20]. In a PPI network, the existence or lack of third-party PPIs can serve as evidence to predict new PPIs or reject false PPIs [21].
Workflow of the Bayesian probability model to predict protein-protein interactions. Example of how the probability of direct interaction (Pin) between SRSF1 and TRA2B was calculated. a We first extracted all known PPIs formed by SRSF1 or TRA2B from a PPI database. b We used the number of shared PPIs between both proteins (blue nodes) and exclusive PPIs (white nodes) to calculate the Transitivity (T). c We then extracted their co-expression profile from the BioGPS microarray database and computed the Pearson correlation coefficient (C). d By transforming the calculated values of T and C through conditional-probability models, we estimated the probability that both T and C may occur in a true PPI network (e = 1, left network) and a false (that is, shuffled) interactome (e = 0, right network). e Finally, the probability Pin was calculated using the Bayes rule, as the posterior probability that SRSF1 and TRA2B directly bind each other, given T and C as evidence
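A schematic Python version of the calculation described in the legend might look as follows. This is only a sketch: it assumes a naive-Bayes style combination of the two pieces of evidence, the transitivity normalization shown is one common choice rather than necessarily the paper's exact definition, and the likelihood functions would in practice be estimated from the curated (true) and shuffled (false) interactomes, as in panels d and e.

```python
def transitivity(neighbors_a: set, neighbors_b: set) -> float:
    """Shared-neighbor score T for two proteins (one common normalization)."""
    union = neighbors_a | neighbors_b
    return len(neighbors_a & neighbors_b) / len(union) if union else 0.0

def p_in(T: float, C: float, lik_true, lik_false, prior: float = 0.5) -> float:
    """Posterior P(e=1 | T, C) via Bayes' rule.

    lik_true(T, C)  ~ P(T, C | e=1), estimated from the annotated PPI network
    lik_false(T, C) ~ P(T, C | e=0), estimated from a shuffled interactome
    """
    num = lik_true(T, C) * prior
    den = num + lik_false(T, C) * (1.0 - prior)
    return num / den if den > 0 else 0.0
```

The co-expression term C would simply be the Pearson correlation of the two proteins' expression profiles across the microarray tissues (e.g. via numpy.corrcoef).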
Transitivity is appropriate to study a macromolecular complex like the spliceosome, because it rewires PPIs within the boundaries of neighboring proteins. The spliceosome's structure and function are dictated by the assembly and dissociation of sub-complex units, which are necessary for accurate splicing [2, 3]. It is therefore plausible that spliceosomal proteins remain within the microenvironment of one or a few sub-complexes, so as to maintain the integrity of the entire system.
We made the further assumption that a pair of proteins has to be co-expressed in order to form a PPI. This should ensure a reduction of the number of false positives while emphasizing functionally related PPIs. To this end, we calculated co-expression profiles from microarray data and penalized protein pairs that showed poor co-expression.
To generate this probabilistic model of the spliceosome (Fig. 1 and Additional file 1), we calculated the interaction probability (P in ) of 198,135 PPIs formed by 630 splicing-related proteins (Additional file 2: Table S1A) using as evidence 37,231 PPIs and 31,363 co-expression profiles (Additional file 2: Table S1B). We collected these probabilities into an adjacency matrix, showing relationships between all spliceosomal proteins (Fig. 2a, Additional file 2: Table S1C). We used an approach similar to that of Ravasz et al. [20] to visualize associations between topological and functional modules through hierarchical clustering, followed by functional enrichment analysis. Accordingly, we used Pearson correlation coefficients between the binding profiles of each protein (based on P in scores against the remaining 629 proteins) as a distance metric for hierarchical clustering. We then examined the resulting clusters against a custom list of spliceosome-specific functions (Fig. 2c, Additional file 2: Table S1D) using the hypergeometric test.
Assembly of the PS network. The flowchart illustrates the identification of functional clusters (FC) of physically/functionally related proteins within the PS network. a The adjacency matrix of P in values for all possible protein pairs was processed with the Hierarchical Clustering algorithm, using Pearson correlation as a distance metric. Clusters were automatically assigned using the Genesis program (every cluster is represented by a different color). b Assembly of the PPI network, showing in this example PPIs with cutoff P in ≥ 0.9. c q-values resulting from the hypergeometric test to assess the relationship between every cluster and each functional category. Only q <0.1 are shown. The size of the bubble is inversely proportional to the q-value (bottom right). Functional terms were divided into four categories, and represented as a tree structure. The asterisks indicate groups of proteins that are exclusive to a particular category (for example, C-complex-specific proteins). The colored circles on the right correspond to the clusters identified in A. d A network of FCs. FCs are represented as squares labeled with the most significantly enriched functional categories. The square size is proportional to the number of proteins in the FC. Edges are shown for connections with CIJ score >0.2. E.T. = Export and Turnover
We identified 10 different functional clusters (numbered FC1-10) and determined the relative position of the clusters in the spliceosome by scoring cluster-cluster interactions among FCs (Fig. 2d). We refer to the resulting model as the 'probabilistic spliceosome' or PS-network. We used this PS-network as a contextual framework to investigate the differential connectivity and relative location of spliceosomal proteins, with a focus on SR proteins and hnRNPs.
Probabilistic vs. deterministic spliceosome
Y2H datasets are often applied to the construction of deterministic PPI networks (DET), strictly based on direct observations from the data. This approach is subject to multiple errors, due to stochastic undersampling or spurious interactions [22]. One way to reduce false positives in DET networks is to prioritize reproducible PPIs across Y2H experiments [23]. However, among all spliceosomal proteins, PPIs formed by SR proteins and hnRNPs are relatively hard to reproduce (Additional file 3: Table S2A). These selective splicing regulators act by recruiting or blocking spliceosomal sub-complexes (for example, snRNPs) via interactions with proteins or RNA. They also participate in additional processes, such as mRNA export and surveillance or translation regulation [7, 8, 24], and thus they may form transient PPIs with the spliceosome. To circumvent the barrier posed by limited support from Y2H PPIs, we studied the SR protein and hnRNP interactomes through probabilistic modeling.
We conducted a cross-validation analysis to compare the predictability of the PS-network to that of a deterministic network (DET). We used the PPI network from [23] as a test set, and the Human Protein Reference Database (HPRD, [25]) as a training set (see Methods for details). The PS-network was trained and thresholded at P in ≥0.001, P in ≥0.01, P in ≥0.1, P in ≥0.5, and P in ≥0.9. Direct PPIs present in the test set were removed from the training set, leaving neighboring PPIs as the sole evidence for probabilistic prediction. We quantified the effect of ignoring direct PPIs for transitivity scoring, and observed that their exclusion left 99.8 % of the estimated P in probabilities unaffected; only 80/198,135 P in scores showed residuals ≥0.1 (Additional file 4: Figure S1). Hence, in this work we treat direct and neighboring PPIs equally. Finally, to predict DET PPIs, we counted the net overlap between direct PPIs in the training and test sets. The resulting networks are shown in Fig. 3a.
Predictability of the probabilistic spliceosome. a PS-networks visualized at different cutoffs: P in ≥0.001, P in ≥0.01, P in ≥0.1, P in ≥0.5, and P in ≥0.9 along with a deterministic network of PPIs detected by Y2H. b-d Cross-validation results. b Predictability by protein family. The height of the column indicates the percent of correctly predicted PPIs for SR proteins (red), hnRNPs (blue), snRNPs (purple), and LSm proteins (yellow). c Sensitivity (dark gray) and specificity (light gray). d Matthews correlation coefficient. e Distribution of P in values in the PS-network. Dark gray indicates values above the threshold P in ≥0.1. f Independent contribution of transitivity and co-expression. The plot shows the percent of correctly predicted PPIs for the full model, using: a combination of transitivity and co-expression (black); transitivity only (dark gray); co-expression (light gray); and as predicted by chance (white)
We tested the ability of the PS- and DET networks to predict transient SR and hnRNP PPIs, as compared to the constitutive interactions of core spliceosomal snRNP and LSm family proteins. Interestingly, SR and hnRNP PPIs could only be predicted using the PS-network. In contrast, core spliceosomal PPIs were detected using either the PS-network or DET network (Fig. 3b, Additional file 3: Table S2B), probably because they are obligatory for spliceosome assembly and therefore easier to detect.
When considering the spliceosome as a whole, probabilistic modeling still outperformed the deterministic approach. For example, the prediction sensitivity of the PS-network was 0.55 using a moderate threshold (P in ≥0.1) and 0.22 with a stringent threshold (P in ≥0.9). In contrast, the DET network predicted PPIs with a sensitivity of 0.1 (Fig. 3c, Additional file 3: Table S2C). The PS-network predicted up to six times more true positives, with half the number of false negatives compared to the DET network (Additional file 3: Table S2C).
For both the PS-network and the DET network, the prediction specificity was very high (approximately 1), only decreasing to 0.85 and 0.53 when using permissive thresholds of P in ≥0.01 and P in ≥0.001, respectively (Fig. 3c, Additional file 3: Table S2C). High specificity is indicative of a low number of false positives. This could be due to the rigorous negative set used in this assay, consisting of pairs of proteins unreachable from each other in the network (see Methods).
We estimated the correlation between the trained and tested classifications using the Matthews correlation coefficient (MCC), a metric that varies between −1 and 1, 1 being equivalent to a perfect prediction. The PS-network's top MCC was 0.65 for P in ≥0.1, whereas DET's MCC was only 0.25 (Fig. 3d, Additional file 3: Table S2C), demonstrating a gain in predictability by using probabilistic modeling. Based on these results, we set P in ≥0.1 as the minimal threshold for PPI probabilities, which retained a total of 30,065 PPIs, accounting for less than 5 % of the data variance (Fig. 3e).
In summary, probabilistic modeling through the PS-network is an effective way to predict spliceosomal PPIs. It surpasses deterministic modeling in sensitivity and predictability, and performs with similar specificity. Probabilistic modeling proved especially critical for the study of SR proteins and hnRNPs, for which Y2H data availability is limited.
Functional clusters represent topological units
The proteins in the PS-network are not randomly distributed, but instead are clustered in topological modules or FCs (Fig. 2d, Additional file 2: Table S1C). A compacted version of the PS-network (Fig. 2d) shows that early (3 and 8) and late (4, 7, and 10) spliceosomal FCs, as well as pre- (1) and post-splicing FCs (2, 6), are physically separated and resemble functional modules. Of particular interest for this study, FC5 comprises a mixture of nine splicing activators (SRSF1-7, hnRNPU, and RBMX) and five splicing repressors (hnRNPA1, A2B1, C, H, and SRSF10). In addition, FC9 contains a number of activators (hnRNPs F, K, and SRSF9) and repressors (hnRNPL and PTBP1). The activator/repressor activities were assigned based on comprehensive aggregation of literature references derived from the RegRNA database [26] (Additional file 5: Table S3). Although both SR proteins and hnRNPs have been documented to function as activators or repressors depending upon the context, in each individual case one of these two functions occurs much more frequently, allowing for a clear cutoff to distinguish between both groups (Additional file 6: Figure S2).
To examine the topology of the PS-network, we computed the density, modularity, centralization, and average shortest-path length at different P in thresholds (Additional file 7: Table S4). As P in increased, the PS network became less dense, more modular, and decentralized. The use of transitivity in our model helped maintain the overall topology by rewiring PPIs only among third-party PPIs. In addition, examination of the independent contributions of transitivity and co-expression to the model revealed that transitivity was the most predictive feature (Fig. 3f). The PS-network at P in ≥0.9 was topologically identical to DET (Additional file 7: Table S4), indicating that the predicted PPIs are not promiscuous, but reflect selective rewiring of the network. Altogether, we observed that regulatory splicing factors are topologically independent from core spliceosomal proteins, in agreement with the widely accepted notion that the spliceosome is a modular system [2].
A splicing factor's activity can be predicted from its connectivity to the spliceosome
To identify regulators that play centralizing roles during spliceosome assembly, we computed two standard centrality metrics for every member of the PS-network: 'Degree', which is the number of interactions formed by a protein; and 'Betweenness', which reflects the extent to which a protein lies between other proteins, acting as a 'bridge' in the network. The balance between Degree and Betweenness can shape the modularity of the network, whereby high Degree tends to contribute to intramodular interactions that define biological processes, and high Betweenness contributes to intermodular connections linking different processes [27]. In the case of the spliceosome, we expect that proteins with high Degree are important for complex formation and stabilization, whereas those with high Betweenness control interactions among spliceosomal sub-complexes. We used the P in values on the edges to compute probability-weighted Degree and Betweenness for every protein in the network. We refer to these as wDEG and wBET, respectively (Fig. 4a, Additional file 8: Table S5).
Connectivity of splicing factors to the human spliceosome. a Relationship between the Weighted Degree (wDEG) and Betweenness (wBET) among spliceosomal proteins. Each spliceosomal protein is represented as a bubble. The bubble's position indicates wDEG and wBET scores. The size of the bubble denotes wDEG or wBET statistical significance (−log10 of the minimum q-value). The color of the bubble specifies the FC to which it belongs (same color code as Fig. 2). White bubbles correspond to unclustered proteins. Black dots represent the wDEG and wBET scores of 1,000 randomized PS networks. Names of the top 20 statistically significant proteins are shown. For more information, see Additional file 8: Table S5. b High-connectivity spliceosomal proteins. Top 20 proteins for wDEG and/or wBET, based on rankings from Additional file 8: Table S5. The yellow square contains proteins in the top 20 for both wDEG and wBET; the blue and red squares contain top scorers for wDEG or wBET, respectively. Both X and Y axes show ranks in logarithmic scale. c PPIs at P in ≥0.9 formed by the designated proteins are shown as red edges (node colors as in Fig. 2). The pie charts indicate the proportion of interactions at P in ≥0.9 formed between each protein and members of its own cluster (black), other clusters (white), and unclustered proteins (gray). For additional information, see Additional file 10: Figure S3. d wDEG and wBET for splicing activators (red) and repressors (blue) of the SR and hnRNP families, according to annotations in the RegRNA database ([26], Additional file 5: Table S3). The traced square indicates a speculative boundary separating activators from repressors
A common property of biological (that is, scale-free) networks is the presence of a few nodes with outstanding Degree and/or Betweenness, called hubs, which tend to be encoded by essential genes [28]. To identify hubs that can potentially shape the spliceosome's modularity, we focused on the top 20 high-connectivity proteins ranked by minimum (wDEG,wBET) q-values. Interestingly, many of these proteins are known to play central roles in splicing, and 8/20 have been implicated in diseases, such as cancer (Additional file 9: Table S6). We observed that 10/20 of these proteins were ranked among the top 20 for both wDEG and wBET, 8/20 were top 20 scorers for wBET but not wDEG, and only 2/20 scored with high wDEG and low wBET (Fig. 4b, ranks in Additional file 8: Table S5). This result suggests that spliceosomal hubs often play a dual role of bridging among and within topological modules. For instance, high connectivity proteins tend to form PPIs with multiple FCs, including but not limited to their own FC. Conversely, proteins which scored low in both wDEG and wBET, such as hnRNP A1, showed skewed interaction profiles: the vast majority of PPIs involving hnRNPA1 were formed with proteins from its own FC (Fig. 4c, Additional file 10: Figure S3).
Of note, seven of the top 20 high-connectivity proteins were SR proteins or hnRNPs, including five known splicing factors. When addressing their centrality, we observed a clear trend: splicing factors labeled activators showed high wDEG and wBET, whereas repressors scored very low for both (Fig. 4d). With the exception of SRSF10 and hnRNPH1, no splicing repressor scored higher than wDEG = 60 and wBET = 1,000. Conversely, splicing activators were above these values, with the exception of SRSF7 and SRSF9. Thus, the connectivity of splicing factors to the spliceosome is a strong predictor of their regulatory activity. Moreover, these findings suggest that activators and repressors communicate with the spliceosome's machinery with different levels of closeness to perform their regulatory tasks.
IP-MS preparations are enriched in high-probability interactions
To validate the predictability of our model, we performed IP-MS of the prototypical splicing activator SRSF1 and splicing repressor hnRNPA1 (Additional file 11: Figure S4A, B), using T7-tagged constructs that accurately replicate the activities of endogenous SRSF1 and hnRNPA1 (Additional file 11: Figure S4C-M). IP-MS is a useful technique to identify large multimeric protein assemblies. Unlike Y2H, which is designed to capture direct PPIs, IP-MS identifies mixed populations of proteins held in physical proximity through direct or indirect interactions [29].
Because the spliceosome is a ribonucleoprotein complex, we distinguished direct PPIs from PPIs stabilized or mediated by RNA, using differential nuclease treatment [29], followed by IP-MS (Additional file 11: Figure S4N, O). We then classified PPIs as nuclease-resistant (nucR) or nuclease-sensitive (nucS).
We identified 203 significantly enriched proteins that co-purified with SRSF1, and 152 with hnRNPA1 (114 and 60, respectively, were nucR) (Additional file 12: Table S7). In all cases, we detected a mixture of spliceosomal and non-spliceosomal proteins, such as histones, ribosomal, cytoskeletal, polynucleotide-binding, and other proteins (Fig. 5a). However, high-probability PPIs were dominated by spliceosomal proteins (Additional file 13: Figure S5A).
High-probability PPIs enriched by IP-MS. Two splicing factors, SRSF1 and hnRNPA1, were used as baits for IP-MS. The identified proteins were overlaid with the PS-network to identify meaningful patterns of enrichment. a The most frequent categories of ligands identified by IP-MS with (blue) or without (green) nuclease treatment. b P in distribution of bait-ligand interactions for SRSF1 and hnRNPA1. The stacked bars illustrate, for every P in interval, the proportion of IP-MS ligands recovered with (blue) or without (green) nuclease versus the remaining spliceosomal proteins not identified by IP-MS (gray). The values were normalized by the total number of proteins in every group. c Similar to b, comparing P in values from nuclease-resistant bait-ligand interactions (blue) to those of ligand-ligand interactions (black). d Linear regression between the proportions of observed and expected PPIs (P in ≥0.1) from each FC. Each dot represents an FC, according to the Fig. 2 color code. Complementary information about this figure is presented in Additional file 13: Figure S5
We computed the probability P in that every identified ligand forms a binary PPI with the baits SRSF1 or hnRNPA1. We observed that IP-MS experiments validated the overall structure of the PS-network, based on the following lines of evidence. First, both nucR fractions were enriched with high-probability PPIs, as opposed to nucS fractions that did not show significant deviation from spliceosomal proteins undetectable by IP-MS (Fig. 5b and Additional file 13: Figure S5B). This suggests that nuclease treatment increased the relative proportion of direct PPIs in IP-MS preparations. Second, the average P in between baits (SRSF1 or hnRNPA1) and ligands (any other protein) was significantly higher than the average P in between pairs of co-purified ligands (Fig. 5c and Additional file 13: Figure S5C), as expected due to antibody-mediated selective enrichment for bait-ligand PPIs. Third, linear regression between predicted (PS-network) and observed (IP-MS) PPIs in each FC yielded R2 scores in the range of 0.45 to 0.99, depending on the bait and the use of nuclease (Fig. 5d). Fourth, co-purified proteins were not scattered throughout the PS-network, but tended to be located in the vicinity of their respective baits (Fig. 6a, b). The average shortest path length between nucR ligands and the baits was significantly lower compared to IP-MS-undetectable proteins (Additional file 13: Figure S5D). This was not the case for nucS ligands, implying that only nucR ligands were predicted by the PS-network as being physically close to the baits SRSF1 and HNRNPA1.
The SRSF1 and hnRNPA1 interactomes. Visualization of the SRSF1 (a, c) and hnRNPA1 (b, d) interactomes. (a) SRSF1 and (b) hnRNPA1 interactomes in the context of the PS network. Nodes representing ligands that form nucR PPIs with the bait are in blue; nucS proteins are colored green; IP-MS baits are colored red, and the node sizes are proportional to the P in scores between each ligand and the bait. Pie charts show the number of nucR (blue) and nucS (green) bait-to-ligand interactions with P in ≥0.1 (dark) and all co-purified proteins (light). (c,d) High-probability interactions (P in ≥0.1) detected by IP-MS for (c) SRSF1 and (d) hnRNPA1. Blue nodes and edges show nucR PPIs; green nodes and edges show nucS PPIs; the baits are colored red; functionally related groups of ligands are labeled and indicated with dashed circles
Taken together, these results demonstrate that the PS-network can identify biologically relevant PPIs and categorize spliceosomal proteins. By overlaying the PS-network onto IP-MS data, we uncovered the most plausible interactions, while eliminating contaminants and unspecific PPIs. Thus, we narrowed down SRSF1 and hnRNPA1 IP-MS outputs to generate more specific lists of proteins with high interaction probability. Below we discuss the characteristics of the SRSF1 (Fig. 6a, c) and hnRNPA1 (Fig. 6b, d) interactomes.
The SRSF1 and hnRNPA1 interactomes
SR proteins and hnRNPs regulate splicing cooperatively or antagonistically, as in the case of the splicing activator SRSF1 and the repressor hnRNPA1 [10, 14]. Here we found that the connectivities of these two proteins to the spliceosome are substantially different. Whereas SRSF1 shows high connectivity to multiple spliceosomal subgroups, the hnRNPA1 interactome is largely restricted (that is, it is mostly composed of additional members of the hnRNP superfamily). In addition, we found that the SRSF1 interactome is rich in direct and RNA-independent PPIs (Fig. 6a, c). In contrast, the hnRNPA1 interactome is smaller and more RNA-dependent (Fig. 6b, d).
Multiple connections of SRSF1 to the spliceosome
The largest proportion of SRSF1 ligands detected by IP-MS was, as predicted, dominated by members of FC5 (rich in SR proteins and hnRNPs). Other members of the SRSF1 interactome were previously described, such as the EJC in FC4 [30]. In addition, both IP-MS and our PS-network identified novel interactions of SRSF1 with spliceosomal proteins and complexes. Of particular interest are the SF3a/b proteins, which are components of the U2 snRNP required for early steps in spliceosome assembly. The SF3a/b complex is also the target of many anti-tumor drugs, and among the most highly mutated in various hematological malignancies such as chronic lymphocytic leukemia and myelodysplastic syndromes [31].
We screened the HPRD database for protein complexes containing at least one spliceosomal protein, and counted bait-to-ligand PPIs at P in ≥0.1 (Additional file 14: Figure S6A). This revealed that out of 144 possible complexes, the SF3a/b complex was the only one predicted to interact with SRSF1 through all of its seven members (P in ranging from 0.29 to 0.99). In addition, 4/7 members of the SF3a/b complex (SF3A1, SF3A3, SF3B1, SF3B2) were enriched through nucR IP-MS of SRSF1.
To rigorously validate the direct interaction of SRSF1 with the SF3a/b complex, we tested the binding of three of the IP-MS identified SF3A subunits (SF3A1, SF3A2 and SF3A3) to glutathione-S-transferase (GST)-tagged SRSF1 in vitro. GST-SRSF1 interacted efficiently with purified recombinant His-tagged SF3A2 and SF3A3 in the presence of RNase, indicating RNA-independent, direct PPI (Fig. 7). Our predictions were further verified by the absence of interaction between GST-SRSF1 and another splicing regulator, FOX1, which scored very low as an SRSF1-interacting partner (P in = 0.0002). Although SF3A1 was predicted to interact with SRSF1 and detected as an SRSF1-binding partner in our IP-MS analysis, it did not bind to GST-tagged SRSF1 in vitro. Though this indicates an absence of a robust direct interaction between the two proteins, it is also possible that SRSF1 and SF3A1 are weak interactors and require other members of the complex for PPI stability.
Experimental validation of SRSF1-SF3A PPIs. Purified GST-SRSF1 recombinant protein was incubated with (a) His-SF3A3, (b) His-SF3A2, (c) His-SF3A1, or (d) His-FOX1, in the presence of nuclease. GST-SRSF1 was pulled down using glutathione-Sepharose beads, resolved by SDS-PAGE, and interacting partners were detected by anti-His antibody. Purified GST protein was used as a pulldown control
In summary, our results indicate that SRSF1 physically interacts with several spliceosomal sub-complexes through RNA-independent interactions. PPIs formed with complexes such as SF3a/b and the EJC are consistent with the fact that SRSF1 is recruited early in spliceosome assembly, yet remains bound throughout the splicing reaction, even after the mRNA is released [2].
hnRNPA1 forms RNA-dependent regulatory interactions
Most PPIs formed by hnRNPA1 were with other hnRNP proteins (Fig. 6d, Additional file 14: Figure S6B). A minority of PPIs were nucR, mostly from FC5 (hnRNPs A2B1, A3, C, U, and RBMX) which also contains hnRNPA1 itself. In contrast, nucS PPIs localized mostly to FC9 (hnRNPs A0, F, H3, K, L, and UL1), suggesting that within FCs, hnRNPs are physically bound, whereas across FCs, they interact through binding the same mRNA.
To investigate the interplay between PPIs and regulatory interactions among hnRNPs, we utilized a list of frequently co-occurring hnRNP binding sites in pairs of intronic regions associated with alternative splicing [32]. Strikingly, we observed that the vast majority of regulatory interactions among hnRNPs involved members across different clusters, rather than members of the same cluster (Additional file 15: Figure S7A). Using Fisher's test, we estimated that the probability of such a distribution occurring by chance is approximately 10^−7. Taking into consideration the information about nuclease sensitivity obtained by IP-MS, we then generated a combined picture of PPIs, regulatory interactions, and RNA dependence (Additional file 15: Figure S7B). We observed a clear pattern in which hnRNPA1 interacted with proteins from its own group (FC5) through physical contact in an RNA-independent way, albeit without forming regulatory collaborations. Conversely, hnRNPA1 connected with members of another group (FC9) by forming multiple co-regulatory interactions, but no direct, RNA-independent physical contact.
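The Fisher's test mentioned above reduces to a 2 × 2 contingency comparison of within-cluster versus across-cluster hnRNP pairs, split by whether a co-regulatory interaction was reported. The sketch below illustrates this with placeholder counts (not the published values), using SciPy's exact test.

```python
# Sketch: Fisher's exact test on within- vs. across-cluster hnRNP pairs.
# The counts in 'table' are placeholders, not the values reported in the study.
from scipy.stats import fisher_exact

#          co-regulatory   not co-regulatory
table = [[ 3,              27],   # pairs within the same cluster (FC)
         [31,              14]]   # pairs spanning two clusters
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
```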
These results suggest that the partition of hnRNPs into two separate domains of the spliceosome may be important for their function in splicing regulation (Additional file 15: Figure S7C). Furthermore, our data on hnRNPA1 support a previously suggested regulatory mechanism of hnRNP-mediated bridging, and helps to explain why hnRNPs are so highly cooperative and often interchangeable [9, 11].
The mechanism of splicing has been extensively studied; previous work has largely focused on constitutive elements necessary for precise splicing [1, 23, 33, 34] or on the discovery of alternative exons regulated by individual splicing factors [9–12, 14]. Here we emphasized the contextual connectivity of splicing factors in the spliceosome, and their relationships with other spliceosomal proteins.
We used Bayesian probability to predict PPIs by interrogating different data types (for example, Spliceosome DB, KEGG, regRNA) and many literature resources to construct a probabilistic model of the human spliceosome. The posterior probability of true PPIs was computed using the connectivity of PPI modules as evidence, and fine-tuned by orthogonal information obtained from gene-expression microarrays. The resulting PS-network was essential to uncover a large number of novel PPIs among SR proteins and hnRNPs (Fig. 3b). In contrast, the number of newly discovered PPIs in the subset of core spliceosomal proteins was small. This distinction between selective and core spliceosomal proteins may be due to differences in their functional properties. For instance, Papasaikas et al. [35] recently reported a functional splicing network integrating knockdown profiles for all spliceosomal proteins. A key observation in this study was that core spliceosomal proteins show outstanding functional connectivity, compared to selective splicing regulators, including SR proteins and hnRNPs. This finding reinforces the notion that the functional selectivity of regulatory splicing factors may negatively affect the reproducibility of PPI detection through Y2H.
Analysis of the PS-network revealed a trend whereby splicing activators engage in a relatively large number of PPIs with other proteins in the spliceosome, perhaps playing an active role in recruiting spliceosomal proteins. In contrast, repressors display fewer PPIs (as was the case for hnRNPA1), suggesting that they predominantly affect splicing by steric interference through RNA binding. IP-MS experiments confirmed these rules for the prototypical splicing factors SRSF1 and hnRNPA1. In both cases, IP-MS fractions were enriched in high-probability interactions, as predicted by our model. This was especially noticeable for the samples treated with nuclease.
SRSF1 formed multiple nucR PPIs with multiple FCs. Among the top-scoring ones, we observed components of the SF3a/b complexes, which are essential for spliceosome assembly, and tether the U2 snRNP to the pre-mRNA, contributing to branch-site recognition [33]. Interestingly, SR proteins were previously observed to promote the recruitment of U2 snRNP to the pre-mRNA branch site [36]. In addition, a study of the protein composition of the 17S U2 snRNP revealed that SRSF1 is present in immunopurified complexes containing SF3a66 [34]. Our analysis here predicted that all members of the SF3a/b complex bind SRSF1, with probabilities in the range of 0.29 to 0.99. The fact that all SF3 subunits are predicted to bind to SRSF1 with high probability is not surprising, given that interactions among the SF3 subunits are strong, not only physically, but also functionally [35]. We validated these interactions in cells (SF3A1, SF3A3, SF3B1, and SF3B2 were significantly enriched by IP-MS with nuclease treatment), and in vitro (SF3A2 and SF3A3 were validated through GST-pull-downs). These results indicate that our Bayesian model faithfully predicts PPIs that can be experimentally validated. In addition, these novel PPIs are interesting for their potential implications in cancer. As both SRSF1 and SF3B1 are misregulated in various human tumors [31, 37], and as SRSF1 can transform epithelial cells in vivo [38], it would be of interest to determine if altering the SRSF1 and SF3-mediated recruitment of the U2 snRNP plays a role in tumorigenesis.
In contrast to SRSF1, hnRNPA1 displays weaker and less widespread interactions with the spliceosome. Most high-probability hnRNPA1 PPIs were nuclease-sensitive, and as predicted, most IP-MS-confirmed PPIs involved additional members of the hnRNP superfamily. Combining our data with previously reported regulatory interactions [32], we demonstrate that hnRNPs are distributed in at least two highly interconnected clusters, forming regulatory collaborations. Our data strengthen the notion that hnRNPs collaborate through RNA binding. A recent study [9] showed that a group of six hnRNPs (A1, A2B1, H1, F, M, and U) are highly cooperative in regulating alternative splicing. Using CLIP-seq and microarray analyses, the authors observed robust co-regulation between pairs of hnRNPs. Our analysis not only supports this observation, but further indicates that many of these interactions occur between hnRNPs that belong to different clusters, such as hnRNPs A1 (FC5) and M (FC4) or hnRNPs F (FC9) and U or A2/B1 (FC5).
One possible reason for this disparity stems from the inherent differences between activators and repressors as biochemical entities. Splicing activators may modulate spliceosome assembly through the formation of multiple PPIs, and in this way ensure bona-fide splice site recognition and exon inclusion. In contrast, repressors may form fewer interactions to block the spliceosome's attempts to recognize and eventually include an exon. Hence, whereas activators may coordinate and enhance the connectivity of spliceosomal sub-complexes, in the case of repressors it may be sufficient to bind specifically to cognate motifs on the RNA and block spliceosome assembly or activity. The functionality of SR proteins and hnRNPs is evolutionarily conserved [39] and their selective roles as activators or repressors have been documented in numerous studies, ranging from cell-free splicing to minigene transfection experiments to high-throughput analyses (Additional file 5: Table S3 and Additional file 6: Figure S2). Some of these proteins, like SRSF1 and hnRNPA1, have been intensely studied, whereas others have only recently been functionally characterized (for example, SRSF10 and hnRNPU). Previous work has demonstrated the complexity of splicing regulation by showing that a given SR protein or hnRNP can function as both activator and repressor, depending on the sequence-specific and positional context [40, 41]. In these studies, tethering SR proteins (or hnRNPs) upstream or downstream of the 5′SS [40], or changing the position of an SR protein binding motif along the exon [41] resulted in alteration of the regulatory activity of splicing activators to repressors or vice versa. Thus, consistent with annotations in RegRNA, under certain conditions splicing activators and repressors can switch their activities. The generality of this duality remains to be determined, for example, by integrating multiple RNA-seq datasets to assess the reproducibility of effects on specific splicing targets, while neutralizing indirect or sporadic splicing changes.
This work summarizes our initial attempt to combine public data with our own IP-MS data to understand structure/function relationships in the human spliceosome. Our network-based approach utilized data integration to understand the contribution of individual proteins to the spliceosome as a whole. We characterized key splicing factors, expanding the knowledge about their regulatory mechanisms and discovering new PPIs with therapeutic potential. Altogether, this demonstrates the usefulness of our approach to explore and characterize the mechanistic principles governing complex biological machines.
A total of 630 spliceosomal and splicing-related proteins were collected from the Spliceosome DB [42], KEGG [43], and other literature references [1, 23] (Additional file 2: Table S1A). This compendium comprises functionally confirmed spliceosomal proteins, but also proteins related to other RNA-maturation processes, such as mRNA surveillance, export, capping, and polyadenylation. We included the latter proteins because they typically co-purify with the spliceosome [1, 23] and are functionally associated or coupled with splicing [7, 8, 44]. Throughout the manuscript, we consider this extended set of proteins as 'spliceosomal proteins'. A total of 37,231 PPIs formed by these proteins were extracted from HPRD [25] and Hegele et al. [23]. In total, 31,363 co-expression profiles between mRNAs coding for these proteins and PPI partners were collected from the Human U133A/GNF1H microarray dataset [45] (Additional file 2: Table S1B).
Probabilistic reconstruction of the spliceosome
We developed a Bayesian model to estimate the posterior probability that any given pair of proteins in the spliceosome forms a binary PPI. Our model is based on the principle of transitivity (T), which states that a binary interaction between two proteins is more likely if they share a substantial number of interacting partners [19]. The model also incorporates microarray co-expression profiles (C) to distinguish genuine from spurious PPIs.
We treated T and C as two independent variables, and computed conditional probabilities using HPRD data to represent binding instances in a true PPI network (e = 1), (Additional file 16: Figure S8A, C) and a 'decoy' PPI network to represent non-binding instances (e = 0) (Additional file 16: Figure S8B, D). The model is fully explained in Additional file 1.
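As an illustration only, the core of this calculation can be sketched in Python as follows. The precise transitivity score, the conditional-probability models for P(T|e) and P(C|e) fitted on the true (HPRD) and decoy networks, and the prior are specified in Additional file 1; the definitions used in this sketch are simplified assumptions, not the published implementation.

```python
# Illustrative sketch of the Bayes-rule posterior P_in for one protein pair.
# The transitivity definition, the likelihood models, and the prior are assumptions;
# the published model is fully specified in Additional file 1.
from scipy.stats import pearsonr

def transitivity(partners_a, partners_b):
    """One plausible transitivity score: fraction of shared third-party interactors."""
    shared = partners_a & partners_b
    total = partners_a | partners_b
    return len(shared) / len(total) if total else 0.0

def posterior_pin(T, C, like_T, like_C, prior_true=0.01):
    """Posterior P(e=1 | T, C), treating T and C as independent evidence.
    like_T(x, e) and like_C(x, e) return the likelihood of an observation under
    the true (e=1) or decoy/shuffled (e=0) network model."""
    num = like_T(T, 1) * like_C(C, 1) * prior_true
    den = num + like_T(T, 0) * like_C(C, 0) * (1.0 - prior_true)
    return num / den

# Example inputs: annotated partners of two proteins and their expression profiles.
T = transitivity({"U2AF2", "SRSF7", "TRA2A"}, {"U2AF2", "SRSF7", "HNRNPK"})
C = pearsonr([5.1, 7.3, 6.0, 8.2], [4.8, 7.0, 6.2, 8.5])[0]
```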
Pearson correlation coefficients between all pairs of proteins in the PS-network were calculated using as an input the adjacency matrix of PPI probabilities (P in ) (Additional file 2: Table S1C). As a result a second matrix (distance matrix) was obtained, describing the extent of similarity between protein pairs in terms of their binding preferences. Subsequently, this matrix was clustered using averaged hierarchical clustering on both columns and rows. All the clusters and distance matrices were derived using the Genesis program [46, 47].
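A minimal Python equivalent of this clustering step might look like the sketch below. The original analysis used average hierarchical clustering in the Genesis program; the linkage method and the number of clusters requested here are assumptions made for illustration.

```python
# Sketch: cluster proteins by the similarity of their binding profiles, i.e., the rows
# of the P_in adjacency matrix, using 1 - Pearson correlation as the distance metric.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_binding_profiles(pin_matrix, n_clusters=10):
    corr = np.corrcoef(pin_matrix)              # Pearson correlation between protein rows
    dist = 1.0 - corr                           # correlation distance
    np.fill_diagonal(dist, 0.0)
    condensed = squareform(dist, checks=False)  # condensed form expected by linkage()
    Z = linkage(condensed, method="average")    # average-linkage hierarchical clustering
    return fcluster(Z, t=n_clusters, criterion="maxclust")

# Example: random stand-in for a 630 x 630 matrix of interaction probabilities.
labels = cluster_binding_profiles(np.random.rand(630, 630))
```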
Hypergeometric test
To dissect the functionality of every cluster, we performed enrichment analysis using the hypergeometric test. We tested every cluster against a custom list of spliceosome-specific functions, similar to gene ontologies or gene lists (Additional file 2: Table S1D). This list was constructed based on information from Spliceosome DB [42] and KEGG [43], allowing us to explore splicing-related functions in greater detail than offered by standard tools.
This test attempts to reject the null hypothesis that the overlap between two categorical groups (a cluster and a biological function) is due to chance. We used the hypergeometric test to compute exact P values for the enrichment of functional terms (that is, ontologies) in the network clusters, according to the formula:
$$ HG\left(b; N, B, n\right) = \frac{\binom{n}{b}\binom{N-n}{B-b}}{\binom{N}{B}} $$
Where 'N' is the total number of proteins in the network, 'B' is the number of proteins that belong to a given functional term, 'n' is the number of proteins that belong to a given cluster, and 'b' is the number of proteins that belong to both the cluster and the functional term. Finally, we applied the false discovery rate (FDR) procedure to adjust the resulting P values.
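For illustration, this test and the subsequent FDR adjustment can be expressed as below; the example counts are arbitrary placeholders, and the choice of SciPy/statsmodels is an assumption (any standard hypergeometric implementation would serve).

```python
# Sketch: enrichment P value for one cluster/term pair, with variable names matching
# the formula above (N = network size, B = term size, n = cluster size, b = overlap).
from scipy.stats import hypergeom
from statsmodels.stats.multitest import multipletests

def enrichment_pvalue(N, B, n, b):
    """Probability of observing an overlap of at least b by chance."""
    return hypergeom.sf(b - 1, N, B, n)

# Placeholder example: a 25-protein cluster sharing 12 proteins with a 40-protein term.
p = enrichment_pvalue(N=630, B=40, n=25, b=12)
# Benjamini-Hochberg adjustment across all cluster/term tests:
qvals = multipletests([p, 0.03, 0.2, 0.8], method="fdr_bh")[1]
```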
Network layout
Network topologies were generated using Cytoscape [48]. This implements a force-directed algorithm that sets the positions of the nodes by minimizing a function that mimics physical repulsion between nodes. Accordingly, the positions of the nodes depend on the length and number of edges. The edge length is inversely proportional to the value of P in ; as a result, the layout of the network is such that densely connected proteins appear in the center, whereas low-degree proteins are more peripheral. We used P in ≥0.1, P in ≥0.5, and P in ≥0.9 for visualization. The corresponding thresholds are stated in each figure legend.
Cluster-cluster interactions
The connectivity CIJ between two clusters I and J was calculated as the sum of the interaction probabilities between all protein pairs spanning FCs I and J, normalized by the sum of probabilities connecting I and J to all possible FCs in the network.
$$ C_{IJ} = \frac{\sum_{i \in I,\, j \in J} P_{ij}}{\sum_{i \in I,\, n \in N} P_{in} + \sum_{j \in J,\, n \in N} P_{jn} - \sum_{i \in I,\, j \in J} P_{ij}} $$
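A compact way to compute this score from the P in adjacency matrix is sketched below; the matrix and the vector of cluster labels are assumed inputs.

```python
# Sketch: cluster-cluster connectivity C_IJ, following the formula above.
import numpy as np

def cluster_connectivity(pin, labels, I, J):
    """pin: symmetric matrix of P_in values; labels: cluster assignment per protein."""
    in_I, in_J = labels == I, labels == J
    between = pin[np.ix_(in_I, in_J)].sum()         # probabilities spanning I and J
    total_I = pin[in_I, :].sum()                    # probabilities connecting I to anything
    total_J = pin[in_J, :].sum()                    # probabilities connecting J to anything
    return between / (total_I + total_J - between)  # normalization from the formula
```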
Cross-validation assay
We tested the predictability of our model using the network derived from [23] (test set). As a training set we used PPIs from HPRD. To train the PS-network, we omitted PPIs in HPRD that were also present in [23]. These were set aside and used for deterministic predictions. We considered as positive PPIs any pair of proteins i and j from the test set with evidence of forming direct PPIs. The total number of positive PPIs was 601. Negative PPIs were protein pairs from the test network whose shortest path length was L(i,j) ➔ ∞. In this way, both proteins are unreachable through any path in the network, and are not expected to interact directly or indirectly. The number of negative PPIs was 1524. Consequently, true positives (TP) were defined as all successfully predicted PPIs using the training set, whereas false negatives (FN) were PPIs that failed to be predicted. Similarly, false positives (FP) were positively predicted PPIs from the negative set, and finally, true negatives (TN) were undetected protein pairs from the negative set.
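The negative set can be derived directly from the connected components of the test network, as in the sketch below; the use of networkx here is only for illustration.

```python
# Sketch: negative pairs = protein pairs with no connecting path in the test network,
# i.e., pairs whose shortest path length tends to infinity.
import itertools
import networkx as nx

def unreachable_pairs(test_edges):
    G = nx.Graph(test_edges)                        # test-set PPI network (edge list)
    comp = {n: i for i, c in enumerate(nx.connected_components(G)) for n in c}
    return [(u, v) for u, v in itertools.combinations(G.nodes, 2)
            if comp[u] != comp[v]]                  # different components => no path
```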
To quantify the predictive performance we computed the following metrics: (1) sensitivity (also known as true positive rate) and (2) specificity (also known as true negative rate), both of which return values between 0 and 1. A value of 1 means that there are no false positives/negatives; 0.5 means that there are as many false positives/negatives as true positives/negatives; 0 means that no true positives/negatives were detected. In addition, we reported (3) the Matthews correlation coefficient (MCC), which measures the extent of agreement between observed and predicted binary classifications. It returns values between −1 and +1. A coefficient of +1 represents a perfect prediction, 0 no better than random prediction, and −1 indicates total disagreement between prediction and observation:
$$ \mathrm{sensitivity} = \frac{TP}{TP + FN} $$
$$ \mathrm{specificity} = \frac{TN}{TN + FP} $$
$$ \mathrm{MCC} = \frac{\left(TP \times TN\right) - \left(FP \times FN\right)}{\sqrt{\left(TP+FP\right)\left(TP+FN\right)\left(TN+FP\right)\left(TN+FN\right)}} $$
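These metrics follow directly from the four confusion-matrix counts; a minimal sketch:

```python
# Sketch: cross-validation performance metrics from confusion-matrix counts.
import math

def sensitivity(TP, FN):
    return TP / (TP + FN)

def specificity(TN, FP):
    return TN / (TN + FP)

def matthews_cc(TP, TN, FP, FN):
    denom = math.sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
    return ((TP * TN) - (FP * FN)) / denom if denom else 0.0
```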
Topological network measures
Density
The ratio between existing and potential edges. It is a measure of how heavily interconnected the nodes in a network are.
Average shortest path length
The average number of steps along the shortest path, for all possible pairs of network nodes. It is a measure of the closeness between the nodes.
Modularity
A measure of how strongly the network is divided into communities of highly interconnected nodes. It is measured as the fraction of edges that fall within given communities minus the expected fraction if the edges were distributed at random.
Centralization
Networks whose topologies resemble a star have centralization close to 1, whereas decentralized networks have centralization close to 0. This is a measure of how evenly distributed the edge density of the network is.
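The published values were obtained with the iGraph R package (see below); the sketch here is a rough Python/networkx equivalent of the four measures. Degree centralization is computed with the Freeman formula, since networkx has no built-in function for it, and the community-detection step used for modularity is an assumption.

```python
# Sketch: global topology measures for a thresholded PS-network.
import networkx as nx
from networkx.algorithms import community

def global_topology(G):
    density = nx.density(G)                                    # existing / potential edges
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    avg_path = nx.average_shortest_path_length(giant)          # on the largest component
    comms = community.greedy_modularity_communities(G)
    modularity = community.modularity(G, comms)
    degs = [d for _, d in G.degree()]
    n = G.number_of_nodes()
    centralization = sum(max(degs) - d for d in degs) / ((n - 1) * (n - 2))
    return density, avg_path, modularity, centralization
```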
Network density, shortest path length, and modularity were calculated using the iGraph R package ([49]). Weighted Degree (wDEG) and Betweenness (wBET) were calculated using the Tnet R Package [50]. Briefly, wDEG was calculated as the sum of the probabilities of the edges connecting protein i to every other protein j.
$$ wDEG_i = \sum_{j \in N,\, j \ne i} P_{ij} $$
wBET of protein i in a network N is defined as:
$$ wBET_i = \sum_{s \ne i \ne t \in N} \frac{WL_{st}(i)}{WL_{st}} $$
Where WL st is the probability-weighted path length from node s to node t, and WL st (i) is the number of those paths passing through i. The minimal WL st for every protein pair is considered as the weighted shortest path length.
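A Python sketch of both scores is shown below. The published analysis used the Tnet R package; converting each edge probability into a path "length" of 1/P in for the betweenness calculation is an assumption made here for illustration, and Tnet's exact weighting scheme may differ.

```python
# Sketch: probability-weighted Degree and Betweenness for every protein.
import networkx as nx

def weighted_centralities(pin_edges):
    """pin_edges: iterable of (protein_i, protein_j, P_in) tuples with P_in > 0."""
    G = nx.Graph()
    for u, v, p in pin_edges:
        G.add_edge(u, v, weight=p, dist=1.0 / p)   # high probability => short path
    wdeg = dict(G.degree(weight="weight"))         # wDEG_i = sum of P_ij over neighbors
    wbet = nx.betweenness_centrality(G, weight="dist", normalized=False)
    return wdeg, wbet
```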
To identify proteins with statistically significant wDEG or wBET, we estimated q-values by comparing wDEG or wBET scores to the distribution of 1,000 randomized networks generated through the Erdős–Rényi procedure [51], followed by FDR correction.
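A sketch of this significance test is given below. For simplicity it compares unweighted degrees against Erdős–Rényi graphs matched for node and edge counts; the published analysis applied the same idea to the weighted scores.

```python
# Sketch: empirical q-values for degree against 1,000 size-matched random graphs.
import numpy as np
import networkx as nx
from statsmodels.stats.multitest import multipletests

def degree_qvalues(G, n_random=1000, seed=0):
    rng = np.random.default_rng(seed)
    null = []
    for _ in range(n_random):
        R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(),
                                seed=int(rng.integers(1_000_000_000)))
        null.extend(d for _, d in R.degree())
    null = np.asarray(null)
    # Right-tailed empirical P value per protein, then Benjamini-Hochberg correction.
    pvals = [((null >= d).sum() + 1) / (null.size + 1) for _, d in G.degree()]
    return dict(zip(G.nodes, multipletests(pvals, method="fdr_bh")[1]))
```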
Annotated protein complexes were downloaded from HPRD and searched against our list of 630 spliceosomal proteins. A total of 144 complexes containing spliceosomal proteins were selected and tested for whether they form P in ≥0.1 PPIs with either SRSF1 or hnRNPA1, were detected by IP-MS, and survived nuclease treatment.
Construction of MSCV-TT-T7SRSF1 was previously described [52]. MSCV-TT-T7hnRNP A1 was generated by subcloning the hnRNPA1 open reading frame (ORF) from pCG-hnRNPA1 [53] downstream of the tetracycline-responsive promoter (TRE-tight) in the retroviral MSCV vector (kindly provided by Scott Lowe). The GST-SRSF1 bacterial expression plasmid was generated by subcloning the SRSF1 open reading frame from pCG-SRSF1 in the pGEX-3X vector (GE Lifesciences). His-tagged recombinant SF3A (SF3A1, SF3A2, SF3A3) plasmids were generated by sub-cloning the SF3A ORFs from pBLUESCRIPT plasmids generously provided by Robin Reed into the pET28a (+) vector. His-tagged FOX1 was generated by sub-cloning the FOX1 ORF [54] into the pET28a (+) vector.
Cell culture and cell lines
All cells were grown in DMEM-Complete (Gibco) supplemented with 10 % (v/v) fetal bovine serum (FBS, Thermo), 100 U/mL penicillin (Gibco), and 1,000 μg/mL streptomycin (Gibco). Lentiviruses were generated as described [37]. To generate Doxycycline-inducible cell lines, HeLa Tet-on Advanced cells (Clontech) were infected for 48 h, allowed to recover for an additional 24 h, and selected with the appropriate antibiotic.
To induce HeLa TT-T7SRSF1 and TT-T7hnRNPA1 cells, doxycycline was added to the cells at a concentration in the range of 0.01 to 10 μg/mL for 24 to 48 h, depending on the assay. For affinity purifications and immunofluorescence, TT-T7SRSF1 cells were induced with 0.1 μg/mL, and TT-T7hnRNPA1 cells with 0.5 μg/mL doxycycline for 36 h. These values were determined by western blotting (Additional file 11: Figure S4A, B) as resulting in overexpression of the T7-tagged protein within two-fold of the endogenous counterpart, without causing any visible cell death.
Gel electrophoresis and immunoblotting
Lysates were separated by SDS-PAGE and probed with the indicated antibodies. Primary antibodies against the following proteins/epitopes were used: T7 (1:500), SRSF1 (AK-96, 1:500), hnRNPA1 (AK-55, 1:50). HRP-conjugated goat anti-mouse or anti-rabbit (Biorad, 1:10,000) antibodies were used for chemiluminescent detection [52]. AK-96 and AK-55 were previously described [55]. Silver staining of gels was performed with the SilverQuest kit (Invitrogen), following the manufacturer's instructions.
Fluorescence microscopy and immunolocalization
Cells were plated in Fisher 6-well chamber slides at a density of 20,000 cells/well. Twenty-four hours later, doxycycline was added and the cells were incubated for an additional 36 h. Indirect immunofluorescence was modified from [56]. Cells were incubated with the appropriate fluorescence-conjugated secondary antibody (Invitrogen). 4′,6-diamidino-2-phenylindole (DAPI; Boehringer-Mannheim) was used to stain the nuclei. Microscopy was performed on a Zeiss Axiovert 200 M, using Axiovision 4.4 and the ApoTome imaging system.
Preparation of cell extracts
For general protein analysis of whole-cell lysates, cells were lysed in RIPA Buffer (150 mM NaCl, 1 % (v/v) NP-40, 0.5 % (w/v) deoxycholic acid, 0.1 % (w/v) sodium dodecyl sulfate (SDS), 50 mM Tris pH 8.0) plus Roche Protease Inhibitor Cocktail EDTA-free. Cell lysis followed by immunoprecipitation was performed as in [52]. Four 15-cm plates were used for each condition. Where appropriate, nuclease was added (1 U/mL RNase A, 40 U/mL RNase T1, 500 U/mL Benzonase, plus 2 mM MgCl2) for 30 min, prior to immunoprecipitation.
Immunoprecipitation of protein complexes
Dynabeads Protein G (Invitrogen) was used for all IPs, according to the manufacturer's instructions. For all immunoprecipitations, lysates were incubated with immobilized antibodies while rotating for 1 h at 4 °C and washed five times with 1 mL of Lysis Buffer (0.05-0.5 % (v/v) NP-40, 100–500 mM NaCl, 50 mM Tris, pH 7.4, 1 mM DTT). For mass spectrometry, peptides were eluted by on-bead digestion [57] and samples were prepared as in [52].
Multidimensional chromatography and tandem mass spectrometry
Following immunoprecipitation and on-bead trypsin digestion, samples were analyzed by on-line 7-step MudPIT HPLC, and LTQ mass spectrometry.
Briefly, peptide mixtures were analyzed by MudPIT through a protocol adapted from [58] using a two-dimensional vented volume setup with a Proxeon nano-flow HPLC pump [59]. Triphasic MudPIT columns were packed in-house with alternating Aqua C-18 reverse phase material and Luna strong cation exchange material (Phenomenex). HPLC runs were automated following a protocol adapted from [60] with a constant flow-rate of 300 nL/min. Following separation on the MudPIT column, peptides eluted from the microcapillary fritless column were directly electrosprayed into a linear ion trap (LTQ) mass spectrometer (Thermo Finnigan). A cycle of one full-scan mass spectrum (400–1700 m/z) was acquired with enhanced scan rate, followed by six data-dependent MS/MS spectra at a 35 % normalized collision energy. Dynamic exclusion lists of 500 spectra were set to exclude peptides for a duration of 90 s. Mass-spectrometer scan functions were controlled by the Xcalibur data system (Thermo Finnigan) and data were processed with MASCOT Distiller (Matrix Science) using the default parameters for ion-trap data analysis. LTQ MS/MS spectra were searched with MASCOT version 2.2.04 against the human IPI non-redundant database (version 3.35). The number of hits identified by Mascot in every replicate is shown in Additional file 17: Table S8. The MS dataset is available at [61].
Identification of proteins over-represented upon doxycycline treatment
Each IP-MS experiment was carried out in duplicate (Additional file 17: Table S8). The overlap between the duplicates was approximately 50 % for the Dox+ and approximately 30 % for the Dox− samples, for both SRSF1 and hnRNPA1, with or without nuclease treatment.
The enrichment of every protein identified upon IP-MS of SRSF1 or hnRNPA1 was calculated as follows:
$$ \mathrm{logE} = \log_2\left(\frac{P_{\mathrm{Dox}+}}{P_{\mathrm{Dox}-}+1}\right) $$
Where P Dox+ was the number of unique peptide counts per protein identified at >95 % confidence in the IP experiment, and P Dox− was the corresponding number of peptides identified without doxycycline induction. To account for cases in which a protein was below the detection sensitivity in Dox− but not in Dox+, we added a pseudo-count to the denominator. We set a cutoff at logE = 1. In this way, we ensured that all the selected proteins would be represented by a two-fold ratio and at least three peptides.
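The enrichment score can be computed as sketched below; the peptide counts in the example are placeholders.

```python
# Sketch: logE enrichment score with a pseudo-count in the denominator.
import numpy as np

def log_enrichment(peptides_dox_plus, peptides_dox_minus):
    """log2 ratio; the +1 keeps ligands undetected without doxycycline at a finite score."""
    return np.log2(peptides_dox_plus / (peptides_dox_minus + 1))

enriched = log_enrichment(6, 1) >= 1.0   # cutoff used to call a protein enriched
```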
Purification of recombinant proteins
GST-SRSF1 was expressed in E. coli BL21(DE3)pLysS strain by induction with 0.5 mM IPTG overnight at 18 °C and purified using glutathione-Sepharose beads (GE Healthcare). His-tagged SF3A1, SF3A2, SF3A3, and FOX1 were similarly expressed and induced, but purified using Ni-NTA Agarose (Qiagen).
GST pulldown
Purified GST-SRSF1 (200 pM) was incubated with His-tagged recombinant proteins (200 pM) (His-SF3A1, His-SF3A2, His-SF3A3, His-FOX1) and 50 μL glutathione-Sepharose beads in 300 μL of pull-down buffer (20 mM HEPES, pH 7.5, 100–500 mM NaCl, 0.5 mM EDTA and 0.1-0.5 % (v/v) NP-40) for 2 h at 4 °C in the presence of nuclease (1 U/mL RNase A, 40 U/mL RNase T1). The resin was washed three times with 500 μL of pull-down buffer. Proteins were eluted with 40 μL 1X Laemmli buffer, resolved by SDS-PAGE, probed with anti-GST and anti-His antibodies, and analyzed on an Odyssey infrared-imaging system (LI-COR Biosciences).
Chen YI, Moore RE, Ge HY, Young MK, Lee TD, Stevens SW. Proteomic analysis of in vivo-assembled pre-mRNA splicing complexes expands the catalog of participating factors. Nucleic Acids Res. 2007;35:3928–44.
Wahl MC, Will CL, Luhrmann R. The spliceosome: design principles of a dynamic RNP machine. Cell. 2009;136:701–18.
Will CL, Luhrmann R. Spliceosome structure and function. Cold Spring Harb Perspect Biol. 2011;3:pii: a003707.
Pan Q, Shai O, Lee LJ, Frey BJ, Blencowe BJ. Deep surveying of alternative splicing complexity in the human transcriptome by high-throughput sequencing. Nat Genet. 2008;40:1413–5.
Wang ET, Sandberg R, Luo S, Khrebtukova I, Zhang L, Mayr C, et al. Alternative isoform regulation in human tissue transcriptomes. Nature. 2008;456:470–6.
Hastings ML, Krainer AR. Pre-mRNA splicing in the new millennium. Curr Opin Cell Biol. 2001;13:302–9.
Long JC, Caceres JF. The SR protein family of splicing factors: master regulators of gene expression. Biochem J. 2009;417:15–27.
Han SP, Tang YH, Smith R. Functional diversity of the hnRNPs: past, present and perspectives. Biochem J. 2010;430:379–92.
Huelga SC, Vu AQ, Arnold JD, Liang TY, Liu PP, Yan BY, et al. Integrative genome-wide analysis reveals cooperative regulation of alternative splicing by hnRNP proteins. Cell Rep. 2012;1:167–78.
Eperon IC, Makarova OV, Mayeda A, Munroe SH, Caceres JF, Hayward DG, et al. Selection of alternative 5′ splice sites: role of U1 snRNP and models for the antagonistic effects of SF2/ASF and hnRNP A1. Mol Cell Biol. 2000;20:8303–18.
Martinez-Contreras R, Fisette JF, Nasim FU, Madden R, Cordeau M, Chabot B. Intronic binding sites for hnRNP A/B and hnRNP F/H proteins stimulate pre-mRNA splicing. PLoS Biol. 2006;4:e21.
Sanford JR, Coutinho P, Hackett JA, Wang X, Ranahan W, Caceres JF. Identification of nuclear and cytoplasmic mRNA targets for the shuttling protein SF2/ASF. PLoS One. 2008;3:e3369.
Zhang C, Frias MA, Mele A, Ruggiu M, Eom T, Marney CB, et al. Integrative modeling defines the Nova splicing-regulatory network and its combinatorial controls. Science. 2010;329:439–43.
Zhu J, Mayeda A, Krainer AR. Exon identity established through differential antagonism between exonic splicing silencer-bound hnRNP A1 and enhancer-bound SR proteins. Mol Cell. 2001;8:1351–61.
Rolland T, Tasan M, Charloteaux B, Pevzner SJ, Zhong Q, Sahni N, et al. A proteome-scale map of the human interactome network. Cell. 2014;159:1212–26.
Armean IM, Lilley KS, Trotter MW. Popular computational methods to assess multiprotein complexes derived from label-free affinity purification and mass spectrometry (AP-MS) experiments. Mol Cell Proteomics. 2013;12:1–13.
Asthana S, King OD, Gibbons FD, Roth FP. Predicting protein complex membership using probabilistic network reliability. Genome Res. 2004;14:1170–5.
Jansen R, Yu H, Greenbaum D, Kluger Y, Krogan NJ, Chung S, et al. A Bayesian networks approach for predicting protein-protein interactions from genomic data. Science. 2003;302:449–53.
Wasserman S, Faust K. Social Network Analysis: Methods and Applications. Cambridge: Cambridge University Press; 1994.
Ravasz E, Somera AL, Mongru DA, Oltvai ZN, Barabasi AL. Hierarchical organization of modularity in metabolic networks. Science. 2002;297:1551–5.
Tang YT, Kao HY. Augmented transitive relationships with high impact protein distillation in protein interaction prediction. Biochim Biophys Acta. 2012;1824:1468–75.
Wodak SJ, Vlasblom J, Turinsky AL, Pu S. Protein-protein interaction networks: the puzzling riches. Curr Opin Struct Biol. 2013;23:941–53.
Hegele A, Kamburov A, Grossmann A, Sourlis C, Wowro S, Weimann M, et al. Dynamic protein-protein interaction wiring of the human spliceosome. Mol Cell. 2012;45:567–80.
Martinez-Contreras R, Cloutier P, Shkreta L, Fisette JF, Revil T, Chabot B. hnRNP proteins and splicing control. Adv Exp Med Biol. 2007;623:123–47.
Mishra GR, Suresh M, Kumaran K, Kannabiran N, Suresh S, Bala P, et al. Human protein reference database–2006 update. Nucleic Acids Res. 2006;34:D411–4.
RegRNA database. Available at: http://regrna2.mbc.nctu.edu.tw/.
Girvan M, Newman ME. Community structure in social and biological networks. Proc Natl Acad Sci U S A. 2002;99:7821–6.
Barabasi AL, Gulbahce N, Loscalzo J. Network medicine: a network-based approach to human disease. Nat Rev Genet. 2011;12:56–68.
Nguyen TN, Goodrich JA. Protein-protein interaction assays: eliminating false positive interactions. Nat Methods. 2006;3:135–9.
Singh G, Kucukural A, Cenik C, Leszyk JD, Shaffer SA, Weng Z, et al. The cellular EJC interactome reveals higher-order mRNP structure and an EJC-SR protein nexus. Cell. 2012;151:750–64.
Bonnal S, Vigevani L, Valcarcel J. The spliceosome as a target of novel antitumour drugs. Nat Rev Drug Discov. 2012;11:847–59.
Ke S, Chasin LA. Intronic motif pairs cooperate across exons to promote pre-mRNA splicing. Genome Biol. 2010;11:R84.
Champion-Arnaud P, Reed R. The prespliceosome components SAP 49 and SAP 145 interact in a complex implicated in tethering U2 snRNP to the branch site. Genes Dev. 1994;8:1974–83.
Will CL, Urlaub H, Achsel T, Gentzel M, Wilm M, Luhrmann R. Characterization of novel SF3b and 17S U2 snRNP proteins, including a human Prp5p homologue and an SF3b DEAD-box protein. EMBO J. 2002;21:4978–88.
Papasaikas P, Tejedor JR, Vigevani L, Valcarcel J. Functional splicing network reveals extensive regulatory potential of the core spliceosomal machinery. Mol Cell. 2015;57:7–22.
Tarn WY, Steitz JA. Modulation of 5′ splice site choice in pre-messenger RNA by two distinct steps. Proc Natl Acad Sci U S A. 1995;92:2504–8.
Karni R, de Stanchina E, Lowe SW, Sinha R, Mu D, Krainer AR. The gene encoding the splicing factor SF2/ASF is a proto-oncogene. Nat Struct Mol Biol. 2007;14:185–93.
Anczukow O, Rosenberg AZ, Akerman M, Das S, Zhan L, Karni R, et al. The splicing factor SRSF1 regulates apoptosis and proliferation to promote mammary epithelial cell transformation. Nat Struct Mol Biol. 2012;19:220–8.
Busch A, Hertel KJ. Evolution of SR protein and hnRNP splicing regulatory factors. Wiley Interdiscip Rev RNA. 2012;3:1–12.
Erkelenz S, Mueller WF, Evans MS, Busch A, Schoneweis K, Hertel KJ, et al. Position-dependent splicing activation and repression by SR and hnRNP proteins rely on common mechanisms. RNA. 2013;19:96–102.
Goren A, Ram O, Amit M, Keren H, Lev-Maor G, Vig I, et al. Comparative analysis identifies exonic splicing regulatory sequences–The complex definition of enhancers and silencers. Mol Cell. 2006;22:769–81.
Spliceosome DB. Available at: http://spliceosomedb.ucsc.edu/.
KEGG. Available at: http://www.genome.jp/kegg/.
Maniatis T, Reed R. An extensive network of coupling among gene expression machines. Nature. 2002;416:499–506.
Su AI, Wiltshire T, Batalov S, Lapp H, Ching KA, Block D, et al. A gene atlas of the mouse and human protein-encoding transcriptomes. Proc Natl Acad Sci U S A. 2004;101:6062–7.
Sturn A, Quackenbush J, Trajanoski Z. Genesis: cluster analysis of microarray data. Bioinformatics. 2002;18:207–8.
Genesis. Available at: http://genome.tugraz.at/genesisclient/genesisclient_description.shtml.
Cytoscape. Available at: http://www.cytoscape.org/.
Igraph package. Available at: http://cran.r-project.org/web/packages/igraph/.
Opsahl T, Agneessens F, Skvoretz J. Node centrality in weighted networks: Generalizing degree and shortest paths. Soc Networks. 2010;32:245–51.
Erdős P, Rényi A. On random graphs. I. Publ Math Debrecen. 1959;6:290–7.
Fregoso OI, Das S, Akerman M, Krainer AR. Splicing-factor oncoprotein SRSF1 stabilizes p53 via RPL5 and induces cellular senescence. Mol Cell. 2013;50:56–66.
Caceres JF, Screaton GR, Krainer AR. A specific subset of SR proteins shuttles continuously between the nucleus and the cytoplasm. Genes Dev. 1998;12:55–66.
Sun S, Zhang Z, Fregoso O, Krainer AR. Mechanisms of activation and repression by the alternative splicing factors RBFOX1/2. RNA. 2012;18:274–83.
Hanamura A, Cáceres JF, Mayeda A, Franza Jr BR, Krainer AR. Regulated tissue-specific expression of antagonistic pre-mRNA splicing factors. RNA. 1998;4:430–44.
Aranda V, Haire T, Nolan ME, Calarco JP, Rosenberg AZ, Fawcett JP, et al. Par6-aPKC uncouples ErbB2 induced disruption of polarized epithelial organization from proliferation control. Nat Cell Biol. 2006;8:1235–45.
Bish RA, Fregoso OI, Piccini A, Myers MP. Conjugation of complex polyubiquitin chains to WRNIP1. J Proteome Res. 2008;7:3481–9.
Washburn MP, Wolters D, Yates 3rd JR. Large-scale analysis of the yeast proteome by multidimensional protein identification technology. Nat Biotechnol. 2001;19:242–7.
Taylor IW, Linding R, Warde-Farley D, Liu Y, Pesquita C, Faria D, et al. Dynamic modularity in protein interaction networks predicts breast cancer outcome. Nat Biotechnol. 2009;27:199–204.
Motoyama A, Venable JD, Ruse CI, Yates 3rd JR. Automated ultra-high-pressure multidimensional protein identification technology (UHP-MudPIT) for improved peptide identification of proteomic samples. Anal Chem. 2006;78:5109–18.
IP-MS data deposited in the Peptide Atlas database. Available at: https://db.systemsbiology.net/sbeams/cgi/PeptideAtlas/PASS_View?identifier=PASS00498.
We thank Olga Anczuków and Jesse Gillis for helpful comments on the manuscript, Isabel Aznarez, Antonius Koller, and Jie Wu for helpful discussions. This work was supported by NIH grant R37-GM42699 to ARK. MQZ acknowledges support from NIH grants HG001696 and GM074688. This work was performed with assistance from CSHL Shared Resources, which are funded in part by the Cancer Center Support Grant 5P30CA045508.
Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
Martin Akerman, Oliver I. Fregoso, Shipra Das, Cristian Ruse, Mads A. Jensen, Darryl J. Pappin & Adrian R. Krainer
Present address: Envisagenics, Inc, 315 Main St., 2nd floor, Huntington, NY, 11743, USA
Watson School of Biological Sciences, Cold Spring Harbor, NY, 11724, USA
Oliver I. Fregoso
Department of Molecular and Cell Biology, Center for Systems Biology, The University of Texas at Dallas, Richardson, TX, 75080, USA
Michael Q. Zhang
Bioinformatics Division, TNLIST, Tsinghua University, Beijing, 100084, China
Present address: Fred Hutchinson Cancer Research Center, Division of Human Biology, 1100 Fairview Ave N, Seattle, WA, 98109, USA
Present address: New England Biolabs, 240 County Road, Ipswich, MA, 01938, USA
Cristian Ruse
Present address: Santaris Pharma A/S, Horsholm, Denmark
Mads A. Jensen
Correspondence to Adrian R. Krainer.
MA is a founder and shareholder of Envisagenics, Inc.
MA developed, tested and implemented the probabilistic model. He performed all network analyses, and contributed to the interpretation of the data and preparation of the manuscript. OF carried out molecular, cellular, and mass spectrometry experiments. He contributed to the experimental design, interpretation of the data, and preparation of the manuscript. SD carried out the pull-down experiments. MJ contributed to the generation of reagents. CR contributed technical and conceptual guidance on mass spectrometry. DP contributed conceptual guidance in mass spectrometry. MZ contributed conceptual guidance in mathematical modeling and network analysis. AK contributed conceptual guidance for the experimental procedures and biological interpretation. He contributed to the preparation of the manuscript. All authors read and approved the final manuscript.
Martin Akerman and Oliver I. Fregoso contributed equally to this work.
Supplementary Methods section.
Components of the PS-network. (A) Proteins that constitute the nodes in the PS-network and sources linking them to the spliceosome. (B) PPIs used to train the Bayesian model with their corresponding co-expression levels (Pearson correlation). (C) PPI probability adjacency matrix representing the edges of the PS-network. The colors correspond to the clusters in Fig. 2. (D) Functional categories of spliceosomal and splicing-related proteins.
Annotated spliceosomal PPIs and predictability of the PS-network. (A) Reproducibility of PPIs annotated in HPRD and Hegele et al. for each functional category of spliceosomal proteins. The column '#proteins' indicates the total number of proteins in every group. 'Intersection' is the number of overlapping PPIs between both datasets, and 'ratio' is the 'intersection' normalized by '#proteins'. The functional groups are ranked by their 'ratio'. SR proteins and hnRNPs are highlighted in gray. (B) Number of PPIs per protein predicted by the PS-network versus deterministic modeling. The left side of the table shows the absolute number of PPI predicted for every protein in the test set. The right side shows the percent of the total PPIs per protein. (C) Predictability metrics of the PS-network and deterministic network.
Effect of direct PPIs on the PS-network. The effect of excluding direct PPI annotations, as opposed to treating them equally to neighboring interactions, for transitivity estimation based on Y2H data. We computed Pin scores with direct PPIs (Full model) or without (Partial model). (A) Correlation between Pin using vs. ignoring direct PPIs. (B) Residual Pin between Full vs. Partial models. (C) Histogram of the distribution of residual scores.
References for splicing activator or repressor activities of SR proteins and hnRNPs. Following annotations in RegRNA [26] and additional literature sources. Abbreviations: ESE (exonic splicing enhancer), ESS (exonic splicing silencer), ISE (intronic splicing enhancer), ISS (intronic splicing silencer).
Classification of SR proteins and hnRNPs into activators and repressors. Based on annotations from RegRNA ([26], Additional file 5: Table S3). The bar plot shows the number of literature references supporting a splicing factor's activity as an activator (red) or repressor (blue). The labels activator or repressor were assigned based on the best-supported function of each protein. Labels are shown on the left side of the chart.
Topological metrics of the PS-network and deterministic network.
Centrality metrics of spliceosomal proteins.
Involvement of high-connectivity spliceosomal proteins in human disease.
Additional file 10: Figure S3.
PPIs formed by top-centrality spliceosomal proteins. See legend from Fig. 4b.
Immunoprecipitation of SRSF1 and hnRNPA1. (A, B) Induced expression of SRSF1 and hnRNPA1 in HeLa cells. (A) HeLa T7-SRSF1 and (B) HeLa T7-hnRNPA1 cells were induced for 36 h with increasing Dox concentrations (from 0.01 to 10.0 μg/mL). Whole-cell lysates were analyzed by western blotting. T7-tagged proteins are marked with an arrow to distinguish them from endogenous proteins. β-catenin was used as a loading control. All cell lines showed a linear response to Dox when probed with endogenous-protein and T7-tag antibodies, although the levels of the T7-tagged splicing factors varied between cell lines. We used 36 h of induction at 0.1 μg/mL Dox for T7-SRSF1, and 0.5 μg/mL Dox for hnRNPA1. (C-M) Cellular localization of induced HeLa TT-SRSF1 and hnRNP A1 is consistent with expression of the endogenous proteins. (C) Indirect immunofluorescence of endogenous (uninduced) SRSF1, (D-F) induced T7-tagged SRSF1 and (G-I) hnRNP A1. (J-M) Co-staining of endogenous SRSF1 (AK-96) and induced T7-SRSF1. Cells were induced with Dox for 36 h at 0.1 μg/mL (SRSF1) and 0.5 μg/mL (hnRNP A1). DNA was stained with DAPI. (N, O) T7-SRSF1 immunoprecipitation (IP) with nuclease treatment: (N) Whole-cell lysates of HeLa TT-SRSF1 cells, with and without nuclease treatment. Cells were induced with Dox for 36 h at 0.1 μg/mL. Nuclease consists of RNases A and T1, plus Benzonase. (Middle) Co-IP of T7-SRSF1 in the presence or absence of nuclease. Co-IP was performed in the presence of 200 mM NaCl. (O) Silver stain of immunoprecipitates.
Additional file 12: Table S7.
PPIs detected by IP-MS and estimated through the PS-network. For (A) SRSF1 and (B) hnRNPA1 against other proteins.
High-probability PPIs enriched by IP-MS (continued from Fig. 5). (A) The colored tables show the distribution of Pin scores for SRSF1 and hnRNPA1 co-purified proteins. These were divided into three categories, depending on whether they are spliceosomal proteins assigned to specific FCs (spliceosomal, clustered), unassigned to FCs (spliceosomal, non-clustered), or non-spliceosomal. The average Pin scores for each category are shown at the bottom. (B) Beanplot showing the Pin distribution of bait-ligand interactions for SRSF1 and hnRNPA1 for IP-MS with (blue) or without (green) nuclease, and for the remaining spliceosomal proteins not identified by IP-MS (gray). Red lines indicate mean values. (C) Similar to B, comparing Pin values from nuclease-resistant bait-ligand interactions (blue) to those of ligand-ligand interactions (black). (D) Boxplot showing the distribution of weighted shortest path lengths for either SRSF1 or hnRNPA1 co-purified proteins with (blue) or without (green) nuclease treatment. The remaining spliceosomal proteins that failed to be purified are marked in gray. Wilcoxon test P values are shown in B-D represented by stars as follows: *P < 0.05, **P < 0.01, ***P < 10^-3, and ****P < 10^-5.
Complex-composition analysis. Distribution of predicted and co-purified PPIs for (A) SRSF1 and (B) hnRNPA1 among protein complexes annotated in HPRD.
Physical and regulatory interactions among hnRNPs are mutually exclusive. (A) Regulatory interactions among hnRNPs. The node color indicates the cluster to which each protein belongs, according to the code in Fig. 3d. The edges represent regulatory interactions, as reported by Ke and Chasin [32]. (B) Patterns of interaction between hnRNPA1 and other members of the hnRNP superfamily. Solid edges denote PPIs resistant to nuclease treatment. Dashed edges indicate nuclease-sensitive PPIs. Red edges denote regulatory interactions. The numbers along the edges indicate PI values. (C) Model summarizing co-regulation among hnRNPs. hnRNP pairs from different spliceosomal blocks usually cooperate to regulate alternative splicing. In particular, most proteins belong either to FC5 (blue) or FC9 (gray).
Conditional probability models. Cumulative distributions for transitivity (A) were calculated using HPRD to represent true binding instances. (B) A similar distribution was derived from a 'decoy' HPRD (dHPRD), to represent non-binding instances. In the same way, we generated true (C) and decoy (D) co-expression distributions, by combining both HPRD and dHPRD with the Human U133A/GNF1H microarray dataset.
Additional file 17: Table S8
Summary of MASCOT results. Number of significant peptides detected.
Akerman, M., Fregoso, O.I., Das, S. et al. Differential connectivity of splicing activators and repressors to the human spliceosome. Genome Biol 16, 119 (2015). https://doi.org/10.1186/s13059-015-0682-5
Splice Factor
Protein Pair
Average Short Path Length
Splice Activator
Spliceosome Assembly
iGen: The automated generation of a parameterisation of entrainment in marine stratocumulus
Tang, D.F. and Dobbie, S. (2011) iGen: The automated generation of a parameterisation of entrainment in marine stratocumulus. [MIMS Preprint]
In a previous paper we described a new technique for automatically generating parameterisations using a program called iGen. iGen generates parameterisations by analysing the source code of a high resolution model that resolves the physics to be parameterised. In order to demonstrate that this technique works with a model of realistic complexity we have used iGen to generate a parameterisation of entrainment in marine stratocumulus. We present details of our technique in which iGen was used to analyse the source code of a cloud resolving model and generate a parameterisation of the mean and standard deviation of entrainment velocity in marine stratocumulus in terms of the large-scale state of the boundary layer. The parameterisation was tested against results from the DYCOMS-II intercomparison of cloud resolving models, and the parameterised mean entrainment velocity was found to be $5.27\times 10^{-3} \pm 0.62 \times 10^{-3}\,\mathrm{m\,s^{-1}}$, compared to $5.2 \times 10^{-3} \pm 0.8 \times 10^{-3}\,\mathrm{m\,s^{-1}}$ for the ensemble of cloud resolving models.
iGen, stratocumulus, entrainment, model reduction, parameterisation, parameterization, symbolic analysis
MSC 2010, the AMS's Mathematics Subject Classification > 68 Computer science
PACS 2010, the AIP's Physics and Astronomy Classification Scheme > 90 GEOPHYSICS, ASTRONOMY, AND ASTROPHYSICS > 92 Hydrospheric and atmospheric geophysics
Mr D.F. Tang
Circulating natural antibodies to inflammatory cytokines are potential biomarkers for atherosclerosis
Peng Wang1,
Huan Zhao1,
Zhenqi Wang2 &
Xuan Zhang1
Inflammatory cytokines contribute to the development of atherosclerosis. Natural antibodies in the circulation have protective effects on common diseases including atherosclerosis-related conditions.
The present study aimed to investigate the possible involvement of circulating IgG antibodies against inflammatory cytokines in atherosclerosis.
A total of 220 patients with diagnosis of atherosclerosis and 200 healthy controls were recruited. Seven linear peptide antigens were used to develop an enzyme-linked immunosorbent assay in-house for detection of plasma IgG antibodies against interleukin 1β (fragments IL1β-1 and IL1β-2), IL6, IL8, tumor necrosis factor alpha (fragments TNFα-1 and TNFα-2) and IL1α.
Atherosclerotic patients had an increase in the levels of circulating IgG to TNFα-1(adjusted r2 = 0.038, p < 0.001) and IL1α (adjusted r2 = 0.025, p = 0.002) compared with control subjects. Female patients mainly contributed to increased anti-TNFα-1 IgG levels (adjusted r2 = 0.073, p < 0.001) and anti-IL1α IgG levels (adjusted r2 = 0.044, p = 0.003). In addition, female patients showed higher anti-IL1β-2 IgG levels than controls (adjusted r2 = 0.023, p = 0.026). There was no significant change of circulating IgG antibodies to other cytokines. ROC curve analysis showed an AUC of 0.564 for anti-TNFα-1 IgG assay with 22.8% sensitivity against a specificity of 90.0%, and an AUC of 0.539 for anti-IL1α IgG assay with 17.8% sensitivity against a specificity of 90.0%; the anti-IL1β-2 IgG assay had an AUC of 0.580 with 26.3% sensitivity against a specificity of 89.8% in female patients. There was no correlation between plasma IgG levels and carotid intima-media thickness.
Natural antibodies to inflammatory cytokines may be potential biomarkers for atherosclerosis.
Atherosclerosis, the most common cause of cardiovascular diseases, is a chronic and systemic inflammatory disorder mainly affecting the intima of large and medium-sized arteries [1]. Several risk factors are likely to be responsible for the development of atherosclerosis, such as smoking, diabetes mellitus, abdominal obesity, atherogenic dyslipidemia and hypertension [2, 3]. Typically, it has long disease latency and frequently coexists in > 1 vascular bed [4]. This disease can lead to myocardial and cerebral infarctions, stroke as well as coronary heart disease [4, 5]. Nowadays, cardiovascular disease accounts for approximately 30% of all health-related deaths worldwide, making atherosclerotic lesions a common cause of death [6]. Early diagnosis and treatment can slow or halt the worsening of atherosclerosis. It is thus imperative to develop effective methods for the detection of atherosclerosis at an early stage.
Natural autoantibodies in the circulation are emerging as promising diagnostic biomarkers for malignant diseases [7, 8]. Recently, an increase in circulating levels of certain immunoglobulin G (IgG) has been identified in patients with atherosclerosis-related diseases. For instance, Machida et al. reported that circulating levels of auto-antibodies against replication protein A2 (RPA2) were found to be increased in patients with stroke [9]. Elevated levels of antibodies against coatomer protein complex subunit epsilon (COPE) and DAN family BMP antagonist (NBL1) have been suggested to contribute to a high risk of cardiovascular events in patients with obstructive sleep apnea [10, 11]. Therefore, the identification of natural autoantibodies may provide a promising approach for early detection of atherosclerosis-related diseases.
Inflammatory cytokines, including tumor necrosis factor alpha (TNF-α), interleukin (IL) 1α (IL1α), IL1β, IL-2, IL6 and IL8 are involved in the pathogenesis of atherosclerosis, participating in all stages of this condition [12, 13]. In this study, our main goal was to examine the association between circulating antibody levels against inflammatory cytokines and atherosclerosis, which may identify effective biomarkers for the diagnosis of atherosclerosis.
A total of 220 patients diagnosed with atherosclerosis and 200 healthy controls were enrolled from the Second Hospital of Jilin University for this study. Their demographic characteristics are given in Table 1. The inclusion criterion was the presence of abnormal carotid intima-media thickness (CIMT), measured by vascular ultrasound; CIMT is a useful measure of subclinical atherosclerosis and is directly associated with an increased risk of both myocardial infarction and ischemic stroke. Control samples were collected from local communities in the same period as the case samples. Subjects who had thyroid diseases or other autoimmune diseases such as type-1 diabetes and inflammatory bowel diseases, and those with any type of malignant tumor, were not included in this study. All subjects were of Han Chinese origin and all provided written informed consent to participate in this study. This work was approved by the Ethics Committee of Jilin University Second Hospital and conformed to the Declaration of Helsinki.
Table 1 Demographic and clinical characteristics of patients and control subjects
Seven linear peptide antigens derived from IL1α, IL1β, IL6, IL8 and TNFα were designed based on computational epitope prediction tools (www.iedb.org) and then synthesized by a solid-phase chemical method with a purity of > 95%. The details of each antigen designed are listed in Table 2. An enzyme-linked immunosorbent assay (ELISA) was developed in-house with these 7 peptide antigens based on previous studies [14]. Briefly, each synthetic peptide antigen was dissolved in 67% acetic acid as a stock solution of 5 mg/ml; a working solution of 20 μg/ml in coating buffer (0.1 M phosphate buffer containing 10 mM EDTA and 0.15 M NaCl, pH 7.2) was used to coat maleimide-activated 96-well microtiter plates (Cat. 15150, Thermo Scientific, Rockford, IL, USA) based on the manufacturer's instructions, and the plates were stored at 4 °C until use. After the antigen-coated microplate was washed twice with 200 μl phosphate-buffered saline (PBS) containing 0.1% Tween-20, 50 μl Assay Buffer (PBS containing 0.5% bovine serum albumin (BSA)) was added to each negative control (NC) well, and 50 μl of the diluted plasma or positive control (PC) sample (1:100 dilution) in Assay Buffer was added to the sample wells. After incubation for 1.5 h at room temperature, the plate was washed 3 times with 200 μl Wash Buffer, and 50 μl goat anti-human IgG antibody conjugated to peroxidase (ab98567, Abcam, Beijing, China) (1:50,000 dilution in Assay Buffer) was then added. The plate was incubated at room temperature for 1 h, and 50 μl Stabilized Chromogen (SB02, Life Technologies, Guangzhou, China) was used for color development. A microplate reader (BioTek, Winooski, VT, USA) was used to measure the optical density (OD) at 450 nm with a reference wavelength of 620 nm.
Table 2 Sequence of peptide antigens used for the in-house ELISA
All the samples were assayed in duplicate and relative levels of plasma IgG antibodies were expressed as the specific binding ratio (SBR) that was calculated as follows:
$$ \mathrm{SBR}=\left({\mathrm{OD}}_{\mathrm{Sample}}-{\mathrm{OD}}_{\mathrm{NC}}\right)/\left({\mathrm{OD}}_{\mathrm{PC}}-{\mathrm{OD}}_{\mathrm{NC}}\right). $$
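As a purely illustrative sketch (not part of the original methods; the function name and OD readings below are invented for demonstration), the SBR defined above amounts to a one-line computation:

def sbr(od_sample, od_nc, od_pc):
    # Specific binding ratio: background-corrected sample OD, normalized to the positive control
    return (od_sample - od_nc) / (od_pc - od_nc)

print(sbr(od_sample=0.82, od_nc=0.10, od_pc=1.45))  # ~0.53 for these hypothetical readings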
Inter-assay deviation was estimated by the coefficient of variation (CV), calculated from the measurement of a pooled plasma sample randomly collected from > 20 healthy subjects, namely the quality control (QC) sample that was tested on every plate. Plasma IgG levels were expressed as the mean ± standard deviation (SD) in SBR, and Student's t-test was used to examine the differences between the patient group and the control group. Linear regression analysis was applied to test the effects of disease status on the IgG levels with adjustment for age and gender. Receiver operating characteristic (ROC) curve analysis was performed to assess the value of plasma IgG indicators for the diagnosis of atherosclerosis, with calculation of the area under the ROC curve (AUC) and its 95% confidence interval (CI), as well as the sensitivity against a specificity of > 90%. A p-value of < 0.05 was considered statistically significant. All statistical tests were performed with SPSS 19.0 software (IBM, Armonk, New York).
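A minimal sketch of the ROC analysis described above (illustrative only: it assumes scikit-learn is available, and the labels and SBR values are invented rather than taken from the study data):

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: 1 = atherosclerosis, 0 = control; scores are anti-TNFα-1 IgG SBRs
y = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])
scores = np.array([0.62, 0.55, 0.48, 0.71, 0.44, 0.52, 0.39, 0.47, 0.58, 0.41])

auc = roc_auc_score(y, scores)                 # area under the ROC curve
fpr, tpr, thresholds = roc_curve(y, scores)
mask = (1 - fpr) >= 0.90                       # operating points with specificity >= 90%
sensitivity_at_90_spec = tpr[mask].max()       # best sensitivity among those points
print(auc, sensitivity_at_90_spec)

This mirrors how the "sensitivity against a specificity of 90%" figures in the Results section can be read off an ROC curve, given the actual per-subject SBR values.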
All CVs calculated with the SBR of the QC sample were less than 20% (Table 3), suggesting good reproducibility for the in-house ELISA. As shown in Table 4, plasma anti-TNFα-1 IgG levels were significantly higher in patients with atherosclerosis than in control subjects (t = 3.588, p < 0.001), with female patients mainly contributing to the increased anti-TNFα-1 IgG levels in atherosclerosis (t = 3.810, p < 0.001). In addition, patients with atherosclerosis had a significantly higher level of circulating IgG against the IL1α antigen than control subjects (t = 3.084, p = 0.002), and female patients mainly contributed to this significant change (t = 2.964, p = 0.003). While the anti-IL1β-2 IgG level was significantly higher in female patients than in female controls (p = 0.026), no significant difference was found in the combined groups.
Table 3 Inter-assay deviation between ELISA-testing plates
Table 4 Differences of circulating IgG between the two groups
As shown in Table 5, linear regression analysis showed that atherosclerotic patients had an increase in circulating anti-TNFα-1 IgG levels (adjusted r2 = 0.038, p < 0.001) and anti-IL1α IgG level (adjusted r2 = 0.025, p = 0.002) compared with control subjects, female patients mainly contributing to increased anti-TNFα-1 IgG levels (adjusted r2 = 0.073, p < 0.001) and increased anti-IL1α IgG levels (adjusted r2 = 0.044, p = 0.003). Additionally, female patients had an increase in anti-IL1β-2 IgG levels compared to female controls (adjusted r2 = 0.023, p = 0.026).
Table 5 Multivariate linear regression analysis of circulating IgG against cytokines in atherosclerosis
ROC curve analysis revealed an AUC of 0.564 (95% CI 0.509–0.619) for the anti-TNFα-1 IgG assay with 22.8% sensitivity against a specificity of 90.0%, and an AUC of 0.539 (95% CI 0.484–0.594) for the anti-IL1α IgG assay with 17.8% sensitivity against a specificity of 90.0%. Moreover, ROC curve analysis performed only in females showed that the anti-TNFα-1 IgG assay had an AUC of 0.591 (95% CI 0.509–0.673) with 28.4% sensitivity against a specificity of 89.8%, the anti-IL1α IgG assay had an AUC of 0.549 (95% CI 0.466–0.632) with 20.0% sensitivity against a specificity of 89.8%, and the anti-IL1β-2 IgG assay had an AUC of 0.580 (95% CI 0.498–0.663) with 26.3% sensitivity against a specificity of 89.8% (Fig. 1).
ROC curve analysis of circulating IgG in atherosclerosis. (a) combined subjects; (b) male subjects; (c) female subjects
As shown in Table 6, there was no correlation between plasma IgG levels and CIMT.
Table 6 The correlations between plasma IgG levels and CIMT
Recent studies have demonstrated the presence of natural autoantibodies in the blood of patients with atherosclerosis, such as anti-apolipoprotein A-1 antibodies and anti-lipoprotein lipase antibodies [15, 16]. TNFα induces a pro-inflammatory process in endothelial cells, altering the function of endothelial and vascular smooth muscle cells, and is crucially involved in the progression of atherosclerosis and heart failure [17, 18]. One study suggests that IL-1α, given its role in atherogenesis, should be targeted in patients with cardiovascular disease [19]. A decreased IL1β level was found to be related to the inhibition of platelet aggregation and of thromboembolic-related disorders [20]. In this study, we found that plasma anti-TNFα and anti-IL1α IgG levels were significantly increased in patients with atherosclerosis compared with control subjects, and an increase in anti-IL1β IgG levels was found in female patients (Table 4). Although circulating levels of anti-TNFα and anti-IL1α IgG antibodies were significantly increased in atherosclerosis, ROC curve analysis revealed relatively low sensitivity (Fig. 1). Possibly, such an antibody test cannot serve as a highly effective biomarker for diagnosis of the disease but may instead identify a subgroup of atherosclerotic patients who have developed chronic inflammation. Nevertheless, the findings suggest that natural antibodies against inflammatory cytokines such as TNFα, IL1α, and IL1β may serve as useful biomarkers for the identification of an atherosclerotic subgroup that may need immunological treatment, although whether the levels of these inflammatory cytokines are correlated with their antibody levels in the circulation needs further investigation.
Several studies have indicated gender differences in the development of atherosclerosis [21, 22]. Androgens could up-regulate the expression of atherosclerosis-related genes in macrophages from males but not females, suggesting a genetic predisposition to atherosclerosis only in male subjects [23]. Knowledge of the biological differences in atherosclerosis between men and women remains incomplete. In this study, gender differences in circulating IgG antibodies to inflammatory cytokines were observed, such that up-regulation of anti-TNFα, anti-IL1α, and anti-IL1β IgG levels was more likely to occur in female than in male patients with atherosclerosis. Collectively, the gender differences in circulating IgG antibodies to inflammatory cytokines may provide a clue to the pathophysiology of atherosclerosis.
Technically, ELISA antibody tests with individual antigens may have a relatively low sensitivity, as shown in this study. Accordingly, such an antibody test alone is unlikely to be sufficient for screening people for atherosclerosis in clinical practice. Several studies have demonstrated that a panel of cancer-associated antigens had a high sensitivity for early detection of cancer [24, 25]. Future work on the identification of a panel of such linear peptide antigens may provide an aid to early screening for atherosclerosis.
Natural antibodies to inflammatory cytokines may be potential biomarkers for atherosclerosis although further replication with a large sample size is needed to confirm this initial finding.
AUC:
Area under the ROC curve
CI:
Confidence interval
CIMT:
Carotid intima-media thickness
CV:
Coefficient of variation
IL1α:
Interleukin 1α
IL1β-1:
Interleukin 1β derived antigen 1
IL6:
Interleukin 6
NC:
Negative control
OD:
Optical density
PBS:
Phosphate-buffered saline
QC:
Quality control
ROC:
Receiver operating characteristic curve
RPA2:
Replication protein A2
SBR:
Specific binding ratio
TNFα-1:
Tumor Necrosis Factor α derived antigen 1
Kao CW, Wu PT, Liao MY, Chung IJ, Yang KC, Tseng WI, et al. Magnetic nanoparticles conjugated with peptides derived from monocyte chemoattractant Protein-1 as a tool for targeting atherosclerosis. Pharmaceutics. 2018;10(2):62.
Knopp RH, Paramsothy P. Treatment of hypercholesterolemia in patients with metabolic syndrome: how do different statins compare? Nat Clin Pract Endocrinol Metab. 2006;2:136–7.
Lantos I, Endresz V, Virok DP, Szabo A, Lu X, Mosolygo T, et al. Chlamydia pneumoniae infection exacerbates atherosclerosis in ApoB100only/LDLR(−/−) mouse strain. Biomed Res Int. 2018;2018:8325915.
Herrington W, Lacey B, Sherliker P, Armitage J, Lewington S. Epidemiology of atherosclerosis and the potential to reduce the global burden of Atherothrombotic disease. Circ Res. 2016;118:535–46.
Berlin-Broner Y, Febbraio M, Levin L. Apical periodontitis and atherosclerosis: is there a link? Review of the literature and potential mechanism of linkage. Quintessence Int. 2017;48:527–34.
Bhatnagar P, Wickramasinghe K, Williams J, Rayner M, Townsend N. The epidemiology of cardiovascular disease in the UK 2014. Heart. 2015;101:1182–9.
Chen C, Huang Y, Zhang C, Liu T, Zheng HE, Wan S, et al. Circulating antibodies to p16 protein-derived peptides in breast cancer. Mol Clin Oncol. 2015;3:591–4.
Zhao H, Zhang X, Han Z, Wang Z, Wang Y. Plasma anti-BIRC5 IgG may be a useful marker for evaluating the prognosis of nonsmall cell lung cancer. FEBS Open Bio. 2018;8:829–35.
Machida T, Kubota M, Kobayashi E, Iwadate Y, Saeki N, Yamaura A, et al. Identification of stroke-associated-antigens via screening of recombinant proteins from the human expression cDNA library (SEREX). J Transl Med. 2015;13:015–0393.
Matsumura T, Terada J, Kinoshita T, Sakurai Y, Yahaba M, Tsushima K, et al. Circulating autoantibodies against neuroblastoma suppressor of tumorigenicity 1 (NBL1): a potential biomarker for coronary artery disease in patients with obstructive sleep apnea. PLoS One. 2018;13(3):e0195015.
Matsumura T, Terada J, Kinoshita T, Sakurai Y, Yahaba M, Ema R, et al. Circulating anti-Coatomer protein complex subunit epsilon (COPE) autoantibodies as a potential biomarker for cardiovascular and cerebrovascular events in patients with obstructive sleep apnea. J Clin Sleep Med. 2017;13:393–400.
Ramji DP, Davies TS. Cytokines in atherosclerosis: key players in all stages of disease and promising therapeutic targets. Cytokine Growth Factor Rev. 2015;26:673–85.
Tousoulis D, Oikonomou E, Economou EK, Crea F, Kaski JC. Inflammatory cytokines in atherosclerosis: current therapeutic approaches. Eur Heart J. 2016;37:1723–32.
Hallford P, St Clair D, Halley L, Mustard C, Wei J. A study of type-1 diabetes associated autoantibodies in schizophrenia. Schizophr Res. 2016;176:186–90.
Montecucco F, Vuilleumier N, Pagano S, Lenglet S, Bertolotto M, Braunersreuther V, et al. Anti-apolipoprotein A-1 auto-antibodies are active mediators of atherosclerotic plaque vulnerability. Eur Heart J. 2011;32:412–21.
Fesmire J, Wolfson-Reichlin M, Reichlin M. Effects of autoimmune antibodies anti-lipoprotein lipase, anti-low density lipoprotein, and anti-oxidized low density lipoprotein on lipid metabolism and atherosclerosis in systemic lupus erythematosus. Rev Bras Reumatol. 2010;50:539–51.
Hot A, Lenief V, Miossec P. Combination of IL-17 and TNFalpha induces a pro-inflammatory, pro-coagulant and pro-thrombotic phenotype in human endothelial cells. Ann Rheum Dis. 2012;71:768–76.
Kleinbongard P, Heusch G, Schulz R. TNFalpha in atherosclerosis, myocardial ischemia/reperfusion and heart failure. Pharmacol Ther. 2010;127:295–314.
Freigang S, Ampenberger F, Weiss A, Kanneganti TD, Iwakura Y, Hersberger M, et al. Fatty acid-induced mitochondrial uncoupling elicits inflammasome-independent IL-1alpha and sterile vascular inflammation in atherosclerosis. Nat Immunol. 2013;14:1045–53.
Alarcon M, Fuentes E, Olate N, Navarrete S, Carrasco G, Palomo I. Strawberry extract presents antiplatelet activity by inhibition of inflammatory mediator of atherosclerosis (sP-selectin, sCD40L, RANTES, and IL-1beta) and thrombus formation. Platelets. 2015;26:224–9.
Bairey Merz CN, Shaw LJ, Reis SE, Bittner V, Kelsey SF, Olson M, et al. Insights from the NHLBI-sponsored Women's ischemia syndrome evaluation (WISE) study: part II: gender differences in presentation, diagnosis, and outcome with regard to gender-based pathophysiology of atherosclerosis and macrovascular and microvascular coronary disease. J Am Coll Cardiol. 2006;47:084.
Mendelsohn ME, Karas RH. Molecular and cellular basis of cardiovascular gender differences. Science. 2005;308:1583–7.
Ng MK, Quinn CM, McCrohon JA, Nakhla S, Jessup W, Handelsman DJ, et al. Androgens up-regulate atherosclerosis-related genes in macrophages from males but not females: molecular insights into gender differences in atherosclerosis. J Am Coll Cardiol. 2003;42:1306–13.
Lam S, Boyle P, Healey GF, Maddison P, Peek L, Murray A, et al. EarlyCDT-lung: an immunobiomarker test as an aid to early detection of lung cancer. Cancer Prev Res (Philadelphia, Pa). 2011;4:1126–34.
Chapman CJ, Healey GF, Murray A, Boyle P, Robertson C, Peek LJ, et al. EarlyCDT(R)-lung test: improved clinical utility through additional autoantibody assays. Tumour Biol. 2012;33:1319–26.
We thank all the patients and control subjects for their participation in this study. This work was supported by Hailanshen Biomedical & Technology Ltd., Shenzhen, China.
This work was supported by Hailanshen Biomedical Technology Ltd., Shenzhen, China.
Jilin Provincial Key Laboratory on Molecular and Chemical Genetics, Second Hospital of Jilin University, 218 Ziqiang Street, Changchun, 130041, China
Peng Wang, Huan Zhao & Xuan Zhang
School of Public Health, Jilin University, Changchun, 130021, China
Zhenqi Wang
Peng Wang
Huan Zhao
Xuan Zhang
Wang P mainly carried out laboratory work and data analysis; Zhao H and Wang Z identified patients with atherosclerosis and collected clinical data; Zhang X conceived of this study, supervised laboratory work and drafted manuscript. All authors read and approved the final manuscript.
Correspondence to Xuan Zhang.
This work was approved by the Ethics Committees of the Second Hospital of Jilin University, Changchun, China, (IRB#: SHJU2017–099), and performed in accordance with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
All authors declared that they have no competing interests.
Wang, P., Zhao, H., Wang, Z. et al. Circulating natural antibodies to inflammatory cytokines are potential biomarkers for atherosclerosis. J Inflamm 15, 22 (2018). https://doi.org/10.1186/s12950-018-0199-2
Natural antibodies
Inflammatory cytokines
Fundamental Examples
It is not unusual that a single example or a very few shape an entire mathematical discipline. Can you give examples for such examples? (One example, or few, per post, please)
I'd love to learn about further basic or central examples and I think such examples serve as good invitations to various areas. (Which is why a bounty was offered.)
Related MO questions: What-are-your-favorite-instructional-counterexamples, Cannonical examples of algebraic structures, Counterexamples-in-algebra, individual-mathematical-objects-whose-study-amounts-to-a-subdiscipline, most-intricate-and-most-beautiful-structures-in-mathematics, counterexamples-in-algebraic-topology, algebraic-geometry-examples, what-could-be-some-potentially-useful-mathematical-databases, what-is-your-favorite-strange-function; Examples of eventual counterexamples ;
To make this question and the various examples a more useful source there is a designated answer to point out connections between the various examples we collected.
In order to make it a more useful source, I list all the answers in categories, and added (for most) a date and (for 2/5) a link to the answer which often offers more details. (~year means approximate year, *year means a year when an older example becomes central in view of some discovery, year? means that I am not sure if this is the correct year and ? means that I do not know the date. Please edit and correct.) Of course, if you see some important example missing, add it!
Logic and foundations: $\aleph_\omega$ (~1890), Russell's paradox (1901), Halting problem (1936), Goedel constructible universe L (1938), McKinsey formula in modal logic (~1941), 3SAT (*1970), The theory of Algebraically closed fields (ACF) (?),
Physics: Brachistochrone problem (1696), Ising model (1925), The harmonic oscillator,(?) Dirac's delta function (1927), Heisenberg model of 1-D chain of spin 1/2 atoms, (~1928), Feynman path integral (1948),
Real and Complex Analysis: Harmonic series (14th Cen.) {and Riemann zeta function (1859)}, the Gamma function (1720), li(x), The elliptic integral that launched Riemann surfaces (*1854?), Chebyshev polynomials (?1854) punctured open set in C^n (Hartog's theorem *1906 ?)
Partial differential equations: Laplace equation (1773), the heat equation, wave equation, Navier-Stokes equation (1822),KdV equations (1877),
Functional analysis: Unilateral shift, The spaces $\ell_p$, $L_p$ and $C(k)$, Tsirelson spaces (1974), Cuntz algebra,
Algebra: Polynomials (ancient?), Z (ancient?) and Z/6Z (Middle Ages?), symmetric and alternating groups (*1832), Gaussian integers ($Z[\sqrt -1]$) (1832), $Z[\sqrt(-5)]$,$su_3$ ($su_2)$, full matrix ring over a ring, $\operatorname{SL}_2(\mathbb{Z})$ and SU(2), quaternions (1843), p-adic numbers (1897), Young tableaux (1900) and Schur polynomials, cyclotomic fields, Hopf algebras (1941) Fischer-Griess monster (1973), Heisenberg group, ADE-classification (and Dynkin diagrams), Prufer p-groups,
Number Theory: conics and pythagorean triples (ancient), Fermat equation (1637), Riemann zeta function (1859) elliptic curves, transcendental numbers, Fermat hypersurfaces,
Probability: Normal distribution (1733), Brownian motion (1827), The percolation model (1957), The Gaussian Orthogonal Ensemble, the Gaussian Unitary Ensemble, and the Gaussian Symplectic Ensemble, SLE (1999),
Dynamics: Logistic map (1845?), Smale's horseshoe map (1960), Mandelbrot set (1978/80) (Julia set), cat map (Anosov diffeomorphism)
Geometry: Platonic solids (ancient), the Euclidean ball (ancient), The configuration of 27 lines on a cubic surface, The configurations of Desargues and Pappus, construction of regular heptadecagon (*1796), Hyperbolic geometry (1830), Reuleaux triangle (19th century), Fano plane (early 20th century ??), cyclic polytopes (1902), Delaunay triangulation (1934) Leech lattice (1965), Penrose tiling (1974), noncommutative torus, cone of positive semidefinite matrices, the associahedron (1961)
Topology: Spheres, Figure-eight knot (ancient), trefoil knot (ancient?) (Borromean rings (ancient?)), the torus (ancient?), Mobius strip (1858), Cantor set (1883), Projective spaces (complex, real, quaternionic..), Poincare dodecahedral sphere (1904), Homotopy group of spheres, Alexander polynomial (1923), Hopf fibration (1931), The standard embedding of the torus in R^3 (*1934 in Morse theory), pseudo-arcs (1948), Discrete metric spaces, Sorgenfrey line, Complex projective space, the cotangent bundle (?), The Grassmannian variety, homotopy group of spheres (*1951), Milnor exotic spheres (1965)
Graph theory: The seven bridges of Koenigsberg (1735), Petersen Graph (1886), two edge-colorings of K_6 (Ramsey's theorem 1930), K_33 and K_5 (Kuratowski's theorem 1930), Tutte graph (1946), Margulis's expanders (1973) and Ramanujan graphs (1986),
Combinatorics: tic-tac-toe (ancient Egypt(?)) (The game of nim (ancient China(?))), Pascal's triangle (China and Europe 17th), Catalan numbers (18th century), (Fibonacci sequence (12th century; probably ancient), Kirkman's schoolgirl problem (1850), surreal numbers (1969), alternating sign matrices (1982)
Algorithms and Computer Science: Newton Raphson method (17th century), Turing machine (1937), RSA (1977), universal quantum computer (1985)
Social Science: Prisoner's dilemma (1950) (and also the chicken game, chain store game, and centipede game), the model of exchange economy, second price auction (1961)
Statistics: the Lady Tasting Tea (?1920), Agricultural Field Experiments (Randomized Block Design, Analysis of Variance) (?1920), Neyman-Pearson lemma (?1930), Decision Theory (?1940), the Likelihood Function (?1920), Bootstrapping (?1975)
soft-question
big-picture
big-list
ho.history-overview
I'm not so sure about that... in my opinion, not every soft-question should be community wiki! Why exactly change this one?
– Jose Brox
@Jose: Hard to say exactly. My instinct is that the kind of answers that this question will garner are those that didn't involve much actual thought, and the votes up or down will be more an assessment of whether the voter liked the example rather than whether the voter liked the answer (which, ideally, should contain an explanation of why that example shaped the discipline); both of these indicate that the answerers should not gain reputation for their answers, hence community wiki.
– Andrew Stacey
I can't imagine a counterexample to the following rule: Any question whose purpose is to produce a sorted list of resources (i.e. the question includes, or should include, "one per post please") should be community wiki.
– Anton Geraschenko
If it is both community wiki, and it has an open bounty, how does that work?
– Greg Kuperberg
Dear Qiaochu, There are various interpretations of the meaning of "examples" for this question and it is nice to see them all.
– Gil Kalai
140 Answers
In convex geometry, the Euclidean ball. In fact (as I think Gil knows but many other readers here probably don't) a huge portion of (high-dimensional) convex geometry consists of results that show that arbitrary high-dimensional convex bodies behave like the Euclidean ball in various ways.
And if I may be permitted to add another complementary example or two, the simplex and cube are for many purposes the least "Euclidean ball-like" convex bodies, so they are useful for understanding the limitations of the Euclidean ball as a prototype for arbitrary bodies.
Mark Meckes
The cyclic polytope in the study of convex polytopes in high dimensions.
It is the convex hull of n points on the moment curve (t,t^2,t^3,...,t^d). It is simplicial and has the property that every [d/2] points form a face. (So, for example, in 4 dimensions every two vertices form an edge.)
Its boundary also has the greatest number of faces of each dimension among d-1-dimensional simplicial spheres on n vertices. (This is the upper bound theorem.)
– Hugh Thomas
In modal logic there is a particularly simple formula, called McKinsey formula: ◻⋄p→⋄◻p. It is so simple, yet it defines a frame property which cannot be expressed in first-order logic.
Also, with the right selection of other formulas, it gives rise to frame incompleteness examples (logics that are consistent, but are not logics of any class of frames whatsoever).
Filip Nikšić
(From the Wikipedia article) In cryptography, RSA (which stands for Rivest, Shamir and Adleman) is an algorithm for public-key cryptography. It is the first algorithm known to be suitable for signing as well as encryption, and was one of the first great advances in public key cryptography. RSA is widely used in electronic commerce protocols, and is believed to be secure given sufficiently long keys and the use of up-to-date implementations.
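For illustration only (a toy, textbook-style construction added here, with deliberately tiny primes; real RSA uses primes of 1024+ bits together with padding schemes), the core key-generation/encryption/decryption cycle is short:

# Toy RSA; do not use for anything real.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent (modular inverse; Python 3.8+)

m = 65                         # message encoded as an integer < n
c = pow(m, e, n)               # encryption: c = m^e mod n
assert pow(c, d, n) == m       # decryption recovers m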
Turing machines (1937) and Boolean circuits: the primary models for digital computers.
Universal Quantum computers, and quantum Turing machines (Deutsch, 1985).
The ring of Gaussian integers Z[i] is a fundamental example of a ring of integers extending Z. Many early results in number theory were motivated by understanding and generalizing properties exhibited by Z[i] and its relationship with Z.
Douglas Zare
The Riemann zeta function is the fundamental example of a Dirichlet L-series. It is central in analytic number theory.
(I think we had this example before.)
Ah, I see it now in the real and complex analysis section. It seems a curious omission in the number theory section, as well as the modular function $j(\tau)$ or the Dedekind eta function. Some of my suggestions for this question are not designed to be new examples to many mathematicians. They indicate that I think those examples are fundamental, which might not be as obvious to mathematicians outside those fields.
– Douglas Zare
Explaining how examples already mentioned are fundamental can be very useful! Note that you can freely edit existing answers.
Actually, I can't edit them.
The Sorgenfrey line is an example that has motivated a lot of research in general topology, mostly generalized metric properties and ordered space theory. It's an example of a hereditarily normal space with non-normal square; it is separable, Lindelöf, first countable, but not second countable; a generalized ordered space that is not orderable; and many more.
Henno Brandsma
Hyperbolic toral automorphisms (viz. the cat map and its generalizations) are the fundamental examples of Anosov diffeomorphisms, and their suspensions are the fundamental examples of Anosov flows. This is because they are "structurally stable", i.e. small perturbations preserve the Anosov property and any Anosov diffeomorphism on a torus is topologically conjugate to a hyperbolic toral automorphism. I am actually not even aware of any other concrete examples of Anosov dynamics other than those derived from geodesic flows on hyperbolic spaces.
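For concreteness (an explicit instance added for illustration, not part of the original answer): the classical Arnold cat map is the automorphism of the 2-torus induced by
$$\begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}, \qquad (x,y) \mapsto (2x+y,\ x+y) \bmod 1,$$
whose eigenvalues $(3 \pm \sqrt{5})/2$ lie off the unit circle, which is exactly the hyperbolicity that makes it Anosov.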
Answered by Steve Huntsman
In combinatorics there are very simple basic graphs from which a whole lot of theory came. For example the complete graphs $K_5$ and $K_{3,3}$ which alone provide the ground level for any non-planar graph according to Kuratowski's theorem. Another simple graph that gave rise to a huge amount of theory is Petersen's graph, which I like to think as the graph whose vertices are the ten two-element subsets of $\{1,2,3,4,5\}$, and for which two such vertices are connected iff they are disjoint.
A link for Kuratowski's theorem is http://en.wikipedia.org/wiki/Kuratowski's_theorem
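A small sketch (added for illustration; it uses only the Python standard library) that builds the Petersen graph exactly as described — vertices are the 2-element subsets of {1,...,5}, joined when disjoint — and checks two of its basic properties:

from itertools import combinations

vertices = list(combinations(range(1, 6), 2))             # the ten 2-element subsets
edges = [(u, v) for u, v in combinations(vertices, 2)
         if not set(u) & set(v)]                           # adjacent iff disjoint

degree = {v: sum(v in e for e in edges) for v in vertices}
print(len(vertices), len(edges))                           # 10 vertices, 15 edges
print(set(degree.values()))                                # {3}: the graph is 3-regular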
Vania Mascioni
The Möbius strip or Möbius band (a surface with only one side and only one boundary component).
Artem Pelenitsyn
SLE - stochastic Loewner evolution (or Schramm-Loewner evolution) is a one parameter class of random planar curves. These random curves depend on a real parameter kappa, they are (almost surely) simple curves when kappa is at most 4, they fill the plane when kappa is at least 8. They are related to many planar stochastic models. Look here for more pictures.
Poincare dodecahedral sphere, the 1904 example of a homology sphere, was fundamental for the discovery of the fundamental group, and led to the statement of the Poincare conjecture.
The cone of positive semidefinite matrices is a fundamental example of a convex cone which is important in convexity and for convex and semidefinite programming.
The Reuleaux triangle is the first and most famous example of a set of constant width other than the circle (or ball in higher dimensions).
The determinant of a symmetric matrix is a fundamental example of a hyperbolic polynomial (it is hyperbolic with respect to any positive definite matrix).
– Petya
Mar 15, 2010 at 4:56
While the group of permutations (and permutation matrices) is probably too fundamental to be included, a mysterious generalization called alternating sign matrices, with much more yet to be understood, is important in modern combinatorics. These are square matrices with entries 1, 0 and -1 such that the nonzero entries in each row and column alternate in sign and sum up to one. There is a simple correspondence between alternating sign matrices and monotone triangles.
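For illustration (an added example, not part of the original answer), the unique $3 \times 3$ alternating sign matrix that is not a permutation matrix is
$$\begin{pmatrix} 0 & 1 & 0 \\ 1 & -1 & 1 \\ 0 & 1 & 0 \end{pmatrix},$$
in which the nonzero entries of every row and column alternate in sign and sum to one.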
Tic-Tac-Toe tends to be the starting example in combinatorial game theory, just because it's simple enough to depict the entire tree on one page yet can still be used to illustrate the standard definitions and notation.
Jason Dyer
yes, perhaps along with nim.
xkcd.com/832
– kjetil b halvorsen
The Alexander polynomial in knot theory.
Daniel Moskovich
Young tableaux and Schur polynomials
Borromean rings
Borromean rings are important in several places. For example, they appear in computations of homotopy groups of the 2-sphere, where they correspond to the Hopf fibration.
The pseudo-arc in continuum theory.
I think polynomials are one of the greatest inventions of humankind. Not only are they extremely flexible and come up in so many domains of math, but they've led to interesting breakthroughs. For example, trying to find a closed-form solution to the quintic polynomial led Galois to develop groups, right?
Answered by Dagit
The gamma function is a fundamental example of an interesting function defined only on the integers which has an analytic (meromorphic) continuation to the whole complex plane. This ability to extend an interesting, seemingly discrete function to a complex differentiable function motivates a lot of later material.
Answered by Davidac897
The Schwarzschild metric, as a prototype of a black hole, was a fundamental example in the development of General Relativity (for instance, it is often referred to when "defending" the ADM mass as a natural concept of mass in General Relativity).
Motivated by Amit Kumar Gupta's answer about the continuum hypothesis, let me add an example that is less natural but has inspired an amazing amount of set theory, namely Suslin's Hypothesis. This conjecture, proposed in 1920 and now known to be independent of ZFC, says that the real line with its usual ordering relation is characterized up to isomorphism by the following properties:
dense linear order without endpoints
Dedekind-complete
No uncountable family of pairwise disjoint open intervals.
The point of the conjecture is that it was proved much earlier by Cantor that one gets a characterization of $\mathbb R$ if one puts in place of the last property the stronger statement that there is a countable dense set. So Suslin is simply asking whether one can weaken this separability assumption to the third property in the list above (often called the "countable chain condition"). I can't claim that this question is anywhere near as natural as the continuum hypothesis, but what makes it important (in my opinion) is its impact on the development of set theory. The fact that Suslin's hypothesis is false in Gödel's constructible universe $L$ was one of the first applications (and probably a major motivation, though I don't actually know that) for Jensen's theory of the fine structure of $L$, a theory that has grown tremendously as a component of the inner model program in contemporary set theory. The fact that Suslin's hypothesis is consistent with ZFC was the initial application and the motivation for the theory of iterated forcing, now a central tool in set theory. It also provided the occasion for the invention of Martin's axiom. That axiom and the combinatorial principles isolated by Jensen from the fine structure of $L$ have become standard tools for proving independence results without explicitly referring to forcing or to $L$.
Andreas Blass
Within the category of algorithms and computer science, I would say Conway's "Game of Life", where binary, two-dimensional structures may evolve, requiring not much more than an initial state.
http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
Cellular automata have spawned practically a branch of computer science in their own right, and have deep connections with dynamical systems and some types of fractals as well, like Sierpinski's triangle, which can be generated using Rule 90 (in Mathematica):
ArrayPlot[CellularAutomaton[90, {{1}, 0}, 50]]
This command runs Rule 90 for 50 steps, starting from a single 1 on a background of zeros, and then displays Sierpinski's triangle.
Also, cellular automata inspired by the Game of Life have been used to study pseudo-randomness and to generate artificial music (see Stephen Wolfram's work, for example).
Arturo Ortiz Tapia
The Newton-Raphson method. A method for finding successively better approximations to the zeroes of a real-valued function. See also this link.
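A minimal sketch of the iteration (illustrative only; the target function, its derivative, and the starting point are arbitrary choices here):

def newton_raphson(f, fprime, x0, tol=1e-12, max_iter=50):
    # Repeatedly apply x <- x - f(x)/f'(x) until the update is below tol
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: the positive zero of f(x) = x^2 - 2, i.e. sqrt(2)
print(newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))  # 1.414213562...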
The Prüfer p-group is a noteworthy example in the theory of Infinite Abelian Groups. (Answer by J. H. S.)
The Ising model (1925)
The symmetric and alternating groups are fundamental examples in group theory, representation theory, and combinatorics.
I presented the emerging list of examples over my blog and several people suggested a few more examples. I will mention them together:
Tom LaGatta proposed to add the percolation model (1854), John Sidles made several suggestions and in particular proposed several examples from Control theory such as the Nyquist criteria, Christian Blatter proposed adding the Peano curve, and Mark Meckes proposed adding the fundamental Banach spaces L_p/l_p and C(K).
Joe Malkevich proposed several basic examples of games in addition to the prisoner dilemma (chicken, chain store game, and centipede) and the Gale-Shapley two-sided market model (the model in the famous Gale-Shapley stable marriage theorem). I thought that we should probably add a basic economic model of exchange markets (like the Arrow-Debreu model).
I also thought the configurations of Desargues and Pappus should be added.
There was also some critique of the classification of examples, and an interesting suggestion by Michael Nielsen that "Distilled and expanded, it could form the basis for an excellent book. Perhaps: 'Examples from the book'." (This refers to Aigner and Ziegler's book "Proofs from the Book"; in fact, a similar idea by Ziegler and me motivated the question itself.)